| id (stringlengths 1–265) | text (stringlengths 6–5.19M) | dataset_id (stringclasses, 7 values) |
|---|---|---|
/Flask-CKEditor-0.4.6.tar.gz/Flask-CKEditor-0.4.6/flask_ckeditor/static/full/plugins/specialchar/dialogs/lang/en.js | /*
Copyright (c) 2003-2020, CKSource - Frederico Knabben. All rights reserved.
For licensing, see LICENSE.md or https://ckeditor.com/legal/ckeditor-oss-license
*/
CKEDITOR.plugins.setLang("specialchar","en",{euro:"Euro sign",lsquo:"Left single quotation mark",rsquo:"Right single quotation mark",ldquo:"Left double quotation mark",rdquo:"Right double quotation mark",ndash:"En dash",mdash:"Em dash",iexcl:"Inverted exclamation mark",cent:"Cent sign",pound:"Pound sign",curren:"Currency sign",yen:"Yen sign",brvbar:"Broken bar",sect:"Section sign",uml:"Diaeresis",copy:"Copyright sign",ordf:"Feminine ordinal indicator",laquo:"Left-pointing double angle quotation mark",
not:"Not sign",reg:"Registered sign",macr:"Macron",deg:"Degree sign",sup2:"Superscript two",sup3:"Superscript three",acute:"Acute accent",micro:"Micro sign",para:"Pilcrow sign",middot:"Middle dot",cedil:"Cedilla",sup1:"Superscript one",ordm:"Masculine ordinal indicator",raquo:"Right-pointing double angle quotation mark",frac14:"Vulgar fraction one quarter",frac12:"Vulgar fraction one half",frac34:"Vulgar fraction three quarters",iquest:"Inverted question mark",Agrave:"Latin capital letter A with grave accent",
Aacute:"Latin capital letter A with acute accent",Acirc:"Latin capital letter A with circumflex",Atilde:"Latin capital letter A with tilde",Auml:"Latin capital letter A with diaeresis",Aring:"Latin capital letter A with ring above",AElig:"Latin capital letter Æ",Ccedil:"Latin capital letter C with cedilla",Egrave:"Latin capital letter E with grave accent",Eacute:"Latin capital letter E with acute accent",Ecirc:"Latin capital letter E with circumflex",Euml:"Latin capital letter E with diaeresis",Igrave:"Latin capital letter I with grave accent",
Iacute:"Latin capital letter I with acute accent",Icirc:"Latin capital letter I with circumflex",Iuml:"Latin capital letter I with diaeresis",ETH:"Latin capital letter Eth",Ntilde:"Latin capital letter N with tilde",Ograve:"Latin capital letter O with grave accent",Oacute:"Latin capital letter O with acute accent",Ocirc:"Latin capital letter O with circumflex",Otilde:"Latin capital letter O with tilde",Ouml:"Latin capital letter O with diaeresis",times:"Multiplication sign",Oslash:"Latin capital letter O with stroke",
Ugrave:"Latin capital letter U with grave accent",Uacute:"Latin capital letter U with acute accent",Ucirc:"Latin capital letter U with circumflex",Uuml:"Latin capital letter U with diaeresis",Yacute:"Latin capital letter Y with acute accent",THORN:"Latin capital letter Thorn",szlig:"Latin small letter sharp s",agrave:"Latin small letter a with grave accent",aacute:"Latin small letter a with acute accent",acirc:"Latin small letter a with circumflex",atilde:"Latin small letter a with tilde",auml:"Latin small letter a with diaeresis",
aring:"Latin small letter a with ring above",aelig:"Latin small letter æ",ccedil:"Latin small letter c with cedilla",egrave:"Latin small letter e with grave accent",eacute:"Latin small letter e with acute accent",ecirc:"Latin small letter e with circumflex",euml:"Latin small letter e with diaeresis",igrave:"Latin small letter i with grave accent",iacute:"Latin small letter i with acute accent",icirc:"Latin small letter i with circumflex",iuml:"Latin small letter i with diaeresis",eth:"Latin small letter eth",
ntilde:"Latin small letter n with tilde",ograve:"Latin small letter o with grave accent",oacute:"Latin small letter o with acute accent",ocirc:"Latin small letter o with circumflex",otilde:"Latin small letter o with tilde",ouml:"Latin small letter o with diaeresis",divide:"Division sign",oslash:"Latin small letter o with stroke",ugrave:"Latin small letter u with grave accent",uacute:"Latin small letter u with acute accent",ucirc:"Latin small letter u with circumflex",uuml:"Latin small letter u with diaeresis",
yacute:"Latin small letter y with acute accent",thorn:"Latin small letter thorn",yuml:"Latin small letter y with diaeresis",OElig:"Latin capital ligature OE",oelig:"Latin small ligature oe",372:"Latin capital letter W with circumflex",374:"Latin capital letter Y with circumflex",373:"Latin small letter w with circumflex",375:"Latin small letter y with circumflex",sbquo:"Single low-9 quotation mark",8219:"Single high-reversed-9 quotation mark",bdquo:"Double low-9 quotation mark",hellip:"Horizontal ellipsis",
trade:"Trade mark sign",9658:"Black right-pointing pointer",bull:"Bullet",rarr:"Rightwards arrow",rArr:"Rightwards double arrow",hArr:"Left right double arrow",diams:"Black diamond suit",asymp:"Almost equal to"}); | PypiClean |
/MCTSRLV2-0.8-py3-none-any.whl/MCTSRLV1/ChildrenND.py | from MCTSRLV2.Extent import CalculateND
import pandas as pd
class ChildrenND:
def __init__(self, dataset, Root, Pattern, minimum_support):
self.df = dataset
self.Root = Root
self.Pattern = Pattern
self.constant = pd.DataFrame(Pattern)
self.minimum_support = minimum_support
# 1: Direct children, to build the tree:
def Direct_ChildrenND(self):
# Returns:
Direct_Children = []
# First we will represent each feature as a list:
Columns = []
df = self.df.iloc[:, :-1]
for col in df:
column = df[col].unique().tolist()
Columns.append(sorted(column))
obj = CalculateND(dataset=df, pattern=self.Pattern, root=self.Root, mValue=None, label=None)
Extent_of_parent = obj.extentND()[1]
        # For each dimension, reset the pattern to its parent and try splitting that dimension's interval in half:
for i in range(len(self.Pattern)):
self.Pattern = self.constant.values.tolist()
Current_Column = Columns[i]
Current_Interval = self.Pattern[i]
Left = Current_Column.index(Current_Interval[0])
Right = Current_Column.index(Current_Interval[1])
            if Right - Left > 1:
                Split_index = (Left + Right) // 2
                Child1 = [Current_Column[Left], Current_Column[Split_index]]
                Child2 = [Current_Column[Split_index], Current_Column[Right]]
                # Compose the full child pattern from the left half:
                self.Pattern[i] = Child1
                obj = CalculateND(dataset=df, pattern=self.Pattern, root=self.Root, mValue=None,
                                  label=None)
                extent = obj.extentND()[1]
                # Keep the child only if it is frequent and its support differs from the parent's:
                if extent >= self.minimum_support and extent != Extent_of_parent:
                    Direct_Children.append(self.Pattern)
                # Compose the full child pattern from the right half:
                self.Pattern = self.constant.values.tolist()
                self.Pattern[i] = Child2
                obj2 = CalculateND(dataset=df, pattern=self.Pattern, root=self.Root, mValue=None,
                                   label=None)
                extent2 = obj2.extentND()[1]
                if extent2 >= self.minimum_support and extent2 != Extent_of_parent:
                    Direct_Children.append(self.Pattern)
return Direct_Children
# Allow the generation of the non-frequent patterns, to stop the simulation.
def Direct_ChildrenND_Simulation(self):
Direct_Children = []
Columns = []
df = self.df.iloc[:, :-1]
for col in df:
column = df[col].unique().tolist()
Columns.append(sorted(column))
obj = CalculateND(dataset=df, pattern=self.Pattern, root=self.Root, mValue=None, label=None)
Extent_of_parent = obj.extentND()[1]
for i in range(len(self.Pattern)):
self.Pattern = self.constant.values.tolist()
Current_Column = Columns[i]
Current_Interval = self.Pattern[i]
Left = Current_Column.index(Current_Interval[0])
Right = Current_Column.index(Current_Interval[1])
            if Right - Left > 1:
                Split_index = (Left + Right) // 2
                Child1 = [Current_Column[Left], Current_Column[Split_index]]
                Child2 = [Current_Column[Split_index], Current_Column[Right]]
                self.Pattern[i] = Child1
                obj = CalculateND(dataset=df, pattern=self.Pattern, root=self.Root, mValue=None,
                                  label=None)
                # Keep any child whose support differs from the parent's, including
                # non-frequent ones, so the simulation can terminate on them:
                if obj.extentND()[1] != Extent_of_parent:
                    Direct_Children.append(self.Pattern)
                self.Pattern = self.constant.values.tolist()
                self.Pattern[i] = Child2
                obj2 = CalculateND(dataset=df, pattern=self.Pattern, root=self.Root, mValue=None,
                                   label=None)
                if obj2.extentND()[1] != Extent_of_parent:
                    Direct_Children.append(self.Pattern)
return Direct_Children | PypiClean |
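Both methods above refine a pattern by halving one dimension's interval at the midpoint of that column's sorted unique values. The helper below is an illustrative standalone sketch of that split step (`split_interval` is not part of the package):

```python
def split_interval(column, interval):
    """Split [lo, hi] at the midpoint of the sorted unique values in `column`.

    Returns the two half-intervals, or None when the interval spans fewer
    than two steps and cannot be refined further.
    """
    left = column.index(interval[0])
    right = column.index(interval[1])
    if right - left <= 1:
        return None
    mid = (left + right) // 2
    return [column[left], column[mid]], [column[mid], column[right]]

# Example: six sorted unique values for one numeric feature.
column = [1, 2, 3, 5, 8, 13]
children = split_interval(column, [1, 13])  # -> ([1, 3], [3, 13])
```

Note that the split is by index into the value list, not by numeric midpoint, which matches how `Split_index` is computed above.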
/MergePythonSDK.ticketing-2.2.2-py3-none-any.whl/MergePythonSDK/accounting/model/issue_status_enum.py | import re # noqa: F401
import sys # noqa: F401
from typing import (
Optional,
Union,
List,
Dict,
)
from MergePythonSDK.shared.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
OpenApiModel,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
)
from MergePythonSDK.shared.exceptions import ApiAttributeError
from MergePythonSDK.shared.model_utils import import_model_by_name
from MergePythonSDK.shared.model_utils import MergeEnumType
class IssueStatusEnum(ModelNormal, MergeEnumType):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
('value',): {
'ONGOING': "ONGOING",
'RESOLVED': "RESOLVED",
},
}
validations = {
}
additional_properties_type = None
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
defined_types = {
'value': (str,),
}
return defined_types
@cached_property
def discriminator():
return None
attribute_map = {
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, value, *args, **kwargs): # noqa: E501
"""IssueStatusEnum - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', True)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.value = value
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, value, *args, **kwargs): # noqa: E501
"""IssueStatusEnum - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
for arg in args:
if isinstance(arg, dict):
kwargs.update(arg)
else:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.value = value | PypiClean |
/MongoContentManager-1.0.0.tar.gz/MongoContentManager-1.0.0/mongo_notebook_manager/notebooks_importer.py | import argparse
import datetime
import fnmatch
import os
import logging
from pymongo import MongoClient
def insert_or_update(db, name, content):
now = datetime.datetime.now()
sname = name.strip('/').split('/')
data = {'path': '' if len(sname) < 2 else sname[0],
'type': 'notebook',
'name': sname[-1], 'content': content,
'created': now, 'lastModified': now}
    # Match on the leaf name, consistent with how the document is stored:
    if not db.find_one({'name': sname[-1], 'type': 'notebook'}):
        db.insert(data)
        return 'Notebook "{}" created'.format(name)
    else:
        db.update({'name': sname[-1], 'type': 'notebook'}, data)
        return 'Notebook "{}" updated'.format(name)
def prepare_directories(db, path, root, dirnames):
root1 = root.replace(path, '').strip('/')
for dirname in dirnames:
data = {'name': dirname,
'type': 'directory',
'created': datetime.datetime.now(),
'lastModified': datetime.datetime.now(),
'path': root1}
if not db.find_one({'path': root1, 'name': dirname}):
db.insert(data)
logging.info('Directory "{}" created'.format(dirname))
def get_notebooks(db, path, ext):
for root, dirnames, filenames in os.walk(path):
prepare_directories(db, path, root, dirnames)
for filename in fnmatch.filter(filenames, ext):
filepath = os.path.join(root, filename)
yield [filepath.replace(path, ''), open(filepath).read()]
def import_notebooks(db, path, ext):
    files = list(get_notebooks(db, path, ext))
    if not files:
        logging.error('No notebooks found')
        return
    for notebook in files:
        message = insert_or_update(db, *notebook)
        logging.info(message)
def main():
parser = argparse.ArgumentParser(description='FS2MongoDB importer')
parser.add_argument('--mongodb', required=True, type=str,
help='MongoDB connection string')
parser.add_argument('--path', required=True, default='.',
type=str,
                        help='Source directory with notebooks')
parser.add_argument('--database', default='notebooks',
type=str,
help='MongoDB Database name (default: notebooks)')
parser.add_argument('--collection', default='notebooks',
type=str,
help='MongoDB Collection name (default: notebooks)')
parser.add_argument('--ext', default='*.ipynb',
type=str,
                        help='Notebook extension (default: *.ipynb)')
args = parser.parse_args()
conn = MongoClient(args.mongodb)[args.database][args.collection]
import_notebooks(conn, args.path, args.ext)
if __name__ == '__main__':
main() | PypiClean |
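`insert_or_update` derives the stored `path` and `name` fields from the slash-separated notebook name. A hypothetical standalone version of that decomposition (note a quirk of the original logic: for paths deeper than two segments, only the first segment is kept as the path):

```python
def split_notebook_name(name):
    """Mirror the path/name split used when storing a notebook document."""
    parts = name.strip('/').split('/')
    # Only the first segment is kept as the path; intermediate segments drop.
    path = '' if len(parts) < 2 else parts[0]
    return path, parts[-1]
```

For example, `split_notebook_name('/folder/nb.ipynb')` yields `('folder', 'nb.ipynb')`.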
/DTP_Emulator-1.8-py3-none-any.whl/simple_emulator/common/sender_obs.py |
import numpy as np
# The monitor interval class used to pass data from the PCC subsystem to
# the machine learning module.
#
class SenderMonitorInterval():
def __init__(self,
sender_id,
bytes_sent=0.0,
bytes_acked=0.0,
bytes_lost=0.0,
send_start=0.0,
send_end=0.0,
recv_start=0.0,
recv_end=0.0,
                 rtt_samples=None,
                 packet_size=1500):
        self.features = {}
        self.sender_id = sender_id
        self.bytes_acked = bytes_acked
        self.bytes_sent = bytes_sent
        self.bytes_lost = bytes_lost
        self.send_start = send_start
        self.send_end = send_end
        self.recv_start = recv_start
        self.recv_end = recv_end
        # Avoid sharing one mutable default list across instances:
        self.rtt_samples = rtt_samples if rtt_samples is not None else []
self.packet_size = packet_size
def get(self, feature):
if feature in self.features.keys():
return self.features[feature]
else:
result = SenderMonitorIntervalMetric.eval_by_name(feature, self)
self.features[feature] = result
return result
# Convert the observation parts of the monitor interval into a numpy array
def as_array(self, features):
return np.array([self.get(f) / SenderMonitorIntervalMetric.get_by_name(f).scale for f in features])
class SenderHistory():
def __init__(self, length, features, sender_id):
self.features = features
self.values = []
self.sender_id = sender_id
for i in range(0, length):
self.values.append(SenderMonitorInterval(self.sender_id))
def step(self, new_mi):
self.values.pop(0)
self.values.append(new_mi)
def as_array(self):
arrays = []
for mi in self.values:
arrays.append(mi.as_array(self.features))
arrays = np.array(arrays).flatten()
return arrays
class SenderMonitorIntervalMetric():
_all_metrics = {}
def __init__(self, name, func, min_val, max_val, scale=1.0):
self.name = name
self.func = func
self.min_val = min_val
self.max_val = max_val
self.scale = scale
SenderMonitorIntervalMetric._all_metrics[name] = self
def eval(self, mi):
return self.func(mi)
    @staticmethod
    def eval_by_name(name, mi):
        return SenderMonitorIntervalMetric._all_metrics[name].eval(mi)
    @staticmethod
    def get_by_name(name):
        return SenderMonitorIntervalMetric._all_metrics[name]
def get_min_obs_vector(feature_names):
print("Getting min obs for %s" % feature_names)
result = []
for feature_name in feature_names:
feature = SenderMonitorIntervalMetric.get_by_name(feature_name)
result.append(feature.min_val)
return np.array(result)
def get_max_obs_vector(feature_names):
result = []
for feature_name in feature_names:
feature = SenderMonitorIntervalMetric.get_by_name(feature_name)
result.append(feature.max_val)
return np.array(result)
def _mi_metric_recv_rate(mi):
dur = mi.get("recv dur")
if dur > 0.0:
return 8.0 * (mi.bytes_acked - mi.packet_size) / dur
return 0.0
def _mi_metric_recv_dur(mi):
return mi.recv_end - mi.recv_start
def _mi_metric_avg_latency(mi):
if len(mi.rtt_samples) > 0:
return np.mean(mi.rtt_samples)
return 0.0
def _mi_metric_send_rate(mi):
dur = mi.get("send dur")
if dur > 0.0:
return 8.0 * mi.bytes_sent / dur
return 0.0
def _mi_metric_send_dur(mi):
return mi.send_end - mi.send_start
def _mi_metric_loss_ratio(mi):
if mi.bytes_lost + mi.bytes_acked > 0:
return mi.bytes_lost / (mi.bytes_lost + mi.bytes_acked)
return 0.0
def _mi_metric_latency_increase(mi):
half = int(len(mi.rtt_samples) / 2)
if half >= 1:
return np.mean(mi.rtt_samples[half:]) - np.mean(mi.rtt_samples[:half])
return 0.0
def _mi_metric_ack_latency_inflation(mi):
dur = mi.get("recv dur")
latency_increase = mi.get("latency increase")
if dur > 0.0:
return latency_increase / dur
return 0.0
def _mi_metric_sent_latency_inflation(mi):
dur = mi.get("send dur")
latency_increase = mi.get("latency increase")
if dur > 0.0:
return latency_increase / dur
return 0.0
_conn_min_latencies = {}
def _mi_metric_conn_min_latency(mi):
latency = mi.get("avg latency")
if mi.sender_id in _conn_min_latencies.keys():
prev_min = _conn_min_latencies[mi.sender_id]
if latency == 0.0:
return prev_min
else:
if latency < prev_min:
_conn_min_latencies[mi.sender_id] = latency
return latency
else:
return prev_min
else:
if latency > 0.0:
_conn_min_latencies[mi.sender_id] = latency
return latency
else:
return 0.0
def _mi_metric_send_ratio(mi):
thpt = mi.get("recv rate")
send_rate = mi.get("send rate")
if (thpt > 0.0) and (send_rate < 1000.0 * thpt):
return send_rate / thpt
return 1.0
def _mi_metric_latency_ratio(mi):
min_lat = mi.get("conn min latency")
cur_lat = mi.get("avg latency")
if min_lat > 0.0:
return cur_lat / min_lat
return 1.0
SENDER_MI_METRICS = [
SenderMonitorIntervalMetric("send rate", _mi_metric_send_rate, 0.0, 1e9, 1e7),
SenderMonitorIntervalMetric("recv rate", _mi_metric_recv_rate, 0.0, 1e9, 1e7),
SenderMonitorIntervalMetric("recv dur", _mi_metric_recv_dur, 0.0, 100.0),
SenderMonitorIntervalMetric("send dur", _mi_metric_send_dur, 0.0, 100.0),
SenderMonitorIntervalMetric("avg latency", _mi_metric_avg_latency, 0.0, 100.0),
SenderMonitorIntervalMetric("loss ratio", _mi_metric_loss_ratio, 0.0, 1.0),
SenderMonitorIntervalMetric("ack latency inflation", _mi_metric_ack_latency_inflation, -1.0, 10.0),
SenderMonitorIntervalMetric("sent latency inflation", _mi_metric_sent_latency_inflation, -1.0, 10.0),
SenderMonitorIntervalMetric("conn min latency", _mi_metric_conn_min_latency, 0.0, 100.0),
SenderMonitorIntervalMetric("latency increase", _mi_metric_latency_increase, 0.0, 100.0),
SenderMonitorIntervalMetric("latency ratio", _mi_metric_latency_ratio, 1.0, 10000.0),
SenderMonitorIntervalMetric("send ratio", _mi_metric_send_ratio, 0.0, 1000.0)
] | PypiClean |
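The file above registers every metric in a class-level dict and evaluates it lazily, with per-interval caching in `SenderMonitorInterval.get`. A minimal self-contained sketch of that registry-plus-cache pattern (class and metric names here are illustrative, not the module's API):

```python
class MetricRegistry:
    """Name -> function registry, evaluated on demand."""
    _metrics = {}

    @classmethod
    def register(cls, name, func):
        cls._metrics[name] = func

    @classmethod
    def eval_by_name(cls, name, sample):
        return cls._metrics[name](sample)

class Sample:
    def __init__(self, data):
        self.data = data
        self._cache = {}

    def get(self, name):
        # Compute each metric at most once per sample, mirroring
        # the memoisation in SenderMonitorInterval.get.
        if name not in self._cache:
            self._cache[name] = MetricRegistry.eval_by_name(name, self)
        return self._cache[name]

def _loss_ratio(sample):
    lost, acked = sample.data["lost"], sample.data["acked"]
    return lost / (lost + acked) if lost + acked > 0 else 0.0

MetricRegistry.register("loss ratio", _loss_ratio)

sample = Sample({"lost": 5, "acked": 95})  # sample.get("loss ratio") -> 0.05
```

Registering metrics by name like this is what lets the history be assembled from a configurable list of feature strings, as `SenderHistory.as_array` does.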
/Mopidy_YouTube-3.7-py3-none-any.whl/mopidy_youtube/converters.py | from mopidy.models import Album, Artist, Track
# from mopidy_youtube import logger
from mopidy_youtube.data import format_playlist_uri, format_video_uri
def convert_video_to_track(
video, album_name: str = None, album_id: str = None, **kwargs
) -> Track:
try:
adjustedLength = video.length.get() * 1000
except Exception:
adjustedLength = 0
if not album_name:
album = video.album.get()
else:
album = {"name": album_name, "uri": f"yt:playlist:{album_id}"}
track = Track(
uri=format_video_uri(video.id),
name=video.title.get(),
artists=[
Artist(name=artist["name"], uri=artist["uri"])
for artist in video.artists.get()
],
album=Album(name=album["name"], uri=album["uri"]),
length=adjustedLength,
comment=video.id,
track_no=video.track_no.get(),
**kwargs,
)
return track
# YouTube Music supports 'songs'; probably should take advantage and use
#
# def convert_ytmsong_to_track(
# video: youtube.Video, album_name: str, **kwargs
# ) -> Track:
#
# try:
# adjustedLength = video.length.get() * 1000
# except Exception:
# adjustedLength = 0
#
# return Track(
# uri=format_video_uri(video.id),
# name=video.title.get(),
# artists=[Artist(name=video.channel.get())],
# album=Album(name=album_name),
# length=adjustedLength,
# comment=video.id,
# **kwargs,
# )
def convert_playlist_to_album(playlist) -> Album:
return Album(
uri=format_playlist_uri(playlist.id),
name=playlist.title.get(),
artists=[
Artist(
name=playlist.channel.get()
) # f"YouTube Playlist ({playlist.video_count.get()} videos)")
],
num_tracks=playlist.video_count.get(),
)
# YouTube Music supports 'Albums'; probably should take advantage and use
#
# def convert_ytmalbum_to_album(album: youtube.Album) -> Album:
# return Album(
# uri=format_album_uri(album.id),
# name=f"{album.title.get()} (YouTube Music Album, {album.track_count.get()} tracks),
# artists=[
# Artist(
# # actual artists from the ytm album, including a name and uri
# )
# ],
# num_tracks=
# num_discs=
# date=
# musicbrainz_id=
# ) | PypiClean |
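`convert_video_to_track` converts the video length from seconds to the milliseconds Mopidy expects, falling back to 0 when the length is unavailable. That guard can be sketched as a standalone helper (illustrative only, not part of the extension):

```python
def safe_length_ms(get_length):
    """Return track length in ms, or 0 if the getter fails or yields no value."""
    try:
        return get_length() * 1000
    except Exception:
        # Covers both getter errors and a None length (None * 1000 raises).
        return 0
```

For example, `safe_length_ms(lambda: 213)` returns `213000`, while a getter returning `None` yields `0`.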
/bevy-2.0.1-py3-none-any.whl/bevy/injectors/functions.py | from asyncio import iscoroutinefunction
from functools import wraps
from inspect import signature, get_annotations
from typing import Any, Callable, TypeVar, ParamSpec, TypeAlias
from bevy.injectors.classes import Dependency
from bevy.repository import get_repository
_P = ParamSpec("_P")
_R = TypeVar("_R")
_C: TypeAlias = Callable[_P, _R]
def inject(func: _C) -> Callable[_P, _R]:
"""This decorator creates a wrapper function that scans the decorated function for dependencies. When the function
is later called any dependency parameters that weren't passed in will be injected by the wrapper function. This can
intelligently handle positional only arguments, positional arguments, named arguments, and keyword only arguments.
All parameters that should be injected need to be given a type hint that is supported by a provider and must have a
default value that is an instance of `bevy.injectors.classes.Dependency`.
"""
sig = signature(func)
ns = _get_function_namespace(func)
annotations = get_annotations(func, globals=ns, eval_str=True)
# Determine which parameters have a declared dependency
inject_parameters = {
name: annotations[name]
for name, parameter in sig.parameters.items()
if isinstance(parameter.default, Dependency)
}
@wraps(func)
def injector(*args: _P.args, **kwargs: _P.kwargs) -> _R:
params = sig.bind_partial(*args, **kwargs)
repo = get_repository()
# Get instances from the repo to fill all missing dependency parameters
params.arguments |= {
name: repo.get(dependency_type)
for name, dependency_type in inject_parameters.items()
if name not in params.arguments
}
return func(*params.args, **params.kwargs)
if iscoroutinefunction(func):
@wraps(func)
async def async_injector(*args, **kwargs):
return await injector(*args, **kwargs)
return async_injector
return injector
def _get_function_namespace(func: _C) -> dict[str, Any]:
"""Get the variables that the function had in its name space."""
return _unwrap_function(func).__globals__
def _unwrap_function(func: _C) -> _C:
"""Attempt to unwrap a function from all decorators."""
if hasattr(func, "__func__"):
return _unwrap_function(func.__func__)
if hasattr(func, "__wrapped__"):
return _unwrap_function(func.__wrapped__)
return func | PypiClean |
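The `inject` wrapper above binds the arguments that were passed, then pulls any parameter whose default is a `Dependency` from the repository by its type hint. Below is a stripped-down, self-contained sketch of the same mechanism, using a plain dict as the repository and a local sentinel class; the names are illustrative, not bevy's actual API:

```python
import inspect
from functools import wraps

class Dependency:
    """Sentinel default value marking a parameter for injection."""

_repository = {}  # maps a type to its shared instance

def inject(func):
    sig = inspect.signature(func)
    # Parameters whose default is a Dependency instance are injected by type hint.
    targets = {
        name: param.annotation
        for name, param in sig.parameters.items()
        if isinstance(param.default, Dependency)
    }

    @wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind_partial(*args, **kwargs)
        for name, dep_type in targets.items():
            if name not in bound.arguments:
                bound.arguments[name] = _repository[dep_type]
        return func(*bound.args, **bound.kwargs)

    return wrapper

class Config:
    url = "sqlite://"

_repository[Config] = Config()

@inject
def connect(config: Config = Dependency()):
    return config.url
```

Calling `connect()` fills `config` from the repository, while an explicitly passed argument is left untouched; this is the same bind-then-fill logic `injector` performs with `sig.bind_partial`.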
/123_object_detection-0.1.tar.gz/123_object_detection-0.1/object_detection/builders/post_processing_builder.py |
"""Builder function for post processing operations."""
import functools
import tensorflow.compat.v1 as tf
from object_detection.builders import calibration_builder
from object_detection.core import post_processing
from object_detection.protos import post_processing_pb2
def build(post_processing_config):
"""Builds callables for post-processing operations.
Builds callables for non-max suppression, score conversion, and (optionally)
calibration based on the configuration.
Non-max suppression callable takes `boxes`, `scores`, and optionally
  `clip_window`, `parallel_iterations`, `masks`, and `scope` as inputs. It returns
`nms_boxes`, `nms_scores`, `nms_classes` `nms_masks` and `num_detections`. See
post_processing.batch_multiclass_non_max_suppression for the type and shape
of these tensors.
Score converter callable should be called with `input` tensor. The callable
returns the output from one of 3 tf operations based on the configuration -
tf.identity, tf.sigmoid or tf.nn.softmax. If a calibration config is provided,
score_converter also applies calibration transformations, as defined in
calibration_builder.py. See tensorflow documentation for argument and return
value descriptions.
Args:
post_processing_config: post_processing.proto object containing the
parameters for the post-processing operations.
Returns:
non_max_suppressor_fn: Callable for non-max suppression.
score_converter_fn: Callable for score conversion.
Raises:
ValueError: if the post_processing_config is of incorrect type.
"""
if not isinstance(post_processing_config, post_processing_pb2.PostProcessing):
raise ValueError('post_processing_config not of type '
'post_processing_pb2.Postprocessing.')
non_max_suppressor_fn = _build_non_max_suppressor(
post_processing_config.batch_non_max_suppression)
score_converter_fn = _build_score_converter(
post_processing_config.score_converter,
post_processing_config.logit_scale)
if post_processing_config.HasField('calibration_config'):
score_converter_fn = _build_calibrated_score_converter(
score_converter_fn,
post_processing_config.calibration_config)
return non_max_suppressor_fn, score_converter_fn
def _build_non_max_suppressor(nms_config):
"""Builds non-max suppresson based on the nms config.
Args:
nms_config: post_processing_pb2.PostProcessing.BatchNonMaxSuppression proto.
Returns:
non_max_suppressor_fn: Callable non-max suppressor.
Raises:
ValueError: On incorrect iou_threshold or on incompatible values of
max_total_detections and max_detections_per_class or on negative
soft_nms_sigma.
"""
if nms_config.iou_threshold < 0 or nms_config.iou_threshold > 1.0:
raise ValueError('iou_threshold not in [0, 1.0].')
if nms_config.max_detections_per_class > nms_config.max_total_detections:
raise ValueError('max_detections_per_class should be no greater than '
'max_total_detections.')
if nms_config.soft_nms_sigma < 0.0:
raise ValueError('soft_nms_sigma should be non-negative.')
if nms_config.use_combined_nms and nms_config.use_class_agnostic_nms:
raise ValueError('combined_nms does not support class_agnostic_nms.')
non_max_suppressor_fn = functools.partial(
post_processing.batch_multiclass_non_max_suppression,
score_thresh=nms_config.score_threshold,
iou_thresh=nms_config.iou_threshold,
max_size_per_class=nms_config.max_detections_per_class,
max_total_size=nms_config.max_total_detections,
use_static_shapes=nms_config.use_static_shapes,
use_class_agnostic_nms=nms_config.use_class_agnostic_nms,
max_classes_per_detection=nms_config.max_classes_per_detection,
soft_nms_sigma=nms_config.soft_nms_sigma,
use_partitioned_nms=nms_config.use_partitioned_nms,
use_combined_nms=nms_config.use_combined_nms,
change_coordinate_frame=nms_config.change_coordinate_frame,
use_hard_nms=nms_config.use_hard_nms,
use_cpu_nms=nms_config.use_cpu_nms)
return non_max_suppressor_fn
def _score_converter_fn_with_logit_scale(tf_score_converter_fn, logit_scale):
"""Create a function to scale logits then apply a Tensorflow function."""
def score_converter_fn(logits):
scaled_logits = tf.multiply(logits, 1.0 / logit_scale, name='scale_logits')
return tf_score_converter_fn(scaled_logits, name='convert_scores')
score_converter_fn.__name__ = '%s_with_logit_scale' % (
tf_score_converter_fn.__name__)
return score_converter_fn
def _build_score_converter(score_converter_config, logit_scale):
"""Builds score converter based on the config.
Builds one of [tf.identity, tf.sigmoid, tf.softmax] score converters based on
the config.
Args:
score_converter_config: post_processing_pb2.PostProcessing.score_converter.
logit_scale: temperature to use for SOFTMAX score_converter.
Returns:
Callable score converter op.
Raises:
ValueError: On unknown score converter.
"""
if score_converter_config == post_processing_pb2.PostProcessing.IDENTITY:
return _score_converter_fn_with_logit_scale(tf.identity, logit_scale)
if score_converter_config == post_processing_pb2.PostProcessing.SIGMOID:
return _score_converter_fn_with_logit_scale(tf.sigmoid, logit_scale)
if score_converter_config == post_processing_pb2.PostProcessing.SOFTMAX:
return _score_converter_fn_with_logit_scale(tf.nn.softmax, logit_scale)
raise ValueError('Unknown score converter.')
def _build_calibrated_score_converter(score_converter_fn, calibration_config):
"""Wraps a score_converter_fn, adding a calibration step.
Builds a score converter function with a calibration transformation according
to calibration_builder.py. The score conversion function may be applied before
or after the calibration transformation, depending on the calibration method.
If the method is temperature scaling, the score conversion is
after the calibration transformation. Otherwise, the score conversion is
before the calibration transformation. Calibration applies positive monotonic
transformations to inputs (i.e. score ordering is strictly preserved or
adjacent scores are mapped to the same score). When calibration is
class-agnostic, the highest-scoring class remains unchanged, unless two
adjacent scores are mapped to the same value and one class is arbitrarily
selected to break the tie. In per-class calibration, it's possible (though
rare in practice) that the highest-scoring class will change, since positive
monotonicity is only required to hold within each class.
Args:
score_converter_fn: callable that takes logit scores as input.
calibration_config: post_processing_pb2.PostProcessing.calibration_config.
Returns:
Callable calibrated score converter op.
"""
calibration_fn = calibration_builder.build(calibration_config)
def calibrated_score_converter_fn(logits):
if (calibration_config.WhichOneof('calibrator') ==
'temperature_scaling_calibration'):
calibrated_logits = calibration_fn(logits)
return score_converter_fn(calibrated_logits)
else:
converted_logits = score_converter_fn(logits)
return calibration_fn(converted_logits)
calibrated_score_converter_fn.__name__ = (
'calibrate_with_%s' % calibration_config.WhichOneof('calibrator'))
return calibrated_score_converter_fn | PypiClean |
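The `_score_converter_fn_with_logit_scale` wrapper above divides logits by `logit_scale` (a temperature) before applying the converter. The effect can be sketched without TensorFlow; the function name and sample values below are illustrative, not part of the Object Detection API:

```python
import math

def softmax_with_logit_scale(logits, logit_scale=1.0):
    # Scale logits by 1/logit_scale, mirroring tf.multiply(logits, 1.0 / logit_scale),
    # then apply a numerically stable softmax.
    scaled = [x / logit_scale for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

probs_sharp = softmax_with_logit_scale([2.0, 1.0, 0.5], logit_scale=1.0)
probs_flat = softmax_with_logit_scale([2.0, 1.0, 0.5], logit_scale=10.0)
# A larger logit_scale (temperature) flattens the distribution, which is why
# the SOFTMAX score converter exposes it as a tuning knob.
```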
/Kallithea-0.7.0.tar.gz/Kallithea-0.7.0/kallithea/lib/pidlock.py |
import errno
import os
from multiprocessing.util import Finalize
from kallithea.lib.compat import kill
class LockHeld(Exception):
pass
class DaemonLock(object):
"""daemon locking
USAGE:
try:
l = DaemonLock('/path/tolockfile',desc='test lock')
main()
l.release()
except LockHeld:
sys.exit(1)
"""
def __init__(self, file_, callbackfn=None,
desc='daemon lock', debug=False):
self.pidfile = file_
self.callbackfn = callbackfn
self.desc = desc
self.debug = debug
self.held = False
# run the lock automatically!
self.lock()
self._finalize = Finalize(self, DaemonLock._on_finalize,
args=(self, debug), exitpriority=10)
@staticmethod
def _on_finalize(lock, debug):
if lock.held:
if debug:
print('lock held finalizing and running lock.release()')
lock.release()
def lock(self):
"""
locking function; if the lock is already held by a running
process it will raise a LockHeld exception
"""
lockname = str(os.getpid())
if self.debug:
print('running lock')
self.trylock()
self.makelock(lockname, self.pidfile)
return True
def trylock(self):
running_pid = False
if self.debug:
print('checking for already running process')
try:
with open(self.pidfile, 'r') as f:
try:
running_pid = int(f.readline())
except ValueError:
running_pid = -1
if self.debug:
print('lock file present running_pid: %s, '
'checking for execution' % (running_pid,))
# Now we check the PID from lock file matches to the current
# process PID
if running_pid:
try:
kill(running_pid, 0)
except OSError as exc:
if exc.errno in (errno.ESRCH, errno.EPERM):
print ("Lock File is there but"
" the program is not running")
print("Removing lock file for the: %s" % running_pid)
self.release()
else:
raise
else:
print("You already have an instance of the program running")
print("It is running as process %s" % running_pid)
raise LockHeld()
except IOError as e:
if e.errno != 2:
raise
def release(self):
"""releases the lock by removing the pidfile
"""
if self.debug:
print('trying to release the pidlock')
if self.callbackfn:
#execute callback function on release
if self.debug:
print('executing callback function %s' % self.callbackfn)
self.callbackfn()
try:
if self.debug:
print('removing pidfile %s' % self.pidfile)
os.remove(self.pidfile)
self.held = False
except OSError as e:
if self.debug:
print('removing pidfile failed %s' % e)
pass
def makelock(self, lockname, pidfile):
"""
this function will make an actual lock
:param lockname: pid of the current process, stored as the lock contents
:param pidfile: the file to write the pid in
"""
if self.debug:
print('creating a file %s and pid: %s' % (pidfile, lockname))
dir_, file_ = os.path.split(pidfile)
if not os.path.isdir(dir_):
os.makedirs(dir_)
with open(self.pidfile, 'w') as f:
f.write(lockname)
self.held = True | PypiClean |
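The `trylock` logic above decides whether a stale pidfile can be reclaimed by probing the recorded pid with signal 0. A self-contained sketch of that probe (the helper name is illustrative, not part of Kallithea):

```python
import errno
import os

def lock_is_stale(pid):
    # Probe the pid with signal 0: nothing is delivered, but the kernel
    # still performs the existence and permission checks.  Mirroring
    # trylock() above, both ESRCH (no such process) and EPERM (process
    # exists but belongs to another user) are treated as a reclaimable,
    # stale lock.
    try:
        os.kill(pid, 0)
    except OSError as exc:
        if exc.errno in (errno.ESRCH, errno.EPERM):
            return True
        raise
    return False
```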
/Biosig-2.1.0.tar.gz/Biosig-2.1.0/README.md | # Biosig Package
Biosig contains tools for processing biomedical signals
like EEG, ECG/EKG, EMG, EOG, and polysomnograms (PSG). Currently,
it provides import filters for about 50 different file formats,
including EDF, GDF, BDF, HL7aECG, and SCP (EN1064).
More information is available at the
[Biosig project homepage](https://biosig.sourceforge.io).
# Installation:
## GNU/Linux, Unix,
pip install https://pub.ist.ac.at/~schloegl/biosig/prereleases/Biosig-2.0.4.tar.gz
## MacOSX/Homebrew
brew install biosig
pip install numpy
pip install https://pub.ist.ac.at/~schloegl/biosig/prereleases/Biosig-2.0.4.tar.gz
## MS-Windows
currently not supported natively; installing it through cygwin, mingw,
or WSL might work.
# Usage:
import biosig
import json
# read header/metainformation
HDR=json.loads(biosig.header(FILENAME))
# read data
DATA=biosig.data(FILENAME)
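Since `biosig.header` returns the metadata as a JSON string, the result can be post-processed with the standard `json` module. The field names below are illustrative stand-ins — the actual keys depend on the file format being read:

```python
import json

# Stand-in for the string biosig.header(FILENAME) would return;
# the keys here are hypothetical examples.
header_json = '{"TYPE": "GDF", "Samplingrate": 256.0, "NumberOfChannels": 2}'

HDR = json.loads(header_json)
fs = HDR["Samplingrate"]        # sampling rate in Hz
n_ch = HDR["NumberOfChannels"]  # channel count
```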
| PypiClean |
/MorningstarAutoTestingFramework-0.1.2.zip/MorningstarAutoTestingFramework-0.1.2/Src/XMLFunction/ParseXMLBySAX.py | from xml.sax import handler, parseString
import sys
import pymysql
reload(sys)
sys.setdefaultencoding('utf-8')
in_sql = "insert into person(id,sex,address,fansNum,summary,wbNum,gzNum,blog,edu,work,renZh,brithday) \
values(%s, %s, %s, %s, %s, %s,%s, %s, %s, %s, %s, %s)"
fields = ("id", "sex", "address", "fansNum", "summary", "wbNum", "gzNum", "blog", "edu", "work", "renZh", "brithday")
object_instance_name = "person" # name of the XML element that marks one record
class Db_Connect:
def __init__(self, db_host, user, pwd, db_name, charset="utf8", use_unicode=True):
print "init begin"
print db_host, user, pwd, db_name, charset, use_unicode
# self.conn = MySQLdb.Connection(db_host, user, pwd, db_name, charset=charset, use_unicode=use_unicode)
print "init end"
def insert(self, sql):
try:
n = self.conn.cursor().execute(sql)
return n
except pymysql.Warning, e:
print "Error: execute sql '", sql, "' failed"
def close(self):
# self.conn.close()
print "Close."
class ObjectHandler(handler.ContentHandler):
def __init__(self, db_ops):
"""
从外部传入的处理对象,此处是以数据库为处理模式
:param db_ops:
"""
self.object_instance = {}
self.current_tag = ""
self.in_quote = 0
self.db_ops = db_ops
def startElement(self, name, attr):
if name == object_instance_name:
# attribute values could also be processed here
print attr["name"]
self.object_instance = {}
self.current_tag = name
self.in_quote = 1
def endElement(self, name):
if name == object_instance_name:
in_fields = tuple([('"''' + self.object_instance.get(i, "") + '"') for i in fields])
print in_fields
print in_sql % in_fields
# db_ops.insert(in_sql % (in_fields))
self.in_quote = 0
def characters(self, content):
# if this is text content between tags, update it into the map
if self.in_quote:
self.object_instance.update({self.current_tag: content})
if __name__ == "__main__":
f = open(r"D:\PythonCode\CodeLib\Template\persons.xml")
# if the source file is GBK, transcode it; if it is already utf-8, drop the decode/encode
db_ops = Db_Connect("127.0.0.1", "root", "root", "test")
parseString(f.read().decode("gbk").encode("utf-8"), ObjectHandler(db_ops))
f.close()
db_ops.close() | PypiClean |
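The handler above is written for Python 2 and couples parsing to a MySQL insert. The same SAX pattern — collect the text between tags into one dict per `person` element — looks like this in self-contained Python 3 (no database; element and a few field names kept from the original, the sample XML is made up):

```python
from xml.sax import handler, parseString

FIELDS = ("id", "sex", "address")

class PersonHandler(handler.ContentHandler):
    def __init__(self):
        super().__init__()
        self.people = []       # one dict per completed <person> element
        self.current = {}
        self.tag = ""
        self.in_person = False

    def startElement(self, name, attrs):
        if name == "person":
            self.current = {}
            self.in_person = True
        self.tag = name

    def characters(self, content):
        # Only record text while inside a <person> and under a known field tag.
        if self.in_person and self.tag in FIELDS and content.strip():
            self.current[self.tag] = content.strip()

    def endElement(self, name):
        if name == "person":
            self.people.append(self.current)
            self.in_person = False

xml = b"<persons><person><id>1</id><sex>f</sex><address>X</address></person></persons>"
h = PersonHandler()
parseString(xml, h)
# h.people == [{'id': '1', 'sex': 'f', 'address': 'X'}]
```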
/CleanAdminDjango-1.5.3.1.tar.gz/CleanAdminDjango-1.5.3.1/django/core/management/validation.py | import collections
import sys
from django.conf import settings
from django.core.management.color import color_style
from django.utils.encoding import force_str
from django.utils.itercompat import is_iterable
from django.utils import six
class ModelErrorCollection:
def __init__(self, outfile=sys.stdout):
self.errors = []
self.outfile = outfile
self.style = color_style()
def add(self, context, error):
self.errors.append((context, error))
self.outfile.write(self.style.ERROR(force_str("%s: %s\n" % (context, error))))
def get_validation_errors(outfile, app=None):
"""
Validates all models that are part of the specified app. If no app name is provided,
validates all models of all installed apps. Writes errors, if any, to outfile.
Returns number of errors.
"""
from django.db import models, connection
from django.db.models.loading import get_app_errors
from django.db.models.fields.related import RelatedObject
from django.db.models.deletion import SET_NULL, SET_DEFAULT
e = ModelErrorCollection(outfile)
for (app_name, error) in get_app_errors().items():
e.add(app_name, error)
for cls in models.get_models(app, include_swapped=True):
opts = cls._meta
# Check swappable attribute.
if opts.swapped:
try:
app_label, model_name = opts.swapped.split('.')
except ValueError:
e.add(opts, "%s is not of the form 'app_label.app_name'." % opts.swappable)
continue
if not models.get_model(app_label, model_name):
e.add(opts, "Model has been swapped out for '%s' which has not been installed or is abstract." % opts.swapped)
# No need to perform any other validation checks on a swapped model.
continue
# If this is the current User model, check known validation problems with User models
if settings.AUTH_USER_MODEL == '%s.%s' % (opts.app_label, opts.object_name):
# Check that the USERNAME FIELD isn't included in REQUIRED_FIELDS.
if cls.USERNAME_FIELD in cls.REQUIRED_FIELDS:
e.add(opts, 'The field named as the USERNAME_FIELD should not be included in REQUIRED_FIELDS on a swappable User model.')
# Check that the username field is unique
if not opts.get_field(cls.USERNAME_FIELD).unique:
e.add(opts, 'The USERNAME_FIELD must be unique. Add unique=True to the field parameters.')
# Model isn't swapped; do field-specific validation.
for f in opts.local_fields:
if f.name == 'id' and not f.primary_key and opts.pk.name == 'id':
e.add(opts, '"%s": You can\'t use "id" as a field name, because each model automatically gets an "id" field if none of the fields have primary_key=True. You need to either remove/rename your "id" field or add primary_key=True to a field.' % f.name)
if f.name.endswith('_'):
e.add(opts, '"%s": Field names cannot end with underscores, because this would lead to ambiguous queryset filters.' % f.name)
if (f.primary_key and f.null and
not connection.features.interprets_empty_strings_as_nulls):
# We cannot reliably check this for backends like Oracle which
# consider NULL and '' to be equal (and thus set up
# character-based fields a little differently).
e.add(opts, '"%s": Primary key fields cannot have null=True.' % f.name)
if isinstance(f, models.CharField):
try:
max_length = int(f.max_length)
if max_length <= 0:
e.add(opts, '"%s": CharFields require a "max_length" attribute that is a positive integer.' % f.name)
except (ValueError, TypeError):
e.add(opts, '"%s": CharFields require a "max_length" attribute that is a positive integer.' % f.name)
if isinstance(f, models.DecimalField):
decimalp_ok, mdigits_ok = False, False
decimalp_msg = '"%s": DecimalFields require a "decimal_places" attribute that is a non-negative integer.'
try:
decimal_places = int(f.decimal_places)
if decimal_places < 0:
e.add(opts, decimalp_msg % f.name)
else:
decimalp_ok = True
except (ValueError, TypeError):
e.add(opts, decimalp_msg % f.name)
mdigits_msg = '"%s": DecimalFields require a "max_digits" attribute that is a positive integer.'
try:
max_digits = int(f.max_digits)
if max_digits <= 0:
e.add(opts, mdigits_msg % f.name)
else:
mdigits_ok = True
except (ValueError, TypeError):
e.add(opts, mdigits_msg % f.name)
invalid_values_msg = '"%s": DecimalFields require a "max_digits" attribute value that is greater than or equal to the value of the "decimal_places" attribute.'
if decimalp_ok and mdigits_ok:
if decimal_places > max_digits:
e.add(opts, invalid_values_msg % f.name)
if isinstance(f, models.FileField) and not f.upload_to:
e.add(opts, '"%s": FileFields require an "upload_to" attribute.' % f.name)
if isinstance(f, models.ImageField):
# Try to import PIL in either of the two ways it can end up installed.
try:
from PIL import Image
except ImportError:
try:
import Image
except ImportError:
e.add(opts, '"%s": To use ImageFields, you need to install the Python Imaging Library. Get it at http://www.pythonware.com/products/pil/ .' % f.name)
if isinstance(f, models.BooleanField) and getattr(f, 'null', False):
e.add(opts, '"%s": BooleanFields do not accept null values. Use a NullBooleanField instead.' % f.name)
if isinstance(f, models.FilePathField) and not (f.allow_files or f.allow_folders):
e.add(opts, '"%s": FilePathFields must have either allow_files or allow_folders set to True.' % f.name)
if f.choices:
if isinstance(f.choices, six.string_types) or not is_iterable(f.choices):
e.add(opts, '"%s": "choices" should be iterable (e.g., a tuple or list).' % f.name)
else:
for c in f.choices:
if not isinstance(c, (list, tuple)) or len(c) != 2:
e.add(opts, '"%s": "choices" should be a sequence of two-tuples.' % f.name)
if f.db_index not in (None, True, False):
e.add(opts, '"%s": "db_index" should be either None, True or False.' % f.name)
# Perform any backend-specific field validation.
connection.validation.validate_field(e, opts, f)
# Check if the on_delete behavior is sane
if f.rel and hasattr(f.rel, 'on_delete'):
if f.rel.on_delete == SET_NULL and not f.null:
e.add(opts, "'%s' specifies on_delete=SET_NULL, but cannot be null." % f.name)
elif f.rel.on_delete == SET_DEFAULT and not f.has_default():
e.add(opts, "'%s' specifies on_delete=SET_DEFAULT, but has no default value." % f.name)
# Check to see if the related field will clash with any existing
# fields, m2m fields, m2m related objects or related objects
if f.rel:
if f.rel.to not in models.get_models():
# If the related model is swapped, provide a hint;
# otherwise, the model just hasn't been installed.
if not isinstance(f.rel.to, six.string_types) and f.rel.to._meta.swapped:
e.add(opts, "'%s' defines a relation with the model '%s.%s', which has been swapped out. Update the relation to point at settings.%s." % (f.name, f.rel.to._meta.app_label, f.rel.to._meta.object_name, f.rel.to._meta.swappable))
else:
e.add(opts, "'%s' has a relation with model %s, which has either not been installed or is abstract." % (f.name, f.rel.to))
# it is a string and we could not find the model it refers to
# so skip the next section
if isinstance(f.rel.to, six.string_types):
continue
# Make sure the related field specified by a ForeignKey is unique
if not f.rel.to._meta.get_field(f.rel.field_name).unique:
e.add(opts, "Field '%s' under model '%s' must have a unique=True constraint." % (f.rel.field_name, f.rel.to.__name__))
rel_opts = f.rel.to._meta
rel_name = RelatedObject(f.rel.to, cls, f).get_accessor_name()
rel_query_name = f.related_query_name()
if not f.rel.is_hidden():
for r in rel_opts.fields:
if r.name == rel_name:
e.add(opts, "Accessor for field '%s' clashes with field '%s.%s'. Add a related_name argument to the definition for '%s'." % (f.name, rel_opts.object_name, r.name, f.name))
if r.name == rel_query_name:
e.add(opts, "Reverse query name for field '%s' clashes with field '%s.%s'. Add a related_name argument to the definition for '%s'." % (f.name, rel_opts.object_name, r.name, f.name))
for r in rel_opts.local_many_to_many:
if r.name == rel_name:
e.add(opts, "Accessor for field '%s' clashes with m2m field '%s.%s'. Add a related_name argument to the definition for '%s'." % (f.name, rel_opts.object_name, r.name, f.name))
if r.name == rel_query_name:
e.add(opts, "Reverse query name for field '%s' clashes with m2m field '%s.%s'. Add a related_name argument to the definition for '%s'." % (f.name, rel_opts.object_name, r.name, f.name))
for r in rel_opts.get_all_related_many_to_many_objects():
if r.get_accessor_name() == rel_name:
e.add(opts, "Accessor for field '%s' clashes with related m2m field '%s.%s'. Add a related_name argument to the definition for '%s'." % (f.name, rel_opts.object_name, r.get_accessor_name(), f.name))
if r.get_accessor_name() == rel_query_name:
e.add(opts, "Reverse query name for field '%s' clashes with related m2m field '%s.%s'. Add a related_name argument to the definition for '%s'." % (f.name, rel_opts.object_name, r.get_accessor_name(), f.name))
for r in rel_opts.get_all_related_objects():
if r.field is not f:
if r.get_accessor_name() == rel_name:
e.add(opts, "Accessor for field '%s' clashes with related field '%s.%s'. Add a related_name argument to the definition for '%s'." % (f.name, rel_opts.object_name, r.get_accessor_name(), f.name))
if r.get_accessor_name() == rel_query_name:
e.add(opts, "Reverse query name for field '%s' clashes with related field '%s.%s'. Add a related_name argument to the definition for '%s'." % (f.name, rel_opts.object_name, r.get_accessor_name(), f.name))
seen_intermediary_signatures = []
for i, f in enumerate(opts.local_many_to_many):
# Check to see if the related m2m field will clash with any
# existing fields, m2m fields, m2m related objects or related
# objects
if f.rel.to not in models.get_models():
# If the related model is swapped, provide a hint;
# otherwise, the model just hasn't been installed.
if not isinstance(f.rel.to, six.string_types) and f.rel.to._meta.swapped:
e.add(opts, "'%s' defines a relation with the model '%s.%s', which has been swapped out. Update the relation to point at settings.%s." % (f.name, f.rel.to._meta.app_label, f.rel.to._meta.object_name, f.rel.to._meta.swappable))
else:
e.add(opts, "'%s' has an m2m relation with model %s, which has either not been installed or is abstract." % (f.name, f.rel.to))
# it is a string and we could not find the model it refers to
# so skip the next section
if isinstance(f.rel.to, six.string_types):
continue
# Check that the field is not set to unique. ManyToManyFields do not support unique.
if f.unique:
e.add(opts, "ManyToManyFields cannot be unique. Remove the unique argument on '%s'." % f.name)
if f.rel.through is not None and not isinstance(f.rel.through, six.string_types):
from_model, to_model = cls, f.rel.to
if from_model == to_model and f.rel.symmetrical and not f.rel.through._meta.auto_created:
e.add(opts, "Many-to-many fields with intermediate tables cannot be symmetrical.")
seen_from, seen_to, seen_self = False, False, 0
for inter_field in f.rel.through._meta.fields:
rel_to = getattr(inter_field.rel, 'to', None)
if from_model == to_model: # relation to self
if rel_to == from_model:
seen_self += 1
if seen_self > 2:
e.add(opts, "Intermediary model %s has more than "
"two foreign keys to %s, which is ambiguous "
"and is not permitted." % (
f.rel.through._meta.object_name,
from_model._meta.object_name
)
)
else:
if rel_to == from_model:
if seen_from:
e.add(opts, "Intermediary model %s has more "
"than one foreign key to %s, which is "
"ambiguous and is not permitted." % (
f.rel.through._meta.object_name,
from_model._meta.object_name
)
)
else:
seen_from = True
elif rel_to == to_model:
if seen_to:
e.add(opts, "Intermediary model %s has more "
"than one foreign key to %s, which is "
"ambiguous and is not permitted." % (
f.rel.through._meta.object_name,
rel_to._meta.object_name
)
)
else:
seen_to = True
if f.rel.through not in models.get_models(include_auto_created=True):
e.add(opts, "'%s' specifies an m2m relation through model "
"%s, which has not been installed." % (f.name, f.rel.through)
)
signature = (f.rel.to, cls, f.rel.through)
if signature in seen_intermediary_signatures:
e.add(opts, "The model %s has two manually-defined m2m "
"relations through the model %s, which is not "
"permitted. Please consider using an extra field on "
"your intermediary model instead." % (
cls._meta.object_name,
f.rel.through._meta.object_name
)
)
else:
seen_intermediary_signatures.append(signature)
if not f.rel.through._meta.auto_created:
seen_related_fk, seen_this_fk = False, False
for field in f.rel.through._meta.fields:
if field.rel:
if not seen_related_fk and field.rel.to == f.rel.to:
seen_related_fk = True
elif field.rel.to == cls:
seen_this_fk = True
if not seen_related_fk or not seen_this_fk:
e.add(opts, "'%s' is a manually-defined m2m relation "
"through model %s, which does not have foreign keys "
"to %s and %s" % (f.name, f.rel.through._meta.object_name,
f.rel.to._meta.object_name, cls._meta.object_name)
)
elif isinstance(f.rel.through, six.string_types):
e.add(opts, "'%s' specifies an m2m relation through model %s, "
"which has not been installed" % (f.name, f.rel.through)
)
rel_opts = f.rel.to._meta
rel_name = RelatedObject(f.rel.to, cls, f).get_accessor_name()
rel_query_name = f.related_query_name()
# If rel_name is none, there is no reverse accessor (this only
# occurs for symmetrical m2m relations to self). If this is the
# case, there are no clashes to check for this field, as there are
# no reverse descriptors for this field.
if rel_name is not None:
for r in rel_opts.fields:
if r.name == rel_name:
e.add(opts, "Accessor for m2m field '%s' clashes with field '%s.%s'. Add a related_name argument to the definition for '%s'." % (f.name, rel_opts.object_name, r.name, f.name))
if r.name == rel_query_name:
e.add(opts, "Reverse query name for m2m field '%s' clashes with field '%s.%s'. Add a related_name argument to the definition for '%s'." % (f.name, rel_opts.object_name, r.name, f.name))
for r in rel_opts.local_many_to_many:
if r.name == rel_name:
e.add(opts, "Accessor for m2m field '%s' clashes with m2m field '%s.%s'. Add a related_name argument to the definition for '%s'." % (f.name, rel_opts.object_name, r.name, f.name))
if r.name == rel_query_name:
e.add(opts, "Reverse query name for m2m field '%s' clashes with m2m field '%s.%s'. Add a related_name argument to the definition for '%s'." % (f.name, rel_opts.object_name, r.name, f.name))
for r in rel_opts.get_all_related_many_to_many_objects():
if r.field is not f:
if r.get_accessor_name() == rel_name:
e.add(opts, "Accessor for m2m field '%s' clashes with related m2m field '%s.%s'. Add a related_name argument to the definition for '%s'." % (f.name, rel_opts.object_name, r.get_accessor_name(), f.name))
if r.get_accessor_name() == rel_query_name:
e.add(opts, "Reverse query name for m2m field '%s' clashes with related m2m field '%s.%s'. Add a related_name argument to the definition for '%s'." % (f.name, rel_opts.object_name, r.get_accessor_name(), f.name))
for r in rel_opts.get_all_related_objects():
if r.get_accessor_name() == rel_name:
e.add(opts, "Accessor for m2m field '%s' clashes with related field '%s.%s'. Add a related_name argument to the definition for '%s'." % (f.name, rel_opts.object_name, r.get_accessor_name(), f.name))
if r.get_accessor_name() == rel_query_name:
e.add(opts, "Reverse query name for m2m field '%s' clashes with related field '%s.%s'. Add a related_name argument to the definition for '%s'." % (f.name, rel_opts.object_name, r.get_accessor_name(), f.name))
# Check ordering attribute.
if opts.ordering:
for field_name in opts.ordering:
if field_name == '?':
continue
if field_name.startswith('-'):
field_name = field_name[1:]
if opts.order_with_respect_to and field_name == '_order':
continue
# Skip ordering in the format field1__field2 (FIXME: checking
# this format would be nice, but it's a little fiddly).
if '__' in field_name:
continue
# Skip ordering on pk. This is always a valid order_by field
# but is an alias and therefore won't be found by opts.get_field.
if field_name == 'pk':
continue
try:
opts.get_field(field_name, many_to_many=False)
except models.FieldDoesNotExist:
e.add(opts, '"ordering" refers to "%s", a field that doesn\'t exist.' % field_name)
# Check unique_together.
for ut in opts.unique_together:
validate_local_fields(e, opts, "unique_together", ut)
if not isinstance(opts.index_together, collections.Sequence):
e.add(opts, '"index_together" must be a sequence')
else:
for it in opts.index_together:
validate_local_fields(e, opts, "index_together", it)
return len(e.errors)
def validate_local_fields(e, opts, field_name, fields):
from django.db import models
if not isinstance(fields, collections.Sequence):
e.add(opts, 'all %s elements must be sequences' % field_name)
else:
for field in fields:
try:
f = opts.get_field(field, many_to_many=True)
except models.FieldDoesNotExist:
e.add(opts, '"%s" refers to %s, a field that doesn\'t exist.' % (field_name, field))
else:
if isinstance(f.rel, models.ManyToManyRel):
e.add(opts, '"%s" refers to %s. ManyToManyFields are not supported in %s.' % (field_name, f.name, field_name))
if f not in opts.local_fields:
e.add(opts, '"%s" refers to %s. This is not in the same model as the %s statement.' % (field_name, f.name, field_name)) | PypiClean |
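The `DecimalField` checks above validate `decimal_places` and `max_digits` independently, and only then their relationship. That three-step shape can be sketched on its own (an illustrative helper, not Django API):

```python
def decimal_field_errors(max_digits, decimal_places):
    # Mirrors the validation order used above: each attribute is checked
    # on its own, and the cross-field constraint only runs when both
    # individual checks passed.
    errors = []
    decimalp_ok = mdigits_ok = False
    try:
        dp = int(decimal_places)
        if dp < 0:
            errors.append('"decimal_places" must be a non-negative integer.')
        else:
            decimalp_ok = True
    except (ValueError, TypeError):
        errors.append('"decimal_places" must be a non-negative integer.')
    try:
        md = int(max_digits)
        if md <= 0:
            errors.append('"max_digits" must be a positive integer.')
        else:
            mdigits_ok = True
    except (ValueError, TypeError):
        errors.append('"max_digits" must be a positive integer.')
    if decimalp_ok and mdigits_ok and dp > md:
        errors.append('"max_digits" must be >= "decimal_places".')
    return errors
```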
/Euphorie-15.0.2.tar.gz/Euphorie-15.0.2/src/euphorie/client/resources/oira/script/chunks/44148.a6a331bb4a0fb9043d17.min.js | (self.webpackChunk_patternslib_patternslib=self.webpackChunk_patternslib_patternslib||[]).push([[44148],{44148:function(e){e.exports=function(e){return{name:"Processing",keywords:{keyword:"BufferedReader PVector PFont PImage PGraphics HashMap boolean byte char color double float int long String Array FloatDict FloatList IntDict IntList JSONArray JSONObject Object StringDict StringList Table TableRow XML false synchronized int abstract float private char boolean static null if const for true while long throw strictfp finally protected import native final return void enum else break transient new catch instanceof byte super volatile case assert short package default double public try this switch continue throws protected public private",literal:"P2D P3D HALF_PI PI QUARTER_PI TAU TWO_PI",title:"setup draw",built_in:"displayHeight displayWidth mouseY mouseX mousePressed pmouseX pmouseY key keyCode pixels focused frameCount frameRate height width size createGraphics beginDraw createShape loadShape PShape arc ellipse line point quad rect triangle bezier bezierDetail bezierPoint bezierTangent curve curveDetail curvePoint curveTangent curveTightness shape shapeMode beginContour beginShape bezierVertex curveVertex endContour endShape quadraticVertex vertex ellipseMode noSmooth rectMode smooth strokeCap strokeJoin strokeWeight mouseClicked mouseDragged mouseMoved mousePressed mouseReleased mouseWheel keyPressed keyPressedkeyReleased keyTyped print println save saveFrame day hour millis minute month second year background clear colorMode fill noFill noStroke stroke alpha blue brightness color green hue lerpColor red saturation modelX modelY modelZ screenX screenY screenZ ambient emissive shininess specular add createImage beginCamera camera endCamera frustum ortho perspective printCamera printProjection cursor frameRate noCursor exit loop noLoop 
popStyle pushStyle redraw binary boolean byte char float hex int str unbinary unhex join match matchAll nf nfc nfp nfs split splitTokens trim append arrayCopy concat expand reverse shorten sort splice subset box sphere sphereDetail createInput createReader loadBytes loadJSONArray loadJSONObject loadStrings loadTable loadXML open parseXML saveTable selectFolder selectInput beginRaw beginRecord createOutput createWriter endRaw endRecord PrintWritersaveBytes saveJSONArray saveJSONObject saveStream saveStrings saveXML selectOutput popMatrix printMatrix pushMatrix resetMatrix rotate rotateX rotateY rotateZ scale shearX shearY translate ambientLight directionalLight lightFalloff lights lightSpecular noLights normal pointLight spotLight image imageMode loadImage noTint requestImage tint texture textureMode textureWrap blend copy filter get loadPixels set updatePixels blendMode loadShader PShaderresetShader shader createFont loadFont text textFont textAlign textLeading textMode textSize textWidth textAscent textDescent abs ceil constrain dist exp floor lerp log mag map max min norm pow round sq sqrt acos asin atan atan2 cos degrees radians sin tan noise noiseDetail noiseSeed random randomGaussian randomSeed"},contains:[e.C_LINE_COMMENT_MODE,e.C_BLOCK_COMMENT_MODE,e.APOS_STRING_MODE,e.QUOTE_STRING_MODE,e.C_NUMBER_MODE]}}}}]);
//# sourceMappingURL=44148.a6a331bb4a0fb9043d17.min.js.map | PypiClean |
/Allegra-0.63.zip/Allegra-0.63/lib/netstring.py |
"http://laurentszyster.be/blog/netstring/"
class NetstringsError (Exception): pass
def encode (strings):
"encode an sequence of 8-bit byte strings as netstrings"
return ''.join (['%d:%s,' % (len (s), s) for s in strings])
def decode (buffer):
"decode the netstrings found in the buffer, trunk garbage"
size = len (buffer)
prev = 0
while prev < size:
pos = buffer.find (':', prev)
if pos < 1:
break
try:
next = pos + int (buffer[prev:pos]) + 1
except:
break
if next >= size:
break
if buffer[next] == ',':
yield buffer[pos+1:next]
else:
break
prev = next + 1
def validate (buffer, length):
"decode the netstrings, but keep garbage and fit to size"
size = len (buffer)
prev = 0
while prev < size and length:
pos = buffer.find (':', prev)
if pos < 1:
if prev == 0:
raise StopIteration # not a netstring!
break
try:
next = pos + int (buffer[prev:pos]) + 1
except:
break
if next >= size:
break
if buffer[next] == ',':
length -= 1
yield buffer[pos+1:next]
else:
break
prev = next + 1
if length:
length -= 1
yield buffer[max (prev, pos+1):]
while length:
length -= 1
yield ''
def outline (encoded, format, indent):
"recursively format nested netstrings as a CRLF outline"
n = tuple (decode (encoded))
if len (n) > 0:
return ''.join ((outline (
e, indent + format, indent
) for e in n))
return format % encoded
def netstrings (instance):
"encode a tree of instances as nested netstrings"
t = type (instance)
if t == str:
return instance
if t in (tuple, list, set, frozenset):
return encode ((netstrings (i) for i in instance))
if t == dict:
return encode ((netstrings (i) for i in instance.items ()))
try:
return '%s' % instance
except:
return '%r' % instance
def netlist (encoded):
"return a list of strings or [encoded] if no netstrings found"
return list (decode (encoded)) or [encoded]
def nettree (encoded):
"decode the nested encoded strings in a tree of lists"
leaves = [nettree (s) for s in decode (encoded)]
if len (leaves) > 0:
return leaves
return encoded
def netlines (encoded, format='%s\n', indent=' '):
"beautify a netstring as an outline ready to log"
n = tuple (decode (encoded))
if len (n) > 0:
return format % ''.join ((outline (
e, format, indent
) for e in n))
return format % encoded
def netoutline (encoded, indent=''):
"recursively format nested netstrings as an outline with length"
n = tuple (decode (encoded))
if len (n) > 0:
return '%s%d:\n%s%s,\n' % (
indent, len (encoded), ''.join ((netoutline (
e, indent + ' '
) for e in n)), indent)
return '%s%d:%s,\n' % (indent, len (encoded), encoded)
def netpipe (more, BUFFER_MAX=0):
"""A practical netstrings pipe generator
Decode the stream of netstrings produced by more (), raise
a NetstringsError exception on protocol failure or StopIteration
when the producer is exhausted.
If specified, the BUFFER_MAX size must be more than twice as
big as the largest netstring piped through (although netstrings
strictly smaller than BUFFER_MAX may pass through without raising
an exception).
"""
buffer = more ()
while buffer:
pos = buffer.find (':')
if pos < 0:
raise NetstringsError, '1 not a netstring'
try:
next = pos + int (buffer[:pos]) + 1
except:
raise NetstringsError, '2 not a valid length'
if 0 < BUFFER_MAX < next:
			raise NetstringsError, \
				'4 buffer overflow (%d bytes)' % BUFFER_MAX
while next >= len (buffer):
data = more ()
if data:
buffer += data
else:
raise NetstringsError, '5 end of pipe'
if buffer[next] == ',':
yield buffer[pos+1:next]
else:
			raise NetstringsError, '3 missing comma'
buffer = buffer[next+1:]
if buffer == '' or buffer.isdigit ():
buffer += more ()
#
# Note also that the first call to more must return at least the
# encoded length of the first netstring, which practically is (or
# should be) always the case (for instance, piping in a netstring
# sequence from a file will be done by blocks of pages, typically
# between 512 and 4096 bytes, maybe more, certainly not less).
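The wire format handled throughout this module is djb's netstrings: each payload is framed as `<length>:<payload>,`. A minimal, self-contained sketch of that framing (the names `net_encode`/`net_decode` are mine, independent of this module's `encode`/`decode`, whose definitions sit above this chunk):

```python
def net_encode(payloads):
    # Frame each payload as '<length>:<payload>,' and concatenate.
    return ''.join('%d:%s,' % (len(p), p) for p in payloads)

def net_decode(encoded):
    # Walk the buffer, yielding each framed payload in order.
    pos = 0
    while pos < len(encoded):
        sep = encoded.index(':', pos)
        length = int(encoded[pos:sep])
        end = sep + 1 + length
        if encoded[end] != ',':
            raise ValueError('missing comma')
        yield encoded[sep + 1:end]
        pos = end + 1

wire = net_encode(['hello', 'world!'])
# wire == '5:hello,6:world!,'
assert list(net_decode(wire)) == ['hello', 'world!']
```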
if __name__ == '__main__':
import sys
assert None == sys.stderr.write (
'Allegra Netstrings'
' - Copyright 2005 Laurent A.V. Szyster'
' | Copyleft GPL 2.0\n\n'
)
if len (sys.argv) > 1:
command = sys.argv[1]
else:
command = 'outline'
if command in ('outline', 'decode'):
if len (sys.argv) > 2:
if len (sys.argv) > 3:
try:
buffer_max = int (sys.argv[3])
except:
sys.stderr.write (
'3 invalid buffer max\n'
)
sys.exit (3)
else:
buffer_max = 0
try:
buffer_more = int (sys.argv[2])
except:
sys.stderr.write ('2 invalid buffer size\n')
sys.exit (2)
else:
buffer_max = 0
buffer_more = 4096
def more ():
return sys.stdin.read (buffer_more)
count = 0
try:
if command == 'outline':
write = sys.stdout.write
for n in netpipe (more, buffer_max):
write (netlines (n))
count += 1
else:
for n in netpipe (more, buffer_max):
count += 1
finally:
assert None == sys.stderr.write ('%d' % count)
elif command == 'encode':
write = sys.stdout.write
for line in sys.stdin.xreadlines ():
write ('%d:%s,' % (len (line)-1, line[:-1]))
else:
sys.stderr.write ('1 invalid command\n')
sys.exit (1)
	sys.exit (0)
/Neto-0.6.2.tar.gz/Neto-0.6.2/neto/plugins/analysis/suspicious.py
import os
import re
import timeout_decorator
REGEXPS = {
"possible_obfuscation": [
b"eval\(",
b"encode",
b"base64",
b"base58",
b"atob"
],
"possible_payments": [
b"visa",
b"VISA",
b"[Mm]aster[Cc]ard",
b"MASTERCARD",
b"[Aa]merican_?[Ee]xpress",
b"AMERICAN_?EXPRESS",
b"[Pp]aypal"
],
"pattern_recognition": [
b"pattern",
b"regex",
b"regularexpression"
],
"external_connections": [
b"XMLHttpRequest",
b"GET",
b"POST",
b"PUT"
]
}
#@timeout_decorator.timeout(30, timeout_exception=StopIteration)
def runAnalysis(**kwargs):
"""
Method that runs an analysis
    This method is dynamically loaded by neto.lib.extensions.Extension objects
to conduct an analysis. The analyst can choose to perform the analysis on
kwargs["extensionFile"] or on kwargs["unzippedFiles"]. It SHOULD return a
dictionary with the results of the analysis that will be updated to the
features property of the Extension.
Args:
-----
        kwargs: It currently contains:
- extensionFile: A string to the local path of the extension.
- unzippedFiles: A dictionary where the key is the relative path to
          the file and the value is the absolute path to the file:
{
"manifest.json": "/tmp/extension/manifest.json"
…
}
Returns:
--------
A dictionary where the key is the name given to the analysis and the
value is the result of the analysis. This result can be of any
format.
"""
results = {}
# Iterate through all the regexps
for e, valuesRe in REGEXPS.items():
results[e] = []
# Iterate through all the files in the folder
for f, realPath in kwargs["unzippedFiles"].items():
if os.path.isfile(realPath):
fileType = f.split(".")[-1].lower()
# Extract matching strings from text files
if fileType in ["js", "html", "htm", "css", "txt"]:
                    # Read the data
                    with open(realPath, "rb") as data_file:
                        raw_data = data_file.read()
for exp in valuesRe:
values = re.findall(exp, raw_data)
for v in values:
# TODO: properly handle:
# UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe9 in position 29: unexpected end of data
try:
aux = {
"value": v.decode("utf-8"),
"path": f
}
results[e].append(aux)
                            except UnicodeDecodeError:
pass
return {"suspicious": results} | PypiClean |
/Jux-1.0.0.tar.gz/Jux-1.0.0/jux/sun/helper/modelling.py
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from .functions import linear_func, log_exp_func
class ModelFlares():
"""
    Class that accepts raw flare data and fits it to a flare profile.
    Also determines the class of each flare.
"""
def __init__(self, time, rate, start, peak, end):
self.time = time
self.rate = rate
self.s = start
self.p = peak
self.e = end
self.background = np.min(rate)
self.s_calc = self.__calc_starts()
self.e_calc = self.__calc_ends()
self.classes = self.__classify()
self.data = self.__generate_df()
def __calc_starts(self):
s_calc = []
for i in range(len(self.s)):
x_rise = self.time[self.s[i]:self.p[i]+1]
y_rise = self.rate[self.s[i]:self.p[i]+1]
popt, pcov = curve_fit(linear_func, x_rise, y_rise)
m, c = popt
s_calc.append(int((self.background - c) / m))
return s_calc
def __calc_ends(self):
e_calc = []
for i in range(len(self.p)):
x_fall = self.time[self.p[i]:self.e[i]+1] - self.time[self.p[i]]
y_fall = np.log(self.rate[self.p[i]:self.e[i]+1])
popt, pcov = curve_fit(log_exp_func, x_fall, y_fall)
ln_a, b = popt
end_time = np.square((np.log(self.background) - ln_a) / (-1 * b)) + self.p[i]
e_calc.append(int(end_time))
return e_calc
def __classify(self):
classes = []
for i in range(len(self.p)):
peak_intensity = self.rate[self.p[i]]
val = np.log10(peak_intensity / 25)
_val = str(int(val*100) / 10)[-3:]
c = ""
if int(val) < 1:
c = "A" + _val
elif int(val) == 1:
c = "B" + _val
elif int(val) == 2:
c = "C" + _val
elif int(val) == 3:
c = "M" + _val
elif int(val) > 3:
c = "X" + _val
classes.append(c)
return classes
def __generate_df(self):
start_intensities = [self.rate[i] for i in self.s]
peak_intensities = [self.rate[i] for i in self.p]
df = pd.DataFrame([self.s, self.p, self.e, self.s_calc, self.e_calc, start_intensities, peak_intensities, self.classes])
df = df.T
        df.columns = ['Observed Start Time', 'Peak Time', 'Observed End Time', 'Calculated Start Time', 'Calculated End Time', 'Pre-Flare Count Rate', 'Total Count Rate', 'Class']
        return df
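The `__classify` step above maps a peak count rate to a letter class via its order of magnitude relative to a background rate of 25, with one decimal of the fractional part as a suffix. A self-contained sketch of just that mapping (thresholds and suffix arithmetic copied from the method above; the function name is mine):

```python
import math

def flare_class(peak_intensity, base=25):
    # The integer part of log10(peak / base) picks the letter;
    # the fractional part (one decimal) becomes the suffix, as in 'M3.0'.
    val = math.log10(peak_intensity / base)
    suffix = str(int(val * 100) / 10)[-3:]
    letters = {1: "B", 2: "C", 3: "M"}
    if int(val) < 1:
        letter = "A"
    elif int(val) > 3:
        letter = "X"
    else:
        letter = letters[int(val)]
    return letter + suffix

assert flare_class(50000) == "M3.0"   # log10(2000) ~ 3.301 -> M, suffix '3.0'
assert flare_class(100) == "A6.0"     # log10(4) ~ 0.602 -> A, suffix '6.0'
```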
/Chrysalio-1.0.tar.gz/Chrysalio-1.0/chrysalio/models/__init__.py
from sys import exit as sys_exit
from sqlalchemy import engine_from_config
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.schema import MetaData
from sqlalchemy.exc import ProgrammingError, OperationalError
import zope.sqlalchemy
from ..lib.i18n import _, translate
DB_METADATA = MetaData(naming_convention={
'ix': 'ix_%(column_0_label)s',
'uq': 'uq_%(table_name)s_%(column_0_name)s',
'ck': 'ck_%(table_name)s_%(constraint_name)s',
'fk': 'fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s',
'pk': 'pk_%(table_name)s'
})
ID_LEN = 32
NAME_LEN = 64
EMAIL_LEN = 64
LABEL_LEN = 128
DESCRIPTION_LEN = 255
VALUE_LEN = 255
MODULE_LEN = 128
DBDeclarativeClass = declarative_base(metadata=DB_METADATA)
# =============================================================================
def includeme(configurator):
"""Initialize the model for a Chrysalio application.
:type configurator: pyramid.config.Configurator
:param configurator:
Object used to do configuration declaration within the application.
"""
settings = configurator.get_settings()
settings['tm.manager_hook'] = 'pyramid_tm.explicit_manager'
# Use pyramid_tm to hook the transaction lifecycle to the request
configurator.include('pyramid_tm')
# Use pyramid_retry to retry a request when transient exceptions occur
configurator.include('pyramid_retry')
# Get database engine
try:
dbengine = get_dbengine(settings)
except KeyError:
sys_exit(translate(_('*** Database is not defined.')))
DB_METADATA.bind = dbengine
# Make request.dbsession available for use in Pyramid
dbsession_factory = get_dbsession_factory(dbengine)
configurator.registry['dbsession_factory'] = dbsession_factory
configurator.add_request_method(
# r.tm is the transaction manager used by pyramid_tm
lambda r: get_tm_dbsession(dbsession_factory, r.tm),
'dbsession', reify=True)
# =============================================================================
def get_dbengine(settings, prefix='sqlalchemy.'):
"""Get SQLAlchemy engine.
:type settings: pyramid.registry.Registry.settings
:param settings:
Application settings.
:param str prefix: (default='sqlalchemy')
Prefix in settings for SQLAlchemy configuration.
:rtype: sqlalchemy.engine.base.Engine
"""
return engine_from_config(settings, prefix)
# =============================================================================
def get_dbsession_factory(dbengine):
"""Get SQLAlchemy session factory.
:type dbengine: sqlalchemy.engine.base.Engine
:param dbengine:
Database engine.
:rtype: :func:`sqlalchemy.orm.session.sessionmaker`
"""
factory = sessionmaker()
factory.configure(bind=dbengine)
return factory
# =============================================================================
def get_tm_dbsession(dbsession_factory, transaction_manager):
"""Get a ``sqlalchemy.orm.Session`` instance backed by a transaction.
:type dbsession_factory: sqlalchemy.orm.session.sessionmaker
:param dbsession_factory:
Function to create session.
:type transaction_manager: transaction._manager.ThreadTransactionManager
:param transaction_manager:
Transaction manager.
:rtype: sqlalchemy.orm.session.Session
This function will hook the session to the transaction manager which
will take care of committing any changes.
- When using pyramid_tm it will automatically be committed or aborted
depending on whether an exception is raised.
- When using scripts you should wrap the session in a manager yourself.
For example::
import transaction
dbsession_factory = get_dbsession_factory(DB_METADATA.bind)
with transaction.manager:
dbsession = get_tm_dbsession(
dbsession_factory, transaction.manager)
"""
dbsession = dbsession_factory()
zope.sqlalchemy.register(
dbsession, transaction_manager=transaction_manager)
return dbsession
# =============================================================================
def add_column(table_class, column):
"""Add a column to a database table.
:type table_class: DBDeclarativeClass
:param table_class:
SQLAlchemy object of the table to proceed.
:type column: sqlalchemy.schema.Column
:param column:
Column to create.
"""
table_class.__table__.append_column(column)
dialect = DB_METADATA.bind.dialect
table_name = str(column.compile(dialect=dialect))
table_name = table_name.partition('.')[0] if '.' in table_name \
else table_class.__table__.name
try:
DB_METADATA.bind.execute(
'ALTER TABLE {table} ADD COLUMN {column} {type}'.format(
table=table_name, column=column.name,
type=column.type.compile(dialect=dialect)))
except (ProgrammingError, OperationalError): # pragma: nocover
pass
    setattr(table_class, column.name, column)
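The `naming_convention` dictionary passed to `MetaData` above is a set of `%`-style templates that SQLAlchemy fills in with table and column names when generating constraint names. How a template expands can be sketched with plain string formatting, no SQLAlchemy needed (the table and column names below are hypothetical):

```python
# Same templates as DB_METADATA above.
conventions = {
    'uq': 'uq_%(table_name)s_%(column_0_name)s',
    'fk': 'fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s',
    'pk': 'pk_%(table_name)s',
}

# SQLAlchemy substitutes the placeholders from the constraint's context;
# here we mimic that with an ordinary dict.
context = {'table_name': 'users', 'column_0_name': 'profile_id',
           'referred_table_name': 'profiles'}
fk_name = conventions['fk'] % context
assert fk_name == 'fk_users_profile_id_profiles'
assert conventions['pk'] % context == 'pk_users'
```

Deterministic names like these matter for Alembic migrations, which must refer to constraints by name when dropping or altering them.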
/Flask-Scaffold-0.5.1.tar.gz/app/templates/static/node_modules/angular-grid/src/ts/masterSlaveService.ts
module awk.grid {
export class MasterSlaveService {
private gridOptionsWrapper: GridOptionsWrapper;
private columnController: ColumnController;
private gridPanel: GridPanel;
private logger: Logger;
// flag to mark if we are consuming. to avoid cyclic events (ie slave firing back to master
// while processing a master event) we mark this if consuming an event, and if we are, then
// we don't fire back any events.
private consuming = false;
public init(gridOptionsWrapper: GridOptionsWrapper,
columnController: ColumnController,
gridPanel: GridPanel,
loggerFactory: LoggerFactory) {
this.gridOptionsWrapper = gridOptionsWrapper;
this.columnController = columnController;
this.gridPanel = gridPanel;
this.logger = loggerFactory.create('MasterSlaveService');
}
// common logic across all the fire methods
private fireEvent(callback: (slaveService: MasterSlaveService)=>void): void {
// if we are already consuming, then we are acting on an event from a master,
// so we don't cause a cyclic firing of events
if (this.consuming) {
return;
}
// iterate through the slave grids, and pass each slave service to the callback
var slaveGrids = this.gridOptionsWrapper.getSlaveGrids();
if (slaveGrids) {
slaveGrids.forEach( (slaveGridOptions: GridOptions) => {
if (slaveGridOptions.api) {
var slaveService = slaveGridOptions.api.__getMasterSlaveService();
callback(slaveService);
}
});
}
}
// common logic across all consume methods. very little common logic, however extracting
// guarantees consistency across the methods.
private onEvent(callback: ()=>void): void {
this.consuming = true;
callback();
this.consuming = false;
}
public fireColumnEvent(event: ColumnChangeEvent): void {
this.fireEvent( (slaveService: MasterSlaveService)=> {
slaveService.onColumnEvent(event);
});
}
public fireHorizontalScrollEvent(horizontalScroll: number): void {
this.fireEvent( (slaveService: MasterSlaveService)=> {
slaveService.onScrollEvent(horizontalScroll);
});
}
public onScrollEvent(horizontalScroll: number): void {
this.onEvent(()=> {
this.gridPanel.setHorizontalScrollPosition(horizontalScroll);
});
}
public onColumnEvent(event: ColumnChangeEvent): void {
this.onEvent(() => {
            // the column in the event is from the master grid. We need to
            // look up the equivalent from this (slave) grid
var masterColumn = event.getColumn();
var slaveColumn: Column;
if (masterColumn) {
slaveColumn = this.columnController.getColumn(masterColumn.colId);
}
// if event was with respect to a master column, that is not present in this
// grid, then we ignore the event
if (masterColumn && !slaveColumn) { return; }
// likewise for column group
var masterColumnGroup = event.getColumnGroup();
var slaveColumnGroup: ColumnGroup;
if (masterColumnGroup) {
slaveColumnGroup = this.columnController.getColumnGroup(masterColumnGroup.name);
}
if (masterColumnGroup && !slaveColumnGroup) { return; }
switch (event.getType()) {
// we don't do anything for these three events
//case ColumnChangeEvent.TYPE_EVERYTHING:
//case ColumnChangeEvent.TYPE_PIVOT_CHANGE:
//case ColumnChangeEvent.TYPE_VALUE_CHANGE:
case ColumnChangeEvent.TYPE_COLUMN_MOVED:
this.logger.log('onColumnEvent-> processing '+event+' fromIndex = '+ event.getFromIndex() + ', toIndex = ' + event.getToIndex());
this.columnController.moveColumn(event.getFromIndex(), event.getToIndex());
break;
case ColumnChangeEvent.TYPE_COLUMN_VISIBLE:
this.logger.log('onColumnEvent-> processing '+event+' visible = '+ masterColumn.visible);
this.columnController.setColumnVisible(slaveColumn, masterColumn.visible);
break;
case ColumnChangeEvent.TYPE_COLUMN_GROUP_OPENED:
this.logger.log('onColumnEvent-> processing '+event+' expanded = '+ masterColumnGroup.expanded);
this.columnController.columnGroupOpened(slaveColumnGroup, masterColumnGroup.expanded);
break;
case ColumnChangeEvent.TYPE_COLUMN_RESIZED:
this.logger.log('onColumnEvent-> processing '+event+' actualWidth = '+ masterColumn.actualWidth);
this.columnController.setColumnWidth(slaveColumn, masterColumn.actualWidth);
break;
case ColumnChangeEvent.TYPE_PINNED_COUNT_CHANGED:
this.logger.log('onColumnEvent-> processing '+event);
this.columnController.setPinnedColumnCount(event.getPinnedColumnCount());
break;
}
});
}
}
}
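The `consuming` flag above is a re-entrancy guard: while a slave is processing an event received from the master, it suppresses firing its own events back, which would otherwise ping-pong forever. The pattern is language-agnostic; a minimal Python sketch of the same guard (class and attribute names are mine):

```python
class Echoer:
    # Re-entrancy guard: while consuming an event from a peer, suppress
    # firing our own events back, breaking the master<->slave cycle.
    def __init__(self):
        self.consuming = False
        self.peers = []
        self.fired = 0

    def fire(self):
        if self.consuming:
            return  # acting on a peer's event: stay silent
        self.fired += 1
        for peer in self.peers:
            peer.on_event()

    def on_event(self):
        self.consuming = True
        try:
            self.fire()  # would recurse forever without the guard
        finally:
            self.consuming = False

a, b = Echoer(), Echoer()
a.peers, b.peers = [b], [a]
a.fire()
assert (a.fired, b.fired) == (1, 0)  # b consumed the event without echoing
```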
/ExGrads-0.1.12.tar.gz/ExGrads-0.1.12/exgrads/hooks.py
import torch
# -------------------------------------------------------
def _get_module_type(module):
return module.__class__.__name__
def _is_support_layer(module):
return _get_module_type(module) in {'Linear', 'Conv2d','BatchNorm2d','BatchNorm1d'}
# -------------------------------------------------------
def _compute_grad1_for_linear(module, Fp, Bp):
module.weight.grad1 = torch.einsum('ni,nj->nij', Bp, Fp)
if module.bias is not None:
module.bias.grad1 = Bp
def _compute_grad1_for_conv2d(module, Fp, Bp):
n = Fp.shape[0]
Fp = torch.nn.functional.unfold(Fp, module.kernel_size, module.dilation, module.padding, module.stride)
Bp = Bp.reshape(n, -1, Fp.shape[-1])
grad1 = torch.einsum('ijk,ilk->ijl', Bp, Fp)
shape = [n] + list(module.weight.shape)
grad1 = grad1.reshape(shape)
module.weight.grad1 = grad1
if module.bias is not None:
module.bias.grad1 = torch.sum(Bp, dim=2)
def _compute_grad1_for_batchnorm2d(module, Fp, Bp):
module.weight.grad1 = torch.sum(Fp*Bp, dim=[2,3])
if module.bias is not None:
module.bias.grad1 = torch.sum(Bp, dim=[2,3])
def _compute_grad1_for_batchnorm1d(module, Fp, Bp):
module.weight.grad1 = Fp*Bp
if module.bias is not None:
module.bias.grad1 = Bp
# -------------------------------------------------------
def _check_FpBp(model):
for module in model.modules():
if _is_support_layer(module):
            assert hasattr(module, '_forward1'), 'no _forward1; run forward after register(model)'
            assert hasattr(module, '_backward1'), 'no _backward1; run backward after register(model)'
def _set_grad1(module):
module_type = _get_module_type(module)
Fp = module._forward1
Bp = module._backward1
if module_type=='Linear':
_compute_grad1_for_linear(module, Fp, Bp)
if module_type=='Conv2d':
_compute_grad1_for_conv2d(module, Fp, Bp)
if module_type=='BatchNorm2d':
_compute_grad1_for_batchnorm2d(module, Fp, Bp)
if module_type=='BatchNorm1d':
_compute_grad1_for_batchnorm1d(module, Fp, Bp)
def compute_grad1(model):
_check_FpBp(model)
for module in model.modules():
if _is_support_layer(module):
_set_grad1(module)
def generate_grad1(model):
_check_FpBp(model)
for module in model.modules():
if _is_support_layer(module):
_set_grad1(module)
yield module.weight.grad1
del module.weight.grad1
if module.bias is not None:
yield module.bias.grad1
del module.bias.grad1
# -------------------------------------------------------
def _capture_forwardprops(module, inputs, _outputs):
module._forward1 = inputs[0].detach()
def _capture_backwardprops(module, _grad_inputs, grad_outputs):
module._backward1 = grad_outputs[0].detach()
def register(model):
handles = []
for module in model.modules():
if _is_support_layer(module):
handles.append(module.register_forward_hook(_capture_forwardprops))
handles.append(module.register_full_backward_hook(_capture_backwardprops))
model.__dict__.setdefault('autograd_hacks_hooks', []).extend(handles)
# -------------------------------------------------------
def deregister(model):
for module in model.modules():
if hasattr(module, '_forward1'):
del module._forward1
if hasattr(module, '_backward1'):
del module._backward1
for param in model.parameters():
if hasattr(param, 'grad1'):
del param.grad1
if hasattr(model, 'autograd_hacks_hooks'):
for handle in model.autograd_hacks_hooks:
handle.remove()
del model.autograd_hacks_hooks
# -------------------------------------------------------
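For a Linear layer, the per-sample weight gradient computed above is, for each sample `n`, the outer product of that sample's backprop signal `Bp[n]` and input `Fp[n]` (that is what `torch.einsum('ni,nj->nij', Bp, Fp)` contracts). A pure-Python sketch of that contraction, to make the shapes concrete (no torch required; the helper name is mine):

```python
def per_sample_grads(Bp, Fp):
    # Bp: n x out, Fp: n x in  ->  n x out x in, one gradient per sample.
    return [[[b * f for f in f_row] for b in b_row]
            for b_row, f_row in zip(Bp, Fp)]

Bp = [[1.0, 2.0]]          # one sample, two output units
Fp = [[3.0, 4.0, 5.0]]     # one sample, three input features
g = per_sample_grads(Bp, Fp)
assert g[0][0] == [3.0, 4.0, 5.0]   # row for output unit 0: 1.0 * Fp[0]
assert g[0][1] == [6.0, 8.0, 10.0]  # row for output unit 1: 2.0 * Fp[0]
```

Summing `g` over the sample axis recovers the usual batch gradient that `.grad` would hold.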
/FamcyDev-0.3.71-py3-none-any.whl/Famcy/node_modules/bootstrap/js/src/carousel.js
import {
defineJQueryPlugin,
getElementFromSelector,
isRTL,
isVisible,
getNextActiveElement,
reflow,
triggerTransitionEnd,
typeCheckConfig
} from './util/index'
import EventHandler from './dom/event-handler'
import Manipulator from './dom/manipulator'
import SelectorEngine from './dom/selector-engine'
import BaseComponent from './base-component'
/**
* ------------------------------------------------------------------------
* Constants
* ------------------------------------------------------------------------
*/
const NAME = 'carousel'
const DATA_KEY = 'bs.carousel'
const EVENT_KEY = `.${DATA_KEY}`
const DATA_API_KEY = '.data-api'
const ARROW_LEFT_KEY = 'ArrowLeft'
const ARROW_RIGHT_KEY = 'ArrowRight'
const TOUCHEVENT_COMPAT_WAIT = 500 // Time for mouse compat events to fire after touch
const SWIPE_THRESHOLD = 40
const Default = {
interval: 5000,
keyboard: true,
slide: false,
pause: 'hover',
wrap: true,
touch: true
}
const DefaultType = {
interval: '(number|boolean)',
keyboard: 'boolean',
slide: '(boolean|string)',
pause: '(string|boolean)',
wrap: 'boolean',
touch: 'boolean'
}
const ORDER_NEXT = 'next'
const ORDER_PREV = 'prev'
const DIRECTION_LEFT = 'left'
const DIRECTION_RIGHT = 'right'
const KEY_TO_DIRECTION = {
[ARROW_LEFT_KEY]: DIRECTION_RIGHT,
[ARROW_RIGHT_KEY]: DIRECTION_LEFT
}
const EVENT_SLIDE = `slide${EVENT_KEY}`
const EVENT_SLID = `slid${EVENT_KEY}`
const EVENT_KEYDOWN = `keydown${EVENT_KEY}`
const EVENT_MOUSEENTER = `mouseenter${EVENT_KEY}`
const EVENT_MOUSELEAVE = `mouseleave${EVENT_KEY}`
const EVENT_TOUCHSTART = `touchstart${EVENT_KEY}`
const EVENT_TOUCHMOVE = `touchmove${EVENT_KEY}`
const EVENT_TOUCHEND = `touchend${EVENT_KEY}`
const EVENT_POINTERDOWN = `pointerdown${EVENT_KEY}`
const EVENT_POINTERUP = `pointerup${EVENT_KEY}`
const EVENT_DRAG_START = `dragstart${EVENT_KEY}`
const EVENT_LOAD_DATA_API = `load${EVENT_KEY}${DATA_API_KEY}`
const EVENT_CLICK_DATA_API = `click${EVENT_KEY}${DATA_API_KEY}`
const CLASS_NAME_CAROUSEL = 'carousel'
const CLASS_NAME_ACTIVE = 'active'
const CLASS_NAME_SLIDE = 'slide'
const CLASS_NAME_END = 'carousel-item-end'
const CLASS_NAME_START = 'carousel-item-start'
const CLASS_NAME_NEXT = 'carousel-item-next'
const CLASS_NAME_PREV = 'carousel-item-prev'
const CLASS_NAME_POINTER_EVENT = 'pointer-event'
const SELECTOR_ACTIVE = '.active'
const SELECTOR_ACTIVE_ITEM = '.active.carousel-item'
const SELECTOR_ITEM = '.carousel-item'
const SELECTOR_ITEM_IMG = '.carousel-item img'
const SELECTOR_NEXT_PREV = '.carousel-item-next, .carousel-item-prev'
const SELECTOR_INDICATORS = '.carousel-indicators'
const SELECTOR_INDICATOR = '[data-bs-target]'
const SELECTOR_DATA_SLIDE = '[data-bs-slide], [data-bs-slide-to]'
const SELECTOR_DATA_RIDE = '[data-bs-ride="carousel"]'
const POINTER_TYPE_TOUCH = 'touch'
const POINTER_TYPE_PEN = 'pen'
/**
* ------------------------------------------------------------------------
* Class Definition
* ------------------------------------------------------------------------
*/
class Carousel extends BaseComponent {
constructor(element, config) {
super(element)
this._items = null
this._interval = null
this._activeElement = null
this._isPaused = false
this._isSliding = false
this.touchTimeout = null
this.touchStartX = 0
this.touchDeltaX = 0
this._config = this._getConfig(config)
this._indicatorsElement = SelectorEngine.findOne(SELECTOR_INDICATORS, this._element)
this._touchSupported = 'ontouchstart' in document.documentElement || navigator.maxTouchPoints > 0
this._pointerEvent = Boolean(window.PointerEvent)
this._addEventListeners()
}
// Getters
static get Default() {
return Default
}
static get NAME() {
return NAME
}
// Public
next() {
this._slide(ORDER_NEXT)
}
nextWhenVisible() {
// Don't call next when the page isn't visible
// or the carousel or its parent isn't visible
if (!document.hidden && isVisible(this._element)) {
this.next()
}
}
prev() {
this._slide(ORDER_PREV)
}
pause(event) {
if (!event) {
this._isPaused = true
}
if (SelectorEngine.findOne(SELECTOR_NEXT_PREV, this._element)) {
triggerTransitionEnd(this._element)
this.cycle(true)
}
clearInterval(this._interval)
this._interval = null
}
cycle(event) {
if (!event) {
this._isPaused = false
}
if (this._interval) {
clearInterval(this._interval)
this._interval = null
}
if (this._config && this._config.interval && !this._isPaused) {
this._updateInterval()
this._interval = setInterval(
(document.visibilityState ? this.nextWhenVisible : this.next).bind(this),
this._config.interval
)
}
}
to(index) {
this._activeElement = SelectorEngine.findOne(SELECTOR_ACTIVE_ITEM, this._element)
const activeIndex = this._getItemIndex(this._activeElement)
if (index > this._items.length - 1 || index < 0) {
return
}
if (this._isSliding) {
EventHandler.one(this._element, EVENT_SLID, () => this.to(index))
return
}
if (activeIndex === index) {
this.pause()
this.cycle()
return
}
const order = index > activeIndex ?
ORDER_NEXT :
ORDER_PREV
this._slide(order, this._items[index])
}
// Private
_getConfig(config) {
config = {
...Default,
...Manipulator.getDataAttributes(this._element),
...(typeof config === 'object' ? config : {})
}
typeCheckConfig(NAME, config, DefaultType)
return config
}
_handleSwipe() {
const absDeltax = Math.abs(this.touchDeltaX)
if (absDeltax <= SWIPE_THRESHOLD) {
return
}
const direction = absDeltax / this.touchDeltaX
this.touchDeltaX = 0
if (!direction) {
return
}
this._slide(direction > 0 ? DIRECTION_RIGHT : DIRECTION_LEFT)
}
_addEventListeners() {
if (this._config.keyboard) {
EventHandler.on(this._element, EVENT_KEYDOWN, event => this._keydown(event))
}
if (this._config.pause === 'hover') {
EventHandler.on(this._element, EVENT_MOUSEENTER, event => this.pause(event))
EventHandler.on(this._element, EVENT_MOUSELEAVE, event => this.cycle(event))
}
if (this._config.touch && this._touchSupported) {
this._addTouchEventListeners()
}
}
_addTouchEventListeners() {
const start = event => {
if (this._pointerEvent && (event.pointerType === POINTER_TYPE_PEN || event.pointerType === POINTER_TYPE_TOUCH)) {
this.touchStartX = event.clientX
} else if (!this._pointerEvent) {
this.touchStartX = event.touches[0].clientX
}
}
const move = event => {
// ensure swiping with one touch and not pinching
this.touchDeltaX = event.touches && event.touches.length > 1 ?
0 :
event.touches[0].clientX - this.touchStartX
}
const end = event => {
if (this._pointerEvent && (event.pointerType === POINTER_TYPE_PEN || event.pointerType === POINTER_TYPE_TOUCH)) {
this.touchDeltaX = event.clientX - this.touchStartX
}
this._handleSwipe()
if (this._config.pause === 'hover') {
// If it's a touch-enabled device, mouseenter/leave are fired as
// part of the mouse compatibility events on first tap - the carousel
// would stop cycling until user tapped out of it;
// here, we listen for touchend, explicitly pause the carousel
// (as if it's the second time we tap on it, mouseenter compat event
// is NOT fired) and after a timeout (to allow for mouse compatibility
// events to fire) we explicitly restart cycling
this.pause()
if (this.touchTimeout) {
clearTimeout(this.touchTimeout)
}
this.touchTimeout = setTimeout(event => this.cycle(event), TOUCHEVENT_COMPAT_WAIT + this._config.interval)
}
}
SelectorEngine.find(SELECTOR_ITEM_IMG, this._element).forEach(itemImg => {
EventHandler.on(itemImg, EVENT_DRAG_START, e => e.preventDefault())
})
if (this._pointerEvent) {
EventHandler.on(this._element, EVENT_POINTERDOWN, event => start(event))
EventHandler.on(this._element, EVENT_POINTERUP, event => end(event))
this._element.classList.add(CLASS_NAME_POINTER_EVENT)
} else {
EventHandler.on(this._element, EVENT_TOUCHSTART, event => start(event))
EventHandler.on(this._element, EVENT_TOUCHMOVE, event => move(event))
EventHandler.on(this._element, EVENT_TOUCHEND, event => end(event))
}
}
_keydown(event) {
if (/input|textarea/i.test(event.target.tagName)) {
return
}
const direction = KEY_TO_DIRECTION[event.key]
if (direction) {
event.preventDefault()
this._slide(direction)
}
}
_getItemIndex(element) {
this._items = element && element.parentNode ?
SelectorEngine.find(SELECTOR_ITEM, element.parentNode) :
[]
return this._items.indexOf(element)
}
_getItemByOrder(order, activeElement) {
const isNext = order === ORDER_NEXT
return getNextActiveElement(this._items, activeElement, isNext, this._config.wrap)
}
_triggerSlideEvent(relatedTarget, eventDirectionName) {
const targetIndex = this._getItemIndex(relatedTarget)
const fromIndex = this._getItemIndex(SelectorEngine.findOne(SELECTOR_ACTIVE_ITEM, this._element))
return EventHandler.trigger(this._element, EVENT_SLIDE, {
relatedTarget,
direction: eventDirectionName,
from: fromIndex,
to: targetIndex
})
}
_setActiveIndicatorElement(element) {
if (this._indicatorsElement) {
const activeIndicator = SelectorEngine.findOne(SELECTOR_ACTIVE, this._indicatorsElement)
activeIndicator.classList.remove(CLASS_NAME_ACTIVE)
activeIndicator.removeAttribute('aria-current')
const indicators = SelectorEngine.find(SELECTOR_INDICATOR, this._indicatorsElement)
for (let i = 0; i < indicators.length; i++) {
if (Number.parseInt(indicators[i].getAttribute('data-bs-slide-to'), 10) === this._getItemIndex(element)) {
indicators[i].classList.add(CLASS_NAME_ACTIVE)
indicators[i].setAttribute('aria-current', 'true')
break
}
}
}
}
_updateInterval() {
const element = this._activeElement || SelectorEngine.findOne(SELECTOR_ACTIVE_ITEM, this._element)
if (!element) {
return
}
const elementInterval = Number.parseInt(element.getAttribute('data-bs-interval'), 10)
if (elementInterval) {
this._config.defaultInterval = this._config.defaultInterval || this._config.interval
this._config.interval = elementInterval
} else {
this._config.interval = this._config.defaultInterval || this._config.interval
}
}
_slide(directionOrOrder, element) {
const order = this._directionToOrder(directionOrOrder)
const activeElement = SelectorEngine.findOne(SELECTOR_ACTIVE_ITEM, this._element)
const activeElementIndex = this._getItemIndex(activeElement)
const nextElement = element || this._getItemByOrder(order, activeElement)
const nextElementIndex = this._getItemIndex(nextElement)
const isCycling = Boolean(this._interval)
const isNext = order === ORDER_NEXT
const directionalClassName = isNext ? CLASS_NAME_START : CLASS_NAME_END
const orderClassName = isNext ? CLASS_NAME_NEXT : CLASS_NAME_PREV
const eventDirectionName = this._orderToDirection(order)
if (nextElement && nextElement.classList.contains(CLASS_NAME_ACTIVE)) {
this._isSliding = false
return
}
if (this._isSliding) {
return
}
const slideEvent = this._triggerSlideEvent(nextElement, eventDirectionName)
if (slideEvent.defaultPrevented) {
return
}
if (!activeElement || !nextElement) {
// Some weirdness is happening, so we bail
return
}
this._isSliding = true
if (isCycling) {
this.pause()
}
this._setActiveIndicatorElement(nextElement)
this._activeElement = nextElement
const triggerSlidEvent = () => {
EventHandler.trigger(this._element, EVENT_SLID, {
relatedTarget: nextElement,
direction: eventDirectionName,
from: activeElementIndex,
to: nextElementIndex
})
}
if (this._element.classList.contains(CLASS_NAME_SLIDE)) {
nextElement.classList.add(orderClassName)
reflow(nextElement)
activeElement.classList.add(directionalClassName)
nextElement.classList.add(directionalClassName)
const completeCallBack = () => {
nextElement.classList.remove(directionalClassName, orderClassName)
nextElement.classList.add(CLASS_NAME_ACTIVE)
activeElement.classList.remove(CLASS_NAME_ACTIVE, orderClassName, directionalClassName)
this._isSliding = false
setTimeout(triggerSlidEvent, 0)
}
this._queueCallback(completeCallBack, activeElement, true)
} else {
activeElement.classList.remove(CLASS_NAME_ACTIVE)
nextElement.classList.add(CLASS_NAME_ACTIVE)
this._isSliding = false
triggerSlidEvent()
}
if (isCycling) {
this.cycle()
}
}
_directionToOrder(direction) {
if (![DIRECTION_RIGHT, DIRECTION_LEFT].includes(direction)) {
return direction
}
if (isRTL()) {
return direction === DIRECTION_LEFT ? ORDER_PREV : ORDER_NEXT
}
return direction === DIRECTION_LEFT ? ORDER_NEXT : ORDER_PREV
}
_orderToDirection(order) {
if (![ORDER_NEXT, ORDER_PREV].includes(order)) {
return order
}
if (isRTL()) {
return order === ORDER_PREV ? DIRECTION_LEFT : DIRECTION_RIGHT
}
return order === ORDER_PREV ? DIRECTION_RIGHT : DIRECTION_LEFT
}
// Static
static carouselInterface(element, config) {
const data = Carousel.getOrCreateInstance(element, config)
let { _config } = data
if (typeof config === 'object') {
_config = {
..._config,
...config
}
}
const action = typeof config === 'string' ? config : _config.slide
if (typeof config === 'number') {
data.to(config)
} else if (typeof action === 'string') {
if (typeof data[action] === 'undefined') {
throw new TypeError(`No method named "${action}"`)
}
data[action]()
} else if (_config.interval && _config.ride) {
data.pause()
data.cycle()
}
}
static jQueryInterface(config) {
return this.each(function () {
Carousel.carouselInterface(this, config)
})
}
static dataApiClickHandler(event) {
const target = getElementFromSelector(this)
if (!target || !target.classList.contains(CLASS_NAME_CAROUSEL)) {
return
}
const config = {
...Manipulator.getDataAttributes(target),
...Manipulator.getDataAttributes(this)
}
const slideIndex = this.getAttribute('data-bs-slide-to')
if (slideIndex) {
config.interval = false
}
Carousel.carouselInterface(target, config)
if (slideIndex) {
Carousel.getInstance(target).to(slideIndex)
}
event.preventDefault()
}
}
/**
* ------------------------------------------------------------------------
* Data Api implementation
* ------------------------------------------------------------------------
*/
EventHandler.on(document, EVENT_CLICK_DATA_API, SELECTOR_DATA_SLIDE, Carousel.dataApiClickHandler)
EventHandler.on(window, EVENT_LOAD_DATA_API, () => {
const carousels = SelectorEngine.find(SELECTOR_DATA_RIDE)
for (let i = 0, len = carousels.length; i < len; i++) {
Carousel.carouselInterface(carousels[i], Carousel.getInstance(carousels[i]))
}
})
/**
* ------------------------------------------------------------------------
* jQuery
* ------------------------------------------------------------------------
* add .Carousel to jQuery only if jQuery is present
*/
defineJQueryPlugin(Carousel)
export default Carousel
/NeurIPS22-CellSeg-0.0.1.tar.gz/NeurIPS22-CellSeg-0.0.1/baseline/model_training_3class.py

import argparse
import os
join = os.path.join
import numpy as np
import torch
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
import monai
from monai.data import decollate_batch, PILReader
from monai.inferers import sliding_window_inference
from monai.metrics import DiceMetric
from monai.transforms import (
Activations,
AsChannelFirstd,
AddChanneld,
AsDiscrete,
Compose,
LoadImaged,
SpatialPadd,
RandSpatialCropd,
# RandCropByPosNegLabeld,
RandRotate90d,
ScaleIntensityd,
RandAxisFlipd,
RandZoomd,
RandGaussianNoised,
# RandShiftIntensityd,
RandAdjustContrastd,
RandGaussianSmoothd,
RandHistogramShiftd,
EnsureTyped,
EnsureType,
)
from baseline.unetr2d import UNETR2D
from monai.visualize import plot_2d_or_3d_image
import matplotlib.pyplot as plt
from datetime import datetime
import shutil
monai.config.print_config()
# logging.basicConfig(stream=sys.stdout, level=logging.INFO)
print('Successfully import all requirements!')
def main():
parser = argparse.ArgumentParser('Baseline for Microscopy image segmentation', add_help=False)
# Dataset parameters
parser.add_argument('--data_path', default='./data/Train_Pre_3class/', type=str,
help='training data path; subfolders: images, labels')
parser.add_argument('--work_dir', default='./work_dir',
help='path where to save models and logs')
parser.add_argument('--seed', default=2022, type=int)
parser.add_argument('--resume', default=False,
help='resume from checkpoint')
parser.add_argument('--num_workers', default=4, type=int)
# Model parameters
parser.add_argument('--model_name', default='unet', help='select mode: unet, unetr, swinunetr')
parser.add_argument('--num_class', default=3, type=int, help='segmentation classes')
    parser.add_argument('--input_size', default=256, type=int, help='input image patch size')
# Training parameters
parser.add_argument('--batch_size', default=8, type=int, help='Batch size per GPU')
parser.add_argument('--max_epochs', default=2000, type=int)
parser.add_argument('--val_interval', default=2, type=int)
parser.add_argument('--epoch_tolerance', default=100, type=int)
parser.add_argument('--initial_lr', type=float, default=6e-4, help='learning rate')
args = parser.parse_args()
#%% set training/validation split
np.random.seed(args.seed)
model_path = join(args.work_dir, args.model_name+'_3class')
os.makedirs(model_path, exist_ok=True)
run_id = datetime.now().strftime("%Y%m%d-%H%M")
shutil.copyfile(__file__, join(model_path, run_id + '_' + os.path.basename(__file__)))
img_path = join(args.data_path, 'images')
gt_path = join(args.data_path, 'labels')
img_names = sorted(os.listdir(img_path))
gt_names = [img_name.split('.')[0]+'_label.png' for img_name in img_names]
img_num = len(img_names)
val_frac = 0.1
indices = np.arange(img_num)
np.random.shuffle(indices)
val_split = int(img_num*val_frac)
train_indices = indices[val_split:]
val_indices = indices[:val_split]
train_files = [{"img": join(img_path, img_names[i]), "label": join(gt_path, gt_names[i])} for i in train_indices]
val_files = [{"img": join(img_path, img_names[i]), "label": join(gt_path, gt_names[i])} for i in val_indices]
print(f"training image num: {len(train_files)}, validation image num: {len(val_files)}")
#%% define transforms for image and segmentation
train_transforms = Compose(
[
LoadImaged(keys=["img", "label"], reader=PILReader, dtype=np.uint8), # image three channels (H, W, 3); label: (H, W)
AddChanneld(keys=["label"], allow_missing_keys=True), # label: (1, H, W)
AsChannelFirstd(keys=['img'], channel_dim=-1, allow_missing_keys=True), # image: (3, H, W)
ScaleIntensityd(keys=["img"], allow_missing_keys=True), # Do not scale label
SpatialPadd(keys=["img","label"], spatial_size=args.input_size),
RandSpatialCropd(keys=["img", "label"], roi_size=args.input_size, random_size=False),
RandAxisFlipd(keys=["img", "label"], prob=0.5),
RandRotate90d(keys=["img", "label"], prob=0.5, spatial_axes=[0, 1]),
# # intensity transform
RandGaussianNoised(keys=['img'], prob=0.25, mean=0, std=0.1),
RandAdjustContrastd(keys=["img"], prob=0.25, gamma=(1,2)),
RandGaussianSmoothd(keys=["img"], prob=0.25, sigma_x=(1,2)),
RandHistogramShiftd(keys=["img"], prob=0.25, num_control_points=3),
RandZoomd(keys=["img", "label"], prob=0.15, min_zoom=0.8, max_zoom=1.5, mode=['area', 'nearest']),
EnsureTyped(keys=["img", "label"]),
]
)
val_transforms = Compose(
[
LoadImaged(keys=["img", "label"], reader=PILReader, dtype=np.uint8),
AddChanneld(keys=["label"], allow_missing_keys=True),
AsChannelFirstd(keys=['img'], channel_dim=-1, allow_missing_keys=True),
ScaleIntensityd(keys=["img"], allow_missing_keys=True),
# AsDiscreted(keys=['label'], to_onehot=3),
EnsureTyped(keys=["img", "label"]),
]
)
#% define dataset, data loader
check_ds = monai.data.Dataset(data=train_files, transform=train_transforms)
check_loader = DataLoader(check_ds, batch_size=1, num_workers=4)
check_data = monai.utils.misc.first(check_loader)
print('sanity check:', check_data["img"].shape, torch.max(check_data["img"]), check_data["label"].shape, torch.max(check_data["label"]))
#%% create a training data loader
train_ds = monai.data.Dataset(data=train_files, transform=train_transforms)
# use batch_size=2 to load images and use RandCropByPosNegLabeld to generate 2 x 4 images for network training
train_loader = DataLoader(
train_ds,
batch_size=args.batch_size,
shuffle=True,
num_workers=args.num_workers,
pin_memory=torch.cuda.is_available()
)
# create a validation data loader
val_ds = monai.data.Dataset(data=val_files, transform=val_transforms)
val_loader = DataLoader(val_ds, batch_size=1, shuffle=False, num_workers=1)
dice_metric = DiceMetric(include_background=False, reduction="mean", get_not_nans=False)
post_pred = Compose([EnsureType(), Activations(softmax=True), AsDiscrete(threshold=0.5)])
post_gt = Compose([EnsureType(), AsDiscrete(to_onehot=None)])
# create UNet, DiceLoss and Adam optimizer
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if args.model_name.lower() == 'unet':
model = monai.networks.nets.UNet(
spatial_dims=2,
in_channels=3,
out_channels=args.num_class,
channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2),
num_res_units=2,
).to(device)
if args.model_name.lower() == 'unetr':
model = UNETR2D(
in_channels=3,
out_channels=args.num_class,
img_size=(args.input_size, args.input_size),
feature_size=16,
hidden_size=768,
mlp_dim=3072,
num_heads=12,
pos_embed="perceptron",
norm_name="instance",
res_block=True,
dropout_rate=0.0,
).to(device)
if args.model_name.lower() == 'swinunetr':
model = monai.networks.nets.SwinUNETR(
img_size=(args.input_size, args.input_size),
in_channels=3,
out_channels=args.num_class,
feature_size=24, # should be divisible by 12
spatial_dims=2
).to(device)
loss_function = monai.losses.DiceCELoss(softmax=True)
initial_lr = args.initial_lr
optimizer = torch.optim.AdamW(model.parameters(), initial_lr)
# start a typical PyTorch training
max_epochs = args.max_epochs
epoch_tolerance = args.epoch_tolerance
val_interval = args.val_interval
best_metric = -1
best_metric_epoch = -1
epoch_loss_values = list()
metric_values = list()
writer = SummaryWriter(model_path)
    for epoch in range(1, max_epochs + 1):  # epochs are 1-indexed here
model.train()
epoch_loss = 0
for step, batch_data in enumerate(train_loader, 1):
inputs, labels = batch_data["img"].to(device), batch_data["label"].to(device)
optimizer.zero_grad()
outputs = model(inputs)
labels_onehot = monai.networks.one_hot(labels, args.num_class) # (b,cls,256,256)
loss = loss_function(outputs, labels_onehot)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_len = len(train_ds) // train_loader.batch_size
# print(f"{step}/{epoch_len}, train_loss: {loss.item():.4f}")
writer.add_scalar("train_loss", loss.item(), epoch_len * epoch + step)
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
print(f"epoch {epoch} average loss: {epoch_loss:.4f}")
checkpoint = {'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': epoch_loss_values,
}
        if epoch > 20 and epoch % val_interval == 0:
model.eval()
with torch.no_grad():
val_images = None
val_labels = None
val_outputs = None
for val_data in val_loader:
val_images, val_labels = val_data["img"].to(device), val_data["label"].to(device)
val_labels_onehot = monai.networks.one_hot(val_labels, args.num_class)
roi_size = (256, 256)
sw_batch_size = 4
val_outputs = sliding_window_inference(val_images, roi_size, sw_batch_size, model)
val_outputs = [post_pred(i) for i in decollate_batch(val_outputs)]
val_labels_onehot = [post_gt(i) for i in decollate_batch(val_labels_onehot)]
# compute metric for current iteration
print(os.path.basename(val_data['img_meta_dict']['filename_or_obj'][0]),
dice_metric(y_pred=val_outputs, y=val_labels_onehot))
# aggregate the final mean dice result
metric = dice_metric.aggregate().item()
# reset the status for next validation round
dice_metric.reset()
metric_values.append(metric)
if metric > best_metric:
best_metric = metric
                    best_metric_epoch = epoch
torch.save(checkpoint, join(model_path, "best_Dice_model.pth"))
print("saved new best metric model")
print(
"current epoch: {} current mean dice: {:.4f} best mean dice: {:.4f} at epoch {}".format(
                        epoch, metric, best_metric, best_metric_epoch
)
)
                writer.add_scalar("val_mean_dice", metric, epoch)
# plot the last model output as GIF image in TensorBoard with the corresponding image and label
plot_2d_or_3d_image(val_images, epoch, writer, index=0, tag="image")
plot_2d_or_3d_image(val_labels, epoch, writer, index=0, tag="label")
plot_2d_or_3d_image(val_outputs, epoch, writer, index=0, tag="output")
if (epoch - best_metric_epoch) > epoch_tolerance:
print(f"validation metric does not improve for {epoch_tolerance} epochs! current {epoch=}, {best_metric_epoch=}")
break
print(f"train completed, best_metric: {best_metric:.4f} at epoch: {best_metric_epoch}")
writer.close()
torch.save(checkpoint, join(model_path, 'final_model.pth'))
np.savez_compressed(join(model_path, 'train_log.npz'), val_dice=metric_values, epoch_loss=epoch_loss_values)
if __name__ == "__main__":
    main()
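The script above consumes pre-computed three-class labels (`Train_Pre_3class`: background, cell interior, cell boundary). As a rough illustration of how such labels are typically derived from an instance segmentation map — an assumption about the preprocessing step, not code taken from this repository:

```python
import numpy as np

def instance_to_three_class(inst):
    """Map an instance-label image (0 = background, 1..N = cells) to
    0 = background, 1 = interior, 2 = boundary (4-connected)."""
    lab = np.zeros(inst.shape, dtype=np.uint8)
    lab[inst > 0] = 1                                   # start every cell pixel as interior
    padded = np.pad(inst, 1, mode='edge')
    for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        # 4-neighbour of each pixel, via a shifted view of the padded image
        neigh = padded[1 + dy:padded.shape[0] - 1 + dy,
                       1 + dx:padded.shape[1] - 1 + dx]
        boundary = (inst > 0) & (neigh != inst)         # touches a different label
        lab[boundary] = 2
    return lab
```

A pixel is marked as boundary if any of its four neighbours carries a different instance label, which also separates touching cells — exactly the case the three-class formulation is meant to handle.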
/HIIdentify-0.0.2.tar.gz/HIIdentify-0.0.2/README.md

<img src="https://github.com/BethanEaseman/HIIdentify/blob/master/Images/HIIdentify-logo.png" width="200" height="165">
# HIIdentify
![GitHub last commit](https://img.shields.io/github/last-commit/BethanEaseman/HIIdentify?style=plastic)
![version](https://img.shields.io/pypi/v/HIIdentify?style=plastic)
![license](https://img.shields.io/badge/license-%20%20GNU%20GPLv3%20-green?style=plastic)
Welcome to HIIdentify! This code identifies HII regions within a galaxy, using a map of the H$\alpha$ emission line flux.
*Please note, HIIdentify is under active development - any contributions and / or feedback would be very welcome*.
HIIdentify works by identifying the brightest pixels within the image, then growing each region to include the surrounding pixels with fluxes greater than the specified background flux, up to a maximum size. Where regions merge, the distances from the merging pixel to the peaks of the two regions are considered, and the pixel is assigned to the region with the closer peak.
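The growing and merging logic described above can be sketched roughly as follows — a minimal, illustrative re-implementation of the idea, not HIIdentify's actual code (the thresholds, function name, and 4-connected growth are assumptions):

```python
import numpy as np

def grow_regions(flux, peak_min, background, max_size=500):
    """Toy version of the HIIdentify idea: seed regions at bright peaks,
    then grow them into neighbouring pixels above the background flux."""
    seg = np.zeros(flux.shape, dtype=int)            # 0 = unassigned
    peaks = {}                                       # region id -> peak coordinate
    next_id = 0
    for flat in np.argsort(flux, axis=None)[::-1]:   # brightest pixels first
        peak = np.unravel_index(flat, flux.shape)
        if flux[peak] < peak_min or seg[peak] != 0:
            continue
        next_id += 1
        seg[peak] = next_id
        peaks[next_id] = peak
        frontier = [peak]
        while frontier and (seg == next_id).sum() < max_size:
            y, x = frontier.pop()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if not (0 <= ny < flux.shape[0] and 0 <= nx < flux.shape[1]):
                    continue
                if flux[ny, nx] <= background:       # below background: stop growing
                    continue
                if seg[ny, nx] == 0:
                    seg[ny, nx] = next_id
                    frontier.append((ny, nx))
                elif seg[ny, nx] != next_id:
                    # where two regions meet, give the pixel to the closer peak
                    oy, ox = peaks[seg[ny, nx]]
                    d_new = (ny - peak[0]) ** 2 + (nx - peak[1]) ** 2
                    d_old = (ny - oy) ** 2 + (nx - ox) ** 2
                    if d_new < d_old:
                        seg[ny, nx] = next_id
    return seg
```

Because growth is purely flux-driven, regions are not restricted to any particular shape, matching the behaviour seen in the example maps below.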
In the below example map (*left*), the flux of the H$\alpha$ emission line can be seen, with the highest flux regions shown in yellow and the lowest flux regions in purple. The regions identified by HIIdentify can be seen as the red outlines. Here it can be seen that the regions are not restricted to being a particular shape, and that all regions with a peak flux above a given limit have been identified.
In the right-hand image, the segmentation map returned by HIIdentify can be seen. It is a 2D map in which all pixels corresponding to a particular HII region are set to the ID number of that region. The segmentation map can therefore be used to mask maps of other parameters, such as line fluxes or metallicities, and extract the values pertaining to a selected HII region.
<img align="center" src="https://github.com/BethanEaseman/HIIdentify/blob/master/Images/NGC1483_ha_regionoutline_segmentationmap.png" height="400">
## Installing HIIdentify
To install `HIIdentify` using [pip](https://pip.pypa.io/en/stable/), run:
```
pip install HIIdentify
```
## Using HIIdentify
## How to cite HIIdentify
If you use HIIdentify as part of your work, please cite Easeman et al. (2022) *in prep*.
/LocalNote-1.0.14.zip/LocalNote-1.0.14/README.rst

LocalNote
=========
|Gitter| |Python27| `Chinese Version <https://github.com/littlecodersh/LocalNote/blob/master/README.md>`__
LocalNote enables you to use evernote in the local-file way.
Popular markdown format is supported and can be perfectly performed in evernote (the download format will remain as .md instead of .html).
Majority of notes in evernote can also be translated into markdown format easily.
LocalNote is also available on three platforms, which ensure that linux users can also have a good experience with evernote.
You are welcomed to join this project on `Github <https://github.com/littlecodersh/LocalNote>`__.
**Screenshot**
|GifDemo|
Video is `here <http://v.youku.com/v_show/id_XMTU3Nzc5NzU1Ng==>`__.
**Installation**
.. code:: bash
pip install localnote
**Usage**
Commonly used commands
.. code:: bash
# initialize the root folder, please use this in an empty folder
localnote init
# download your notes from your evernote to your computer
localnote pull
# show the difference between your local notes and online notes
localnote status
# upload your notes from your computer to your evernote
localnote push
# translate html documents into markdown formats
localnote convert file_need_convert.html
Patterns
You may sync your whole Evernote account with LocalNote, or choose only the notebooks you like.
.. code:: bash
# Set notebooks that sync with LocalNote
localnote notebook
Storage format
- A folder in the root folder means a notebook
- A document, folder as well, in notebook folder means a note
- A note can contain main document only or have attachments in any format.
- Main document of a note must be a `.md` or `.html` file, its file name will be used as note name.
Example file tree
::
Root
My default notebook
My first notebook.html
My second notebook.html
Attachment notebook
My third notebook
My third notebook.md
Packed documents.zip
Note of packing.txt
Empty notebook
**FAQ**
Q: Will the first pull take a long time?
A: It depends on how big your files are; the downloading speed is about 200k/s.
Q: How to preview markdown files locally?
A: You need a web browser plugin. Take Chrome for example: it's `Markdown Preview Plus <https://chrome.google.com/webstore/detail/markdown-preview-plus/febilkbfcbhebfnokafefeacimjdckgl>`__.
**Comments**
If you have any question or suggestion, you can discuss with me in this `Issue <https://github.com/littlecodersh/LocalNote/issues/1>`__ .
Or you may contact me on gitter: |Gitter|
.. |Python27| image:: https://img.shields.io/badge/python-2.7-ff69b4.svg
.. |Gitter| image:: https://badges.gitter.im/littlecodersh/LocalNote.svg
:target: https://github.com/littlecodersh/ItChat/tree/robot
.. |GifDemo| image:: http://7xrip4.com1.z0.glb.clouddn.com/LocalNoteDemo.gif
/ClueDojo-1.4.3-1.tar.gz/ClueDojo-1.4.3-1/src/cluedojo/static/dojox/widget/README

-------------------------------------------------------------------------------
dojox.widget Collection
-------------------------------------------------------------------------------
Version 1.0
Release date: 10/31/2007
-------------------------------------------------------------------------------
Project state:
[Calendar] experimental
[CalendarFx] experimental
[ColorPicker] beta
[Dialog] experimental
[FeedPortlet] experimental
[FilePicker] experimental
[FisheyeList] experimental
[FisheyeLite] beta
[Iterator] experimental
[Loader] experimental
[Pager] experimental
[Portlet] experimental
[PlaceholderMenuItem] experimental
[Roller] experimental
[RollingList] experimental
[SortList] experimental
[Toaster] experimental
[Wizard] experimental
[AnalogGauge] experimental
[BarGauge] experimental
[Standby] experimental
-------------------------------------------------------------------------------
Credits:
[Calendar] Shane O'Sullivan
[CalendarFx] Shane O'Sullivan
[ColorPicker] Peter Higgins (dante)
[Dialog] Peter Higgins (dante)
[FeedPortlet] Shane O'Sullivan
[FilePicker] Nathan Toone (toonetown)
[FisheyeList] Karl Tiedt (kteidt)
[FisheyeLite] Peter Higgins (dante)
[Iterator] Alex Russell (slightlyoff)
[Loader] Peter Higgins (dante)
[Pager] Nikolai Onken (nonken), Peter Higgins (dante);
[PlaceholderMenuItem] Nathan Toone (toonetown)
[Portlet] Shane O'Sullivan
[Roller] Peter Higgins (dante)
[RollingList] Nathan Toone (toonetown)
[SortList] Peter Higgins (dante)
[Toaster] Adam Peller (peller)
[Wizard] Peter Higgins (dante)
[AnalogGauge] Benjamin Schell (bmschell) CCLA
[BarGauge] Benjamin Schell (bmschell) CCLA
[Standby] Jared Jurkiewicz (jaredj) CCLA
[UpgradeBar] Mike Wilcox (mwilcox), Revin Guillen
-------------------------------------------------------------------------------
Project description
This is a collection of standalone widgets for use in
your website. Each individual widget is independent
of the others.
-------------------------------------------------------------------------------
Dependencies:
Each widget has its own requirements and dependencies.
Most inherit from dijit base-classes such as dijit._Widget,
dijit._Templated, etc ... So we will assume the availability
of the dojo (core) and dijit packages.
Each individual component stores resources in a folder that shares
a name with the Widget. For instance:
the Dialog lives in
dojox/widget/Dialog.js ...
and the folder:
dojox/widget/Dialog/ contains a 'Dialog.css', the required
styles for that particular widget. All required templates and
images reside in the folder.
This differs slightly from the rest of DojoX in that each other
project uses a shared resources/ folder in the project folder,
though uses the same naming convention for stylesheets and templates.
eg:
dojox/layout/resources/ExpandoPane.css
dojox.layout.ExpandoPane
-------------------------------------------------------------------------------
Documentation
Please refer to the API-tool, or in-line documentation. All of these
widgets are of varying use, quality, and documentation completion.
-------------------------------------------------------------------------------
Installation instructions
These are standalone Widgets, so putting the [widget].js file
in your dojox/widget folder, and copying any files in the
/dojox/widget/[widget]/ folder as supplements/templates/etc
should be all you need to do.
eg: FisheyeList:
/dojox/widget/FisheyeList.js
/dojox/widget/FisheyeList/FisheyeList.css
should be all you need to use the Fisheye widget.
you can safely import the whole widget project into your
dojox/ root directory from the following SVN url:
http://svn.dojotoolkit.org/src/dojox/trunk/widget
-------------------------------------------------------------------------------
Other Notes (Brief widget list):
* ColorPicker - An HSV ColorPicker intended to be a drop down
* Calendar - An extension on the dijit._Calendar providing a different UI
* CalendarFx - additional mixable FX for transitions in dojox.widget.Calendar
* Dialog - An extended version of dijit.Dialog with man options and transition.
* FilePicker - a widget for browsing server-side file systems (can use
dojox.data.FileStore as backend store)
* FisheyeList - the classic FishEye Picker (abandoned)
* FisheyeLite - A partial replacement for the FisheyeList - serious performance
gains, and entirely more extensible in that it simply animates defined
properties, relying on the natural styling as a foundation.
* Iterator - Basic array and data store iterator class
* Loader - an experimental Class that listens to XHR
connections in the background, and displays
a loading indicator. Loader will be removed in 1.3, and is (abandoned).
* PlaceholderMenuItem - a menu item that can be used to inject other menu
items at a given location. Extends dijit.Menu directly.
* Roller - A component to show many lines of text in a single area, rotating
through the options available. Also provides RollerSlide, an extension
to the stock fading roller to add a slide animation to the transition.
* RollingList - A component of the FilePicker widget
* SortList - a degradable UL with a fixed header, scrolling,
and sorting. Can be the direct descendant of a
LayoutContainer and will size to fit.
* Toaster - a messaging system to display unobtrusive
alerts on screen.
* Wizard - a StackContainer with built-in navigation to
ease in the creation of 'step-based' content.
Requires dojo >= 1.1
* AnalogGauge - an analog style customizable gauge for displaying values in an
animated fashion and with multiple indicators. Supports easings for
indicator animations, transparent overlays, etc. Very flexible.
Requires dojo >= 1.3
* BarGauge - a bar style gauge for displaying values in an animated fashion
and with multiple indicators. Supports easings for indicator animations,
etc. Very flexible.
Requires dojo >= 1.3
* Standby - a 'blocker' style widget to overlay a translucent div + image over a DOM node/widget to indicate busy.
Overlay color, image, and alt text can all be customized.
Requires dojo >= 1.3
* UpgradeBar - Displays the "yellow bar" at the top of a page to indicate the user
needs to upgrade their browser or a plugin
Requires dojo >= 1.3
/Avatar_Utils-1.8.11-py3-none-any.whl/avatar_utils/objects/abstracts/serializable.py

import re
from copy import deepcopy
from dataclasses import dataclass as python_dataclass, Field
from logging import getLogger
from typing import Any, Dict, Optional
# avatar_utils.objects must be imported so that eval(repr_type) below can resolve classes
import avatar_utils.objects
logger = getLogger(__name__)
@python_dataclass
class Serializable:
def __post_init__(self):
self.repr_type = self.__fullname()
def to_dict(self, remove_defaults: bool = True) -> dict:
result = {}
dataclass_fields: Dict = self._get_dataclass_fields()
field: Field
for key, field in dataclass_fields.items():
value = self.__getattribute__(key)
if not remove_defaults or remove_defaults and field.default != value:
result[key] = Serializable.any_to_dict(value, remove_defaults=remove_defaults)
return result
@classmethod
def _get_dataclass_fields(cls) -> Optional[Dict]:
keys = cls.__dict__.keys()
for key in keys:
if key == '__dataclass_fields__':
dataclass_fields = getattr(cls, key)
return dataclass_fields
@classmethod
def class_attributes(cls) -> list:
dataclass_fields: Optional[Dict] = cls._get_dataclass_fields()
attributes = [field for field in dataclass_fields.keys() if dataclass_fields]
return attributes
@staticmethod
def any_to_dict(data: Any, remove_defaults: bool = True) -> Any:
data_copy = deepcopy(data)
if isinstance(data_copy, Serializable):
return data_copy.to_dict(remove_defaults=remove_defaults)
elif isinstance(data_copy, list):
for i in range(len(data_copy)):
data_copy[i] = Serializable.any_to_dict(data=data_copy[i], remove_defaults=remove_defaults)
elif isinstance(data_copy, dict):
for key, value in data_copy.items():
data_copy[key] = Serializable.any_to_dict(value, remove_defaults=remove_defaults)
return data_copy
    # all nested objects are deserialized as Serializable, i.e. they must carry a repr_type field
@classmethod
def from_dict(cls, data: Any, objects_existing_check: bool = False) -> Any:
def from_dict_with_check(cls, data: Any) -> (Any, bool):
if isinstance(data, dict):
# serializable class
if cls.__name__ == 'Serializable':
repr_type = data.pop('repr_type', None)
if repr_type:
# check precisely
if not injection_detected(cls=cls, repr_type=repr_type):
# evaluate actual class
actual_cls = eval(repr_type)
return extract_class(actual_cls, data)
else:
logger.error('Injection detected. Skip evaluation')
return {}, False
else:
logger.debug('Dict is received')
result = {}
object_has_met = False
for k, v in data.items():
result[k], inner_object_has_met = from_dict_with_check(cls=Serializable, data=v)
object_has_met = object_has_met | inner_object_has_met
return result, object_has_met
else:
# concrete class received
return extract_class(cls, data)
elif isinstance(data, list):
object_has_met = False
i: int
for i in range(data.__len__()):
data[i], inner_object_has_met = from_dict_with_check(cls, data[i])
object_has_met = object_has_met | inner_object_has_met
return data, object_has_met
elif isinstance(data, Serializable):
dataclass_fields: Dict = data._get_dataclass_fields()
field: Field
for key, field in dataclass_fields.items():
value = data.__getattribute__(key)
data.__setattr__(key, Serializable.from_dict(value))
return data, True
else:
return data, False
def injection_detected(cls, repr_type: str):
module_name = 'avatar_utils.objects'
import sys, inspect
for name, obj in inspect.getmembers(sys.modules[module_name]):
if inspect.isclass(obj):
qualify_path = str(obj)[8:-2]
# concrete classes in avatar_utils.objects module
if qualify_path.startswith(module_name) and not qualify_path.lower().__contains__('abstract'):
# class exists
if qualify_path.endswith(repr_type):
# acceptable only
if re.match('^[A-Za-z_\\.]*$', qualify_path):
return False
return True
def extract_class(cls, data: Any):
attributes = cls.class_attributes()
result = {}
for k, v in data.items():
if k in attributes:
result[k], skip = from_dict_with_check(cls=Serializable, data=v)
else:
logger.warning('Skip unexpected keyword argument "%s"', k)
extracted_cls = cls(**result)
return extracted_cls, True
result, object_has_met = from_dict_with_check(cls, deepcopy(data))
if objects_existing_check:
if not object_has_met:
raise TypeError('No object has met')
return result
def __fullname(self):
# o.__module__ + "." + o.__class__.__qualname__ is an example in
# this context of H.L. Mencken's "neat, plausible, and wrong."
# Python makes no guarantees as to whether the __module__ special
# attribute is defined, so we take a more circumspect approach.
# Alas, the module name is explicitly excluded from __qualname__
# in Python 3.
module = self.__class__.__module__
if module is None or module == str.__class__.__module__:
return self.__class__.__name__ # Avoid reporting __builtin__
else:
            return module + '.' + self.__class__.__name__
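The default-skipping behaviour of `to_dict` above can be illustrated with a standalone round trip. This sketch uses a simplified stand-in for `Serializable` (so it runs without `avatar_utils` installed); the stand-in sets `repr_type` directly in `to_dict` rather than in `__post_init__`:

```python
from dataclasses import dataclass, fields

@dataclass
class MiniSerializable:
    """Stripped-down stand-in for avatar_utils' Serializable:
    to_dict() drops fields that still hold their default value."""

    def to_dict(self, remove_defaults=True):
        out = {'repr_type': type(self).__name__}
        for f in fields(self):
            value = getattr(self, f.name)
            if not remove_defaults or value != f.default:
                out[f.name] = value
        return out

@dataclass
class Point(MiniSerializable):
    x: int = 0
    y: int = 0

p = Point(x=3)
print(p.to_dict())   # {'repr_type': 'Point', 'x': 3} — default y is omitted
```

Skipping defaults keeps the serialized payload small, while `repr_type` lets the deserializer (`from_dict` above) recover the concrete class by name.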
/D-Analyst-1.0.6.tar.gz/D-Analyst-1.0.6/main/analyst/processors/navigation_processor.py

import inspect
import time
import numpy as np
from .processor import EventProcessor
from analyst import Manager, TextVisual, get_color
__all__ = ['NavigationEventProcessor']
MAX_VIEWBOX = (-1., -1., 1., 1.)
class NavigationEventProcessor(EventProcessor):
"""Handle navigation-related events."""
def initialize(self, constrain_navigation=False,
normalization_viewbox=None, momentum=False):
self.navigation_rectangle = None
self.constrain_navigation = constrain_navigation
self.normalization_viewbox = normalization_viewbox
self.reset()
self.set_navigation_constraints()
self.activate_navigation_constrain()
self.register('Pan', self.process_pan_event)
self.register('Rotation', self.process_rotation_event)
self.register('Zoom', self.process_zoom_event)
self.register('ZoomBox', self.process_zoombox_event)
self.register('Reset', self.process_reset_event)
self.register('ResetZoom', self.process_resetzoom_event)
self.register('SetPosition', self.process_setposition_event)
self.register('SetViewbox', self.process_setviewbox_event)
if momentum:
self.register('Animate', self.process_animate_event)
self.pan_list = []
self.pan_list_maxsize = 10
self.pan_vec = np.zeros(2)
self.is_panning = False
self.momentum = False
self.register('Grid', self.process_grid_event)
self.grid_visible = getattr(self.parent, 'show_grid', False)
self.activate_grid()
def activate_grid(self):
self.set_data(visual='grid_lines', visible=self.grid_visible)
self.set_data(visual='grid_text', visible=self.grid_visible)
processor = self.get_processor('grid')
if processor:
processor.activate(self.grid_visible)
if self.grid_visible:
processor.update_axes(None)
def process_grid_event(self, parameter):
self.grid_visible = not(self.grid_visible)
self.activate_grid()
def transform_view(self):
"""Change uniform variables to implement interactive navigation."""
translation = self.get_translation()
scale = self.get_scaling()
for visual in self.paint_manager.get_visuals():
if not visual.get('is_static', False):
self.set_data(visual=visual['name'],
scale=scale, translation=translation)
def process_none(self):
"""Process the None event, i.e. where there is no event. Useful
to trigger an action at the end of a long-lasting event."""
if (self.navigation_rectangle is not None):
self.set_relative_viewbox(*self.navigation_rectangle)
self.paint_manager.hide_navigation_rectangle()
self.navigation_rectangle = None
if self.is_panning:
self.is_panning = False
if len(self.pan_list) >= self.pan_list_maxsize:
self.momentum = True
self.parent.block_refresh = False
self.transform_view()
def add_pan(self, parameter):
self.pan_list.append(parameter)
if len(self.pan_list) > self.pan_list_maxsize:
del self.pan_list[0]
self.pan_vec = np.array(self.pan_list).mean(axis=0)
def process_pan_event(self, parameter):
self.is_panning = True
self.momentum = False
self.parent.block_refresh = False
self.add_pan(parameter)
self.pan(parameter)
self.set_cursor('ClosedHandCursor')
self.transform_view()
def process_animate_event(self, parameter):
if self.is_panning:
self.add_pan((0., 0.))
if self.momentum:
self.pan(self.pan_vec)
self.pan_vec *= .975
if (np.abs(self.pan_vec) < .0001).all():
self.momentum = False
self.parent.block_refresh = True
self.pan_list = []
self.pan_vec = np.zeros(2)
self.transform_view()
def process_rotation_event(self, parameter):
self.rotate(parameter)
self.set_cursor('ClosedHandCursor')
self.transform_view()
def process_zoom_event(self, parameter):
self.zoom(parameter)
self.parent.block_refresh = False
self.momentum = False
self.set_cursor('MagnifyingGlassCursor')
self.transform_view()
def process_zoombox_event(self, parameter):
self.zoombox(parameter)
self.parent.block_refresh = False
self.set_cursor('MagnifyingGlassCursor')
self.transform_view()
def process_reset_event(self, parameter):
self.reset()
self.parent.block_refresh = False
self.set_cursor(None)
self.transform_view()
def process_resetzoom_event(self, parameter):
self.reset_zoom()
self.parent.block_refresh = False
self.set_cursor(None)
self.transform_view()
def process_setposition_event(self, parameter):
self.set_position(*parameter)
self.parent.block_refresh = False
self.transform_view()
def process_setviewbox_event(self, parameter):
self.set_viewbox(*parameter)
self.parent.block_refresh = False
self.transform_view()
def set_navigation_constraints(self, constraints=None, maxzoom=1e6):
"""Set the navigation constraints.
Arguments:
* constraints=None: the coordinates of the bounding box as
(xmin, ymin, xmax, ymax), by default (+-1).
"""
if not constraints:
constraints = MAX_VIEWBOX
self.xmin, self.ymin, self.xmax, self.ymax = constraints
self.sxmin = 1./min(self.xmax, -self.xmin)
self.symin = 1./min(self.ymax, -self.ymin)
self.sxmax = self.symax = maxzoom
def reset(self):
"""Reset the navigation."""
self.tx, self.ty, self.tz = 0., 0., 0.
self.sx, self.sy = 1., 1.
self.sxl, self.syl = 1., 1.
self.rx, self.ry = 0., 0.
self.navigation_rectangle = None
def pan(self, parameter):
"""Pan along the x,y coordinates.
Arguments:
* parameter: (dx, dy)
"""
self.tx += parameter[0] / self.sx
self.ty += parameter[1] / self.sy
def rotate(self, parameter):
self.rx += parameter[0]
self.ry += parameter[1]
def zoom(self, parameter):
"""Zoom along the x,y coordinates.
Arguments:
* parameter: (dx, px, dy, py)
"""
dx, px, dy, py = parameter
if self.parent.constrain_ratio:
if (dx >= 0) and (dy >= 0):
dx, dy = (max(dx, dy),) * 2
elif (dx <= 0) and (dy <= 0):
dx, dy = (min(dx, dy),) * 2
else:
dx = dy = 0
self.sx *= np.exp(dx)
self.sy *= np.exp(dy)
if self.constrain_navigation:
self.sx = np.clip(self.sx, self.sxmin, self.sxmax)
self.sy = np.clip(self.sy, self.symin, self.symax)
self.tx += -px * (1./self.sxl - 1./self.sx)
self.ty += -py * (1./self.syl - 1./self.sy)
self.sxl = self.sx
self.syl = self.sy
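The translation correction in `zoom()` (`tx += -px * (1./sxl - 1./sx)`) is what keeps the data point under the zoom center (px, py) stationary. A standalone sketch of that invariant for the x axis, using the same window-to-data mapping as `get_data_coordinates` (the starting values are assumptions):

```python
import math

# state mirroring the navigation attributes (assumed starting values)
tx, sx, sxl = 0.2, 1.0, 1.0

def data_x(x):
    # same x mapping as get_data_coordinates
    return x / sx - tx

px = 0.5                 # window x-coordinate of the zoom center
anchor = data_x(px)      # data point under the cursor before zooming

dx = 0.3                 # zoom step
new_sx = sx * math.exp(dx)
tx += -px * (1.0 / sxl - 1.0 / new_sx)   # same correction as zoom()
sx = sxl = new_sx

# the same data point is still under the cursor after the zoom
assert abs(data_x(px) - anchor) < 1e-12
```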
def zoombox(self, parameter):
"""Indicate to draw a zoom box.
Arguments:
* parameter: the box coordinates (xmin, ymin, xmax, ymax)
"""
self.navigation_rectangle = parameter
self.paint_manager.show_navigation_rectangle(parameter)
def reset_zoom(self):
"""Reset the zoom."""
self.sx, self.sy = 1, 1
self.sxl, self.syl = 1, 1
self.navigation_rectangle = None
def get_viewbox(self, scale=1.):
"""Return the coordinates of the current view box.
Returns:
* (xmin, ymin, xmax, ymax): the current view box in data coordinate
system.
"""
x0, y0 = self.get_data_coordinates(-scale, -scale)
x1, y1 = self.get_data_coordinates(scale, scale)
return (x0, y0, x1, y1)
def get_data_coordinates(self, x, y):
"""Convert window relative coordinates into data coordinates.
Arguments:
* x, y: coordinates in [-1, 1] of a point in the window.
Returns:
* x', y': coordinates of this point in the data coordinate system.
"""
return x/self.sx - self.tx, y/self.sy - self.ty
def get_window_coordinates(self, x, y):
"""Inverse of get_data_coordinates.
"""
return (x + self.tx) * self.sx, (y + self.ty) * self.sy
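A quick standalone check (with assumed scale and translation values) that the two mappings above really are inverses of each other:

```python
sx, sy = 2.0, 4.0      # assumed scaling
tx, ty = 0.25, -0.5    # assumed translation

def to_data(x, y):     # mirrors get_data_coordinates
    return x / sx - tx, y / sy - ty

def to_window(x, y):   # mirrors get_window_coordinates
    return (x + tx) * sx, (y + ty) * sy

xd, yd = to_data(0.3, -0.7)
xw, yw = to_window(xd, yd)
assert abs(xw - 0.3) < 1e-12 and abs(yw + 0.7) < 1e-12
```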
def constrain_viewbox(self, x0, y0, x1, y1):
"""Constrain the viewbox ratio."""
if (x1-x0) > (y1-y0):
d = ((x1-x0)-(y1-y0))/2
y0 -= d
y1 += d
else:
d = ((y1-y0)-(x1-x0))/2
x0 -= d
x1 += d
return x0, y0, x1, y1
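`constrain_viewbox` grows the shorter side of the box symmetrically around its center until the box is square. The same logic in isolation:

```python
def constrain(x0, y0, x1, y1):   # mirrors constrain_viewbox
    if (x1 - x0) > (y1 - y0):
        d = ((x1 - x0) - (y1 - y0)) / 2
        y0 -= d
        y1 += d
    else:
        d = ((y1 - y0) - (x1 - x0)) / 2
        x0 -= d
        x1 += d
    return x0, y0, x1, y1

x0, y0, x1, y1 = constrain(0.0, 0.0, 4.0, 1.0)
assert (x1 - x0) == (y1 - y0) == 4.0     # box is now square
assert (y0 + y1) / 2 == 0.5              # the y-center is preserved
```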
def set_viewbox(self, x0, y0, x1, y1):
"""Set the view box in the data coordinate system.
Arguments:
* x0, y0, x1, y1: viewbox coordinates in the data coordinate system.
"""
if self.parent.constrain_ratio:
x0, y0, x1, y1 = self.constrain_viewbox(x0, y0, x1, y1)
if (x1-x0) and (y1-y0):
self.tx = -(x1+x0)/2
self.ty = -(y1+y0)/2
self.sx = 2./abs(x1-x0)
self.sy = 2./abs(y1-y0)
self.sxl, self.syl = self.sx, self.sy
def set_relative_viewbox(self, x0, y0, x1, y1):
"""Set the view box in the window coordinate system.
Arguments:
* x0, y0, x1, y1: viewbox coordinates in the window coordinate system.
These coordinates are all in [-1, 1].
"""
if (np.abs(x1 - x0) < .07) & (np.abs(y1 - y0) < .07):
return
if self.parent.constrain_ratio:
x0, y0, x1, y1 = self.constrain_viewbox(x0, y0, x1, y1)
if (x1-x0) and (y1-y0):
self.tx += -(x1+x0)/(2*self.sx)
self.ty += -(y1+y0)/(2*self.sy)
self.sx *= 2./abs(x1-x0)
self.sy *= 2./abs(y1-y0)
self.sxl, self.syl = self.sx, self.sy
def set_position(self, x, y):
"""Set the current position.
Arguments:
* x, y: coordinates in the data coordinate system.
"""
self.tx = -x
self.ty = -y
def activate_navigation_constrain(self):
"""Constrain the navigation to a bounding box."""
if self.constrain_navigation:
self.sx = np.clip(self.sx, self.sxmin, self.sxmax)
self.sy = np.clip(self.sy, self.symin, self.symax)
self.tx = np.clip(self.tx, 1./self.sx - self.xmax,
-1./self.sx - self.xmin)
self.ty = np.clip(self.ty, 1./self.sy + self.ymin,
-1./self.sy + self.ymax)
else:
self.sx = min(self.sx, self.sxmax)
self.sy = min(self.sy, self.symax)
def get_translation(self):
"""Return the translation vector.
Returns:
* tx, ty: translation coordinates.
"""
self.activate_navigation_constrain()
return self.tx, self.ty
def get_rotation(self):
return self.rx, self.ry
def get_scaling(self):
"""Return the scaling vector.
Returns:
* sx, sy: scaling coordinates.
"""
if self.constrain_navigation:
self.activate_navigation_constrain()
        return self.sx, self.sy
# /Dhelpers-0.1.5rc1.tar.gz/Dhelpers-0.1.5rc1/Dlib/Dpowers/editing/ressource_classes.py
import inspect
from .. import Adaptor, adaptionmethod
from Dhelpers.adaptor import AdaptionMethod
from types import GeneratorType
import os, functools, warnings
from types import FunctionType
from Dhelpers.file_iteration import FileIterator
from Dhelpers.baseclasses import iter_all_vars
from Dhelpers.named import NamedObj
path = os.path
iter_types = (list, tuple, GeneratorType)
class ClosedResource: pass
class ClosedResourceError(Exception): pass
edit_tag = "__edit"
class NamedValue(NamedObj):
pass
class EditorAdaptor(Adaptor):
property_value_dict = None
named_classes = dict()
@classmethod
def Values_for_Property(cls, property_name):
NamedClass = type(f"NamedValue_for_property_{property_name}",
(NamedValue,), {})
cls.named_classes[property_name] = NamedClass
def decorator(container_class):
NamedClass.update_names(container_class)
return decorator
baseclass = True #for Adaptor mechanism
# def __init_subclass__(cls):
# super().__init_subclass__()
# cls.NamedValue = type(f"{cls.__name__}.NamedValue", (NamedValueBase,),{})
save_in_place = False #if False, automatically rename before saving
def _check(self, obj):
if isinstance(obj, iter_types):
for o in obj:
if not isinstance(o, self.obj_class): raise TypeError
else:
if not isinstance(obj, self.obj_class): raise TypeError
return obj
@adaptionmethod(require=True)
def load(self, file, **load_kwargs):
backend_object = self._check(self.load.target_with_args())
return backend_object
obj_class = load.adaptive_property("obj_class", NotImplementedError)
names = load.adaptive_property("names", default={}, ty=dict)
value_names = load.adaptive_property("value_names", {}, dict)
@load.target_modifier
def _load_tm(self, target):
self.create_effective_dict()
return target
def create_effective_dict(self):
self.property_value_dict = dict()
for prop_name, NamedCls in self.named_classes.items():
stand_dict = NamedCls.StandardizingDict()
try:
value_list = self.value_names[prop_name]
except KeyError:
pass
else:
assert isinstance(value_list, (list,tuple))
stand_dict.update(value_list)
stand_dict.other_dict.clear() #this only contains mappings on
# itself anyway
self.property_value_dict[prop_name] = stand_dict
@adaptionmethod
def load_multi(self, file, **load_kwargs):
backend_objects = self._check(self.load_multi.target_with_args())
return backend_objects
@adaptionmethod
def construct_multi(self, sequence_of_backend_objs):
sequ = tuple(sequence_of_backend_objs)
self._check(sequ)
return self.construct_multi.target(sequ)
@adaptionmethod(require=True)
def save(self, backend_obj, destination=None):
self._check(backend_obj)
if self.save_in_place:
assert destination is None
else:
assert destination is not None
self.save.target(backend_obj, destination)
@adaptionmethod(require=True)
def close(self, backend_obj):
self._check(backend_obj)
self.close.target_with_args()
def _check_method(self, name):
method = None
try:
amethod = getattr(self,name)
except AttributeError:
pass
else:
if isinstance(amethod,AdaptionMethod): method = amethod
if method is None:
target_space = self.load.target_space
try:
method = getattr(target_space, name)
except AttributeError:
return False, False
if not callable(method): return False, False
argnum = len(inspect.signature(method).parameters)
if argnum < 2: raise TypeError
return method, argnum
def forward_property(self, name, backend_obj, value = None):
if self.property_value_dict and value is not None:
try:
stand_dict = self.property_value_dict[name]
except KeyError:
pass
else:
value = stand_dict.apply(value)
method, argnum = self._check_method(name)
if method:
assert argnum == 2
return method(backend_obj, value)
backend_name = self.names.get(name, name)
if value is None:
ret = getattr(backend_obj, backend_name)
try:
NamedCls = self.named_classes[name]
except KeyError:
pass
else:
try:
ret = NamedCls.get_stnd_name(ret)
except KeyError:
pass
return ret
setattr(backend_obj, backend_name, value)
def forward_func_call(self, name, backend_obj, *args, **kwargs):
method, argnum = self._check_method(name)
if method: return method(backend_obj, *args,**kwargs)
backend_name = self.names.get(name, name)
return getattr(backend_obj, backend_name)(*args,**kwargs)
    def _inspect_unknown(self, name):
        backend_name = self.names.get(name, name)
        obj = getattr(self.obj_class, backend_name)
        # this might raise AttributeError!
        if callable(obj):
            attr_type = 1
        elif isinstance(obj, property):
            attr_type = 2
        else:
            raise AttributeError(obj)
        return backend_name, attr_type
    def get_unknown(self, name, backend_obj):
        method, argnum = self._check_method(name)
        if method:
            if argnum == 2: return method(backend_obj, None)
            return functools.partial(method, backend_obj)
        backend_name, attr_type = self._inspect_unknown(name)
        return getattr(backend_obj, backend_name)
        # this might raise AttributeError;
        # otherwise this is a callable object, or the result of a
        # property
    def set_unknown(self, name, backend_obj, value):
        method, argnum = self._check_method(name)
        if method:
            method(backend_obj, value)
            return True
        try:
            backend_name, attr_type = self._inspect_unknown(name)
        except AttributeError:
            return
        if attr_type == 1: return  # ignore forwarding if it's a callable
        setattr(backend_obj, backend_name, value)  # only happens for properties
        return True
def resource_property(name, *name_alias):
def func(self):
return self.adaptor.forward_property(name,self.backend_obj)
func.__name__ = name
func.__qualname__ = name
func._resource = True
func._names = name_alias
func = property(func)
@func.setter
def func(self, val):
return self.adaptor.forward_property(name,self.backend_obj, val)
return func
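`resource_property` is a property factory: it builds a `property` whose getter and setter both forward to `adaptor.forward_property`. A minimal self-contained sketch of the same pattern — `DemoAdaptor`, `Backend` and `Wrapper` are hypothetical stand-ins, not part of this package:

```python
class DemoAdaptor:
    def forward_property(self, name, backend_obj, value=None):
        if value is None:
            return getattr(backend_obj, name)    # getter path
        setattr(backend_obj, name, value)        # setter path

def demo_property(name):
    """Same shape as resource_property: build a property from a name."""
    def fget(self):
        return self.adaptor.forward_property(name, self.backend_obj)
    def fset(self, val):
        self.adaptor.forward_property(name, self.backend_obj, val)
    return property(fget, fset)

class Backend:
    width = 10          # toy backend attribute

class Wrapper:
    adaptor = DemoAdaptor()
    width = demo_property("width")
    def __init__(self, backend_obj):
        self.backend_obj = backend_obj

w = Wrapper(Backend())
assert w.width == 10
w.width = 20
assert w.backend_obj.width == 20
```

As in the real `forward_property`, `None` doubles as the getter sentinel, so a value of `None` cannot be assigned through this path.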
def multi_resource_property(name):
func = lambda self: self.get_from_single(name)
func.__name__ = name
func.__qualname__ = name
func._resource = True
func = property(func)
@func.setter
def func(self, val):
for single in self.sequence: setattr(single,name,val)
return func
def is_resource_property(obj):
return isinstance(obj, property) and hasattr(obj.fget,"_resource")
def resource_func(name_or_func, *names):
"""A decorator to create resource_funcs"""
if isinstance(name_or_func, FunctionType):
inp_func = name_or_func
name = inp_func.__name__
def func(self, *args, **kwargs):
args, kwargs = inp_func(self,*args,**kwargs)
return self.adaptor.forward_func_call(name, self.backend_obj, *args,
**kwargs)
func.__name__ = name
func.__qualname__ = name
func._resource = True
func._names = names
func._inp_func = inp_func
return func
if isinstance(name_or_func, str):
def decorator(inp_func):
return resource_func(inp_func, name_or_func,*names)
return decorator
raise TypeError
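`resource_func` works both as a bare decorator and as a decorator factory taking alias names. A minimal standalone sketch of that dual-use pattern (`tag` is a hypothetical stand-in):

```python
from types import FunctionType

def tag(name_or_func, *names):
    """Usable both as @tag and as @tag("alias", ...) — the same
    dual-use shape as resource_func."""
    if isinstance(name_or_func, FunctionType):
        func = name_or_func
        func._names = names
        return func
    if isinstance(name_or_func, str):
        def decorator(func):
            return tag(func, name_or_func, *names)
        return decorator
    raise TypeError

@tag
def plain(): pass

@tag("alias1", "alias2")
def aliased(): pass

assert plain._names == ()
assert aliased._names == ("alias1", "alias2")
```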
def multi_resource_func(name):
def func(self,*args,**kwargs):
return self.get_from_single(name,True,*args,**kwargs)
func.__name__ = name
func.__qualname__ = name
func._resource = True
return func
def is_resource_func(obj):
return callable(obj) and hasattr(obj,"_resource") and obj._resource is True
class Resource:
file = None
adaptor = None #inherited as AdaptiveClass subclass
allowed_file_extensions = None
attr_redirections = None
@classmethod
def list_resource_properties(cls):
return set(iter_all_vars(cls,is_resource_property))
@classmethod
def list_resource_funcs(cls):
return set(iter_all_vars(cls, is_resource_func))
@classmethod
def _update_redirections(cls):
cls.attr_redirections = {}
for name in cls.list_resource_properties():
cls.attr_redirections[name] = name
names = getattr(cls,name).fget._names
for n in names: cls.attr_redirections[n] = name
for name in cls.list_resource_funcs():
cls.attr_redirections[name] = name
names = getattr(cls,name)._names
for n in names: cls.attr_redirections[n] = name
return cls.attr_redirections
def filepath_split(self):
folder, filename = path.split(self.file)
name, ext = path.splitext(filename)
self.folder = folder
self.filename = name
self.ext = ext
def get_destination(self, input_dest=None, num=None, insert_folder=False):
ext = None
inp_name = None
if input_dest:
if input_dest.startswith("."):
inp_name, ext = "", input_dest
else:
inp_name, ext = path.splitext(input_dest)
elif self.adaptor.save_in_place:
return
if not ext: ext = self.ext
if insert_folder:
folder_name = insert_folder if isinstance(insert_folder,
str) else self.filename + edit_tag
if not inp_name:
if insert_folder:
folder = path.join(self.folder, folder_name)
os.makedirs(folder, exist_ok=True)
append = f"_{num}" if num else ""
else:
folder = self.folder
append = f"_{num}" if num else edit_tag
return path.join(folder, self.filename + append + ext)
if insert_folder:
if not path.split(inp_name)[0]:
# that means inp_name is a file not a folder structure
os.makedirs(folder_name, exist_ok=True)
inp_name = path.join(folder_name, inp_name)
append = f"_{num}" if num else ""
return inp_name + append + ext
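With no explicit destination and no inserted folder, `get_destination` appends either `_<num>` or the `__edit` tag to the original file name. A simplified standalone sketch of that default naming (it deliberately ignores the `insert_folder` and `save_in_place` branches):

```python
import os.path as path

edit_tag = "__edit"

def default_destination(file, num=None):
    """Default naming when no destination is given (simplified sketch)."""
    folder, filename = path.split(file)
    name, ext = path.splitext(filename)
    append = f"_{num}" if num else edit_tag
    return path.join(folder, name + append + ext)

assert default_destination("pics/cat.png") == path.join("pics", "cat__edit.png")
assert default_destination("pics/cat.png", num=3) == path.join("pics", "cat_3.png")
```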
def _save(self, dest=None):
raise NotImplementedError
def save(self, destination=None, num=None):
dest = self.get_destination(destination, num)
return self._save(dest)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.close()
def close(self):
raise NotImplementedError
class SingleResource(Resource):
multi_base = None
_backend_obj = None
_inst_attributes = "file", "sequence", "backend_obj"
@classmethod
def make_multi_base(cls, MultiClass):
"""A decorator"""
cls.multi_base = MultiClass
cls._set_multi(MultiClass)
return MultiClass
def __init_subclass__(cls):
super().__init_subclass__()
cls._update_redirections()
cls._set_multi(cls.multi_base)
@classmethod
def _set_multi(cls, multi_base):
class multi(multi_base):
__qualname__ = f"{cls.__name__}.multi"
__name__ = __qualname__
__module__ = cls.__module__
SingleClass = cls
attr_redirections = cls.attr_redirections
for name in cls.list_resource_properties():
rp = multi_resource_property(name)
setattr(multi,name,rp)
for name in cls.list_resource_funcs():
rf = multi_resource_func(name)
setattr(multi,name,rf)
cls.multi = multi
def __init__(self, file=None, backend_obj=None, **load_kwargs):
if isinstance(file, iter_types) or isinstance(backend_obj, iter_types):
raise ValueError
self.file = file
self.backend_obj = backend_obj
if file:
assert backend_obj is None
self.filepath_split()
self.backend_obj = self.adaptor.load(file, **load_kwargs)
elif backend_obj:
self.adaptor._check(backend_obj)
self.sequence = (self,)
@classmethod
def iterator(cls, *args,**kwargs):
for filepath in FileIterator(*args,**kwargs).generate_paths():
yield cls(filepath)
@property
def backend_obj(self):
if self._backend_obj is ClosedResource: raise ClosedResourceError
return self._backend_obj
@backend_obj.setter
def backend_obj(self, value):
self._backend_obj = value
def _save(self, dest=None):
self.adaptor.save(self.backend_obj, dest)
return dest
def close(self):
"""Close the resource"""
if self.backend_obj: self.adaptor.close(self.backend_obj)
self.backend_obj = ClosedResource
# def __add__(self, other):
# seq = self.sequence
# assert all(type(s) is self.SingleClass for s in seq)
# try:
# if all(type(s) is self.SingleClass for s in other.sequence):
# return self.SingleClass.Sequence(
# images=self.sequence + other.sequence)
# except AttributeError:
# pass
# return NotImplemented
def __getattr__(self, key):
try:
redirected = self.attr_redirections[key]
except KeyError:
try:
obj = self.adaptor.get_unknown(key,self.backend_obj)
except AttributeError:
pass
else:
_print(f"WARNING: Attribute '{key}' used from backend "
f"'{self.adaptor.backend}'. Undefined in " f"Dpowers")
return obj
else:
return self.__getattribute__(redirected)
raise AttributeError(key)
def __setattr__(self, key, value):
if key not in self._inst_attributes:
try:
key = self.attr_redirections[key]
except KeyError:
if self.adaptor.set_unknown(key,self.backend_obj,value):
_print(f"WARNING: Attribute '{key}' used from backend "
f"'{self.adaptor.backend}'. Undefined in " f"Dpowers.")
return
super().__setattr__(key, value)
@SingleResource.make_multi_base
class MultiResource(Resource):
SingleClass = None # set by init subclass form Image class
allowed_file_extensions = None #might differ from SingleClass
_inst_attributes = "file", "sequence", "backend_obj"
def __init__(self, file=None, **load_kwargs):
self.file = file
self.filepath_split()
self.sequence = []
if isinstance(self.file, iter_types): raise ValueError(self.file)
#self.filepath_split()
if file:
backend_objs = self.adaptor.load_multi(file, **load_kwargs)
images = tuple(self.SingleClass(backend_obj=bo) for bo in backend_objs)
self.sequence += images
def _save(self, dest=None):
self.adaptor.save(self.construct_backend_obj(), dest)
return dest
def save(self, destination=None, split=False):
if destination and not destination.endswith(tuple(self.allowed_file_extensions)):
split = True
if not split: return super().save(destination)
l = len(self.sequence)
if l == 0: raise ValueError
first = self.sequence[0]
if l == 1:
if destination is None and self.adaptor.save_in_place:
return first._save()
try:
dest = self.get_destination(destination)
except AttributeError:
try:
dest = first.get_destination(destination)
except AttributeError:
raise ValueError("Could not find destination name.")
return first._save(dest)
destinations = []
for num in range(l):
single = self.sequence[num]
if destination is None:
if self.adaptor.save_in_place: single._save()
try:
dest = self.get_destination(destination, num=num+1,
insert_folder=True)
except AttributeError:
try:
dest = single.get_destination(destination)
except AttributeError:
raise ValueError("Could not find destination name.")
destinations.append(single._save(dest))
return destinations
@property
def adaptor(self):
return self.SingleClass.adaptor
def backend_objs(self):
return tuple(im.backend_obj for im in self.sequence)
def construct_backend_obj(self):
return self.adaptor.construct_multi(self.backend_objs())
def close(self):
for im in self.sequence: im.close()
def get_from_single(self, name, func = False, *args, **kwargs):
ret = []
same = True
for single in self.sequence:
this = getattr(single,name)
if func: this = this(*args, **kwargs)
ret.append(this)
if this is not ret[0]: same=False
if same: return ret[0]
return ret
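`get_from_single` collapses the per-object results into a single value when every entry is identical (compared by identity, not equality). The same logic in isolation:

```python
def collapse(values):
    """Single common value if all entries are identical (by identity,
    exactly as get_from_single checks), else the full list."""
    ret, same = [], True
    for this in values:
        ret.append(this)
        if this is not ret[0]:
            same = False
    return ret[0] if same else ret

assert collapse([7, 7, 7]) == 7          # identical -> one value
assert collapse([1, 2, 1]) == [1, 2, 1]  # mixed -> full list
```

Because the comparison uses `is`, equal but distinct objects still produce a list, and an empty sequence raises `IndexError` — the same behavior as the method above.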
def __getattr__(self, key):
if key not in self._inst_attributes:
try:
ret = self.get_from_single(key)
except AttributeError:
pass
else:
return ret
            raise AttributeError(key)
# /Nuitka-1.8.tar.gz/Nuitka-1.8/nuitka/nodes/VariableDelNodes.py
from abc import abstractmethod
from nuitka.ModuleRegistry import getOwnerFromCodeName
from nuitka.Options import isExperimental
from nuitka.PythonVersions import getUnboundLocalErrorErrorTemplate
from .NodeBases import StatementBase
from .NodeMakingHelpers import makeRaiseExceptionReplacementStatement
from .VariableReleaseNodes import makeStatementReleaseVariable
class StatementDelVariableBase(StatementBase):
"""Deleting a variable.
All del forms that are not to attributes, slices, subscripts
use this.
The target can be any kind of variable, temporary, local, global, etc.
Deleting a variable is something we trace in a new version, this is
hidden behind target variable reference, which has this version once
it can be determined.
Tolerance means that the value might be unset. That can happen with
re-formulation of ours, and Python3 exception variables.
"""
kind = "STATEMENT_DEL_VARIABLE"
__slots__ = (
"variable",
"variable_version",
"variable_trace",
"previous_trace",
)
def __init__(self, variable, version, source_ref):
if variable is not None:
if version is None:
version = variable.allocateTargetNumber()
StatementBase.__init__(self, source_ref=source_ref)
self.variable = variable
self.variable_version = version
self.variable_trace = None
self.previous_trace = None
@staticmethod
def isStatementDelVariable():
return True
def finalize(self):
del self.parent
del self.variable
del self.variable_trace
del self.previous_trace
def getDetails(self):
return {
"variable": self.variable,
"version": self.variable_version,
}
def getDetailsForDisplay(self):
return {
"variable_name": self.getVariableName(),
"is_temp": self.variable.isTempVariable(),
"var_type": self.variable.getVariableType(),
"owner": self.variable.getOwner().getCodeName(),
}
@classmethod
def fromXML(cls, provider, source_ref, **args):
owner = getOwnerFromCodeName(args["owner"])
if args["is_temp"] == "True":
variable = owner.createTempVariable(
args["variable_name"], temp_type=args["var_type"]
)
else:
variable = owner.getProvidedVariable(args["variable_name"])
del args["is_temp"]
del args["var_type"]
del args["owner"]
version = variable.allocateTargetNumber()
variable.version_number = max(variable.version_number, version)
return cls(variable=variable, source_ref=source_ref, **args)
def makeClone(self):
if self.variable is not None:
version = self.variable.allocateTargetNumber()
else:
version = None
return self.__class__(
variable=self.variable,
version=version,
source_ref=self.source_ref,
)
def getVariableName(self):
return self.variable.getName()
def getVariableTrace(self):
return self.variable_trace
def getPreviousVariableTrace(self):
return self.previous_trace
def getVariable(self):
return self.variable
def setVariable(self, variable):
self.variable = variable
self.variable_version = variable.allocateTargetNumber()
@abstractmethod
def _computeDelWithoutValue(self, trace_collection):
"""For overload, produce result if deleted variable is found known unset."""
def computeStatement(self, trace_collection):
variable = self.variable
# Special case, boolean temp variables need no "del".
# TODO: Later, these might not exist, if we forward propagate them not as "del"
# at all
if variable.isTempVariableBool():
return (
None,
"new_statements",
"Removed 'del' statement of boolean '%s' without effect."
% (self.getVariableName(),),
)
self.previous_trace = trace_collection.getVariableCurrentTrace(variable)
# First eliminate us entirely if we can.
if self.previous_trace.mustNotHaveValue():
return self._computeDelWithoutValue(trace_collection)
if not self.is_tolerant:
self.previous_trace.addNameUsage()
        # TODO: Why doesn't this module variable check follow from the other checks done here, e.g. name usages?
# TODO: This currently cannot be done as releases do not create successor traces yet, although they
# probably should.
if isExperimental("del_optimization") and not variable.isModuleVariable():
provider = trace_collection.getOwner()
if variable.hasAccessesOutsideOf(provider) is False:
last_trace = variable.getMatchingDelTrace(self)
if last_trace is not None and not last_trace.getMergeOrNameUsageCount():
if not last_trace.getUsageCount():
result = makeStatementReleaseVariable(
variable=variable, source_ref=self.source_ref
)
return trace_collection.computedStatementResult(
result,
"new_statements",
"Changed del to release for variable '%s' not used afterwards."
% variable.getName(),
)
# If not tolerant, we may exception exit now during the __del__
if not self.is_tolerant and not self.previous_trace.mustHaveValue():
trace_collection.onExceptionRaiseExit(BaseException)
# Record the deletion, needs to start a new version then.
self.variable_trace = trace_collection.onVariableDel(
variable=variable, version=self.variable_version, del_node=self
)
# Any code could be run, note that.
trace_collection.onControlFlowEscape(self)
return self, None, None
def collectVariableAccesses(self, emit_read, emit_write):
emit_write(self.variable)
class StatementDelVariableTolerant(StatementDelVariableBase):
"""Deleting a variable.
All del forms that are not to attributes, slices, subscripts
use this.
The target can be any kind of variable, temporary, local, global, etc.
Deleting a variable is something we trace in a new version, this is
hidden behind target variable reference, which has this version once
it can be determined.
Tolerance means that the value might be unset. That can happen with
re-formulation of ours, and Python3 exception variables.
"""
kind = "STATEMENT_DEL_VARIABLE_TOLERANT"
is_tolerant = True
def getDetailsForDisplay(self):
return {
"variable_name": self.getVariableName(),
"is_temp": self.variable.isTempVariable(),
"var_type": self.variable.getVariableType(),
"owner": self.variable.getOwner().getCodeName(),
}
def _computeDelWithoutValue(self, trace_collection):
return (
None,
"new_statements",
"Removed tolerant 'del' statement of '%s' without effect."
% (self.getVariableName(),),
)
@staticmethod
def mayRaiseException(exception_type):
return False
class StatementDelVariableIntolerant(StatementDelVariableBase):
"""Deleting a variable.
All del forms that are not to attributes, slices, subscripts
use this.
The target can be any kind of variable, temporary, local, global, etc.
Deleting a variable is something we trace in a new version, this is
hidden behind target variable reference, which has this version once
it can be determined.
Tolerance means that the value might be unset. That can happen with
re-formulation of ours, and Python3 exception variables.
"""
kind = "STATEMENT_DEL_VARIABLE_INTOLERANT"
is_tolerant = False
def _computeDelWithoutValue(self, trace_collection):
if self.variable.isLocalVariable():
result = makeRaiseExceptionReplacementStatement(
statement=self,
exception_type="UnboundLocalError",
exception_value=getUnboundLocalErrorErrorTemplate()
% self.variable.getName(),
)
else:
result = makeRaiseExceptionReplacementStatement(
statement=self,
exception_type="NameError",
exception_value="""name '%s' is not defined"""
% self.variable.getName(),
)
return trace_collection.computedStatementResult(
result,
"new_raise",
"Variable del of not initialized variable '%s'" % self.variable.getName(),
)
def mayRaiseException(self, exception_type):
if self.variable_trace is not None:
# Temporary variables deletions won't raise, just because we
# don't create them that way. We can avoid going through SSA in
# these cases.
if self.variable.isTempVariable():
return False
# If SSA knows, that's fine.
if self.previous_trace is not None and self.previous_trace.mustHaveValue():
return False
return True
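The raise replacements built by `_computeDelWithoutValue` mirror what plain CPython does when `del` hits an unset name: `UnboundLocalError` for locals, `NameError` for globals. A standalone check of those semantics:

```python
def del_unbound_local():
    del x    # "x" is a local name (assigned below) but holds no value yet
    x = 1

try:
    del_unbound_local()
    raised = None
except UnboundLocalError as exc:
    raised = exc

try:
    del no_such_global          # never defined anywhere -> NameError
    raised_global = False
except NameError:
    raised_global = True

assert isinstance(raised, UnboundLocalError)
assert raised_global
```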
def makeStatementDelVariable(variable, tolerant, source_ref, version=None):
if tolerant:
return StatementDelVariableTolerant(
variable=variable, version=version, source_ref=source_ref
)
else:
return StatementDelVariableIntolerant(
variable=variable, version=version, source_ref=source_ref
        )
# /LWE4-0.0.0.2.tar.gz/LWE4-0.0.0.2/src/LWE4.py
import secrets
import json
class keygen:
    # generate a uniform random value in [0, size)
    @staticmethod
    def __gen_val(size: int):
        return secrets.randbelow(size)
    @staticmethod
    def keygen(parameters: int, size: int):
        if not 3 < parameters <= 30:
            raise Exception("Parameter value invalid")
        if not 1024 <= size:
            raise Exception("Size value invalid")
private_key_array = [0] * parameters
for temp in range(parameters):
private_key_array[temp] = keygen.__gen_val(size)
key_set_private = {"private_key": private_key_array,'ring_size':size}
equations_to_generate = 2 * parameters
equation_set = []
for temp in range(equations_to_generate):
temp_eq = []
eq_sol = 0
for temp1 in range(parameters):
temp_val = keygen.__gen_val(size)
temp_eq.append(temp_val)
eq_sol += temp_val * private_key_array[temp1]
temp_eq.append((eq_sol + (keygen.__gen_val(int(size * 0.008)))) % size)
equation_set.append(temp_eq)
key_set_public = {}
key_set_public["public_key"] = equation_set
key_set_public["ring_size"] = size
key_set_private = json.dumps(key_set_private)
key_set_public = json.dumps(key_set_public)
return key_set_private,key_set_public
class encrypt:
@staticmethod
def encrypt(data: str, public_key):
ring_size = public_key['ring_size']
public_key = public_key['public_key']
data = bin(int.from_bytes(data.encode(), "big"))[2:]
encrypted_data = []
for temp in data:
encrypted_data.extend(encrypt.encrypt_bit(temp, public_key, ring_size))
payload = {}
payload["data"] = encrypted_data
return json.dumps(payload)
@staticmethod
def encrypt_bit(bit, public_key, ring_size):
equation_set_lenght = len(public_key)
equation_parameter_size = len(public_key[0]) - 1
if bit == '0':
error = secrets.choice(
range(ring_size//50)
)
else:
error = secrets.choice(
range((ring_size // 2) - ring_size//50, ring_size // 2)
)
selection_acceptable = True
while selection_acceptable:
selection_list = [secrets.randbelow(2) for _ in range(equation_set_lenght)]
if selection_list.count(1) == (equation_parameter_size):
selection_acceptable = False
sum_array = [0] * (equation_parameter_size + 1)
for temp in range(len(public_key)):
if selection_list[temp] == 1:
sum_array = [sum(value) for value in zip(sum_array, public_key[temp])]
sum_array[-1] = sum_array[-1] + error
sum_array = [(x) % ring_size for x in sum_array]
return sum_array
class decrypt:
@staticmethod
def decrypt(payload,private_key):
ring_size = private_key['ring_size']
private_key = private_key['private_key']
count = len(private_key)+1
        if isinstance(payload, dict) and 'data' in payload:
            # ensure the data length matches the lattice dimensions
            data = payload['data']
equation_set = []
if len(data)%(count) == 0:
for temp in range(len(data)//(count)):
equation_set.append(data[temp*count : (temp+1)*count])
bin_data = ""
for temp in equation_set:
bin_data += '1' if decrypt.solve_equation(private_key,temp,ring_size) else '0'
bin_data = int('0b'+bin_data,2)
bin_data = bin_data.to_bytes((bin_data.bit_length() + 7) // 8, 'big').decode()
return json.dumps(bin_data)
else:
raise Exception("Invalid equation set and lattice dimentions recived")
@staticmethod
def solve_equation(private_key,data,ring_size):
sum_actual = 0
error_sum = data[-1]
for temp in range(len(private_key)):
sum_actual += (private_key[temp]*data[temp])
sum_actual = sum_actual % ring_size
error_amount = abs(error_sum-sum_actual)
error_amount = abs(error_amount - (ring_size//2))
error_percentage = (error_amount/(ring_size//2))*100
        return error_percentage < 50
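A self-contained sketch of the decision rule the classes above implement: ciphertexts whose accumulated noise sits near 0 decode to bit 0, and those shifted by about q/2 decode to bit 1. The parameter values here are toy assumptions, far below anything secure:

```python
import secrets

# toy parameters (assumed; far smaller than anything secure)
q, n = 2048, 4
s = [secrets.randbelow(q) for _ in range(n)]          # secret vector

def noisy_sample():
    a = [secrets.randbelow(q) for _ in range(n)]
    b = (sum(ai * si for ai, si in zip(a, s)) + secrets.randbelow(16)) % q
    return a, b

def encrypt_bit(bit):
    # sum a few noisy samples, then shift b by ~q/2 to encode a 1
    a_sum, b_sum = [0] * n, 0
    for _ in range(n):
        a, b = noisy_sample()
        a_sum = [(x + y) % q for x, y in zip(a_sum, a)]
        b_sum = (b_sum + b) % q
    if bit:
        b_sum = (b_sum + q // 2) % q
    return a_sum, b_sum

def decrypt_bit(a, b):
    inner = sum(ai * si for ai, si in zip(a, s)) % q
    dist = abs(b - inner)
    dist = min(dist, q - dist)        # distance on the ring Z_q
    return 1 if dist > q // 4 else 0

for bit in (0, 1, 1, 0):
    assert decrypt_bit(*encrypt_bit(bit)) == bit
```

With these noise bounds the decision margin is large, so the round trip is deterministic; the module above applies the same idea bit by bit over the binary encoding of the message.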
# /DXR-1.9.4.tar.gz/DXR-1.9.4/Dxr_video.bak/Server_pb2_grpc.py
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
import Server_pb2 as Server__pb2
class MainServerStub(object):
"""server
"""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.getStream = channel.stream_stream(
'/MainServer/getStream',
request_serializer=Server__pb2.Request.SerializeToString,
response_deserializer=Server__pb2.Reply.FromString,
)
class MainServerServicer(object):
"""server
"""
def getStream(self, request_iterator, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_MainServerServicer_to_server(servicer, server):
rpc_method_handlers = {
'getStream': grpc.stream_stream_rpc_method_handler(
servicer.getStream,
request_deserializer=Server__pb2.Request.FromString,
response_serializer=Server__pb2.Reply.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'MainServer', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
# This class is part of an EXPERIMENTAL API.
class MainServer(object):
"""server
"""
@staticmethod
def getStream(request_iterator,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.stream_stream(request_iterator, target, '/MainServer/getStream',
Server__pb2.Request.SerializeToString,
Server__pb2.Reply.FromString,
options, channel_credentials,
                                               insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
# /FitsGeo-1.0.0.tar.gz/FitsGeo-1.0.0/fitsgeo/cell.py
import itertools
from .material import Material, MAT_WATER
# Counter for cells: every new object gets the next cell number, starting at 100
cell_counter = itertools.count(100)
created_cells = []  # all Cell objects go here after initialisation
class Cell: # superclass with common properties/methods for all surfaces
def __init__(
self, cell_def: list, name="Cell",
material=MAT_WATER, volume: float = None):
"""
Define cell
:param cell_def: list with regions and the Boolean operators,
⊔(blank)(AND), :(OR), and #(NOT). Parentheses ( and ) will be added
automatically for regions
:param name: name for object
:param material: material associated with cell
:param volume: cell volume in cm^3
"""
self.cell_def = cell_def
self.name = name
self.material = material
self.volume = volume
self.cn = next(cell_counter)
created_cells.append(self)
@property
def cell_def(self):
"""
Get cell definition
:return: list with cell definition
"""
return self.__cell_def
@cell_def.setter
def cell_def(self, cell_def: list):
"""
Set cell definition
:param cell_def: cell definition
"""
self.__cell_def = cell_def
@property
def name(self):
"""
Get cell object name
:return: string name
"""
return self.__name
@name.setter
def name(self, name: str):
"""
Set cell object name
:param name: surface object name
"""
self.__name = name
@property
def material(self):
"""
Get cell material
:return: Material object
"""
return self.__material
@material.setter
def material(self, material: Material):
"""
Set cell material
:param material: material
"""
self.__material = material
@property
def cn(self):
"""
Get cell object number
:return: int cn
"""
return self.__cn
@cn.setter
def cn(self, cn: int):
"""
Set cell object number
:param cn: cell number [any number from 1 to 999 999]
"""
self.__cn = cn
@property
def volume(self):
"""
Get cell volume
:return: float volume
"""
return self.__volume
@volume.setter
def volume(self, volume: float):
"""
		Set cell volume
:param volume: volume of cell
"""
self.__volume = volume
def phits_print(self):
"""
Print PHITS cell definition
:return: string with PHITS cell definition
"""
# ⊔(blank)(AND), :(OR), and #(NOT) must be used to treat the regions.
cell_def = ""
for regions in self.cell_def:
			if regions in (" ", ":", "#"):  # operator tokens pass through
				cell_def += regions
			else:  # anything else is a region: drop trailing separator, wrap in ()
				cell_def += f"({regions[:-1]})"
if self.material.matn < 1: # For void and outer
txt = \
f" {self.cn} {self.material.matn} " + \
f"{cell_def}" + \
f" $ name: '{self.name}' \n"
return txt
if self.volume is None:
volume = ""
else:
volume = f"VOL={self.volume}"
txt = \
f" {self.cn} {self.material.matn} " + \
f"{self.material.density} {cell_def} {volume}" + \
f" $ name: '{self.name}' "
return txt | PypiClean |
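The string assembly in `phits_print` can be sketched in isolation. This is a hypothetical standalone helper (not part of FitsGeo's API) mirroring how operator tokens pass through while region strings have their trailing separator dropped and are wrapped in parentheses:

```python
def build_cell_def(tokens):
    """Mirror Cell.phits_print's region assembly: operator tokens
    (blank=AND, ':'=OR, '#'=NOT) pass through unchanged; any other
    token is treated as a region, its last character (a trailing
    separator) dropped and the rest wrapped in parentheses."""
    out = ""
    for tok in tokens:
        if tok in (" ", ":", "#"):
            out += tok
        else:
            out += f"({tok[:-1]})"
    return out
```

For example, `build_cell_def(["10 ", ":", "20 "])` produces `"(10):(20)"`, the OR of two regions.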
/crosscat-0.1.55-cp27-none-macosx_10_6_intel.whl/crosscat/MultiprocessingEngine.py | from __future__ import print_function
import multiprocessing
#
import crosscat.LocalEngine as LE
import crosscat.utils.sample_utils as su
class MultiprocessingEngine(LE.LocalEngine):
"""A simple interface to the Cython-wrapped C++ engine
MultiprocessingEngine holds no state.
Methods use resources on the local machine.
"""
def __init__(self, seed=None, cpu_count=None):
"""Initialize a MultiprocessingEngine
"""
        super(MultiprocessingEngine, self).__init__(seed=seed)  # pass the seed through rather than discarding it
self.pool = multiprocessing.Pool(cpu_count)
self.mapper = self.pool.map
return
def __enter__(self):
return self
def __del__(self):
self.pool.terminate()
def __exit__(self, type, value, traceback):
self.pool.terminate()
if __name__ == '__main__':
import crosscat.utils.data_utils as du
import crosscat.utils.convergence_test_utils as ctu
import random
# settings
gen_seed = 0
inf_seed = 0
num_clusters = 4
num_cols = 32
num_rows = 400
num_views = 2
n_steps = 1
n_times = 5
n_chains = 3
n_test = 100
rng = random.Random(gen_seed)
get_next_seed = lambda: rng.randint(1, 2**31 - 1)
# generate some data
T, M_r, M_c, data_inverse_permutation_indices = du.gen_factorial_data_objects(
get_next_seed(), num_clusters, num_cols, num_rows, num_views,
max_mean=100, max_std=1, send_data_inverse_permutation_indices=True)
view_assignment_truth, X_D_truth = ctu.truth_from_permute_indices(
data_inverse_permutation_indices, num_rows, num_cols, num_views, num_clusters)
X_L_gen, X_D_gen = du.get_generative_clustering(get_next_seed(),
M_c, M_r, T,
data_inverse_permutation_indices, num_clusters, num_views)
T_test = ctu.create_test_set(M_c, T, X_L_gen, X_D_gen, n_test, seed_seed=0)
#
generative_mean_test_log_likelihood = ctu.calc_mean_test_log_likelihood(M_c, T,
X_L_gen, X_D_gen, T_test)
# run some tests
engine = MultiprocessingEngine()
# single state test
single_state_ARIs = []
single_state_mean_test_lls = []
X_L, X_D = engine.initialize(M_c, M_r, T, get_next_seed(), n_chains=1)
single_state_ARIs.append(ctu.get_column_ARI(X_L, view_assignment_truth))
single_state_mean_test_lls.append(
ctu.calc_mean_test_log_likelihood(M_c, T, X_L, X_D, T_test)
)
for time_i in range(n_times):
X_L, X_D = engine.analyze(M_c, T, X_L, X_D, get_next_seed(),
n_steps=n_steps)
single_state_ARIs.append(ctu.get_column_ARI(X_L, view_assignment_truth))
single_state_mean_test_lls.append(
ctu.calc_mean_test_log_likelihood(M_c, T, X_L, X_D, T_test)
)
# multistate test
multi_state_ARIs = []
multi_state_mean_test_lls = []
X_L_list, X_D_list = engine.initialize(M_c, M_r, T, get_next_seed(),
n_chains=n_chains)
multi_state_ARIs.append(ctu.get_column_ARIs(X_L_list, view_assignment_truth))
multi_state_mean_test_lls.append(ctu.calc_mean_test_log_likelihoods(M_c, T,
X_L_list, X_D_list, T_test))
for time_i in range(n_times):
X_L_list, X_D_list = engine.analyze(M_c, T, X_L_list, X_D_list,
get_next_seed(), n_steps=n_steps)
multi_state_ARIs.append(ctu.get_column_ARIs(X_L_list, view_assignment_truth))
multi_state_mean_test_lls.append(ctu.calc_mean_test_log_likelihoods(M_c, T,
X_L_list, X_D_list, T_test))
# print results
print('generative_mean_test_log_likelihood')
print(generative_mean_test_log_likelihood)
#
print('single_state_mean_test_lls:')
print(single_state_mean_test_lls)
#
print('single_state_ARIs:')
print(single_state_ARIs)
#
print('multi_state_mean_test_lls:')
print(multi_state_mean_test_lls)
#
print('multi_state_ARIs:')
print(multi_state_ARIs) | PypiClean |
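The engine above is a thin resource-owning wrapper: it holds a pool, exposes the pool's `map` as `mapper`, and tears the pool down via both the context-manager protocol and `__del__`. A minimal sketch of that pattern, using `ThreadPool` as a stand-in for `multiprocessing.Pool` so the sketch runs anywhere without pickling concerns (the class name `PooledEngine` is hypothetical, not part of crosscat):

```python
from multiprocessing.pool import ThreadPool

# Sketch of MultiprocessingEngine's resource pattern: own a pool, expose
# its map as `mapper`, and guarantee teardown as a context manager and on
# garbage collection. ThreadPool substitutes for multiprocessing.Pool.
class PooledEngine:
    def __init__(self, n_workers=2):
        self.pool = ThreadPool(n_workers)
        self.mapper = self.pool.map

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.pool.terminate()

    def __del__(self):
        self.pool.terminate()  # terminate() is safe to call twice
```

Using it as a context manager (`with PooledEngine() as engine: ...`) guarantees the pool is released even if an analysis step raises.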
/Finance-Ultron-1.0.8.1.tar.gz/Finance-Ultron-1.0.8.1/ultron/optimize/grirdsearch/execute.py | """Wrappers for commonly used analysis routines and workflows."""
import numpy as np
from scipy import interp
from sklearn import metrics
from sklearn import tree
from sklearn.base import ClusterMixin, clone
from sklearn.metrics import roc_curve, auc
from ultron.ump.core.fixes import KFold
__all__ = [
'run_silhouette_cv_estimator', 'run_prob_cv_estimator', 'run_cv_estimator',
'plot_learning_curve', 'plot_decision_boundary', 'plot_confusion_matrices',
'plot_roc_estimator', 'graphviz_tree', 'visualize_tree'
]
# noinspection PyUnresolvedReferences
def run_silhouette_cv_estimator(estimator, x, n_folds=10):
"""
    CV-style validation for KMeans only: repeatedly subsample x at random with
    np.random.choice, cluster each subsample, and score the resulting labels_
    with silhouette_score. No train/test split is involved.
    :param estimator: KMeans, or anything exposing estimator.labels_; only
        filtered by `if not isinstance(estimator, ClusterMixin)`
    :param x: feature matrix
    :param n_folds: int, number of repetitions; also sets the subsample size
        to (n_folds - 1) / n_folds of len(x). Default 10
:return: eg: array([ 0.693 , 0.652 , 0.6845, 0.6696, 0.6732, 0.6874, 0.668 ,
0.6743, 0.6748, 0.671 ])
"""
if not isinstance(estimator, ClusterMixin):
print('estimator must be ClusterMixin')
return
silhouette_list = list()
# eg: n_folds = 10, len(x) = 150 -> 150 * 0.9 = 135
choice_cnt = int(len(x) * ((n_folds - 1) / n_folds))
choice_source = np.arange(0, x.shape[0])
    # always fit a clone so the caller's estimator is left untouched
estimator = clone(estimator)
for _ in np.arange(0, n_folds):
        # simply subsample x at random via np.random.choice
choice_index = np.random.choice(choice_source, choice_cnt)
x_choice = x[choice_index]
estimator.fit(x_choice)
        # score the clustering with silhouette_score
silhouette_score = metrics.silhouette_score(x_choice,
estimator.labels_,
metric='euclidean')
silhouette_list.append(silhouette_score)
return silhouette_list
def run_prob_cv_estimator(estimator, x, y, n_folds=10):
"""
    Split train/test folds with KFold and n_folds, initialise the probability
    matrix as np.zeros((len(y), len(np.unique(y)))), then for each fold fit
    estimator.fit(x_train, y_train) and fill the corresponding rows of y_prob
    with predict_proba on the test split.
    :param estimator: supervised learner supporting predict_proba; only
        filtered by `hasattr(estimator, 'predict_proba')`
    :param x: training feature matrix, numpy array
    :param y: training target vector, numpy array
    :param n_folds: int, passed through to KFold to split train/test sets. Default 10
:return: eg: y_prob
array([[ 0.8726, 0.1274],
[ 0.0925, 0.9075],
[ 0.2485, 0.7515],
...,
[ 0.3881, 0.6119],
[ 0.7472, 0.2528],
[ 0.8555, 0.1445]])
"""
if not hasattr(estimator, 'predict_proba'):
        print('estimator must have predict_proba')
return
    # always fit a clone so the caller's estimator is left untouched
estimator = clone(estimator)
kf = KFold(len(y), n_folds=n_folds, shuffle=True)
y_prob = np.zeros((len(y), len(np.unique(y))))
"""
    Build an all-zero matrix shaped (len(y), number of distinct labels in y)
eg: y_prob
array([[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.],
..............
[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.],
"""
for train_index, test_index in kf:
x_train, x_test = x[train_index], x[test_index]
y_train = y[train_index]
# clf = clone(estimator)
estimator.fit(x_train, y_train)
        # fill the matching rows of y_prob with predict_proba
y_prob[test_index] = estimator.predict_proba(x_test)
return y_prob
def run_cv_estimator(estimator, x, y, n_folds=10):
"""
    Split train/test folds with KFold and n_folds, initialise y_pred as
    y.copy(), then iterate over the folds, gradually replacing the values in
    y_pred with estimator.predict(x_test).
    :param estimator: supervised learner object
    :param x: training feature matrix, numpy array
    :param y: training target vector, numpy array
    :param n_folds: int, passed through to KFold to split train/test sets. Default 10
    :return: the y_pred vector
"""
if not hasattr(estimator, 'predict'):
        print('estimator must have predict')
return
    # always fit a clone so the caller's estimator is left untouched
estimator = clone(estimator)
kf = KFold(len(y), n_folds=n_folds, shuffle=True)
    # start from an exact copy of y
y_pred = y.copy()
"""
eg: y_pred
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])
"""
for train_index, test_index in kf:
x_train, x_test = x[train_index], x[test_index]
y_train = y[train_index]
estimator.fit(x_train, y_train)
        # gradually replace the values in y_pred via estimator.predict(x_test)
y_pred[test_index] = estimator.predict(x_test)
return y_pred | PypiClean |
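The shape of `run_cv_estimator`'s loop, stripped of sklearn, is: split the indices into folds, fit on the complement of each fold, and write the fold's predictions back into a copy of y. A self-contained sketch with a trivial global-mean predictor standing in for the estimator (the function name `cv_predict` is illustrative, not part of ultron):

```python
import numpy as np

# Sketch of the run_cv_estimator pattern: the loop structure is the point,
# not the model, so "fit" computes a training mean and "predict" emits it.
def cv_predict(x, y, n_folds=5):
    idx = np.arange(len(y))
    folds = np.array_split(idx, n_folds)
    y_pred = y.astype(float).copy()
    for test_idx in folds:
        train_idx = np.setdiff1d(idx, test_idx)
        fitted_mean = np.mean(y[train_idx])   # "fit" step
        y_pred[test_idx] = fitted_mean        # "predict" step
    return y_pred
```

Every element of `y_pred` ends up predicted by a model that never saw it, which is exactly what the sklearn-backed version guarantees.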
/MutPy-Pynguin-0.7.1.tar.gz/MutPy-Pynguin-0.7.1/mutpy/operators/logical.py | import ast
from mutpy.operators.base import MutationOperator, AbstractUnaryOperatorDeletion, copy_node
class ConditionalOperatorDeletion(AbstractUnaryOperatorDeletion):
def get_operator_type(self):
return ast.Not
def mutate_NotIn(self, node):
return ast.In()
class ConditionalOperatorInsertion(MutationOperator):
def negate_test(self, node):
not_node = ast.UnaryOp(op=ast.Not(), operand=node.test)
node.test = not_node
return node
@copy_node
def mutate_While(self, node):
return self.negate_test(node)
@copy_node
def mutate_If(self, node):
return self.negate_test(node)
def mutate_In(self, node):
return ast.NotIn()
class LogicalConnectorReplacement(MutationOperator):
def mutate_And(self, node):
return ast.Or()
def mutate_Or(self, node):
return ast.And()
class LogicalOperatorDeletion(AbstractUnaryOperatorDeletion):
def get_operator_type(self):
return ast.Invert
class LogicalOperatorReplacement(MutationOperator):
def mutate_BitAnd(self, node):
return ast.BitOr()
def mutate_BitOr(self, node):
return ast.BitAnd()
def mutate_BitXor(self, node):
return ast.BitAnd()
def mutate_LShift(self, node):
return ast.RShift()
def mutate_RShift(self, node):
return ast.LShift()
class RelationalOperatorReplacement(MutationOperator):
def mutate_Lt(self, node):
return ast.Gt()
def mutate_Lt_to_LtE(self, node):
return ast.LtE()
def mutate_Gt(self, node):
return ast.Lt()
def mutate_Gt_to_GtE(self, node):
return ast.GtE()
def mutate_LtE(self, node):
return ast.GtE()
def mutate_LtE_to_Lt(self, node):
return ast.Lt()
def mutate_GtE(self, node):
return ast.LtE()
def mutate_GtE_to_Gt(self, node):
return ast.Gt()
def mutate_Eq(self, node):
return ast.NotEq()
def mutate_NotEq(self, node):
return ast.Eq() | PypiClean |
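Operators like `ConditionalOperatorInsertion.mutate_In` rewrite one AST node type into another; MutPy's own driver applies them via its visitor machinery. A minimal stdlib-only sketch of the same idea (not MutPy's actual driver), mutating every `in` comparison into `not in` with an `ast.NodeTransformer` and evaluating the mutant:

```python
import ast

# Rewrite every `in` comparison to `not in`, the single-node change that
# ConditionalOperatorInsertion.mutate_In performs inside MutPy.
class InToNotIn(ast.NodeTransformer):
    def visit_Compare(self, node):
        self.generic_visit(node)
        node.ops = [ast.NotIn() if isinstance(op, ast.In) else op
                    for op in node.ops]
        return node

def mutate(source):
    tree = InToNotIn().visit(ast.parse(source, mode="eval"))
    ast.fix_missing_locations(tree)  # new nodes need line/col info to compile
    return eval(compile(tree, "<mutant>", "eval"))
```

A test suite that passes on both the original expression and the mutant has failed to "kill" the mutation, which is the signal mutation testing looks for.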
/MegEngine-1.13.1-cp37-cp37m-macosx_10_14_x86_64.whl/megengine/optimizer/clip_grad.py | from typing import Iterable, Union
from ..core._imperative_rt.core2 import pop_scope, push_scope
from ..functional import clip, concat, minimum, norm
from ..tensor import Tensor
__all__ = ["clip_grad_norm", "clip_grad_value"]
def clip_grad_norm(
tensors: Union[Tensor, Iterable[Tensor]], max_norm: float, ord: float = 2.0,
):
r"""Clips gradient norm of an iterable of parameters.
The norm is computed over all gradients together, as if they were
concatenated into a single vector. Gradients are modified in-place.
Args:
tensors: an iterable of Tensors or a single Tensor.
max_norm: max norm of the gradients.
ord: type of the used p-norm. Can be ``'inf'`` for infinity norm.
Returns:
total norm of the parameters (viewed as a single vector).
"""
push_scope("clip_grad_norm")
if isinstance(tensors, Tensor):
tensors = [tensors]
tensors = [t for t in tensors if t.grad is not None]
if len(tensors) == 0:
pop_scope("clip_grad_norm")
return Tensor(0.0)
norm_ = [norm(t.grad.flatten(), ord=ord) for t in tensors]
if len(norm_) > 1:
norm_ = norm(concat(norm_), ord=ord)
else:
norm_ = norm_[0]
scale = max_norm / (norm_ + 1e-6)
scale = minimum(scale, 1)
for tensor in tensors:
tensor.grad._reset(tensor.grad * scale)
pop_scope("clip_grad_norm")
return norm_
def clip_grad_value(
tensors: Union[Tensor, Iterable[Tensor]], lower: float, upper: float
):
r"""Clips gradient of an iterable of parameters to a specified lower and
upper. Gradients are modified in-place.
The gradients are clipped in the range:
.. math:: \left[\text{lower}, \text{upper}\right]
Args:
tensors: an iterable of Tensors or a single Tensor.
lower: minimum allowed value of the gradients.
upper: maximum allowed value of the gradients.
"""
push_scope("clip_grad_value")
if isinstance(tensors, Tensor):
tensors = [tensors]
for tensor in tensors:
if tensor.grad is None:
continue
tensor.grad._reset(clip(tensor.grad, lower, upper))
pop_scope("clip_grad_value") | PypiClean |
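The math in `clip_grad_norm` is framework-independent: take the global norm over all gradients as if concatenated into one vector, then rescale everything by `min(max_norm / (norm + eps), 1)`. A numpy-only sketch (the name `clip_grad_norm_np` is illustrative, not part of MegEngine):

```python
import numpy as np

# Framework-free sketch of clip_grad_norm for ord=2: one global norm,
# one shared scale, applied in place to every gradient array.
def clip_grad_norm_np(grads, max_norm, eps=1e-6):
    total_norm = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads))
    scale = min(max_norm / (total_norm + eps), 1.0)
    for g in grads:
        g *= scale   # in-place, mirroring tensor.grad._reset(grad * scale)
    return total_norm
```

Because the scale is capped at 1, gradients whose global norm is already under `max_norm` are left (essentially) untouched; only oversized updates are shrunk, and all gradients shrink by the same factor so their direction is preserved.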
/Balert-1.1.8.tar.gz/Balert-1.1.8/balert/main.py | from sys import argv
from Voice import Voice
from BatteryStatus import Battery
from Config import Config
from prettytable import PrettyTable
import argparse
import logging
import subprocess
__author__ = 'tushar-rishav'
__version__ = "1.1.8"
def setupCron():
try:
        location_f = subprocess.Popen(
            "whereis balert", shell=True,
            stdout=subprocess.PIPE).stdout.read().strip().split(':')[1].strip()
cmd = subprocess.Popen("crontab -l", shell=True,
stdout=subprocess.PIPE).stdout.read()
if not ('balert' in cmd):
cmd += "*/10 * * * * " + location_f + "\n"
tmp = open("/tmp/temp_cron.impossible", 'w')
tmp.write(cmd)
tmp.close()
subprocess.Popen("crontab /tmp/temp_cron.impossible", shell=True)
logging.info("Successfully set up the cron job.")
print("Ok") # Do not remove this line ever!
    except Exception:
logging.debug("Error writing the cron job.")
def pPrint(cf_data):
data = PrettyTable(['key', 'value'])
for key, val in cf_data.items():
data.add_row([key, val])
print(data)
def extract(func):
def read():
cf = Config()
cf_data = cf.load_pickle()
args = func()
if args.rate:
cf_data["RATE"] = args.rate
if args.vol:
cf_data["VOL"] = args.vol
if args.lang:
cf_data["LANG"] = args.lang
if args.msg:
cf_data["MSG"] = args.msg
if args.charge:
cf_data["CHARGE"] = args.charge
if args.show:
pPrint(cf_data)
cf.set_pickle(cf_data)
return cf, cf_data
return read
@extract
def parse():
    parser = argparse.ArgumentParser(
        description="Listen to the voice of your battery whenever she is low!",
        epilog="Author: [email protected]")
parser.add_argument("-r",
"--rate",
help="Rate of speaking.(100-200)",
type=int)
parser.add_argument("-v",
"--vol",
help="Volume of speaking.(1.0)",
type=str)
parser.add_argument("-l",
"--lang",
help="Language speaking",
type=str)
parser.add_argument("-m",
"--msg",
help="Alert message of your own",
type=str)
parser.add_argument("-c",
"--charge",
help="Decide the critical charge level",
type=int)
parser.add_argument("-s",
"--show",
nargs='?',
const=True,
metavar='BOOL',
default=False,
help="Show the currently set config")
args = parser.parse_args()
return args
def main():
logging.getLogger().setLevel(logging.DEBUG)
cf, cf_data = parse()
if len(argv) == 1:
pass
al = Voice()
al.one_thing()
battery_instance = Battery() # READ BATTERY
charge_info = battery_instance.get_low_battery_warning_level()
# logging.debug(charge_info)
if charge_info[0] == 0 and charge_info[1]:
add_msg = " All cool! %s Percent remaining" % charge_info[1]
elif charge_info[0] == 1:
add_msg = " Low Battery! %s Percent remaining" % charge_info[1]
logging.info(cf_data["MSG"]+add_msg)
cf.set_pickle(cf_data)
al.speak(add_msg)
else:
add_msg = " Battery is Charging!"
setupCron()
if __name__ == "__main__":
main() | PypiClean |
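`setupCron` above mixes the crontab edit with subprocess plumbing; the actual rule it implements is small enough to state as a pure function. A sketch of just that rule (the helper name `ensure_cron_line` is hypothetical, not part of Balert):

```python
# Pure-function sketch of setupCron's crontab edit: append a */10-minute
# entry for the given binary path only if no 'balert' line exists yet,
# which makes the operation idempotent across repeated installs.
def ensure_cron_line(crontab_text, binary_path):
    if "balert" in crontab_text:
        return crontab_text
    return crontab_text + "*/10 * * * * " + binary_path + "\n"
```

Separating the text transformation from the `crontab -l` / `crontab <file>` round-trip also makes the dedup logic trivially testable without touching the system crontab.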
/7Wonder-RL-Lib-0.1.1.tar.gz/7Wonder-RL-Lib-0.1.1/docs/_static/sphinx_highlight.js | "use strict";
const SPHINX_HIGHLIGHT_ENABLED = true
/**
* highlight a given string on a node by wrapping it in
* span elements with the given class name.
*/
const _highlight = (node, addItems, text, className) => {
if (node.nodeType === Node.TEXT_NODE) {
const val = node.nodeValue;
const parent = node.parentNode;
const pos = val.toLowerCase().indexOf(text);
if (
pos >= 0 &&
!parent.classList.contains(className) &&
!parent.classList.contains("nohighlight")
) {
let span;
const closestNode = parent.closest("body, svg, foreignObject");
const isInSVG = closestNode && closestNode.matches("svg");
if (isInSVG) {
span = document.createElementNS("http://www.w3.org/2000/svg", "tspan");
} else {
span = document.createElement("span");
span.classList.add(className);
}
span.appendChild(document.createTextNode(val.substr(pos, text.length)));
parent.insertBefore(
span,
parent.insertBefore(
document.createTextNode(val.substr(pos + text.length)),
node.nextSibling
)
);
node.nodeValue = val.substr(0, pos);
if (isInSVG) {
const rect = document.createElementNS(
"http://www.w3.org/2000/svg",
"rect"
);
const bbox = parent.getBBox();
rect.x.baseVal.value = bbox.x;
rect.y.baseVal.value = bbox.y;
rect.width.baseVal.value = bbox.width;
rect.height.baseVal.value = bbox.height;
rect.setAttribute("class", className);
addItems.push({ parent: parent, target: rect });
}
}
} else if (node.matches && !node.matches("button, select, textarea")) {
node.childNodes.forEach((el) => _highlight(el, addItems, text, className));
}
};
const _highlightText = (thisNode, text, className) => {
let addItems = [];
_highlight(thisNode, addItems, text, className);
addItems.forEach((obj) =>
obj.parent.insertAdjacentElement("beforebegin", obj.target)
);
};
/**
* Small JavaScript module for the documentation.
*/
const SphinxHighlight = {
/**
* highlight the search words provided in localstorage in the text
*/
highlightSearchWords: () => {
if (!SPHINX_HIGHLIGHT_ENABLED) return; // bail if no highlight
// get and clear terms from localstorage
const url = new URL(window.location);
const highlight =
localStorage.getItem("sphinx_highlight_terms")
|| url.searchParams.get("highlight")
|| "";
localStorage.removeItem("sphinx_highlight_terms")
url.searchParams.delete("highlight");
window.history.replaceState({}, "", url);
// get individual terms from highlight string
const terms = highlight.toLowerCase().split(/\s+/).filter(x => x);
if (terms.length === 0) return; // nothing to do
// There should never be more than one element matching "div.body"
const divBody = document.querySelectorAll("div.body");
const body = divBody.length ? divBody[0] : document.querySelector("body");
window.setTimeout(() => {
terms.forEach((term) => _highlightText(body, term, "highlighted"));
}, 10);
const searchBox = document.getElementById("searchbox");
if (searchBox === null) return;
searchBox.appendChild(
document
.createRange()
.createContextualFragment(
'<p class="highlight-link">' +
'<a href="javascript:SphinxHighlight.hideSearchWords()">' +
_("Hide Search Matches") +
"</a></p>"
)
);
},
/**
* helper function to hide the search marks again
*/
hideSearchWords: () => {
document
.querySelectorAll("#searchbox .highlight-link")
.forEach((el) => el.remove());
document
.querySelectorAll("span.highlighted")
.forEach((el) => el.classList.remove("highlighted"));
localStorage.removeItem("sphinx_highlight_terms")
},
initEscapeListener: () => {
// only install a listener if it is really needed
if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) return;
document.addEventListener("keydown", (event) => {
// bail for input elements
if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return;
// bail with special keys
if (event.shiftKey || event.altKey || event.ctrlKey || event.metaKey) return;
if (DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS && (event.key === "Escape")) {
SphinxHighlight.hideSearchWords();
event.preventDefault();
}
});
},
};
_ready(SphinxHighlight.highlightSearchWords);
_ready(SphinxHighlight.initEscapeListener); | PypiClean |
/DA-DAPPER-1.2.2.tar.gz/DA-DAPPER-1.2.2/dapper/da_methods/baseline.py | from typing import Optional
import numpy as np
import dapper.tools.series as series
from dapper.stats import center
from dapper.tools.linalg import mrdiv
from dapper.tools.matrices import CovMat
from dapper.tools.progressbar import progbar
from . import da_method
@da_method()
class Climatology:
"""A baseline/reference method.
Note that the "climatology" is computed from truth, which might be
(unfairly) advantageous if the simulation is too short (vs mixing time).
"""
def assimilate(self, HMM, xx, yy):
chrono, stats = HMM.t, self.stats
muC = np.mean(xx, 0)
AC = xx - muC
PC = CovMat(AC, 'A')
stats.assess(0, mu=muC, Cov=PC)
stats.trHK[:] = 0
for k, kObs, _, _ in progbar(chrono.ticker):
fau = 'u' if kObs is None else 'fau'
stats.assess(k, kObs, fau, mu=muC, Cov=PC)
@da_method()
class OptInterp:
"""Optimal Interpolation -- a baseline/reference method.
Uses the Kalman filter equations,
but with a prior from the Climatology.
"""
def assimilate(self, HMM, xx, yy):
Dyn, Obs, chrono, stats = HMM.Dyn, HMM.Obs, HMM.t, self.stats
# Compute "climatological" Kalman gain
muC = np.mean(xx, 0)
AC = xx - muC
PC = (AC.T @ AC) / (xx.shape[0] - 1)
# Setup scalar "time-series" covariance dynamics.
# ONLY USED FOR DIAGNOSTICS, not to affect the Kalman gain.
L = series.estimate_corr_length(AC.ravel(order='F'))
SM = fit_sigmoid(1/2, L, 0)
# Init
mu = muC
stats.assess(0, mu=mu, Cov=PC)
for k, kObs, t, dt in progbar(chrono.ticker):
# Forecast
mu = Dyn(mu, t-dt, dt)
if kObs is not None:
stats.assess(k, kObs, 'f', mu=muC, Cov=PC)
# Analysis
H = Obs.linear(muC, t)
KG = mrdiv([email protected], H@[email protected] + Obs.noise.C.full)
mu = muC + KG@(yy[kObs] - Obs(muC, t))
P = (np.eye(Dyn.M) - KG@H) @ PC
SM = fit_sigmoid(P.trace()/PC.trace(), L, k)
stats.assess(k, kObs, mu=mu, Cov=2*PC*SM(k))
@da_method()
class Var3D:
"""3D-Var -- a baseline/reference method.
    This implementation is not "Var"-ish: there is no *iterative* optimization.
Instead, it does the full analysis update in one step: the Kalman filter,
with the background covariance being user specified, through B and xB.
"""
B: Optional[np.ndarray] = None
xB: float = 1.0
def assimilate(self, HMM, xx, yy):
Dyn, Obs, chrono, X0, stats = HMM.Dyn, HMM.Obs, HMM.t, HMM.X0, self.stats
if isinstance(self.B, np.ndarray):
# compare ndarray 1st to avoid == error for ndarray
B = self.B.astype(float)
elif self.B in (None, 'clim'):
# Use climatological cov, estimated from truth
B = np.cov(xx.T)
elif self.B == 'eye':
B = np.eye(HMM.Nx)
else:
raise ValueError("Bad input B.")
B *= self.xB
# ONLY USED FOR DIAGNOSTICS, not to change the Kalman gain.
CC = 2*np.cov(xx.T)
L = series.estimate_corr_length(center(xx)[0].ravel(order='F'))
P = X0.C.full
SM = fit_sigmoid(P.trace()/CC.trace(), L, 0)
# Init
mu = X0.mu
stats.assess(0, mu=mu, Cov=P)
for k, kObs, t, dt in progbar(chrono.ticker):
# Forecast
mu = Dyn(mu, t-dt, dt)
P = CC*SM(k)
if kObs is not None:
stats.assess(k, kObs, 'f', mu=mu, Cov=P)
# Analysis
H = Obs.linear(mu, t)
KG = mrdiv([email protected], H@[email protected] + Obs.noise.C.full)
mu = mu + KG@(yy[kObs] - Obs(mu, t))
# Re-calibrate fit_sigmoid with new W0 = Pa/B
P = (np.eye(Dyn.M) - KG@H) @ B
SM = fit_sigmoid(P.trace()/CC.trace(), L, k)
stats.assess(k, kObs, mu=mu, Cov=P)
def fit_sigmoid(Sb, L, kb):
"""Return a sigmoid [function S(k)] for approximating error dynamics.
We use the logistic function for the sigmoid; it's the solution of the
"population growth" ODE: dS/dt = a*S*(1-S/S(∞)).
NB: It might be better to use the "error growth ODE" of Lorenz/Dalcher/Kalnay,
but this has a significantly more complicated closed-form solution,
and reduces to the above ODE when there's no model error (ODE source term).
The "normalized" sigmoid, S1, is symmetric around 0, and S1(-∞)=0 and S1(∞)=1.
The sigmoid S(k) = S1(a*(k-kb) + b) is fitted (see docs/snippets/sigmoid.jpg) with
- a corresponding to a given corr. length L.
- b to match values of S(kb) and Sb
"""
def sigmoid(k): return 1/(1+np.exp(-k)) # normalized sigmoid
def inv_sig(s): return np.log(s/(1-s)) # its inverse
a = 1/L
b = inv_sig(Sb)
def S(k):
return sigmoid(b + a*(k-kb))
return S
@da_method()
class EnCheat:
"""A baseline/reference method.
Should be implemented as part of Stats instead.
"""
def assimilate(self, HMM, xx, yy): pass | PypiClean |
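The `fit_sigmoid` construction above has one defining property worth checking in isolation: the returned S satisfies S(kb) == Sb exactly (since b = inv_sig(Sb)), and saturates toward 0 and 1 away from kb on the scale of the correlation length L. A standalone restatement using `math` instead of numpy:

```python
import math

# Standalone restatement of fit_sigmoid: S(k) = sigmoid(b + a*(k - kb))
# with a = 1/L fixing the growth rate and b = inv_sig(Sb) pinning S(kb) = Sb.
def fit_sigmoid(Sb, L, kb):
    sigmoid = lambda k: 1.0 / (1.0 + math.exp(-k))   # normalized sigmoid
    inv_sig = lambda s: math.log(s / (1.0 - s))      # its inverse (logit)
    a = 1.0 / L
    b = inv_sig(Sb)
    return lambda k: sigmoid(b + a * (k - kb))
```

This is why OptInterp and Var3D can re-calibrate SM after each analysis: refitting with the new ratio Pa.trace()/CC.trace() at time k simply moves (kb, Sb) while keeping the growth rate 1/L.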
/JoUtil-1.3.3-py3-none-any.whl/JoTools/txkj/efficientDetDataset.py |
import os
import copy
import random
import math
import shutil
import numpy as np
import prettytable
from ..utils.JsonUtil import JsonUtil
from ..utils.CsvUtil import CsvUtil
from ..utils.ImageUtil import ImageUtil
from .parseXml import ParseXml, parse_xml
from ..utils.FileOperationUtil import FileOperationUtil
from PIL import Image
"""
* Utilities for preparing EfficientDet training data, i.e. converting datasets into a format EfficientDet can be trained on.
"""
class CocoDatabaseUtil(object):
    # ----------- conversions between training annotation formats ----------------
@staticmethod
def voc2coco(xml_dir, save_path, category_dict):
"""voc 转为 coco文件"""
        # check that category_dict follows the convention (category ids start from 1)
        for each in category_dict.values():
            if each == 0:
                raise ValueError("values in category_dict must start from 1")
coco_dict = {"info": {"description": "", "url": "", "version": "", "year": 2020, "contributor": "",
"data_created": "'2020-04-14 01:45:18.567988'"},
"licenses": [{"id": 1, "name": None, "url": None}],
"categories": [],
"images": [],
"annotations": []}
        # load category info
for each_category in category_dict:
categories_info = {"id": category_dict[each_category], "name": each_category, "supercategory": 'None'}
coco_dict['categories'].append(categories_info)
        # load images and annotation boxes
box_id = 0
for index, each_xml_path in enumerate(FileOperationUtil.re_all_file(xml_dir, lambda x: str(x).endswith('.xml'))):
xml_info = parse_xml(each_xml_path)
each_image = {"id": index, "file_name": xml_info["filename"],
"width": int(float(xml_info["size"]["width"])),
"height": int(float(xml_info["size"]["height"]))}
coco_dict['images'].append(each_image)
for bndbox_info in xml_info["object"]:
category_id = category_dict[bndbox_info['name']]
each_box = bndbox_info['bndbox']
bndbox = [float(each_box['xmin']), float(each_box['ymin']),
(float(each_box['xmax']) - float(each_box['xmin'])),
(float(each_box['ymax']) - float(each_box['ymin']))]
area = bndbox[2] * bndbox[3]
segmentation = [bndbox[0], bndbox[1], (bndbox[0] + bndbox[2]), bndbox[1], (bndbox[0] + bndbox[2]),
(bndbox[1] + bndbox[3]), bndbox[0], (bndbox[1] + bndbox[3])]
each_annotations = {"id": box_id, "image_id": index, "category_id": category_id, "iscrowd": 0,
"area": area, "bbox": bndbox, "segmentation": segmentation}
coco_dict['annotations'].append(each_annotations)
box_id += 1
        # save the output file
JsonUtil.save_data_to_json_file(coco_dict, save_path)
@staticmethod
def coco2voc(json_file_path, save_folder):
"""json文件转为xml文件"""
if not os.path.exists(save_folder):
os.makedirs(save_folder)
json_info = JsonUtil.load_data_from_json_file(json_file_path)
        # parse categories into a dict
categorie_dict = {}
for each_categorie in json_info["categories"]:
categorie_dict[each_categorie['id']] = each_categorie['name']
# 解析 image 信息
image_dict = {}
for each in json_info["images"]:
id = each['id']
file_name = each['file_name']
width = each['width']
height = each['height']
image_dict[id] = {"filename": each['file_name'], 'object': [], "folder": "None", "path": "None",
"source": {"database": "Unknow"},
"segmented": "0", "size": {"width": str(width), "height": str(height), "depth": "3"}}
        # parse annotation info
for each in json_info["annotations"]:
image_id = each['image_id']
category_id = each['category_id']
each_name = categorie_dict[category_id]
bbox = each["bbox"]
bbox_dict = {"xmin": str(bbox[0]), "ymin": str(bbox[1]), "xmax": str(int(bbox[0]) + int(bbox[2])),
"ymax": str(int(bbox[1]) + int(bbox[3]))}
object_info = {"name": each_name, "pose": "Unspecified", "truncated": "0", "difficult": "0",
"bndbox": bbox_dict}
image_dict[image_id]["object"].append(object_info)
        # convert the data to xml
aa = ParseXml()
for each_img in image_dict.values():
save_path = os.path.join(save_folder, each_img['filename'][:-3] + 'xml')
aa.save_to_xml(save_path, each_img)
@staticmethod
def csv2voc(csv_path, save_folder):
"""csv 转为 voc 文件"""
csv_info = CsvUtil.read_csv_to_list(csv_path)
image_dict = {}
for each in csv_info[1:]:
image_id = each[0]
bbox = each[3].strip("[]").split(',')
bbox_dict = {"xmin": str(bbox[0]), "ymin": str(bbox[1]), "xmax": str(float(bbox[0]) + float(bbox[2])),
"ymax": str(float(bbox[1]) + float(bbox[3]))}
object_info = {"name": "xiao_mai", "pose": "Unspecified", "truncated": "0", "difficult": "0",
"bndbox": bbox_dict}
#
if image_id not in image_dict:
image_dict[image_id] = {}
image_dict[image_id]["filename"] = "{0}.jpg".format(each[0])
image_dict[image_id]["folder"] = "None"
image_dict[image_id]["source"] = {"database": "Unknow"}
image_dict[image_id]["path"] = "None"
image_dict[image_id]["segmented"] = "0"
image_dict[image_id]["size"] = {"width": str(each[1]), "height": str(each[2]), "depth": "3"}
image_dict[image_id]['object'] = [object_info]
else:
image_dict[image_id]['object'].append(object_info)
        # convert the data to xml
aa = ParseXml()
for each_img in image_dict.values():
save_path = os.path.join(save_folder, each_img['filename'][:-3] + 'xml')
aa.save_to_xml(save_path, each_img)
@staticmethod
def voc2csv(xml_dir, save_csv_path):
"""voc xml 转为 csv 文件"""
csv_list = []
for each in FileOperationUtil.re_all_file(xml_dir, lambda x: str(x).endswith('.xml')):
xml_info = parse_xml(each)
a = xml_info['object']
width = xml_info['size']['width']
height = xml_info['size']['height']
file_name = xml_info['filename'][:-4]
#
for each_box_info in xml_info['object']:
each_class = each_box_info['name']
each_box = each_box_info['bndbox']
                # fixme: constrain the vibration-damper boxes here; format is x, y, w, h
x, y, w, h = int(each_box['xmin']), int(each_box['ymin']), (
int(each_box['xmax']) - int(each_box['xmin'])), (
int(each_box['ymax']) - int(each_box['ymin']))
each_line = [file_name, width, height, [x, y, w, h], each_class]
csv_list.append(each_line)
CsvUtil.save_list_to_csv(csv_list, save_csv_path)
    # ----------- convert images to COCO-style data -----------
@staticmethod
def zoom_img_and_xml_to_square(img_path, xml_path, save_dir, assign_length_of_side=1536, assign_save_name=None):
"""将图像先填充为正方形,再将 xml 和 图像拉伸到指定长宽"""
file_list = []
img = ImageUtil(img_path)
img_shape = img.get_img_shape()
# 将图像填充为正方形
img_mat = img.get_img_mat()
length_of_side = max(img_shape[0], img_shape[1])
new_img = np.ones((length_of_side, length_of_side, 4), dtype=np.uint8) * 127
new_img[:img_shape[0], :img_shape[1], :] = img_mat
img.set_img_mat(new_img)
#
img.convert_to_assign_shape((assign_length_of_side, assign_length_of_side))
if assign_save_name is None:
img_name = os.path.split(img_path)[1]
else:
img_name = assign_save_name + '.jpg'
save_path = os.path.join(save_dir, img_name)
img.save_to_image(save_path)
file_list.append(save_path)
        # read the xml, resize its boxes and save
        img_shape = (length_of_side, length_of_side)  # the image shape has been updated
xml = ParseXml()
each_xml_info = xml.get_xml_info(xml_path)
        # update the recorded image width/height
each_xml_info['size'] = {'width': str(assign_length_of_side), 'height': str(assign_length_of_side), 'depth': '3'}
each_xml_info["filename"] = img_name
# walk every object and rescale its coordinates
for each_object in each_xml_info['object']:
bndbox = each_object['bndbox']
bndbox['xmin'] = str(int(float(bndbox['xmin']) / (img_shape[0] / assign_length_of_side)))
bndbox['xmax'] = str(int(float(bndbox['xmax']) / (img_shape[0] / assign_length_of_side)))
bndbox['ymin'] = str(int(float(bndbox['ymin']) / (img_shape[1] / assign_length_of_side)))
bndbox['ymax'] = str(int(float(bndbox['ymax']) / (img_shape[1] / assign_length_of_side)))
if assign_save_name is None:
xml_name = os.path.split(xml_path)[1]
else:
xml_name = assign_save_name + '.xml'
save_xml_path = os.path.join(save_dir, xml_name)
xml.save_to_xml(save_xml_path)
file_list.append(save_xml_path)
return file_list
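Since the padding only extends the image to the right and bottom, a box keeps its origin and every corner is simply scaled by `assign_length_of_side / length_of_side`. A standalone sketch of that arithmetic, with hypothetical numbers and no image I/O:

```python
def scale_box_after_padding(box, img_h, img_w, target_side):
    """Pad the (img_h, img_w) image to a square of side max(img_h, img_w),
    then resize to target_side x target_side; return the box in the new frame.
    box is (xmin, ymin, xmax, ymax) in pixels of the original image."""
    side = max(img_h, img_w)          # padding does not move the top-left origin
    scale = target_side / side        # same factor on both axes (square to square)
    return tuple(int(v * scale) for v in box)

# A 100x200 image padded to 200x200, then resized to 1536x1536:
print(scale_box_after_padding((20, 10, 120, 60), 100, 200, 1536))
# → (153, 76, 921, 460)
```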
@staticmethod
def change_img_to_coco_format(img_folder, save_folder, assign_length_of_side=1536, xml_folder=None, val_train_ratio=0.1, category_dict=None, file_head=""):
"""将文件夹中的文件转为 coco 的样式,正方形,填充需要的部分, 将按照需要对文件进行重命名, file_head 文件的前缀"""
if xml_folder is None:
xml_folder = img_folder
# create the save folder
if not os.path.exists(save_folder):
os.makedirs(save_folder)
# create the train and val folders
train_dir = os.path.join(save_folder, "train")
val_dir = os.path.join(save_folder, "val")
annotations_dir = os.path.join(save_folder, "annotations")
if not os.path.exists(train_dir):
os.makedirs(train_dir)
if not os.path.exists(val_dir):
os.makedirs(val_dir)
if not os.path.exists(annotations_dir):
os.makedirs(annotations_dir)
# compute the zero-padded width for the new file names
count = len(os.listdir(img_folder))
o_count = int(math.log10(count)) + 1
for index, each_img_name in enumerate(os.listdir(img_folder)):
if not each_img_name.endswith('.jpg'):
continue
img_path = os.path.join(img_folder, each_img_name)
xml_path = os.path.join(xml_folder, each_img_name[:-3] + 'xml')
print("{0} : {1}".format(index, img_path))
if not os.path.exists(xml_path):
continue
# perform the conversion
assign_save_name = file_head + str(index).rjust(o_count, "0")
file_list = CocoDatabaseUtil.zoom_img_and_xml_to_square(img_path, xml_path, save_folder, assign_length_of_side, assign_save_name=assign_save_name)
# assign each file to train or val with probability val_train_ratio
if random.random() > val_train_ratio:
target_dir = train_dir
else:
target_dir = val_dir
for each_file_path in file_list:
new_file_path = os.path.join(target_dir, os.path.split(each_file_path)[1])
shutil.move(each_file_path, new_file_path)
# generate the coco json files from the xml and store them in the annotations folder
if category_dict is not None:
save_train_path = os.path.join(annotations_dir, "instances_train.json")
save_val_path = os.path.join(annotations_dir, "instances_val.json")
CocoDatabaseUtil.voc2coco(train_dir, save_train_path, category_dict=category_dict)
CocoDatabaseUtil.voc2coco(val_dir, save_val_path, category_dict=category_dict)
else:
print("未指定 category_dict 不进行 xml --> json 转换")
if __name__ == "__main__":
img_dir = r"C:\Users\14271\Desktop\优化开口销第二步\001_训练数据\save_small_img_new"
xml_dir = r"C:\Users\14271\Desktop\优化开口销第二步\001_训练数据\save_small_img_new"
save_dir = r"C:\Users\14271\Desktop\优化开口销第二步\001_训练数据\save_small_img_reshape"
# category_dict = {"middle_pole":1, "single":2, "no_single":3}
# category_dict = {"fzc":1, "zd":2, "xj":3, "other":4, "ljj":5}
# category_dict = {"fzc":1, "zfzc":2, "hzfzc":3, "holder":4, "single":5}
category_dict = {"K":1, "KG":2, "Lm":3, "dense2":4, "other_L4kkx":5,"other_fist":6,"other_fzc":7,"other1":8,"other2":9,
"other3":10, "other4":11, "other5":12, "other6":13, "other7":14, "other8":15, "other9":16, "dense1":17, "dense3":18}
CocoDatabaseUtil.change_img_to_coco_format(img_dir, save_dir, 1536, xml_dir, category_dict=category_dict, file_head="")
/BigJob2-0.54.post73.tar.gz/BigJob2-0.54.post73/docs/source/patterns/simple.rst

###############
Simple Ensemble
###############
You might be wondering how to create your own BigJob script or how BigJob can be useful for your needs.
The first example, below, submits N jobs using BigJob. This is very useful if you are running many jobs with the same executable. Rather than submitting each job individually to the queuing system and then waiting for every job to become active and complete, you submit just one 'Big' job that reserves the number of cores needed to run all of your jobs. When this BigJob becomes active, your jobs are pulled by BigJob from the Redis server and executed.
The example below demonstrates the mapping of a simple job (i.e. the executable is /bin/echo) using all of the parameters of a Compute Unit Description. Specifically, it shows how to run 4 jobs on your local machine using fork:
.. literalinclude:: ../../../examples/tutorial/barebones-local/local_simple_ensembles.py
:language: python
| PypiClean |
/KeralaPyApiV2-2.0.2020.tar.gz/KeralaPyApiV2-2.0.2020/pyrogram/client/types/user_and_chats/chat.py
from typing import Union, List
import pyrogram
from pyrogram.api import types
from .chat_permissions import ChatPermissions
from .chat_photo import ChatPhoto
from .restriction import Restriction
from ..object import Object
from ...ext import utils
class Chat(Object):
"""A chat.
Parameters:
id (``int``):
Unique identifier for this chat.
type (``str``):
Type of chat, can be either "private", "bot", "group", "supergroup" or "channel".
is_verified (``bool``, *optional*):
True, if this chat has been verified by Telegram. Supergroups, channels and bots only.
is_restricted (``bool``, *optional*):
True, if this chat has been restricted. Supergroups, channels and bots only.
See *restriction_reason* for details.
is_scam (``bool``, *optional*):
True, if this chat has been flagged for scam. Supergroups, channels and bots only.
is_support (``bool``):
True, if this chat is part of the Telegram support team. Users and bots only.
title (``str``, *optional*):
Title, for supergroups, channels and basic group chats.
username (``str``, *optional*):
Username, for private chats, bots, supergroups and channels if available.
first_name (``str``, *optional*):
First name of the other party in a private chat, for private chats and bots.
last_name (``str``, *optional*):
Last name of the other party in a private chat, for private chats.
photo (:obj:`ChatPhoto`, *optional*):
Chat photo. Suitable for downloads only.
description (``str``, *optional*):
Bio, for private chats and bots or description for groups, supergroups and channels.
Returned only in :meth:`~Client.get_chat`.
invite_link (``str``, *optional*):
Chat invite link, for groups, supergroups and channels.
Returned only in :meth:`~Client.get_chat`.
pinned_message (:obj:`Message`, *optional*):
Pinned message, for groups, supergroups channels and own chat.
Returned only in :meth:`~Client.get_chat`.
sticker_set_name (``str``, *optional*):
For supergroups, name of group sticker set.
Returned only in :meth:`~Client.get_chat`.
can_set_sticker_set (``bool``, *optional*):
True, if the group sticker set can be changed by you.
Returned only in :meth:`~Client.get_chat`.
members_count (``int``, *optional*):
Chat members count, for groups, supergroups and channels only.
restrictions (List of :obj:`Restriction`, *optional*):
The list of reasons why this chat might be unavailable to some users.
This field is available only in case *is_restricted* is True.
permissions (:obj:`ChatPermissions` *optional*):
Default chat member permissions, for groups and supergroups.
distance (``int``, *optional*):
Distance in meters of this group chat from your location.
Returned only in :meth:`~Client.get_nearby_chats`.
"""
def __init__(
self,
*,
client: "pyrogram.BaseClient" = None,
id: int,
type: str,
is_verified: bool = None,
is_restricted: bool = None,
is_scam: bool = None,
is_support: bool = None,
title: str = None,
username: str = None,
first_name: str = None,
last_name: str = None,
photo: ChatPhoto = None,
description: str = None,
invite_link: str = None,
pinned_message=None,
sticker_set_name: str = None,
can_set_sticker_set: bool = None,
members_count: int = None,
restrictions: List[Restriction] = None,
permissions: "pyrogram.ChatPermissions" = None,
distance: int = None
):
super().__init__(client)
self.id = id
self.type = type
self.is_verified = is_verified
self.is_restricted = is_restricted
self.is_scam = is_scam
self.is_support = is_support
self.title = title
self.username = username
self.first_name = first_name
self.last_name = last_name
self.photo = photo
self.description = description
self.invite_link = invite_link
self.pinned_message = pinned_message
self.sticker_set_name = sticker_set_name
self.can_set_sticker_set = can_set_sticker_set
self.members_count = members_count
self.restrictions = restrictions
self.permissions = permissions
self.distance = distance
@staticmethod
def _parse_user_chat(client, user: types.User) -> "Chat":
peer_id = user.id
return Chat(
id=peer_id,
type="bot" if user.bot else "private",
is_verified=getattr(user, "verified", None),
is_restricted=getattr(user, "restricted", None),
is_scam=getattr(user, "scam", None),
is_support=getattr(user, "support", None),
username=user.username,
first_name=user.first_name,
last_name=user.last_name,
photo=ChatPhoto._parse(client, user.photo, peer_id, user.access_hash),
restrictions=pyrogram.List([Restriction._parse(r) for r in user.restriction_reason]) or None,
client=client
)
@staticmethod
def _parse_chat_chat(client, chat: types.Chat) -> "Chat":
peer_id = -chat.id
return Chat(
id=peer_id,
type="group",
title=chat.title,
photo=ChatPhoto._parse(client, getattr(chat, "photo", None), peer_id, 0),
permissions=ChatPermissions._parse(getattr(chat, "default_banned_rights", None)),
members_count=getattr(chat, "participants_count", None),
client=client
)
@staticmethod
def _parse_channel_chat(client, channel: types.Channel) -> "Chat":
peer_id = utils.get_channel_id(channel.id)
restriction_reason = getattr(channel, "restriction_reason", [])
return Chat(
id=peer_id,
type="supergroup" if channel.megagroup else "channel",
is_verified=getattr(channel, "verified", None),
is_restricted=getattr(channel, "restricted", None),
is_scam=getattr(channel, "scam", None),
title=channel.title,
username=getattr(channel, "username", None),
photo=ChatPhoto._parse(client, getattr(channel, "photo", None), peer_id, channel.access_hash),
restrictions=pyrogram.List([Restriction._parse(r) for r in restriction_reason]) or None,
permissions=ChatPermissions._parse(getattr(channel, "default_banned_rights", None)),
members_count=getattr(channel, "participants_count", None),
client=client
)
@staticmethod
def _parse(client, message: types.Message or types.MessageService, users: dict, chats: dict) -> "Chat":
if isinstance(message.to_id, types.PeerUser):
return Chat._parse_user_chat(client, users[message.to_id.user_id if message.out else message.from_id])
if isinstance(message.to_id, types.PeerChat):
return Chat._parse_chat_chat(client, chats[message.to_id.chat_id])
return Chat._parse_channel_chat(client, chats[message.to_id.channel_id])
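The three `_parse_*` helpers above encode Telegram's peer-id convention: user ids stay positive, basic-group ids are negated, and channel/supergroup ids are offset (what `utils.get_channel_id` computes) so they read as `-100<id>`. A self-contained sketch of that mapping — an illustration of the convention, not pyrogram's actual code:

```python
def to_client_peer_id(raw_id, peer_type):
    """Map a raw Telegram id to the client-facing chat id.
    Assumes the usual convention: users stay positive, basic groups
    are negated, channels/supergroups are offset to read as -100<id>."""
    if peer_type == "user":
        return raw_id
    if peer_type == "chat":          # basic group
        return -raw_id
    return -(10**12 + raw_id)        # channel / supergroup

print(to_client_peer_id(123456789, "channel"))  # → -1000123456789
```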
@staticmethod
def _parse_dialog(client, peer, users: dict, chats: dict):
if isinstance(peer, types.PeerUser):
return Chat._parse_user_chat(client, users[peer.user_id])
elif isinstance(peer, types.PeerChat):
return Chat._parse_chat_chat(client, chats[peer.chat_id])
else:
return Chat._parse_channel_chat(client, chats[peer.channel_id])
@staticmethod
async def _parse_full(client, chat_full: types.messages.ChatFull or types.UserFull) -> "Chat":
if isinstance(chat_full, types.UserFull):
parsed_chat = Chat._parse_user_chat(client, chat_full.user)
parsed_chat.description = chat_full.about
if chat_full.pinned_msg_id:
parsed_chat.pinned_message = await client.get_messages(
parsed_chat.id,
message_ids=chat_full.pinned_msg_id
)
else:
full_chat = chat_full.full_chat
chat = None
for i in chat_full.chats:
if full_chat.id == i.id:
chat = i
if isinstance(full_chat, types.ChatFull):
parsed_chat = Chat._parse_chat_chat(client, chat)
parsed_chat.description = full_chat.about or None
if isinstance(full_chat.participants, types.ChatParticipants):
parsed_chat.members_count = len(full_chat.participants.participants)
else:
parsed_chat = Chat._parse_channel_chat(client, chat)
parsed_chat.members_count = full_chat.participants_count
parsed_chat.description = full_chat.about or None
# TODO: Add StickerSet type
parsed_chat.can_set_sticker_set = full_chat.can_set_stickers
parsed_chat.sticker_set_name = getattr(full_chat.stickerset, "short_name", None)
if full_chat.pinned_msg_id:
parsed_chat.pinned_message = await client.get_messages(
parsed_chat.id,
message_ids=full_chat.pinned_msg_id
)
if isinstance(full_chat.exported_invite, types.ChatInviteExported):
parsed_chat.invite_link = full_chat.exported_invite.link
return parsed_chat
@staticmethod
def _parse_chat(client, chat: Union[types.Chat, types.User, types.Channel]) -> "Chat":
if isinstance(chat, types.Chat):
return Chat._parse_chat_chat(client, chat)
elif isinstance(chat, types.User):
return Chat._parse_user_chat(client, chat)
else:
return Chat._parse_channel_chat(client, chat)
async def archive(self):
"""Bound method *archive* of :obj:`Chat`.
Use as a shortcut for:
.. code-block:: python
client.archive_chats(-100123456789)
Example:
.. code-block:: python
chat.archive()
Returns:
True on success.
Raises:
RPCError: In case of a Telegram RPC error.
"""
return await self._client.archive_chats(self.id)
async def unarchive(self):
"""Bound method *unarchive* of :obj:`Chat`.
Use as a shortcut for:
.. code-block:: python
client.unarchive_chats(-100123456789)
Example:
.. code-block:: python
chat.unarchive()
Returns:
True on success.
Raises:
RPCError: In case of a Telegram RPC error.
"""
return await self._client.unarchive_chats(self.id)
# TODO: Remove notes about "All Members Are Admins" for basic groups, the attribute doesn't exist anymore
async def set_title(self, title: str) -> bool:
"""Bound method *set_title* of :obj:`Chat`.
Use as a shortcut for:
.. code-block:: python
client.set_chat_title(
chat_id=chat_id,
title=title
)
Example:
.. code-block:: python
chat.set_title("Lounge")
Note:
In regular groups (non-supergroups), this method will only work if the "All Members Are Admins"
setting is off.
Parameters:
title (``str``):
New chat title, 1-255 characters.
Returns:
``bool``: True on success.
Raises:
RPCError: In case of Telegram RPC error.
ValueError: In case a chat_id belongs to a user.
"""
return await self._client.set_chat_title(
chat_id=self.id,
title=title
)
async def set_description(self, description: str) -> bool:
"""Bound method *set_description* of :obj:`Chat`.
Use as a shortcut for:
.. code-block:: python
client.set_chat_description(
chat_id=chat_id,
description=description
)
Example:
.. code-block:: python
chat.set_description("Don't spam!")
Parameters:
description (``str``):
New chat description, 0-255 characters.
Returns:
``bool``: True on success.
Raises:
RPCError: In case of Telegram RPC error.
ValueError: If a chat_id doesn't belong to a supergroup or a channel.
"""
return await self._client.set_chat_description(
chat_id=self.id,
description=description
)
async def set_photo(self, photo: str) -> bool:
"""Bound method *set_photo* of :obj:`Chat`.
Use as a shortcut for:
.. code-block:: python
client.set_chat_photo(
chat_id=chat_id,
photo=photo
)
Example:
.. code-block:: python
chat.set_photo("photo.png")
Parameters:
photo (``str``):
New chat photo. You can pass a :obj:`Photo` id or a file path to upload a new photo.
Returns:
``bool``: True on success.
Raises:
RPCError: In case of a Telegram RPC error.
ValueError: In case a chat_id belongs to a user.
"""
return await self._client.set_chat_photo(
chat_id=self.id,
photo=photo
)
async def kick_member(
self,
user_id: Union[int, str],
until_date: int = 0
) -> Union["pyrogram.Message", bool]:
"""Bound method *kick_member* of :obj:`Chat`.
Use as a shortcut for:
.. code-block:: python
client.kick_chat_member(
chat_id=chat_id,
user_id=user_id
)
Example:
.. code-block:: python
chat.kick_member(123456789)
Note:
In regular groups (non-supergroups), this method will only work if the "All Members Are Admins" setting is
off in the target group. Otherwise members may only be removed by the group's creator or by the member
that added them.
Parameters:
user_id (``int`` | ``str``):
Unique identifier (int) or username (str) of the target user.
For a contact that exists in your Telegram address book you can use his phone number (str).
until_date (``int``, *optional*):
Date when the user will be unbanned, unix time.
If user is banned for more than 366 days or less than 30 seconds from the current time they are
considered to be banned forever. Defaults to 0 (ban forever).
Returns:
:obj:`Message` | ``bool``: On success, a service message will be returned (when applicable), otherwise, in
case a message object couldn't be returned, True is returned.
Raises:
RPCError: In case of a Telegram RPC error.
"""
return await self._client.kick_chat_member(
chat_id=self.id,
user_id=user_id,
until_date=until_date
)
async def unban_member(
self,
user_id: Union[int, str]
) -> bool:
"""Bound method *unban_member* of :obj:`Chat`.
Use as a shortcut for:
.. code-block:: python
client.unban_chat_member(
chat_id=chat_id,
user_id=user_id
)
Example:
.. code-block:: python
chat.unban_member(123456789)
Parameters:
user_id (``int`` | ``str``):
Unique identifier (int) or username (str) of the target user.
For a contact that exists in your Telegram address book you can use his phone number (str).
Returns:
``bool``: True on success.
Raises:
RPCError: In case of a Telegram RPC error.
"""
return await self._client.unban_chat_member(
chat_id=self.id,
user_id=user_id,
)
async def restrict_member(
self,
user_id: Union[int, str],
until_date: int = 0,
can_send_messages: bool = False,
can_send_media_messages: bool = False,
can_send_other_messages: bool = False,
can_add_web_page_previews: bool = False,
can_send_polls: bool = False,
can_change_info: bool = False,
can_invite_users: bool = False,
can_pin_messages: bool = False
) -> "pyrogram.Chat":
"""Bound method *unban_member* of :obj:`Chat`.
Use as a shortcut for:
.. code-block:: python
client.restrict_chat_member(
chat_id=chat_id,
user_id=user_id
)
Example:
.. code-block:: python
chat.restrict_member(123456789)
Parameters:
user_id (``int`` | ``str``):
Unique identifier (int) or username (str) of the target user.
For a contact that exists in your Telegram address book you can use his phone number (str).
until_date (``int``, *optional*):
Date when the user will be unbanned, unix time.
If user is banned for more than 366 days or less than 30 seconds from the current time they are
considered to be banned forever. Defaults to 0 (ban forever).
can_send_messages (``bool``, *optional*):
Pass True, if the user can send text messages, contacts, locations and venues.
can_send_media_messages (``bool``, *optional*):
Pass True, if the user can send audios, documents, photos, videos, video notes and voice notes,
implies can_send_messages.
can_send_other_messages (``bool``, *optional*):
Pass True, if the user can send animations, games, stickers and use inline bots,
implies can_send_media_messages.
can_add_web_page_previews (``bool``, *optional*):
Pass True, if the user may add web page previews to their messages, implies can_send_media_messages.
can_send_polls (``bool``, *optional*):
Pass True, if the user can send polls, implies can_send_media_messages.
can_change_info (``bool``, *optional*):
Pass True, if the user can change the chat title, photo and other settings.
can_invite_users (``bool``, *optional*):
Pass True, if the user can invite new users to the chat.
can_pin_messages (``bool``, *optional*):
Pass True, if the user can pin messages.
Returns:
:obj:`Chat`: On success, a chat object is returned.
Raises:
RPCError: In case of a Telegram RPC error.
"""
return await self._client.restrict_chat_member(
chat_id=self.id,
user_id=user_id,
until_date=until_date,
can_send_messages=can_send_messages,
can_send_media_messages=can_send_media_messages,
can_send_other_messages=can_send_other_messages,
can_add_web_page_previews=can_add_web_page_previews,
can_send_polls=can_send_polls,
can_change_info=can_change_info,
can_invite_users=can_invite_users,
can_pin_messages=can_pin_messages
)
async def promote_member(
self,
user_id: Union[int, str],
can_change_info: bool = True,
can_post_messages: bool = False,
can_edit_messages: bool = False,
can_delete_messages: bool = True,
can_restrict_members: bool = True,
can_invite_users: bool = True,
can_pin_messages: bool = False,
can_promote_members: bool = False
) -> bool:
"""Bound method *promote_member* of :obj:`Chat`.
Use as a shortcut for:
.. code-block:: python
client.promote_chat_member(
chat_id=chat_id,
user_id=user_id
)
Example:
.. code-block:: python
chat.promote_member(123456789)
Parameters:
user_id (``int`` | ``str``):
Unique identifier (int) or username (str) of the target user.
For a contact that exists in your Telegram address book you can use his phone number (str).
can_change_info (``bool``, *optional*):
Pass True, if the administrator can change chat title, photo and other settings.
can_post_messages (``bool``, *optional*):
Pass True, if the administrator can create channel posts, channels only.
can_edit_messages (``bool``, *optional*):
Pass True, if the administrator can edit messages of other users and can pin messages, channels only.
can_delete_messages (``bool``, *optional*):
Pass True, if the administrator can delete messages of other users.
can_restrict_members (``bool``, *optional*):
Pass True, if the administrator can restrict, ban or unban chat members.
can_invite_users (``bool``, *optional*):
Pass True, if the administrator can invite new users to the chat.
can_pin_messages (``bool``, *optional*):
Pass True, if the administrator can pin messages, supergroups only.
can_promote_members (``bool``, *optional*):
Pass True, if the administrator can add new administrators with a subset of his own privileges or
demote administrators that he has promoted, directly or indirectly (promoted by administrators that
were appointed by him).
Returns:
``bool``: True on success.
Raises:
RPCError: In case of a Telegram RPC error.
"""
return await self._client.promote_chat_member(
chat_id=self.id,
user_id=user_id,
can_change_info=can_change_info,
can_post_messages=can_post_messages,
can_edit_messages=can_edit_messages,
can_delete_messages=can_delete_messages,
can_restrict_members=can_restrict_members,
can_invite_users=can_invite_users,
can_pin_messages=can_pin_messages,
can_promote_members=can_promote_members
)
async def join(self):
"""Bound method *join* of :obj:`Chat`.
Use as a shortcut for:
.. code-block:: python
client.join_chat(123456789)
Example:
.. code-block:: python
chat.join()
Note:
This only works for public groups and channels that have set a username.
Returns:
:obj:`Chat`: On success, a chat object is returned.
Raises:
RPCError: In case of a Telegram RPC error.
"""
return await self._client.join_chat(self.username)
async def leave(self):
"""Bound method *leave* of :obj:`Chat`.
Use as a shortcut for:
.. code-block:: python
client.leave_chat(123456789)
Example:
.. code-block:: python
chat.leave()
Raises:
RPCError: In case of a Telegram RPC error.
"""
return await self._client.leave_chat(self.id)
async def export_invite_link(self):
"""Bound method *export_invite_link* of :obj:`Chat`.
Use as a shortcut for:
.. code-block:: python
client.export_chat_invite_link(123456789)
Example:
.. code-block:: python
chat.export_invite_link()
Returns:
``str``: On success, the exported invite link is returned.
Raises:
ValueError: In case the chat_id belongs to a user.
"""
return await self._client.export_invite_link(self.id)
/H_MCRLLM-0.0.24-py3-none-any.whl/H_MCRLLM/half_mcr.py

# import packages
import numpy as np
from scipy.optimize import minimize
from functools import partial
from H_MCRLLM.preprocessing import preprocess_data,final_process
from sklearn.cluster import KMeans
from sklearn.cluster import MiniBatchKMeans
import pysptools.eea.eea
import math
from tqdm import tqdm
import warnings
warnings.filterwarnings("ignore", category=RuntimeWarning)
"""All intitialisation methods to extract the endmembers"""
################################################################################################################################
class KmeansInit: #Kmeans initialisation
@classmethod
def initialisation(cls, x, nb_c, n_init=10):
s = KMeans(n_clusters=nb_c, n_init = n_init).fit(x).cluster_centers_
return s
################################################################################################################################
class MBKmeansInit: # Mini Batch KMeans (faster but less robust)
@classmethod
def initialisation(cls, x, nb_c):
ctr_k = MiniBatchKMeans(n_clusters = nb_c).fit(x)
s = ctr_k.cluster_centers_
return s
################################################################################################################################
class NFindrInit:
@classmethod
def initialisation(cls, x, nb_c):
size0 = x.shape[0]
size1 = x.shape[1]
xr = np.reshape(x, (1, size0, size1)) # NFINDR works with 3D datasets...
s = pysptools.eea.NFINDR.extract(cls, xr, nb_c)
return s
################################################################################################################################
class RobustNFindrInit:
@classmethod
def initialisation(cls, x, nb_c, fraction = 0.1, min_spectra = 50, nb_i = 50): # KMeans + NMF initialisation / see Ryan Gosselin for explanations and evolutions
"""
fraction : fraction of the dataset drawn for each bootstrap sample
min_spectra : minimum number of spectra required to keep a sample
nb_i : number of bootstrap samples to draw
"""
def km(x, nb_c):
k = KMeans(n_clusters=nb_c).fit(x)
IDX = k.labels_
C = k.cluster_centers_
return IDX, C
s1 = x.shape[0]
fX = math.ceil(s1*fraction)
BESTC = np.array(())
DETC = 0
for i in tqdm(range(nb_i)):
randomVector = np.random.choice(s1, fX, replace = False)# Create random vector with unique values
sampX = x[randomVector,:]#Pick a sample in x according to the randomVector
#Run Kmeans
IDX, C = km(sampX, nb_c)
#Check Number of pixels in each kmeans centroid
U, nbU = np.unique(IDX, return_counts=True);
if min(nbU) > min_spectra: #Do not keep this bootstrap if too few pixels fall in a category
a = np.zeros((nb_c,1)) + 1 #Start NMF
C1 = np.column_stack((a, C))
CC = C1 @ C1.T
detc = np.linalg.det(CC)
if detc > DETC:
DETC = detc
BESTC = np.copy(C)
#print(nbU)
return BESTC
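The bootstrap above keeps the centroid set with the largest `det(C1 @ C1.T)` where `C1 = [1 | C]`; this determinant grows with the volume of the simplex spanned by the candidate endmembers, which is the same criterion N-FINDR maximizes. A toy check of that intuition, with made-up 2-D endmembers:

```python
import numpy as np

def nfindr_criterion(C):
    """det([1|C] @ [1|C].T): larger when the rows of C span a bigger simplex."""
    a = np.ones((C.shape[0], 1))
    C1 = np.hstack((a, C))
    return np.linalg.det(C1 @ C1.T)

spread = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])      # well-separated endmembers
collapsed = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])   # nearly identical endmembers
print(nfindr_criterion(spread) > nfindr_criterion(collapsed))  # → True
```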
################################################################################################################################
class RobustNFindrV2Init:
@classmethod
def initialisation(cls, x, nb_c, fraction = 0.1, min_spectra = 50, nb_i = 50):
"""
fraction : fraction of the dataset drawn for each bootstrap sample
min_spectra : minimum number of spectra required to keep a sample
nb_i : number of bootstrap samples to draw
"""
def km(x, nb_c):
k = KMeans(n_clusters=nb_c).fit(x)
IDX = k.labels_
C = k.cluster_centers_
return IDX, C
s1 = x.shape[0]
f = fraction # Fraction to keep in each bootstrap
fX = math.ceil(s1*f)
allS = np.array(())
for i in tqdm(range(nb_i)):
randomVector = np.random.choice(s1, fX, replace = False)# Create random vector with unique values
sampX = x[randomVector,:]#Pick a sample in x according to the randomVector
#Run Kmeans
IDX, C = km(sampX, nb_c)
#Check Number of pixels in each kmeans centroid
U, nbU = np.unique(IDX, return_counts=True);
#print(nbU)
if min(nbU) > min_spectra: #Do not keep this bootstrap if too few pixels fall in a category
try:
allS = np.vstack((allS, C));
except ValueError:
allS = np.copy(C)
size0 = allS.shape[0]
size1 = allS.shape[1]
allS = np.reshape(allS, (1, size0, size1)) # NFINDR works with 3D datasets...
s = pysptools.eea.NFINDR.extract(cls, allS, nb_c)
return s
################################################################################################################################
class AtgpInit: # Automatic Target Generation Process
@classmethod
def initialisation(cls, x, nb_c):
s = pysptools.eea.eea.ATGP(x, nb_c)
return s[0]
################################################################################################################################
class FippiInit: # Fast Iterative Pixel Purity Index
@classmethod
def initialisation(cls, x, nb_c):
t = pysptools.eea.eea.FIPPI(x, q = nb_c)
s = t[0]
s = s[:nb_c, :]
return s
################################################################################################################################
class PpiInit: # Pixel Purity Index
@classmethod
def initialisation(cls, x, nb_c):
numSkewers = 10000
s = pysptools.eea.eea.PPI(x, nb_c, numSkewers)
return s[0]
################################################################################################################################
class nKmeansInit:
@classmethod
def initialisation(cls, x, nb_c, n = 15):
"""Sometimes it's necessary to run Kmeans for more component than we want, to get the expected spectras, this version runs
the initialisation for nb_c + n components, and keep the nb_c centroids containing the most pixels"""
nb_ci = nb_c + n
init = KMeans(nb_ci).fit(x)
s = init.cluster_centers_
lab = init.labels_
U, nbU = np.unique(lab, return_counts=True);# nbU is the number of pixels in each centroid
ind = nbU.argsort()[-nb_c:] # Get the indices of the nb_c centroids containing the most pixels
s = s[ind,:] # Keep only the nb_c spectra
return s
#########################################################################################################################################################
class half_mcrllm:
def __init__(self, xraw, nb_c, init):
# We don't use the number of iterations, it is just to accommodate the GUI
self.Xraw = xraw
self.nb_c = nb_c
self.X ,self.Xsum,self.deletedpix, self.deletedLevels, self.check_pix , self.check_level = preprocess_data(self.Xraw)
if self.check_pix:
self.Xraw = np.delete(self.Xraw , self.deletedpix , axis = 0)
if self.check_level:
self.Xraw = np.delete(self.Xraw , self.deletedLevels , axis = 1)
self.define_initial_spectra(init)
c_pred = self.X @ self.S.T @ np.linalg.inv(self.S @ self.S.T)
c = self.C_plm(self.S, self.Xraw, nb_c, c_pred)
self.C = c
self.C,self.S = final_process(self.C ,self.S, self.deletedpix , self.deletedLevels , self.check_pix , self.check_level )
def C_plm(self, s, xraw, nb_c, c_pred):
#initialize C
[nb_pix,nb_lev] = np.shape(xraw)
c_new = np.zeros((nb_pix,nb_c))
# compute the optimal concentrations for each pixel by maximum likelihood
for pix in range(nb_pix):
x_sum = np.sum(xraw[pix,:]) # total counts
sraw = s*x_sum
c_new[pix,:] = self.pyPLM(nb_c, sraw, xraw[pix,:], c_pred[pix,:])
# avoid errors (this part should not be necessary)
c_new[np.isnan(c_new)] = 1/nb_c
c_new[np.isinf(c_new)] = 1/nb_c
c_new[c_new<0] = 0
c_sum1 = np.array([np.sum(c_new,axis=1)])
c_sum = c_sum1.T @ np.ones((1, np.size(c_new, axis=1)))
c_new = c_new/c_sum
return c_new
def pyPLM(self, nb_c, sraw, xrawPix, c_old):
# sum of every value is equal to 1
def con_one(c_old):
return 1-sum(c_old)
# all values are positive
bnds = ((0.0, 1.0),) * nb_c
cons = [{'type': 'eq', 'fun': con_one}]
def regressLLPoisson(sraw, xrawPix, c_pred):
#compute prediction of counts
yPred = c_pred @ sraw
# avoid errors, should not be necessary
yPred[yPred < 1/(10000*len(yPred))] = 1/(10000*len(yPred))
logLik = -np.sum(xrawPix*np.log(yPred)-yPred)
return (logLik)
def jacobians(nb_c, xrawPix, sraw, c_pred):
#compute prediction of counts
yPred = c_pred @ sraw
#compute jacobians
jacC = np.zeros(nb_c)
for phase in range(nb_c):
jacC[phase] = -np.sum(((xrawPix*sraw[phase,:])/yPred)-sraw[phase,:])
return(jacC)
# Run the minimizer
results = minimize(partial(regressLLPoisson, sraw, xrawPix), c_old, method='SLSQP', bounds=bnds, constraints=cons, jac = partial(jacobians, nb_c, xrawPix, sraw))
results = results.x
results = np.asarray(results)
c_new = results.reshape(int(len(results) / nb_c), nb_c)
return c_new
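`pyPLM` minimizes the Poisson negative log-likelihood `-sum(x*log(y_pred) - y_pred)` with `y_pred = c @ sraw`, subject to sum-to-one and bound constraints. A standalone sanity check that this objective indeed prefers the true concentrations (synthetic spectra, not real data):

```python
import numpy as np

def poisson_nll(c, S, x):
    """Negative Poisson log-likelihood of counts x for concentrations c
    and component spectra S (rows = components)."""
    y_pred = c @ S
    y_pred = np.maximum(y_pred, 1e-12)  # guard against log(0)
    return -np.sum(x * np.log(y_pred) - y_pred)

S = np.array([[100.0, 10.0], [10.0, 100.0]])   # two synthetic component spectra
c_true = np.array([0.7, 0.3])
x = c_true @ S                                  # noiseless "observed" counts
print(poisson_nll(c_true, S, x) < poisson_nll(np.array([0.3, 0.7]), S, x))  # → True
```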
def define_initial_spectra(self,init):
if type(init) == type(''):
if init == 'Kmeans':
print('Initializing with {}'.format(init))
self.Sini = KmeansInit.initialisation(self.X,self.nb_c)
self.S = self.Sini.copy()
elif init == 'MBKmeans':
print('Initializing with {}'.format(init))
self.Sini = MBKmeansInit.initialisation(self.X,self.nb_c)
self.S = self.Sini.copy()
elif init == 'NFindr':
print('Initializing with {}'.format(init))
self.Sini = NFindrInit.initialisation(self.X,self.nb_c)
self.S = self.Sini.copy()
elif init == 'RobustNFindr':
print('Initializing with {}'.format(init))
self.Sini = RobustNFindrInit.initialisation(self.X,self.nb_c)
self.S = self.Sini.copy()
elif init == 'ATGP':
print('Initializing with {}'.format(init))
self.Sini = AtgpInit.initialisation(self.X,self.nb_c)
self.S = self.Sini.copy()
elif init == 'FIPPI':
print('Initializing with {}'.format(init))
self.Sini = FippiInit.initialisation(self.X,self.nb_c)
self.S = self.Sini.copy()
elif init == 'nKmeansInit':
print('Initializing with {}'.format(init))
self.Sini = nKmeansInit.initialisation(self.X,self.nb_c)
self.S = self.Sini.copy()
        elif isinstance(init, np.ndarray):
            print('Initializing with given spectra')
self.S = init
else:
            raise ValueError('Initialization method not found') | PypiClean
/ConfigEnv-1.2.4.tar.gz/ConfigEnv-1.2.4/README.md | # ConfigEnv [![Build Status](https://travis-ci.org/Nydareld/ConfigEnv.svg?branch=master)](https://travis-ci.org/Nydareld/ConfigEnv) [![Coverage Status](https://coveralls.io/repos/github/Nydareld/ConfigEnv/badge.svg)](https://coveralls.io/github/Nydareld/ConfigEnv) [![PyPI version](https://badge.fury.io/py/ConfigEnv.svg)](https://badge.fury.io/py/ConfigEnv) ![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ConfigEnv.svg)
Configuration manager for JSON and INI files, with optional overrides from environment variables
## install
with pip :
pip install ConfigEnv
## how to use
You can use ConfigEnv with any of:

- json files
- ini files
- environment variables

Notice that environment variables take precedence over configuration files
### basic json example:
with the file :
```json
// config.json
{
"AWS" : {
"ACCESS_KEY_ID" : "toto"
}
}
```
```python
from ConfigEnv import Config
config = Config("config.json")
print(config.get("AWS_ACCESS_KEY_ID"))
# prints toto
```
### overide file
you can add more files to override configs

note that the lib caches parsed files, so register all your config files before requesting config values
```json
// config.json
{
"AWS" : {
"ACCESS_KEY_ID" : "toto"
}
}
```
```ini
; config.ini
[AWS]
ACCESS_KEY_ID=tata
```
```python
from ConfigEnv import Config
config = Config("config.json")
config.addFile("config.ini")
print(config.get("AWS_ACCESS_KEY_ID"))
# prints tata
```
### overide with environement variable
```json
// config.json
{
"AWS" : {
"ACCESS_KEY_ID" : "toto"
}
}
```
with the environement variable : `AWS_ACCESS_KEY_ID=tata`
```python
from ConfigEnv import Config
config = Config("config.json")
print(config.get("AWS_ACCESS_KEY_ID"))
# prints tata
```
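The full lookup order can be summarised with a small standalone sketch (plain Python illustrating the behaviour, not ConfigEnv's actual implementation):

```python
import os

# flattened config layers, in registration order: later files override
# earlier ones, and environment variables override every file
layers = [
    {"AWS_ACCESS_KEY_ID": "toto"},  # config.json
    {"AWS_ACCESS_KEY_ID": "tata"},  # config.ini, registered later
]

def get(key):
    value = None
    for layer in layers:
        value = layer.get(key, value)
    return os.environ.get(key, value)

print(get("AWS_ACCESS_KEY_ID"))
# prints tata, unless AWS_ACCESS_KEY_ID is set in the environment
```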
## development guide
we advise you to fork the repository, and if you have a good feature, we would appreciate a pull request
### install development environment
with virtualenv :
virtualenv -p python3 .venv
source .venv/bin/activate
install dependencies :
pip install -r requirements.txt
### test
run tests :
python -m unittest tests
### coverage
run coverage
coverage run --source=ConfigEnv -m unittest tests
report coverage
coverage report -m
### release
create package :
python3 setup.py sdist bdist_wheel
publish :
python -m twine upload dist/*
| PypiClean |
/Bigeleisen_KIE-1.0.tar.gz/Bigeleisen_KIE-1.0/Bigeleisen_KIE.py | import pandas as pd
import numpy as np
################################################
#Parameters :
#Planck constant (J/Hz)
h=6.62607004*10**-34
#Boltzmann constant (J/K)
kB=1.38064852*10**-23
#Light velocity in vacuum (m/s)
c=299792458.0
####################################################################################
#Functions:
######################################################################################
#We check for errors :
#We check if all values are positive for initial states
def CheckPositiv(Data):
    if len(Data)!=len([i for i in Data if (i>0)]):
        print("At least one initial state has a non-positive frequency")
def error(isH,isD,tsH,tsD):
CheckPositiv(isH)
CheckPositiv(isD)
#####################################################################################
#Function which takes the lists of vibration frequencies of 2 states to give the product of the ratio of frequencies
def Operation(Data1,Data2):
    if len(Data1)!=len(Data2):
        raise ValueError("The number of frequencies isn't the same for two matching states")
x=1
for i in range(len(Data1)):
x=x*Data1[i]/Data2[i]
return x
#Function which takes one list of vibration frequencies (cm-1) and returns sinh(Ui) with Ui = h*c*100*x/(2*kB*T), as in the Bigeleisen equation
def Ui(Data,T):
return pd.Series(Data).apply(lambda x : np.sinh(float(x)*((h*100*c)/(2.0*kB*float(T)))))
#Function which takes in entry the lists of frequencies (cm-1) and the temperature (K) and gives the KIE
#isH is the vibration frequencies of the molecule containing the light isotope at the initial state
#isD is the vibration frequencies of the molecule containing the heavy isotope at the initial state
#tsH is the vibration frequencies of the molecule containing the light isotope at the transition state
#tsD is the vibration frequencies of the molecule containing the heavy isotope at the transition state
#T is the temperature in Kelvin
def KIE(isH,isD,tsH,tsD,T):
error(isH,isD,tsH,tsD)
#We calculate the sinh of h*x/(kB*T)
UisH=Ui(isH,T).tolist()
UtsH=Ui(tsH,T).tolist()
UisD=Ui(isD,T).tolist()
UtsD=Ui(tsD,T).tolist()
#######################
#We begin to calculate the ratio of the two imaginary frequencies
op1=tsH[0]/tsD[0]
Result=op1
#We calculate the second factor
Result=Result*Operation(tsH[1:],tsD[1:])
#We calculate the third factor
Result=Result*Operation(isD,isH)
#We calculate the fourth factor
Result=Result*Operation(UtsD[1:],UtsH[1:])
#We calculate the fifth factor
Result=Result*Operation(UisH,UisD)
return Result
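# Order-of-magnitude check (illustrative numbers only): at T = 298.15 K a
# 3000 cm-1 C-H stretch gives Ui = h*100*c*x/(2*kB*T) ~ 7.2, so
# sinh(Ui) ~ exp(Ui)/2 and the zero-point-energy difference between the
# isotopologues dominates the computed KIE.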
#################################################################################### | PypiClean |
/Aadhaar_extractor-0.0.5-py3-none-any.whl/AADHAAR_EXTRACTOR/Extractor.py |
class AadhaarExtractor:
    # assumes the input is a file name, not a cv2 or PIL image
    # TODO: prefix private member functions with an underscore
    def __init__(self, data1=None):
def invalidate():
if type(data1) == str:
import re
if re.match(r'.*\.(jpg|png|jpeg)',data1,re.M|re.I)!= None:
return False
return True
### check if the input string seems like a image file ###
if type(data1)!=str :
raise ValueError("Invalid input: Give file paths only")
elif invalidate():
raise ValueError("Only image files possible")
self.data1 = data1
# FIXED: check for invalid inputs
self.mainlist = None
self.process()
    def load(self, data1):  # can load and jsonify be merged?
        self.data1 = data1
        self.process()
def extract(self):
import json
# try:
# f = open(jsonpath, 'r+')
# except FileNotFoundError: # for the first time
# f = open(jsonpath, 'w+')
# try:
# maindict = json.load(f)
# except ValueError: # for the first time
# print("value error")
# maindict = {}
# if self.maindict['aadhaar_no'] in maindict.keys():
# choice = input(
# "This aadhar number is already present in the database:\n Do you want to update the the data for this aadhaar number (y\n)?")
# if choice.lower() == 'n':
# f.close()
# return self.maindict
# maindict[self.maindict['aadhaar_no']] = self.maindict
# f.seek(0)
# json.dump(maindict, f, indent=2)
# f.close()
return self.mainlist
def file_type(self, file):
# import re
# if re.match(".*\.pdf$", filePath, re.M | re.I):
if file.content_type == r'application/pdf':
return 'pdf'
# if re.match(".*\.(png|jpg|jpeg|bmp|svg)$", filePath, re.M | re.I): # changed and made more flexible
if file.content_type == r'image/jpeg':
return 'img'
return 0
def process(self):
# if self.file_type(data) == 'pdf':
# dict = self.extract_from_pdf(data)
# elif self.file_type(data) == 'img':
self.mainlist = self.extract_from_images()
# else:
# pass
# def extract_details(self):
# if self.data1 != None:
# mainlist = self.give_details_back()
def extract_from_images(self):
import numpy as np
from ultralytics import YOLO
import cv2
import pytesseract
        # NOTE: pytesseract also requires the Tesseract OCR binary to be installed separately
import logging
import os
logging.basicConfig(level=logging.NOTSET)
# Get the absolute path of the current file
current_dir = os.path.dirname(os.path.abspath(__file__))
# Construct the absolute path to the resource file
MODEL_PATH = os.path.join(current_dir, 'best.pt')
# MODEL_PATH = r"best.pt"
def filter_tuples(lst):
##### filters the list so that only one instance of each class is present ########
d = {}
for tup in lst:
key = tup[1]
value = tup[2]
if key not in d:
d[key] = (tup, value)
else:
if value > d[key][1]:
d[key] = (tup, value)
return [tup for key, (tup, value) in d.items()]
def clean_words(name):
name = name.replace("8", "B")
name = name.replace("0", "D")
name = name.replace("6", "G")
name = name.replace("1", "I")
name = name.replace('5', 'S')
return name
def clean_dob(dob):
dob = dob.strip()
dob = dob.replace('l', '/')
dob = dob.replace('L', '/')
dob = dob.replace('I', '/')
dob = dob.replace('i', '/')
dob = dob.replace('|', '/')
dob = dob.replace('\"', '/1')
# dob = dob.replace(":","")
dob = dob.replace(" ", "")
return dob
def validate_aadhaar_numbers(candidate):
if candidate == None :
return True
candidate = candidate.replace(' ', '')
# The multiplication table
d = [
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
[1, 2, 3, 4, 0, 6, 7, 8, 9, 5],
[2, 3, 4, 0, 1, 7, 8, 9, 5, 6],
[3, 4, 0, 1, 2, 8, 9, 5, 6, 7],
[4, 0, 1, 2, 3, 9, 5, 6, 7, 8],
[5, 9, 8, 7, 6, 0, 4, 3, 2, 1],
[6, 5, 9, 8, 7, 1, 0, 4, 3, 2],
[7, 6, 5, 9, 8, 2, 1, 0, 4, 3],
[8, 7, 6, 5, 9, 3, 2, 1, 0, 4],
[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
]
# permutation table p
p = [
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
[1, 5, 7, 6, 2, 8, 3, 0, 9, 4],
[5, 8, 0, 3, 7, 9, 6, 1, 4, 2],
[8, 9, 1, 6, 0, 4, 3, 5, 2, 7],
[9, 4, 5, 3, 1, 2, 6, 8, 7, 0],
[4, 2, 8, 6, 5, 7, 3, 9, 0, 1],
[2, 7, 9, 3, 8, 0, 6, 4, 1, 5],
[7, 0, 4, 6, 9, 1, 3, 2, 5, 8]
]
# inverse table inv
inv = [0, 4, 3, 2, 1, 5, 6, 7, 8, 9]
# print("sonddffsddsdd")
# print(len(candidate))
lastDigit = candidate[-1]
c = 0
array = [int(i) for i in candidate if i != ' ']
array.pop()
array.reverse()
for i in range(len(array)):
                c = d[c][p[((i + 1) % 8)][array[i]]]  # use Verhoeff's algorithm to validate
if inv[c] == int(lastDigit):
return True
return False
# file.seek(0)
# img_bytes = file.read()
# img_array = np.frombuffer(img_bytes, dtype=np.uint8)
img = cv2.imread(self.data1)
og_height,og_width,_ = img.shape
height_scaling = og_height/640
        width_scaling = og_width/640
# TODO for all the self.data types it should support
# img = cv2.imdecode(img_array, cv2.IMREAD_COLOR)
img = cv2.resize(img, (640, 640))
# cv2.imshow("sone", img)
# cv2.waitKey(0)
# cv2.destroyAllWindows()
model = YOLO(MODEL_PATH)
results = model(img)
# rois = []
roidata = []
##### this will create a roidata which will be list of tuples with roi image, cls and confidence #####
for result in results:
# cls = result.boxes.cls
# boxes = result.boxes.xyxy
boxes = result.boxes
for box in boxes:
# x1,y1,x2,y2 = box
l = box.boxes.flatten().tolist()
roicoords = list(map(int, l[0:4]))
# roicoords: x1,y1,x2,y2
confidence, cls = l[4:]
cls = int(cls)
# l = list(box)
# x1,y1,x2,y2 = list(map(int,l))
# print(x1, x2, y1, y2)
# img = cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
# cv2.putText(img, str(cls), (x1, y1), cv2.FONT_HERSHEY_SIMPLEX, 2, 255)
# roi = img[y1:y2, x1:x2]
# rois.append(roi)
templist = [roicoords, cls,confidence]
roidata.append(templist)
# cv2.imshow("s", img)
# cv2.waitKey(0)
# cv2.destroyAllWindows()
# print(roidata)
index = {0: "aadhaar_no",
1: "dob",
2: "gender",
3: "name",
4: "address"}
# logging.info('BEFORE FILTERING :')
# logging.info(len(roidata))
# TODO there is no flag to filter roidata , introduce later
# roidata = filter_tuples(roidata)
# maindict = {}
# maindict['aadhaar_no'] = maindict['dob'] = maindict['gender'] = maindict['address'] = maindict['name'] = maindict['phonenumber'] = maindict['vid'] = maindict['enrollment_number'] = None
# logging.info('AFTER FILTERING :')
# logging.info(len(roidata))
img = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
for data in roidata:
cls = data[1]
x1,y1,x2,y2 = data[0]
# data = cv2.cvtColor(data, cv2.COLOR_BGR2GRAY)
cropped = img[y1:y2, x1:x2]
info = pytesseract.image_to_string(cropped).strip()
# logging.info(str(cls)+'-'+info)
data[0][0] = x1*width_scaling
data[0][1] = y1*height_scaling
data[0][2] = x2*width_scaling
data[0][3] = y2*height_scaling
#### the scaling is reconverted to the original size dimensions ####
data[1] = index[cls]
# change from indexes to name if the class
if info != None :
if cls == 3:
info = clean_words(info)
elif cls == 1:
info = clean_dob(info)
elif cls == 0:
info = info.replace(' ','')
if len(info) == 12:
try:
if not validate_aadhaar_numbers(info):
info = "INVALID AADHAAR NUMBER"
except ValueError:
info = None
else:
info = None
data.append(info)
# extracted text cleaned up :FIXED
# TODO extract these fields too
# maindict['phonenumber'] = None
# maindict['vid'] = None
# maindict['enrollment_no'] = None
# FIXED: the coords are for 640:640 , fix it for original coords
return roidata
def extract_from_pdf(self, file):
def extract_pymupdf(file):
# Usinf pymupdf
import fitz # this is pymupdf
# extract text page by page
with fitz.open(stream=file.stream.read(), filetype='pdf') as doc:
pymupdf_text = ""
if (doc.is_encrypted):
passw = input("Enter the password")
# TODO display this message where?
doc.authenticate(password=passw)
for page in doc:
pymupdf_text += page.get_text("Text")
return pymupdf_text
def get_details(txt):
import re
pattern = re.compile(
r'Enrolment No\.: (?P<enrolment_no>[^\n]*)\nTo\n[^\n]*\n(?P<name>[^\n]*)\n(?P<relation>[S,W,D])\/O: (?P<fathers_name>[^\n]*)\n(?P<address>.*)(?P<phonenumber>\d{10})\n(?P<aadhaar_number>^\d{4} \d{4} \d{4}\n).*(?P<vid>\d{4} \d{4} \d{4} \d{4})\n.*DOB: (?P<dob>[^\n]*)\n.*(?P<gender>MALE|FEMALE|Female|Male)',
re.M | re.A | re.S)
# gets all info in one match(enrolment to V) which can then be parsed by the groups
return pattern.search(txt)
def get_enrolment_no(txt):
return get_details(txt).group('enrolment_no')
def get_name(txt):
return get_details(txt).group('name')
def get_fathers_name(txt):
matchobj = get_details(txt)
relation = matchobj.group('fathers_name')
if matchobj.group('relation').lower() == 'w':
return None
return relation
def get_husbands_name(txt):
matchobj = get_details(txt)
return matchobj.group('fathers_name')
def get_address(txt):
return get_details(txt).group('address')
def get_phonenumber(txt):
return get_details(txt).group('phonenumber')
def get_aadhaarnumber(txt):
return get_details(txt).group('aadhaar_number').strip()
def get_vid(txt):
return get_details(txt).group('vid')
def get_gender(txt):
return get_details(txt).group('gender')
def get_dob(txt):
return get_details(txt).group('dob')
def get_details_pdf(file):
import re
txt = extract_pymupdf(file)
            dict = {'vid': get_vid(txt),
                    'enrollment_no': get_enrolment_no(txt),
                    'name': get_name(txt),
                    'address': get_address(txt),
                    'phonenumber': get_phonenumber(txt),
                    'aadhaar_no': get_aadhaarnumber(txt),
                    'sex': get_gender(txt),
                    'dob': get_dob(txt)}
            # previously also extracted here:
            # dict['Fathers name'] = get_fathers_name(txt)
            # if dict['Fathers name'] == None:
            #     dict['Husbands name'] = get_husbands_name(txt)
            # dict['ID Type'] = 'Aadhaar'
            return dict
return get_details_pdf(file)
if __name__ == '__main__':
obj = AadhaarExtractor(r"C:\Users\91886\Desktop\cardF.jpg")
print(obj.extract())
# file = open(r"C:\Users\91886\Desktop\cardF.jpg",'r')
# obj.load(file)
# print(obj.to_json())
# | PypiClean |
/Boorunaut-0.4.3-py3-none-any.whl/booru/account/models.py | from django.apps import apps
from django.contrib.auth.models import AbstractUser, Group
from django.contrib.contenttypes.fields import (GenericForeignKey,
GenericRelation)
from django.db import models
from django.template.defaultfilters import slugify
from django.urls import reverse
from django.utils import timezone
from django.utils.translation import gettext as _
from rolepermissions.roles import assign_role
from booru.managers import UserManager
class Privilege(models.Model):
name = models.CharField(_('name'), max_length=255)
codename = models.CharField(_('codename'), max_length=100)
class Meta:
verbose_name = _('privilege')
verbose_name_plural = _('privileges')
ordering = ['codename']
def __str__(self):
return "{} | {}".format(self.codename, self.name)
class Timeout(models.Model):
reason = models.CharField(max_length=2500)
expiration = models.DateTimeField()
revoked = models.ManyToManyField(
Privilege,
verbose_name=_('privileges'),
blank=True,
help_text=_('Privileges revoked from this user.'),
related_name="revoked_privs",
related_query_name="user",
)
target_user = models.ForeignKey('account.Account', related_name="user_timedout", on_delete=models.CASCADE)
author = models.ForeignKey('account.Account', related_name="timeout_creator", on_delete=models.CASCADE)
class Account(AbstractUser):
avatar = models.ForeignKey('booru.Post', null=True, blank=True, on_delete=models.SET_NULL)
slug = models.SlugField(max_length=250, default="", blank=True)
email_activated = models.BooleanField(default=False)
comments_locked = models.BooleanField(default=False)
about = models.CharField(max_length=2500, blank=True)
comments = GenericRelation('booru.Comment')
# Account settings
safe_only = models.BooleanField(default=True)
show_comments = models.BooleanField(default=True)
tag_blacklist = models.CharField(max_length=2500, blank=True)
is_deleted = models.BooleanField(default=False)
objects = UserManager()
def save(self, *args, **kwargs):
give_role = False
if self.__is_a_new_user():
give_role = True
self.slug = slugify(self.username)
super(Account, self).save(*args, **kwargs)
if give_role:
if self.is_staff:
assign_role(self, 'administrator')
else:
assign_role(self, 'user')
def __is_a_new_user(self):
return not self.id
def get_absolute_url(self):
if self.is_deleted == False:
return reverse('booru:profile', kwargs={'account_slug': self.slug})
else:
return "#"
def get_name(self):
if self.is_deleted == False:
return self.username
else:
return "Anonymous User"
def get_posts(self):
Post = apps.get_model('booru', 'Post')
return Post.objects.all().filter(uploader=self)
def get_favorites_count(self):
return self.account_favorites.count()
def get_comments_count(self):
Comment = apps.get_model('booru', 'Comment')
return Comment.objects.filter(author=self, content_type__model="post").count()
def get_priv_timeout(self, codename):
timeouts = Timeout.objects.filter(target_user=self, revoked__in=Privilege.objects.filter(codename=codename))
timeouts.filter(expiration__lt=timezone.now()).delete() # deleting timeouts if already expired
return timeouts
def has_priv(self, codename):
return not self.get_priv_timeout(codename=codename).exists()
def anonymize(self):
self.email = ""
self.is_deleted = True
self.save()
class Meta:
permissions = (
("modify_profile", "Can change values from all profiles."),
("change_user_group", "Can set the group of a user."),
("ban_user", "Can ban users."),
("ban_mod", "Can ban moderators."),
) | PypiClean |
/Lunas-0.5.1-py3-none-any.whl/lunas/dataset/sampling.py | import random
from typing import *
import lunas.dataset.core as core
__all__ = ['Sampling']
class Sampling(core.NestedN, core.Sizable):
"""Sampling dataset
    This dataset samples examples from multiple datasets with the given weights.
"""
def __init__(self, datasets: Iterable[core.Dataset], weights: Iterable[float] = None, virtual_size: int = None,
name: str = None):
super().__init__(datasets, name)
datasets = self._datasets
if weights and sum(weights) != 1:
raise ValueError(f'The sum of weights ({sum(weights)}) must be 1.0.')
if virtual_size is not None and virtual_size < 0:
raise ValueError(f'virtual_size ({virtual_size}) must be a non-negative value or None.')
if weights:
for ds, weight in zip(datasets, weights):
if len(ds) == 0 and weight > 0:
raise ValueError(f'Attempt to sample from an empty dataset '
f'with a non-zero weight ({weight}).')
sizes = [len(ds) for ds in datasets]
weights = weights if weights else [1.0 / len(datasets)] * len(datasets)
max_weight_i, _ = max(enumerate(weights), key=lambda i_weight: i_weight[1])
if virtual_size is None:
virtual_size = int(sizes[max_weight_i] / weights[max_weight_i])
virtual_size = sum(int(virtual_size * weight) for weight in weights)
self._weights = weights
self._virtual_size = virtual_size
@property
def length(self):
return self._virtual_size
def generator(self):
indices = [i for i in range(len(self._datasets))]
weights = [w for w in self._weights]
datasets = [iter(dataset) for dataset in self._datasets]
for _ in range(self._virtual_size):
if len(indices) > 1:
i = random.choices(indices, weights)[0]
else:
i = indices[0]
try:
yield next(datasets[i])
except StopIteration:
datasets[i] = iter(self._datasets[i])
yield next(datasets[i]) | PypiClean |
/HolmesIV-2021.9.8a1.tar.gz/HolmesIV-2021.9.8a1/mycroft/enclosure/gui.py | """ Interface for interacting with the Mycroft gui qml viewer. """
from os.path import join
from mycroft.configuration import Configuration
from mycroft.messagebus.message import Message
from mycroft.util import resolve_resource_file
class _GUIDict(dict):
""" this is an helper dictionay subclass, it ensures that value changed
in it are propagated to the GUI service real time"""
def __init__(self, gui, **kwargs):
self.gui = gui
super().__init__(**kwargs)
def __setitem__(self, key, value):
super(_GUIDict, self).__setitem__(key, value)
self.gui._sync_data()
class SkillGUI:
"""SkillGUI - Interface to the Graphical User Interface
Values set in this class are synced to the GUI, accessible within QML
via the built-in sessionData mechanism. For example, in Python you can
write in a skill:
self.gui['temp'] = 33
self.gui.show_page('Weather.qml')
Then in the Weather.qml you'd access the temp via code such as:
        text: sessionData.temp
"""
def __init__(self, skill):
self.__session_data = {} # synced to GUI for use by this skill's pages
self.page = None # the active GUI page (e.g. QML template) to show
self.skill = skill
self.on_gui_changed_callback = None
self.config = Configuration.get()
@property
def connected(self):
"""Returns True if at least 1 gui is connected, else False"""
if self.skill.bus:
reply = self.skill.bus.wait_for_response(
Message("gui.status.request"), "gui.status.request.response")
if reply:
return reply.data["connected"]
return False
@property
def remote_url(self):
"""Returns configuration value for url of remote-server."""
return self.config.get('remote-server')
def build_message_type(self, event):
"""Builds a message matching the output from the enclosure."""
return '{}.{}'.format(self.skill.skill_id, event)
def setup_default_handlers(self):
"""Sets the handlers for the default messages."""
msg_type = self.build_message_type('set')
self.skill.add_event(msg_type, self.gui_set)
def register_handler(self, event, handler):
"""Register a handler for GUI events.
When using the triggerEvent method from Qt
triggerEvent("event", {"data": "cool"})
Args:
event (str): event to catch
handler: function to handle the event
"""
msg_type = self.build_message_type(event)
self.skill.add_event(msg_type, handler)
def set_on_gui_changed(self, callback):
"""Registers a callback function to run when a value is
changed from the GUI.
Args:
callback: Function to call when a value is changed
"""
self.on_gui_changed_callback = callback
def gui_set(self, message):
"""Handler catching variable changes from the GUI.
Args:
message: Messagebus message
"""
for key in message.data:
self[key] = message.data[key]
if self.on_gui_changed_callback:
self.on_gui_changed_callback()
def _sync_data(self):
data = self.__session_data.copy()
data.update({'__from': self.skill.skill_id})
self.skill.bus.emit(Message("gui.value.set", data))
def __setitem__(self, key, value):
"""Implements set part of dict-like behaviour with named keys."""
# cast to helper dict subclass that syncs data
if isinstance(value, dict) and not isinstance(value, _GUIDict):
value = _GUIDict(self, **value)
self.__session_data[key] = value
# emit notification (but not needed if page has not been shown yet)
if self.page:
self._sync_data()
def __getitem__(self, key):
"""Implements get part of dict-like behaviour with named keys."""
return self.__session_data[key]
def get(self, *args, **kwargs):
"""Implements the get method for accessing dict keys."""
return self.__session_data.get(*args, **kwargs)
def __contains__(self, key):
"""Implements the "in" operation."""
return self.__session_data.__contains__(key)
def clear(self):
"""Reset the value dictionary, and remove namespace from GUI.
This method does not close the GUI for a Skill. For this purpose see
the `release` method.
"""
self.__session_data = {}
self.page = None
if self.skill.bus:
self.skill.bus.emit(Message("gui.clear.namespace",
{"__from": self.skill.skill_id}))
def send_event(self, event_name, params=None):
"""Trigger a gui event.
Args:
event_name (str): name of event to be triggered
params: json serializable object containing any parameters that
should be sent along with the request.
"""
params = params or {}
self.skill.bus.emit(Message("gui.event.send",
{"__from": self.skill.skill_id,
"event_name": event_name,
"params": params}))
def show_page(self, name, override_idle=None,
override_animations=False):
"""Begin showing the page in the GUI
Args:
name (str): Name of page (e.g "mypage.qml") to display
override_idle (boolean, int):
True: Takes over the resting page indefinitely
(int): Delays resting page for the specified number of
seconds.
override_animations (boolean):
True: Disables showing all platform skill animations.
False: 'Default' always show animations.
"""
self.show_pages([name], 0, override_idle, override_animations)
def show_pages(self, page_names, index=0, override_idle=None,
override_animations=False):
"""Begin showing the list of pages in the GUI.
Args:
page_names (list): List of page names (str) to display, such as
["Weather.qml", "Forecast.qml", "Details.qml"]
index (int): Page number (0-based) to show initially. For the
above list a value of 1 would start on "Forecast.qml"
override_idle (boolean, int):
True: Takes over the resting page indefinitely
(int): Delays resting page for the specified number of
seconds.
override_animations (boolean):
True: Disables showing all platform skill animations.
False: 'Default' always show animations.
"""
if not isinstance(page_names, list):
raise ValueError('page_names must be a list')
        if index >= len(page_names):
            raise ValueError('Default index is out of range of the page list')
self.page = page_names[index]
# First sync any data...
data = self.__session_data.copy()
data.update({'__from': self.skill.skill_id})
self.skill.bus.emit(Message("gui.value.set", data))
# Convert pages to full reference
page_urls = []
for name in page_names:
if name.startswith("SYSTEM"):
page = resolve_resource_file(join('ui', name))
else:
page = self.skill.find_resource(name, 'ui')
if page:
if self.config.get('remote'):
page_urls.append(self.remote_url + "/" + page)
else:
page_urls.append("file://" + page)
else:
raise FileNotFoundError("Unable to find page: {}".format(name))
self.skill.bus.emit(Message("gui.page.show",
{"page": page_urls,
"index": index,
"__from": self.skill.skill_id,
"__idle": override_idle,
"__animations": override_animations}))
def remove_page(self, page):
"""Remove a single page from the GUI.
Args:
page (str): Page to remove from the GUI
"""
return self.remove_pages([page])
def remove_pages(self, page_names):
"""Remove a list of pages in the GUI.
Args:
            page_names (list): List of page names (str) to remove, such as
["Weather.qml", "Forecast.qml", "Other.qml"]
"""
if not isinstance(page_names, list):
raise ValueError('page_names must be a list')
# Convert pages to full reference
page_urls = []
for name in page_names:
if name.startswith("SYSTEM"):
page = resolve_resource_file(join('ui', name))
else:
page = self.skill.find_resource(name, 'ui')
if page:
if self.config.get('remote'):
page_urls.append(self.remote_url + "/" + page)
else:
page_urls.append("file://" + page)
else:
raise FileNotFoundError("Unable to find page: {}".format(name))
self.skill.bus.emit(Message("gui.page.delete",
{"page": page_urls,
"__from": self.skill.skill_id}))
def show_text(self, text, title=None, override_idle=None,
override_animations=False):
"""Display a GUI page for viewing simple text.
Args:
text (str): Main text content. It will auto-paginate
title (str): A title to display above the text content.
override_idle (boolean, int):
True: Takes over the resting page indefinitely
(int): Delays resting page for the specified number of
seconds.
override_animations (boolean):
True: Disables showing all platform skill animations.
False: 'Default' always show animations.
"""
self["text"] = text
self["title"] = title
self.show_page("SYSTEM_TextFrame.qml", override_idle,
override_animations)
def show_image(self, url, caption=None,
title=None, fill=None,
override_idle=None, override_animations=False):
"""Display a GUI page for viewing an image.
Args:
url (str): Pointer to the image
caption (str): A caption to show under the image
title (str): A title to display above the image content
fill (str): Fill type supports 'PreserveAspectFit',
'PreserveAspectCrop', 'Stretch'
override_idle (boolean, int):
True: Takes over the resting page indefinitely
(int): Delays resting page for the specified number of
seconds.
override_animations (boolean):
True: Disables showing all platform skill animations.
False: 'Default' always show animations.
"""
self["image"] = url
self["title"] = title
self["caption"] = caption
self["fill"] = fill
self.show_page("SYSTEM_ImageFrame.qml", override_idle,
override_animations)
def show_animated_image(self, url, caption=None,
title=None, fill=None,
override_idle=None, override_animations=False):
"""Display a GUI page for viewing an image.
Args:
url (str): Pointer to the .gif image
caption (str): A caption to show under the image
title (str): A title to display above the image content
fill (str): Fill type supports 'PreserveAspectFit',
'PreserveAspectCrop', 'Stretch'
override_idle (boolean, int):
True: Takes over the resting page indefinitely
(int): Delays resting page for the specified number of
seconds.
override_animations (boolean):
True: Disables showing all platform skill animations.
False: 'Default' always show animations.
"""
self["image"] = url
self["title"] = title
self["caption"] = caption
self["fill"] = fill
self.show_page("SYSTEM_AnimatedImageFrame.qml", override_idle,
override_animations)
def show_html(self, html, resource_url=None, override_idle=None,
override_animations=False):
"""Display an HTML page in the GUI.
Args:
html (str): HTML text to display
resource_url (str): Pointer to HTML resources
override_idle (boolean, int):
True: Takes over the resting page indefinitely
(int): Delays resting page for the specified number of
seconds.
override_animations (boolean):
True: Disables showing all platform skill animations.
False: 'Default' always show animations.
"""
self["html"] = html
self["resourceLocation"] = resource_url
self.show_page("SYSTEM_HtmlFrame.qml", override_idle,
override_animations)
def show_url(self, url, override_idle=None,
override_animations=False):
"""Display an HTML page in the GUI.
Args:
url (str): URL to render
override_idle (boolean, int):
True: Takes over the resting page indefinitely
(int): Delays resting page for the specified number of
seconds.
override_animations (boolean):
True: Disables showing all platform skill animations.
False: 'Default' always show animations.
"""
self["url"] = url
self.show_page("SYSTEM_UrlFrame.qml", override_idle,
override_animations)
def release(self):
"""Signal that this skill is no longer using the GUI,
allow different platforms to properly handle this event.
Also calls self.clear() to reset the state variables
Platforms can close the window or go back to previous page"""
self.clear()
self.skill.bus.emit(Message("mycroft.gui.screen.close",
{"skill_id": self.skill.skill_id}))
def shutdown(self):
"""Shutdown gui interface.
Clear pages loaded through this interface and remove the skill
reference to make ref counting warning more precise.
"""
self.clear()
        self.skill = None
# /Mezzanine-6.0.0.tar.gz/Mezzanine-6.0.0/mezzanine/twitter/models.py
import re
from datetime import datetime
from urllib.parse import quote
import requests
from django.db import models
from django.utils.html import urlize
from django.utils.timezone import make_aware, utc
from django.utils.translation import gettext_lazy as _
from requests_oauthlib import OAuth1
from mezzanine.conf import settings
from mezzanine.twitter import (
QUERY_TYPE_CHOICES,
QUERY_TYPE_LIST,
QUERY_TYPE_SEARCH,
QUERY_TYPE_USER,
get_auth_settings,
)
from mezzanine.twitter.managers import TweetManager
re_usernames = re.compile(r"(^|\W)@([0-9a-zA-Z+_]+)", re.IGNORECASE)
re_hashtags = re.compile(r"#([0-9a-zA-Z+_]+)", re.IGNORECASE)
replace_hashtags = '<a href="http://twitter.com/search?q=%23\\1">#\\1</a>'
replace_usernames = '\\1<a href="http://twitter.com/\\2">@\\2</a>'
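A quick standalone illustration of how these patterns linkify a tweet. The sample text is invented; the patterns and replacements duplicate the module-level definitions above so the snippet runs on its own:

```python
import re

# Standalone copies of the module-level patterns above.
re_usernames = re.compile(r"(^|\W)@([0-9a-zA-Z+_]+)", re.IGNORECASE)
re_hashtags = re.compile(r"#([0-9a-zA-Z+_]+)", re.IGNORECASE)
replace_hashtags = '<a href="http://twitter.com/search?q=%23\\1">#\\1</a>'
replace_usernames = '\\1<a href="http://twitter.com/\\2">@\\2</a>'

text = "Ping @alice about #django"
# Usernames first, then hashtags, mirroring the order used in Query.run().
text = re_usernames.sub(replace_usernames, text)
text = re_hashtags.sub(replace_hashtags, text)
print(text)
# Ping <a href="http://twitter.com/alice">@alice</a> about <a href="http://twitter.com/search?q=%23django">#django</a>
```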
class TwitterQueryException(Exception):
pass
class Query(models.Model):
type = models.CharField(_("Type"), choices=QUERY_TYPE_CHOICES, max_length=10)
value = models.CharField(_("Value"), max_length=140)
interested = models.BooleanField("Interested", default=True)
class Meta:
verbose_name = _("Twitter query")
verbose_name_plural = _("Twitter queries")
ordering = ("-id",)
def __str__(self):
return f"{self.get_type_display()}: {self.value}"
def run(self):
"""
Request new tweets from the Twitter API.
"""
try:
value = quote(self.value)
except KeyError:
value = self.value
urls = {
QUERY_TYPE_USER: (
"https://api.twitter.com/1.1/statuses/"
"user_timeline.json?screen_name=%s"
"&include_rts=true" % value.lstrip("@")
),
QUERY_TYPE_LIST: (
"https://api.twitter.com/1.1/lists/statuses.json"
"?list_id=%s&include_rts=true" % value
),
QUERY_TYPE_SEARCH: "https://api.twitter.com/1.1/search/tweets.json"
"?q=%s" % value,
}
try:
url = urls[self.type]
except KeyError:
raise TwitterQueryException("Invalid query type: %s" % self.type)
auth_settings = get_auth_settings()
if not auth_settings:
from mezzanine.conf import registry
if self.value == registry["TWITTER_DEFAULT_QUERY"]["default"]:
# These are some read-only keys and secrets we use
# for the default query (eg nothing has been configured)
auth_settings = (
"KxZTRD3OBft4PP0iQW0aNQ",
"sXpQRSDUVJ2AVPZTfh6MrJjHfOGcdK4wRb1WTGQ",
"1368725588-ldWCsd54AJpG2xcB5nyTHyCeIC3RJcNVUAkB1OI",
"r9u7qS18t8ad4Hu9XVqmCGxlIpzoCN3e1vx6LOSVgyw3R",
)
else:
raise TwitterQueryException("Twitter OAuth settings missing")
try:
tweets = requests.get(url, auth=OAuth1(*auth_settings)).json()
except Exception as e:
raise TwitterQueryException("Error retrieving: %s" % e)
try:
raise TwitterQueryException(tweets["errors"][0]["message"])
except (IndexError, KeyError, TypeError):
pass
if self.type == "search":
tweets = tweets["statuses"]
for tweet_json in tweets:
remote_id = str(tweet_json["id"])
tweet, created = self.tweets.get_or_create(remote_id=remote_id)
if not created:
continue
if "retweeted_status" in tweet_json:
user = tweet_json["user"]
tweet.retweeter_user_name = user["screen_name"]
tweet.retweeter_full_name = user["name"]
tweet.retweeter_profile_image_url = user["profile_image_url"]
tweet_json = tweet_json["retweeted_status"]
if self.type == QUERY_TYPE_SEARCH:
tweet.user_name = tweet_json["user"]["screen_name"]
tweet.full_name = tweet_json["user"]["name"]
tweet.profile_image_url = tweet_json["user"]["profile_image_url"]
date_format = "%a %b %d %H:%M:%S +0000 %Y"
else:
user = tweet_json["user"]
tweet.user_name = user["screen_name"]
tweet.full_name = user["name"]
tweet.profile_image_url = user["profile_image_url"]
date_format = "%a %b %d %H:%M:%S +0000 %Y"
tweet.text = urlize(tweet_json["text"])
tweet.text = re_usernames.sub(replace_usernames, tweet.text)
tweet.text = re_hashtags.sub(replace_hashtags, tweet.text)
if getattr(settings, "TWITTER_STRIP_HIGH_MULTIBYTE", False):
chars = [ch for ch in tweet.text if ord(ch) < 0x800]
tweet.text = "".join(chars)
d = datetime.strptime(tweet_json["created_at"], date_format)
tweet.created_at = make_aware(d, utc)
            try:
                tweet.save()
            except Warning:
                # Some backends (eg MySQL) raise truncation warnings as
                # exceptions on the first save attempt; swallow them and
                # save again.
                pass
            tweet.save()
self.interested = False
self.save()
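For reference, the created_at parsing at the end of run() can be reproduced with the stdlib alone; the sample timestamp is invented, and datetime.replace(tzinfo=timezone.utc) stands in for Django's make_aware(d, utc):

```python
from datetime import datetime, timezone

# Twitter's created_at format, eg "Wed Aug 27 13:08:45 +0000 2008".
date_format = "%a %b %d %H:%M:%S +0000 %Y"
d = datetime.strptime("Wed Aug 27 13:08:45 +0000 2008", date_format)
# Stdlib analogue of make_aware(d, utc) from django.utils.timezone.
aware = d.replace(tzinfo=timezone.utc)
print(aware.isoformat())  # 2008-08-27T13:08:45+00:00
```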
class Tweet(models.Model):
remote_id = models.CharField(_("Twitter ID"), max_length=50)
created_at = models.DateTimeField(_("Date/time"), null=True)
text = models.TextField(_("Message"), null=True)
profile_image_url = models.URLField(_("Profile image URL"), null=True)
user_name = models.CharField(_("User name"), max_length=100, null=True)
full_name = models.CharField(_("Full name"), max_length=100, null=True)
retweeter_profile_image_url = models.URLField(
_("Profile image URL (Retweeted by)"), null=True
)
retweeter_user_name = models.CharField(
_("User name (Retweeted by)"), max_length=100, null=True
)
retweeter_full_name = models.CharField(
_("Full name (Retweeted by)"), max_length=100, null=True
)
query = models.ForeignKey("Query", on_delete=models.CASCADE, related_name="tweets")
objects = TweetManager()
class Meta:
verbose_name = _("Tweet")
verbose_name_plural = _("Tweets")
ordering = ("-created_at",)
def __str__(self):
return f"{self.user_name}: {self.text}"
def is_retweet(self):
return self.retweeter_user_name is not None | PypiClean |
# /FastAAI-0.1.20-py3-none-any.whl/fastaai/fastaai.py
################################################################################
"""---0.0 Import Modules---"""
import subprocess
import argparse
import datetime
import shutil
import textwrap
import multiprocessing
import pickle
import gzip
import tempfile
#Shouldn't play any role.
#from random import randint
#We could probably remove Path, too.
#This as well
import time
from collections import defaultdict
import sys
import os
from math import floor
import sqlite3
#numpy dependency
import numpy as np
import io
import random
import pyrodigal as pd
import pyhmmer
from collections import namedtuple
from math import ceil
import re
#Not an input sanitizer - simply replaces characters that would throw SQLite for a loop.
def sql_safe(string):
#Sanitize for SQL
#These are chars safe for sql
sql_safe = set('_abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789')
current_chars = set(string)
#self.sql_name = self.basename
#Identify SQL-unsafe characters as those outside the permissible set and replace all with underscores.
for char in current_chars - sql_safe:
string = string.replace(char, "_")
return string
#Gonna have to go put this everywhere...
#Consistent file basename behavior
def file_basename(file):
#Get the name after the final directory path
name = os.path.basename(file)
#Extract the portion of a filename prior to the first '.' separator.
while name != os.path.splitext(name)[0]:
name = os.path.splitext(name)[0]
name = sql_safe(name)
return name
def seqID_basename(seqid):
#Get the name after the final directory path
if seqid.startswith(">"):
seqid = seqid[1:]
#Remove everything other than the seqid
name = seqid.split()[0]
#Get SQL-friendly version of the name.
name = sql_safe(name)
return name
class progress_tracker:
def __init__(self, total, step_size = 2, message = None, one_line = True):
self.current_count = 0
self.max_count = total
#Book keeping.
self.start_time = None
self.end_time = None
		#Show progress every [step] percent
self.step = step_size
self.justify_size = ceil(100/self.step)
self.last_percent = 0
self.message = message
self.pretty_print = one_line
self.start()
def curtime(self):
time_format = "%d/%m/%Y %H:%M:%S"
timer = datetime.datetime.now()
time = timer.strftime(time_format)
return time
def start(self):
print("")
if self.message is not None:
print(self.message)
try:
percentage = (self.current_count/self.max_count)*100
sys.stdout.write("Completion".rjust(3)+ ' |'+('#'*int(percentage/self.step)).ljust(self.justify_size)+'| ' + ('%.2f'%percentage).rjust(7)+'% ( ' + str(self.current_count) + " of " + str(self.max_count) + ' ) at ' + self.curtime() + "\n")
sys.stdout.flush()
except:
#It's not really a big deal if the progress bar cannot be printed.
pass
def update(self):
self.current_count += 1
percentage = (self.current_count/self.max_count)*100
try:
if percentage // self.step > self.last_percent:
if self.pretty_print:
sys.stdout.write('\033[A')
sys.stdout.write("Completion".rjust(3)+ ' |'+('#'*int(percentage/self.step)).ljust(self.justify_size)+'| ' + ('%.2f'%percentage).rjust(7)+'% ( ' + str(self.current_count) + " of " + str(self.max_count) + ' ) at ' + self.curtime() + "\n")
sys.stdout.flush()
self.last_percent = percentage // self.step
#Bar is always full at the end.
			if self.current_count == self.max_count:
if self.pretty_print:
sys.stdout.write('\033[A')
sys.stdout.write("Completion".rjust(3)+ ' |'+('#'*self.justify_size).ljust(self.justify_size)+'| ' + ('%.2f'%percentage).rjust(7)+'% ( ' + str(self.current_count) + " of " + str(self.max_count) + ' ) at ' + self.curtime() + "\n")
sys.stdout.flush()
#Add space at end.
print("")
except:
#It's not really a big deal if the progress bar cannot be printed.
pass
#Takes a bytestring from the SQL database and converts it to a numpy array.
def convert_array(bytestring):
return np.frombuffer(bytestring, dtype = np.int32)
def convert_float_array_16(bytestring):
return np.frombuffer(bytestring, dtype = np.float16)
def convert_float_array_32(bytestring):
return np.frombuffer(bytestring, dtype = np.float32)
def convert_float_array_64(bytestring):
return np.frombuffer(bytestring, dtype = np.float64)
def read_fasta(file):
cur_seq = ""
cur_prot = ""
contents = {}
deflines = {}
fasta = agnostic_reader(file)
for line in fasta:
if line.startswith(">"):
if len(cur_seq) > 0:
contents[cur_prot] = cur_seq
deflines[cur_prot] = defline
cur_seq = ""
cur_prot = line.strip().split()[0][1:]
defline = line.strip()[len(cur_prot)+1 :].strip()
else:
cur_seq += line.strip()
fasta.close()
#Final iter
if len(cur_seq) > 0:
contents[cur_prot] = cur_seq
deflines[cur_prot] = defline
return contents, deflines
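The parsing loop above can be exercised on an in-memory record; this sketch repeats the same loop over any iterable of lines, using io.StringIO in place of agnostic_reader, with an invented two-protein record:

```python
import io

def read_fasta_text(handle):
    # Same parsing loop as read_fasta above, over any iterable of lines.
    contents = {}
    deflines = {}
    cur_prot = None
    cur_seq = ""
    defline = ""
    for line in handle:
        if line.startswith(">"):
            if len(cur_seq) > 0:
                contents[cur_prot] = cur_seq
                deflines[cur_prot] = defline
                cur_seq = ""
            cur_prot = line.strip().split()[0][1:]
            defline = line.strip()[len(cur_prot) + 1:].strip()
        else:
            cur_seq += line.strip()
    # Final record
    if len(cur_seq) > 0:
        contents[cur_prot] = cur_seq
        deflines[cur_prot] = defline
    return contents, deflines

record = io.StringIO(">prot_1 hypothetical protein\nMKV\nLLA\n>prot_2\nMGG\n")
seqs, descs = read_fasta_text(record)
print(seqs)   # {'prot_1': 'MKVLLA', 'prot_2': 'MGG'}
print(descs)  # {'prot_1': 'hypothetical protein', 'prot_2': ''}
```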
class fasta_file:
def __init__(self, file, type = "genome"):
self.file_path = os.path.abspath(file)
self.name = os.path.basename(file)
self.no_ext = os.path.splitext(self.name)[0]
self.type = type
self.tuple_structure = namedtuple("fasta", ["seqid", "description", "sequence"])
self.contents = {}
def convert(self, contents, descriptions):
for protein in contents:
			self.contents[protein] = self.tuple_structure(seqid = protein, description = descriptions[protein], sequence = contents[protein])
def def_import_file(self):
contents, descriptions = read_fasta(self.file_path)
self.convert(contents, descriptions)
class pyhmmer_manager:
def __init__(self, do_compress):
self.hmm_model = []
self.hmm_model_optimized = None
self.proteins_to_search = []
self.protein_descriptions = None
self.hmm_result_proteins = []
self.hmm_result_accessions = []
self.hmm_result_scores = []
self.printable_lines = []
self.bacterial_SCPs = None
self.archaeal_SCPs = None
self.assign_hmm_sets()
self.domain_counts = {"Bacteria" : 0, "Archaea": 0}
self.voted_domain = {"Bacteria" : len(self.bacterial_SCPs), "Archaea" : len(self.archaeal_SCPs)}
self.bacterial_fraction = None
self.archaeal_fraction = None
self.best_hits = None
self.do_compress = do_compress
def optimize_models(self):
try:
self.hmm_model_optimized = []
for hmm in self.hmm_model:
prof = pyhmmer.plan7.Profile(M = hmm.insert_emissions.shape[0], alphabet = pyhmmer.easel.Alphabet.amino())
prof.configure(hmm = hmm, background = pyhmmer.plan7.Background(alphabet = pyhmmer.easel.Alphabet.amino()), L = hmm.insert_emissions.shape[0]-1)
optim = prof.optimized()
self.hmm_model_optimized.append(optim)
#Clean up.
self.hmm_model = None
except:
#Quiet fail condition - fall back on default model.
self.hmm_model_optimized = None
#Load HMM and try to optimize.
def load_hmm_from_file(self, hmm_path):
hmm_set = pyhmmer.plan7.HMMFile(hmm_path)
for hmm in hmm_set:
self.hmm_model.append(hmm)
#This doesn't seem to be improving performance currently.
self.optimize_models()
#Set archaeal and bacterial HMM sets.
def assign_hmm_sets(self):
self.bacterial_SCPs = {'PF00709_21': 'Adenylsucc_synt', 'PF00406_22': 'ADK', 'PF01808_18': 'AICARFT_IMPCHas', 'PF00231_19': 'ATP-synt',
'PF00119_20': 'ATP-synt_A', 'PF01264_21': 'Chorismate_synt', 'PF00889_19': 'EF_TS', 'PF01176_19': 'eIF-1a',
'PF02601_15': 'Exonuc_VII_L', 'PF01025_19': 'GrpE', 'PF01725_16': 'Ham1p_like', 'PF01715_17': 'IPPT',
'PF00213_18': 'OSCP', 'PF01195_19': 'Pept_tRNA_hydro', 'PF00162_19': 'PGK', 'PF02033_18': 'RBFA', 'PF02565_15': 'RecO_C',
'PF00825_18': 'Ribonuclease_P', 'PF00687_21': 'Ribosomal_L1', 'PF00572_18': 'Ribosomal_L13',
'PF00238_19': 'Ribosomal_L14', 'PF00252_18': 'Ribosomal_L16', 'PF01196_19': 'Ribosomal_L17',
'PF00861_22': 'Ribosomal_L18p', 'PF01245_20': 'Ribosomal_L19', 'PF00453_18': 'Ribosomal_L20',
'PF00829_21': 'Ribosomal_L21p', 'PF00237_19': 'Ribosomal_L22', 'PF00276_20': 'Ribosomal_L23',
'PF17136_4': 'ribosomal_L24', 'PF00189_20': 'Ribosomal_S3_C', 'PF00281_19': 'Ribosomal_L5', 'PF00181_23': 'Ribosomal_L2',
'PF01016_19': 'Ribosomal_L27', 'PF00828_19': 'Ribosomal_L27A', 'PF00830_19': 'Ribosomal_L28',
'PF00831_23': 'Ribosomal_L29', 'PF00297_22': 'Ribosomal_L3', 'PF01783_23': 'Ribosomal_L32p',
'PF01632_19': 'Ribosomal_L35p', 'PF00573_22': 'Ribosomal_L4', 'PF00347_23': 'Ribosomal_L6',
'PF03948_14': 'Ribosomal_L9_C', 'PF00338_22': 'Ribosomal_S10', 'PF00411_19': 'Ribosomal_S11',
'PF00416_22': 'Ribosomal_S13', 'PF00312_22': 'Ribosomal_S15', 'PF00886_19': 'Ribosomal_S16',
'PF00366_20': 'Ribosomal_S17', 'PF00203_21': 'Ribosomal_S19', 'PF00318_20': 'Ribosomal_S2',
'PF01649_18': 'Ribosomal_S20p', 'PF01250_17': 'Ribosomal_S6', 'PF00177_21': 'Ribosomal_S7',
'PF00410_19': 'Ribosomal_S8', 'PF00380_19': 'Ribosomal_S9', 'PF00164_25': 'Ribosom_S12_S23',
'PF01193_24': 'RNA_pol_L', 'PF01192_22': 'RNA_pol_Rpb6', 'PF01765_19': 'RRF', 'PF02410_15': 'RsfS',
'PF03652_15': 'RuvX', 'PF00584_20': 'SecE', 'PF03840_14': 'SecG', 'PF00344_20': 'SecY', 'PF01668_18': 'SmpB',
'PF00750_19': 'tRNA-synt_1d', 'PF01746_21': 'tRNA_m1G_MT', 'PF02367_17': 'TsaE', 'PF02130_17': 'UPF0054',
'PF02699_15': 'YajC'}
self.archaeal_SCPs = {'PF00709_21': 'Adenylsucc_synt', 'PF05221_17': 'AdoHcyase', 'PF01951_16': 'Archease', 'PF01813_17': 'ATP-synt_D',
'PF01990_17': 'ATP-synt_F', 'PF01864_17': 'CarS-like', 'PF01982_16': 'CTP-dep_RFKase', 'PF01866_17': 'Diphthamide_syn',
'PF04104_14': 'DNA_primase_lrg', 'PF01984_20': 'dsDNA_bind', 'PF04010_13': 'DUF357', 'PF04019_12': 'DUF359',
'PF04919_12': 'DUF655', 'PF01912_18': 'eIF-6', 'PF05833_11': 'FbpA', 'PF01725_16': 'Ham1p_like',
'PF00368_18': 'HMG-CoA_red', 'PF00334_19': 'NDK', 'PF02006_16': 'PPS_PS', 'PF02996_17': 'Prefoldin',
'PF01981_16': 'PTH2', 'PF01948_18': 'PyrI', 'PF00687_21': 'Ribosomal_L1', 'PF00572_18': 'Ribosomal_L13',
'PF00238_19': 'Ribosomal_L14', 'PF00827_17': 'Ribosomal_L15e', 'PF00252_18': 'Ribosomal_L16',
'PF01157_18': 'Ribosomal_L21e', 'PF00237_19': 'Ribosomal_L22', 'PF00276_20': 'Ribosomal_L23',
'PF16906_5': 'Ribosomal_L26', 'PF00831_23': 'Ribosomal_L29', 'PF00297_22': 'Ribosomal_L3',
'PF01198_19': 'Ribosomal_L31e', 'PF01655_18': 'Ribosomal_L32e', 'PF01780_19': 'Ribosomal_L37ae',
'PF00832_20': 'Ribosomal_L39', 'PF00573_22': 'Ribosomal_L4', 'PF00935_19': 'Ribosomal_L44', 'PF17144_4': 'Ribosomal_L5e',
'PF00347_23': 'Ribosomal_L6', 'PF00411_19': 'Ribosomal_S11', 'PF00416_22': 'Ribosomal_S13',
'PF00312_22': 'Ribosomal_S15', 'PF00366_20': 'Ribosomal_S17', 'PF00833_18': 'Ribosomal_S17e',
'PF00203_21': 'Ribosomal_S19', 'PF01090_19': 'Ribosomal_S19e', 'PF00318_20': 'Ribosomal_S2',
'PF01282_19': 'Ribosomal_S24e', 'PF01667_17': 'Ribosomal_S27e', 'PF01200_18': 'Ribosomal_S28e',
'PF01015_18': 'Ribosomal_S3Ae', 'PF00177_21': 'Ribosomal_S7', 'PF00410_19': 'Ribosomal_S8',
'PF01201_22': 'Ribosomal_S8e', 'PF00380_19': 'Ribosomal_S9', 'PF00164_25': 'Ribosom_S12_S23',
'PF06026_14': 'Rib_5-P_isom_A', 'PF01351_18': 'RNase_HII', 'PF13656_6': 'RNA_pol_L_2',
'PF01194_17': 'RNA_pol_N', 'PF03874_16': 'RNA_pol_Rpb4', 'PF01192_22': 'RNA_pol_Rpb6',
'PF01139_17': 'RtcB', 'PF00344_20': 'SecY', 'PF06093_13': 'Spt4', 'PF00121_18': 'TIM', 'PF01994_16': 'Trm56',
'PF00749_21': 'tRNA-synt_1c', 'PF00750_19': 'tRNA-synt_1d', 'PF13393_6': 'tRNA-synt_His',
'PF01142_18': 'TruD', 'PF01992_16': 'vATP-synt_AC39', 'PF01991_18': 'vATP-synt_E', 'PF01496_19': 'V_ATPase_I'}
#Convert passed sequences.
def convert_protein_seqs_in_mem(self, contents):
#Clean up.
self.proteins_to_search = []
for protein in contents:
#Skip a protein if it's longer than 100k AA.
if len(contents[protein]) >= 100000:
continue
as_bytes = protein.encode()
#Pyhmmer digitization of sequences for searching.
easel_seq = pyhmmer.easel.TextSequence(name = as_bytes, sequence = contents[protein])
easel_seq = easel_seq.digitize(pyhmmer.easel.Alphabet.amino())
self.proteins_to_search.append(easel_seq)
easel_seq = None
def load_protein_seqs_from_file(self, prots_file):
#Pyhmmer has a method for loading a fasta file, but we need to support gzipped inputs, so we do it manually.
contents, deflines = read_fasta(prots_file)
self.protein_descriptions = deflines
self.convert_protein_seqs_in_mem(contents)
def execute_search(self):
if self.hmm_model_optimized is None:
top_hits = list(pyhmmer.hmmsearch(self.hmm_model, self.proteins_to_search, cpus=1, bit_cutoffs="trusted"))
else:
top_hits = list(pyhmmer.hmmsearch(self.hmm_model_optimized, self.proteins_to_search, cpus=1, bit_cutoffs="trusted"))
self.printable_lines = []
self.hmm_result_proteins = []
self.hmm_result_accessions = []
self.hmm_result_scores = []
for model in top_hits:
for hit in model:
target_name = hit.name.decode()
target_acc = hit.accession
if target_acc is None:
target_acc = "-"
else:
target_acc = target_acc.decode()
query_name = hit.best_domain.alignment.hmm_name.decode()
query_acc = hit.best_domain.alignment.hmm_accession.decode()
full_seq_evalue = "%.2g" % hit.evalue
full_seq_score = round(hit.score, 1)
full_seq_bias = round(hit.bias, 1)
best_dom_evalue = "%.2g" % hit.best_domain.alignment.domain.i_evalue
best_dom_score = round(hit.best_domain.alignment.domain.score, 1)
best_dom_bias = round(hit.best_domain.alignment.domain.bias, 1)
#I don't know how to get most of these values.
exp = 0
reg = 0
clu = 0
ov = 0
env = 0
dom = len(hit.domains)
rep = 0
inc = 0
try:
description = self.protein_descriptions[target_name]
except:
description = ""
writeout = [target_name, target_acc, query_name, query_acc, full_seq_evalue, \
full_seq_score, full_seq_bias, best_dom_evalue, best_dom_score, best_dom_bias, \
exp, reg, clu, ov, env, dom, rep, inc, description]
#Format and join.
writeout = [str(i) for i in writeout]
writeout = '\t'.join(writeout)
self.printable_lines.append(writeout)
self.hmm_result_proteins.append(target_name)
self.hmm_result_accessions.append(query_acc)
self.hmm_result_scores.append(best_dom_score)
def filter_to_best_hits(self):
hmm_file = np.transpose(np.array([self.hmm_result_proteins, self.hmm_result_accessions, self.hmm_result_scores]))
#hmm_file = np.loadtxt(hmm_file_name, comments = '#', usecols = (0, 3, 8), dtype=(str))
#Sort the hmm file based on the score column in descending order.
hmm_file = hmm_file[hmm_file[:,2].astype(float).argsort()[::-1]]
#Identify the first row where each gene name appears, after sorting by score;
#in effect, return the highest scoring assignment per gene name
#Sort the indices of the result to match the score-sorted table instead of alphabetical order of gene names
hmm_file = hmm_file[np.sort(np.unique(hmm_file[:,0], return_index = True)[1])]
#Filter the file again for the unique ACCESSION names, since we're only allowed one gene per accession, I guess?
#Don't sort the indices, we don't care about the scores anymore.
hmm_file = hmm_file[np.unique(hmm_file[:,1], return_index = True)[1]]
sql_friendly_names = [i.replace(".", "_") for i in hmm_file[:,1]]
self.best_hits = dict(zip(hmm_file[:,0], sql_friendly_names))
hmm_file = None
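The three-step numpy filter above (sort by score, keep the best row per protein, then one row per accession) can be traced on a toy table; the gene names and scores below are invented:

```python
import numpy as np

# Toy hit table: (protein, accession, score) rows, as built in
# filter_to_best_hits from the pyhmmer results.
hits = np.array([
    ["gene_1", "PF00709.21", "50.0"],
    ["gene_1", "PF00406.22", "80.0"],
    ["gene_2", "PF00406.22", "60.0"],
])

# Sort descending by score.
hits = hits[hits[:, 2].astype(float).argsort()[::-1]]
# Keep the first (ie best-scoring) row per protein name...
hits = hits[np.sort(np.unique(hits[:, 0], return_index=True)[1])]
# ...then keep only one row per accession.
hits = hits[np.unique(hits[:, 1], return_index=True)[1]]
best = dict(zip(hits[:, 0], [a.replace(".", "_") for a in hits[:, 1]]))
print(best)  # {'gene_1': 'PF00406_22'}
```

gene_2 drops out entirely: its best accession was already claimed by the higher-scoring gene_1 hit, matching the "one gene per accession" behavior.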
#Count per-dom occurs.
def assign_domain(self):
for prot in self.best_hits.values():
if prot in self.bacterial_SCPs:
self.domain_counts["Bacteria"] += 1
if prot in self.archaeal_SCPs:
self.domain_counts["Archaea"] += 1
self.bacterial_fraction = self.domain_counts["Bacteria"] / self.voted_domain["Bacteria"]
		self.archaeal_fraction = self.domain_counts["Archaea"] / self.voted_domain["Archaea"]
		if self.bacterial_fraction >= self.archaeal_fraction:
self.voted_domain = "Bacteria"
else:
self.voted_domain = "Archaea"
pop_keys = list(self.best_hits.keys())
for key in pop_keys:
if self.voted_domain == "Bacteria":
if self.best_hits[key] not in self.bacterial_SCPs:
self.best_hits.pop(key)
if self.voted_domain == "Archaea":
if self.best_hits[key] not in self.archaeal_SCPs:
self.best_hits.pop(key)
def to_hmm_file(self, output):
#PyHMMER data is a bit hard to parse. For each result:
content = '\n'.join(self.printable_lines) + '\n'
if self.do_compress:
#Clean
if os.path.exists(output):
os.remove(output)
content = content.encode()
fh = gzip.open(output+".gz", "wb")
fh.write(content)
fh.close()
content = None
else:
#Clean
if os.path.exists(output+".gz"):
os.remove(output+".gz")
fh = open(output, "w")
fh.write(content)
fh.close()
content = None
#If we're doing this step at all, we've either loaded the seqs into mem by reading the prot file
#or have them in mem thanks to pyrodigal.
def run_for_fastaai(self, prots, hmm_output):
try:
self.convert_protein_seqs_in_mem(prots)
self.execute_search()
self.filter_to_best_hits()
try:
self.to_hmm_file(hmm_output)
except:
print(output, "cannot be created. HMM search failed. This file will be skipped.")
except:
print(output, "failed to run through HMMER!")
self.best_hits = None
def hmm_preproc_initializer(hmm_file, do_compress = False):
global hmm_manager
hmm_manager = pyhmmer_manager(do_compress)
hmm_manager.load_hmm_from_file(hmm_file)
class pyrodigal_manager:
def __init__(self, file = None, aa_out = None, nt_out = None, is_meta = False, full_headers = True, trans_table = 11,
num_bp_fmt = True, verbose = True, do_compress = "0", compare_against = None):
#Input NT sequences
self.file = file
#List of seqs read from input file.
self.sequences = None
#Concatenation of up to first 32 million bp in self.sequences - prodigal caps at this point.
self.training_seq = None
#Predicted genes go here
self.predicted_genes = None
#Record the translation table used.
self.trans_table = trans_table
#This is the pyrodigal manager - this does the gene predicting.
self.manager = pd.OrfFinder(meta=is_meta)
self.is_meta = is_meta
#Full prodigal header information includes more than just a protein number.
#If full_headers is true, protein deflines will match prodigal; else, just protein ID.
self.full_headers = full_headers
#Prodigal prints info to console. I enhanced the info and made printing default, but also allow them to be totally turned off.
self.verbose = verbose
#Prodigal formats outputs with 70 bases per line max
self.num_bp_fmt = num_bp_fmt
#File names for outputs
self.aa_out = aa_out
self.nt_out = nt_out
		#List of proteins in excess of 100K amino acids (HMMER's limit) and their lengths. This is also fastAAI specific.
self.excluded_seqs = {}
#Gzip outputs if asked.
self.compress = do_compress
self.labeled_proteins = None
#Normally, we don't need to keep an input sequence after it's had proteins predicted for it - however
#For FastAAI and MiGA's purposes, comparisons of two translation tables is necessary.
#Rather than re-importing sequences and reconstructing the training sequences,
#keep them for faster repredict with less I/O
self.compare_to = compare_against
if self.compare_to is not None:
self.keep_seqs = True
self.keep_after_train = True
else:
self.keep_seqs = False
self.keep_after_train = False
#Imports a fasta as binary.
def import_sequences(self):
if self.sequences is None:
self.sequences = {}
#check for zipped and import as needed.
with open(self.file, 'rb') as test_gz:
#Gzip magic number
is_gz = (test_gz.read(2) == b'\x1f\x8b')
if is_gz:
fh = gzip.open(self.file, mode = 'rt')
else:
fh = open(self.file, "r")
imp = fh.readlines()
fh.close()
cur_seq = None
for s in imp:
#s = s.decode().strip()
s = s.strip()
#> is 62 in ascii. This is asking if the first character is '>'
if s.startswith(">"):
#Skip first cycle, then do for each after
if cur_seq is not None:
self.sequences[cur_seq] = ''.join(self.sequences[cur_seq])
self.sequences[cur_seq] = self.sequences[cur_seq].encode()
#print(cur_seq, len(self.sequences[cur_seq]))
cur_seq = s[1:]
cur_seq = cur_seq.split()[0] #Grab only seqid
cur_seq = cur_seq.encode('utf-8') #Encode to bytestring for pyrodigal.
self.sequences[cur_seq] = []
else:
self.sequences[cur_seq].append(s)
#Final set
self.sequences[cur_seq] = ''.join(self.sequences[cur_seq])
self.sequences[cur_seq] = self.sequences[cur_seq].encode()
#Now we have the data, go to training.
if not self.is_meta:
self.train_manager()
#Collect up to the first 32 million bases for use in training seq.
def train_manager(self):
running_sum = 0
seqs_added = 0
if self.training_seq is None:
self.training_seq = []
for seq in self.sequences:
running_sum += len(self.sequences[seq])
if seqs_added > 0:
#Prodigal interleaving logic - add this breaker between sequences, starting at sequence 2
self.training_seq.append(b'TTAATTAATTAA')
running_sum += 12
seqs_added += 1
#Handle excessive size
if running_sum >= 32000000:
print("Warning: Sequence is long (max 32000000 for training).")
print("Training on the first 32000000 bases.")
to_remove = running_sum - 32000000
#Remove excess characters
cut_seq = self.sequences[seq][:-to_remove]
#Add the partial seq
self.training_seq.append(cut_seq)
#Stop the loop and move to training
break
#add in a full sequence
self.training_seq.append(self.sequences[seq])
if seqs_added > 1:
self.training_seq.append(b'TTAATTAATTAA')
self.training_seq = b''.join(self.training_seq)
if len(self.training_seq) < 20000:
if self.verbose:
print("Can't train on 20 thousand or fewer characters. Switching to meta mode.")
self.manager = pd.OrfFinder(meta=True)
self.is_meta = True
else:
if self.verbose:
print("")
#G is 71, C is 67; we're counting G + C and dividing by the total.
gc = round(((self.training_seq.count(67) + self.training_seq.count(71))/ len(self.training_seq)) * 100, 2)
print(len(self.training_seq), "bp seq created,", gc, "pct GC")
#Train
self.manager.train(self.training_seq, translation_table = self.trans_table)
if not self.keep_after_train:
#Clean up
self.training_seq = None
def predict_genes(self):
if self.is_meta:
if self.verbose:
print("Finding genes in metagenomic mode")
else:
if self.verbose:
print("Finding genes with translation table", self.trans_table)
print("")
self.predicted_genes = {}
for seq in self.sequences:
if self.verbose:
print("Finding genes in sequence", seq.decode(), "("+str(len(self.sequences[seq]))+ " bp)... ", end = '')
self.predicted_genes[seq] = self.manager.find_genes(self.sequences[seq])
#If we're comparing multiple tables, then we want to keep these for re-prediction.
if not self.keep_seqs:
#Clean up
self.sequences[seq] = None
if self.verbose:
print("done!")
#Predict genes with an alternative table, compare results, and keep the winner.
def compare_alternative_table(self, table):
if table == self.trans_table:
print("You're trying to compare table", table, "with itself.")
else:
if self.verbose:
print("Comparing translation table", self.trans_table, "against table", table)
old_table = self.trans_table
old_genes = self.predicted_genes
old_size = 0
for seq in self.predicted_genes:
for gene in self.predicted_genes[seq]:
old_size += (gene.end - gene.begin)
self.trans_table = table
self.train_manager()
self.predict_genes()
new_size = 0
for seq in self.predicted_genes:
for gene in self.predicted_genes[seq]:
new_size += (gene.end - gene.begin)
if (old_size / new_size) > 1.1:
if self.verbose:
print("Translation table", self.trans_table, "performed better than table", old_table, "and will be used instead.")
else:
if self.verbose:
print("Translation table", self.trans_table, "did not perform significantly better than table", old_table, "and will not be used.")
self.trans_table = old_table
self.predicted_genes = old_genes
#cleanup
old_table = None
old_genes = None
old_size = None
new_size = None
def predict_and_compare(self):
self.predict_genes()
#Run alt comparisons in gene predict.
if self.compare_to is not None:
while len(self.compare_to) > 0:
try:
next_table = int(self.compare_to.pop(0))
if len(self.compare_to) == 0:
#Ready to clean up.
self.keep_after_train = True
self.keep_seqs = True
self.compare_alternative_table(next_table)
except:
print("Alternative table comparison failed! Skipping.")
#Break lines into size base pairs per line. Prodigal's default for bp is 70, aa is 60.
def num_bp_line_format(self, string, size = 70):
		#Ceiling function without the math module
ceiling = int(round((len(string)/size)+0.5, 0))
formatted = '\n'.join([string[(i*size):(i+1)*size] for i in range(0, ceiling)])
return formatted
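The wrapping behavior can be checked in isolation (same arithmetic as num_bp_line_format, on an invented 20-base sequence):

```python
def wrap(string, size=70):
    # Ceiling of len/size without the math module, as in num_bp_line_format.
    ceiling = int(round((len(string) / size) + 0.5, 0))
    return '\n'.join(string[i * size:(i + 1) * size] for i in range(ceiling))

# 20 bases wrapped at 8 per line -> three lines of 8, 8, and 4 bases.
print(wrap("ACGT" * 5, size=8))
```

One quirk worth knowing: because of Python's round-half-to-even behavior, a sequence whose length is an exact multiple of size gets one extra (empty) trailing line.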
#Writeouts
def write_nt(self):
if self.nt_out is not None:
if self.verbose:
print("Writing nucleotide sequences... ")
if self.compress == '1' or self.compress == '2':
out_writer = gzip.open(self.nt_out+".gz", "wb")
content = b''
for seq in self.predicted_genes:
seqname = b">"+ seq + b"_"
#Gene counter
count = 1
for gene in self.predicted_genes[seq]:
#Full header lines
if self.full_headers:
content += b' # '.join([seqname + str(count).encode(), str(gene.begin).encode(), str(gene.end).encode(), str(gene.strand).encode(), gene._gene_data.encode()])
else:
#Reduced headers if we don't care.
content += seqname + str(count).encode()
content += b'\n'
if self.num_bp_fmt:
							#70 bp cap per line
content += self.num_bp_line_format(gene.sequence(), size = 70).encode()
else:
#One-line sequence.
content += gene.sequence().encode()
content += b'\n'
count += 1
out_writer.write(content)
out_writer.close()
if self.compress == '0' or self.compress == '2':
out_writer = open(self.nt_out, "w")
for seq in self.predicted_genes:
#Only do this decode once.
seqname = ">"+ seq.decode() +"_"
#Gene counter
count = 1
for gene in self.predicted_genes[seq]:
#Full header lines
if self.full_headers:
#Standard prodigal header
print(seqname + str(count), gene.begin, gene.end, gene.strand, gene._gene_data, sep = " # ", file = out_writer)
else:
#Reduced headers if we don't care.
print(seqname + str(count), file = out_writer)
if self.num_bp_fmt:
							#70 bp cap per line
print(self.num_bp_line_format(gene.sequence(), size = 70), file = out_writer)
else:
#One-line sequence.
print(gene.sequence(), file = out_writer)
count += 1
out_writer.close()
def write_aa(self):
if self.aa_out is not None:
if self.verbose:
print("Writing amino acid sequences...")
self.labeled_proteins = {}
content = ''
for seq in self.predicted_genes:
count = 1
seqname = ">"+ seq.decode() + "_"
for gene in self.predicted_genes[seq]:
prot_name = seqname + str(count)
translation = gene.translate()
self.labeled_proteins[prot_name[1:]] = translation
defline = " # ".join([prot_name, str(gene.begin), str(gene.end), str(gene.strand), str(gene._gene_data)])
content += defline
content += "\n"
count += 1
content += self.num_bp_line_format(translation, size = 60)
content += "\n"
if self.compress == '0' or self.compress == '2':
out_writer = open(self.aa_out, "w")
out_writer.write(content)
out_writer.close()
if self.compress == '1' or self.compress == '2':
content = content.encode()
out_writer = gzip.open(self.aa_out+".gz", "wb")
out_writer.write(content)
out_writer.close()
def run_for_fastaai(self):
self.verbose = False
self.import_sequences()
self.train_manager()
self.predict_and_compare()
self.write_aa()
#Iterator for agnostic reader
class agnostic_reader_iterator:
def __init__(self, reader):
self.handle_ = reader.handle
self.is_gz_ = reader.is_gz
def __next__(self):
if self.is_gz_:
line = self.handle_.readline().decode()
else:
line = self.handle_.readline()
#Ezpz EOF check
if line:
return line
else:
raise StopIteration
#File reader that doesn't care if you give it a gzipped file or not.
class agnostic_reader:
def __init__(self, file):
self.path = file
with open(file, 'rb') as test_gz:
#Gzip magic number
is_gz = (test_gz.read(2) == b'\x1f\x8b')
self.is_gz = is_gz
if is_gz:
self.handle = gzip.open(self.path)
else:
self.handle = open(self.path)
def __iter__(self):
return agnostic_reader_iterator(self)
def close(self):
self.handle.close()
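#Usage sketch (illustrative; "example.fasta.gz" is a hypothetical path):
#
#    reader = agnostic_reader("example.fasta.gz")
#    for line in reader:
#        do_something(line)
#    reader.close()
#
#The same loop works unchanged for an uncompressed "example.fasta".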
'''
Class for handling all of the raw genome/protein/protein+HMM file inputs when building a database.
Takes a file or files and processes them from genome -> protein, protein -> hmm, prot+HMM -> kmerized protein best hits as numpy int arrays according to the kmer_index
'''
class input_file:
def __init__(self, input_path, output = "", verbosity = False, do_compress = False,
make_crystal = False):
#starting path for the file; irrelevant for protein and hmm, but otherwise useful for keeping track.
self.path = input_path
#Output directory starts with this
self.output = os.path.normpath(output + "/")
#For printing file updates, this is the input name
self.name = os.path.basename(input_path)
#original name is the key used for the genomes index later on.
self.original_name = os.path.basename(input_path)
#This is the name that can be used for building files with new extensions.
if input_path.endswith(".gz"):
#Remove .gz first to make names consistent.
self.basename = os.path.splitext(os.path.basename(input_path[:-3]))[0]
else:
self.basename = os.path.splitext(os.path.basename(input_path))[0]
#Sanitize for SQL
#These are chars safe for sql
sql_safe = set('_abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789')
current_chars = set(self.basename)
#self.sql_name = self.basename
#Identify SQL-unsafe characters as those outside the permissible set and replace all with underscores.
for char in current_chars - sql_safe:
self.basename = self.basename.replace(char, "_")
#'genome' or 'protein' or 'protein and HMM'
self.status = None
#These will keep track of paths for each stage of file for us.
self.genome = None
self.protein = None
self.hmm = None
self.ran_hmmer = False
#If pyrodigal is run, then the protein sequences are already loaded into memory.
#We reuse them in kmer extraction instead of another I/O
self.prepared_proteins = None
self.intermediate = None
self.crystalize = make_crystal
self.best_hits = None
self.best_hits_kmers = None
self.protein_count = 0
self.protein_kmer_count = {}
self.trans_table = None
self.start_time = None
self.end_time = None
self.err_log = ""
#doesn't get updated otherwise.
self.initial_state = "protein+HMM"
self.verbose = verbosity
#Check if the file failed to produce ANY SCP HMM hits.
self.is_empty = False
self.do_compress = do_compress
self.crystal = None
self.init_time = None
#default to 0 time.
self.prot_pred_time = None
self.hmm_search_time = None
self.besthits_time = None
def curtime(self):
time_format = "%d/%m/%Y %H:%M:%S"
timer = datetime.datetime.now()
time = timer.strftime(time_format)
return time
def partial_timings(self):
protein_pred = self.prot_pred_time-self.init_time
hmm_search = self.hmm_search_time-self.prot_pred_time
besthits = self.besthits_time-self.hmm_search_time
protein_pred = protein_pred.total_seconds()
hmm_search = hmm_search.total_seconds()
besthits = besthits.total_seconds()
self.prot_pred_time = protein_pred
self.hmm_search_time = hmm_search
self.besthits_time = besthits
#Functions for externally setting status and file paths of particular types
def set_genome(self, path):
self.status = 'genome'
self.genome = path
def set_protein(self, path):
self.status = 'protein'
self.protein = path
def set_hmm(self, path):
if self.protein is None:
print("Warning! I don't have a protein yet, so this HMM will be useless to me until I do!")
self.status = 'protein and hmm'
self.hmm = path
def set_crystal(self, path):
self.status = 'crystal'
self.crystal = path
#Runs prodigal, compares translation tables and stores faa files
def genome_to_protein(self):
if self.genome is None:
print(self.name, "wasn't declared as a genome! I can't make this into a protein!")
else:
protein_output = os.path.normpath(self.output + "/predicted_proteins/" + self.basename + '.faa')
if self.do_compress:
compress_level = "1"
else:
compress_level = "0"
mn = pyrodigal_manager(file = self.genome, aa_out = protein_output, compare_against = [4], do_compress = compress_level)
mn.run_for_fastaai()
self.trans_table = str(mn.trans_table)
for prot in mn.excluded_seqs:
self.err_log += "Protein " + prot + " was observed to have >100K amino acids (" + str(mn.excluded_seqs[prot]) + " AA found). It will not be included in predicted proteins for this genome;"
self.prepared_proteins = mn.labeled_proteins
del mn
#If there are zipped files leftover and we didn't want them, clean them up.
if self.do_compress:
self.set_protein(str(protein_output)+".gz")
#Clean up unzipped version on reruns
if os.path.exists(str(protein_output)):
os.remove(str(protein_output))
else:
self.set_protein(str(protein_output))
#Clean up a zipped version on reruns
if os.path.exists(str(protein_output)+".gz"):
os.remove(str(protein_output)+".gz")
self.prot_pred_time = datetime.datetime.now()
#run hmmsearch on a protein
def protein_to_hmm(self):
if self.protein is None:
print(self.basename, "wasn't declared as a protein! I can't make this into an HMM!")
else:
folder = os.path.normpath(self.output + "/hmms")
hmm_output = os.path.normpath(folder +"/"+ self.basename + '.hmm')
if self.prepared_proteins is None:
self.prepared_proteins, deflines = read_fasta(self.protein)
hmm_manager.run_for_fastaai(self.prepared_proteins, hmm_output)
self.ran_hmmer = True
if self.do_compress:
self.set_hmm(str(hmm_output)+".gz")
if os.path.exists(str(hmm_output)):
os.remove(str(hmm_output))
else:
self.set_hmm(str(hmm_output))
if os.path.exists(str(hmm_output)+".gz"):
os.remove(str(hmm_output)+".gz")
self.hmm_search_time = datetime.datetime.now()
#Translate tetramers to unique int32 indices.
def unique_kmer_simple_key(self, seq):
#num tetramers = len(seq) - 4 + 1, just make it -3.
n_kmers = len(seq) - 3
#Converts the characters in a sequence into their ascii int value
as_ints = np.array([ord(i) for i in seq], dtype = np.int32)
#create seq like 0,1,2,3; 1,2,3,4; 2,3,4,5... for each tetramer that needs a value
kmers = np.arange(4*n_kmers)
kmers = kmers % 4 + kmers // 4
#Select the characters (as ints) corresponding to each tetramer all at once and reshape into rows of 4,
#each row corresp. to a successive tetramer
kmers = as_ints[kmers].reshape((n_kmers, 4))
#Given four 2-digit numbers, these multipliers work as offsets so that all digits are preserved in order when summed
mult = np.array([1000000, 10000, 100, 1], dtype = np.int32)
#the fixed values effectively offset the successive chars of the tetramer by 2 positions each time;
#practically, this is concatenation of numbers
#Matrix mult does this for all values at once.
return np.unique(np.dot(kmers, mult))
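#Worked example (sketch): for seq = "ACDE", the ASCII codes are A=65, C=67, D=68, E=69,
#so the single tetramer encodes as 65*1000000 + 67*10000 + 68*100 + 69 = 65676869.
#Each tetramer thus maps to a unique int32; this relies on every amino acid character
#having a 2-digit ASCII code (65-90), which holds for the uppercase letters.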
def load_hmm_and_filter_from_file(self):
prots = []
accs = []
scores = []
f = agnostic_reader(self.hmm)
for line in f:
if line.startswith("#"):
continue
else:
segs = line.strip().split()
if len(segs) < 9:
continue
prots.append(segs[0])
accs.append(segs[3])
scores.append(segs[8])
f.close()
if len(prots) < 1:
self.best_hits = {}
return
hmm_file = np.transpose(np.array([prots, accs, scores]))
#hmm_file = np.loadtxt(hmm_file_name, comments = '#', usecols = (0, 3, 8), dtype=(str))
#Sort the hmm file based on the score column in descending order.
hmm_file = hmm_file[hmm_file[:,2].astype(float).argsort()[::-1]]
#Identify the first row where each gene name appears, after sorting by score;
#in effect, return the highest scoring assignment per gene name
#Sort the indices of the result to match the score-sorted table instead of alphabetical order of gene names
hmm_file = hmm_file[np.sort(np.unique(hmm_file[:,0], return_index = True)[1])]
#Filter the file again for unique ACCESSION names; only one gene is kept per accession.
#Don't sort the indices; the scores no longer matter.
hmm_file = hmm_file[np.unique(hmm_file[:,1], return_index = True)[1]]
sql_friendly_names = [i.replace(".", "_") for i in hmm_file[:,1]]
self.best_hits = dict(zip(hmm_file[:,0], sql_friendly_names))
#This should consider the domain by majority vote...
def prot_and_hmm_to_besthits(self):
if self.ran_hmmer:
#Manager has a filter built in.
self.best_hits = hmm_manager.best_hits
else:
#Load the best hits file via old numpy method.
self.load_hmm_and_filter_from_file()
hit_count = 0
#From pyrodigal predictions or HMM intermediate production, the sequences are already in memory and don't need to be read in.
if self.prepared_proteins is None:
#But otherwise, we need to read them in.
self.prepared_proteins, deflines = read_fasta(self.protein)
self.protein_kmer_count = {}
self.best_hits_kmers = {}
if self.crystalize:
crystal_record = []
#Kmerize proteins and record metadata
for protein in self.prepared_proteins:
if protein in self.best_hits:
accession = self.best_hits[protein]
if self.crystalize:
crystal_record.append(str(protein)+"\t"+str(accession)+"\t"+str(self.prepared_proteins[protein])+"\n")
kmer_set = self.unique_kmer_simple_key(self.prepared_proteins[protein])
self.protein_kmer_count[accession] = kmer_set.shape[0]
self.protein_count += 1
self.best_hits_kmers[accession] = kmer_set
hit_count += 1
#Free the space either way
self.prepared_proteins[protein] = None
if self.crystalize:
#only make a crystal if it actually has content.
if len(crystal_record) > 0:
crystal_path = os.path.normpath(self.output + "/crystals/" + self.basename + '_faai_crystal.txt')
crystal_record = "".join(crystal_record)
if self.do_compress:
crystal_record = crystal_record.encode()
crystal_writer = gzip.open(crystal_path+".gz", "wb")
crystal_writer.write(crystal_record)
crystal_writer.close()
else:
crystal_writer = open(crystal_path, "w")
crystal_writer.write(crystal_record)
crystal_writer.close()
#Final free.
self.prepared_proteins = None
#No HMM hits.
if hit_count == 0:
self.is_empty = True
self.besthits_time = datetime.datetime.now()
self.status = "best hits found"
def preprocess(self):
self.init_time = datetime.datetime.now()
#default to 0 time.
self.prot_pred_time = self.init_time
self.hmm_search_time = self.init_time
self.besthits_time = self.init_time
#There's no advancement stage for protein and HMM
if self.status == 'genome':
start_time = self.curtime()
#report = True
if self.start_time is None:
self.start_time = start_time
if self.initial_state == "protein+HMM":
self.initial_state = "genome"
self.genome_to_protein()
if self.status == 'protein':
start_time = self.curtime()
#report = True
if self.start_time is None:
self.start_time = start_time
if self.initial_state == "protein+HMM":
self.initial_state = "protein"
self.protein_to_hmm()
if self.status == 'protein and hmm':
start_time = self.curtime()
if self.start_time is None:
self.start_time = start_time
self.prot_and_hmm_to_besthits()
#Add an end time if either genome -> protein -> HMM or protein -> HMM happened.
if self.start_time is not None:
end_time = self.curtime()
self.end_time = end_time
else:
#Start was protein+HMM. There was no runtime, and initial state is p+hmm
#self.initial_state = "protein+HMM"
self.start_time = "N/A"
self.end_time = "N/A"
#Protein not generated on this run.
if self.trans_table is None:
self.trans_table = "unknown"
self.partial_timings()
'''
Utility functions
'''
def prepare_directories(output, status, build_or_query, make_crystals = False):
preparation_successful = True
if not os.path.exists(output):
try:
os.mkdir(output)
except:
print("")
print("FastAAI tried to make output directory: '"+ output + "' but failed.")
print("")
print("Troubleshooting:")
print("")
print(" (1) Do you have permission to create directories in the location you specified?")
print(" (2) Did you make sure that all directories other than", os.path.basename(output), "already exist?")
print("")
preparation_successful = False
if preparation_successful:
try:
if status == 'genome':
if not os.path.exists(os.path.normpath(output + "/" + "predicted_proteins")):
os.mkdir(os.path.normpath(output + "/" + "predicted_proteins"))
if not os.path.exists(os.path.normpath(output + "/" + "hmms")):
os.mkdir(os.path.normpath(output + "/" + "hmms"))
if status == 'protein':
if not os.path.exists(os.path.normpath(output + "/" + "hmms")):
os.mkdir(os.path.normpath(output + "/" + "hmms"))
if make_crystals:
if not os.path.exists(os.path.normpath(output + "/" + "crystals")):
os.mkdir(os.path.normpath(output + "/" + "crystals"))
if build_or_query == "build":
if not os.path.exists(os.path.normpath(output + "/" + "database")):
os.mkdir(os.path.normpath(output + "/" + "database"))
if build_or_query == "query":
if not os.path.exists(os.path.normpath(output + "/" + "results")):
os.mkdir(os.path.normpath(output + "/" + "results"))
except:
print("FastAAI was able to create or find", output, "but couldn't make directories there.")
print("")
print("This shouldn't happen. Do you have permission to write to that directory?")
return preparation_successful
def find_hmm():
hmm_path = None
try:
#Try to locate the data bundled as it would be with a pip/conda install.
script_path = os.path.dirname(sys.modules['fastAAI_HMM_models'].__file__)
if len(script_path) == 0:
script_path = "."
hmm_complete_model = os.path.abspath(os.path.normpath(script_path + '/00.Libraries/01.SCG_HMMs/Complete_SCG_DB.hmm'))
hmm_path = str(hmm_complete_model)
#Check that the file exists or fail to the except.
fh = open(hmm_path)
fh.close()
except:
#Look in the same dir as the script; old method/MiGA friendly
script_path = os.path.dirname(__file__)
if len(script_path) == 0:
script_path = "."
hmm_complete_model = os.path.abspath(os.path.normpath(script_path +"/"+ "00.Libraries/01.SCG_HMMs/Complete_SCG_DB.hmm"))
hmm_path = str(hmm_complete_model)
return hmm_path
#Build DB from genomes
def unique_kmers(seq, ksize):
n_kmers = len(seq) - ksize + 1
kmers = []
for i in range(n_kmers):
kmers.append(kmer_index[seq[i:i + ksize]])
#We care about the type because we're working with bytes later.
return np.unique(kmers).astype(np.int32)
def split_seq(seq, num_grps):
newseq = []
splitsize = 1.0/num_grps*len(seq)
for i in range(num_grps):
newseq.append(seq[int(round(i*splitsize)):int(round((i+1)*splitsize))])
return newseq
#gives the start and end indices needed to split a list of (max_val) genomes into num_grps groups
def split_indicies(max_val, num_grps):
newseq = []
splitsize = 1.0/num_grps*max_val
for i in range(num_grps):
newseq.append(((round(i*splitsize)), round((i+1)*splitsize)))
return newseq
def split_seq_indices(seq, num_grps):
newseq = []
splitsize = 1.0/num_grps*len(seq)
for i in range(num_grps):
newseq.append((int(round(i*splitsize)), int(round((i+1)*splitsize)),))
return newseq
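#Example (sketch): split_seq_indices(list(range(10)), 3) yields the boundary pairs
#(0, 3), (3, 7), (7, 10), i.e. slice bounds covering the list in near-equal thirds.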
def list_to_index_dict(items):
result = {}
counter = 0
for item in items:
result[item] = counter
counter += 1
return result
def rev_list_to_index_dict(items):
result = {}
counter = 0
for item in items:
result[counter] = item
counter += 1
return result
def generate_accessions_index(forward = True):
acc_list = ['PF01780_19', 'PF03948_14', 'PF17144_4', 'PF00830_19', 'PF00347_23', 'PF16906_5', 'PF13393_6',
'PF02565_15', 'PF01991_18', 'PF01984_20', 'PF00861_22', 'PF13656_6', 'PF00368_18', 'PF01142_18', 'PF00312_22', 'PF02367_17',
'PF01951_16', 'PF00749_21', 'PF01655_18', 'PF00318_20', 'PF01813_17', 'PF01649_18', 'PF01025_19', 'PF00380_19', 'PF01282_19',
'PF01864_17', 'PF01783_23', 'PF01808_18', 'PF01982_16', 'PF01715_17', 'PF00213_18', 'PF00119_20', 'PF00573_22', 'PF01981_16',
'PF00281_19', 'PF00584_20', 'PF00825_18', 'PF00406_22', 'PF00177_21', 'PF01192_22', 'PF05833_11', 'PF02699_15', 'PF01016_19',
'PF01765_19', 'PF00453_18', 'PF01193_24', 'PF05221_17', 'PF00231_19', 'PF00416_22', 'PF02033_18', 'PF01668_18', 'PF00886_19',
'PF00252_18', 'PF00572_18', 'PF00366_20', 'PF04104_14', 'PF04919_12', 'PF01912_18', 'PF00276_20', 'PF00203_21', 'PF00889_19',
'PF02996_17', 'PF00121_18', 'PF01990_17', 'PF00344_20', 'PF00297_22', 'PF01196_19', 'PF01194_17', 'PF01725_16', 'PF00750_19',
'PF00338_22', 'PF00238_19', 'PF01200_18', 'PF00162_19', 'PF00181_23', 'PF01866_17', 'PF00709_21', 'PF02006_16', 'PF00164_25',
'PF00237_19', 'PF01139_17', 'PF01351_18', 'PF04010_13', 'PF06093_13', 'PF00828_19', 'PF02410_15', 'PF01176_19', 'PF02130_17',
'PF01948_18', 'PF01195_19', 'PF01746_21', 'PF01667_17', 'PF03874_16', 'PF01090_19', 'PF01198_19', 'PF01250_17', 'PF17136_4',
'PF06026_14', 'PF03652_15', 'PF04019_12', 'PF01201_22', 'PF00832_20', 'PF01264_21', 'PF03840_14', 'PF00831_23', 'PF00189_20',
'PF02601_15', 'PF01496_19', 'PF00411_19', 'PF00334_19', 'PF00687_21', 'PF01157_18', 'PF01245_20', 'PF01994_16', 'PF01632_19',
'PF00827_17', 'PF01015_18', 'PF00829_21', 'PF00410_19', 'PF00833_18', 'PF00935_19', 'PF01992_16']
if forward:
list_of_poss_accs = list_to_index_dict(acc_list)
else:
list_of_poss_accs = rev_list_to_index_dict(acc_list)
return list_of_poss_accs
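#Example (sketch): generate_accessions_index() maps accession name -> integer index,
#e.g. {'PF01780_19': 0, 'PF03948_14': 1, ...}; generate_accessions_index(forward = False)
#returns the inverse mapping, e.g. {0: 'PF01780_19', 1: 'PF03948_14', ...}.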
#Build or add to a FastAAI DB
def build_db_opts():
parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter,
description='''
This FastAAI module allows you to create a FastAAI database from one or many genomes, proteins, or proteins and HMMs, or add these files to an existing one.
Supply genomes OR proteins OR proteins AND HMMs as inputs.
If you supply genomes, FastAAI will predict proteins from them, and HMMs will be created from those proteins.
If you supply only proteins, FastAAI will create HMM files from them, searching against FastAAI's internal database.
If you supply proteins AND HMMs, FastAAI will use them directly to build the database.\n
You cannot supply both genomes and proteins.
''')
parser.add_argument('-g', '--genomes', dest = 'genomes', default = None, help = 'A directory containing genomes in FASTA format.')
parser.add_argument('-p', '--proteins', dest = 'proteins', default = None, help = 'A directory containing protein amino acids in FASTA format.')
parser.add_argument('-m', '--hmms', dest = 'hmms', default = None, help = 'A directory containing the results of an HMM search on a set of proteins.')
parser.add_argument('-d', '--database', dest = 'db_name', default = "FastAAI_database.sqlite.db", help = 'The name of the database you wish to create or add to. The database will be created if it doesn\'t already exist and placed in the output directory. FastAAI_database.sqlite.db by default.')
parser.add_argument('-o', '--output', dest = 'output', default = "FastAAI", help = 'The directory to place the database and any protein or HMM files FastAAI creates. By default, a directory named "FastAAI" will be created in the current working directory and results will be placed there.')
parser.add_argument('--threads', dest = 'threads', type=int, default = 1, help = 'The number of processors to use. Default 1.')
parser.add_argument('--verbose', dest = 'verbose', action='store_true', help = 'Print minor updates to console. Major updates are printed regardless.')
parser.add_argument('--compress', dest = "do_comp", action = 'store_true', help = 'Gzip compress generated proteins, HMMs. Off by default.')
args, unknown = parser.parse_known_args()
return parser, args
def run_build(input_file):
input_file.preprocess()
if input_file.best_hits_kmers is None or len(input_file.best_hits_kmers) < 1:
input_file.best_hits_kmers = None
input_file.err_log += " This file did not successfully complete. No SCPs could be found."
return input_file
def acc_transformer_init(db, tempdir_path):
sqlite3.register_converter("array", convert_array)
global indb
indb = db
global temp_dir
temp_dir = tempdir_path
global ok
ok = generate_accessions_index()
def acc_transformer(acc_name):
source = sqlite3.connect(indb)
scurs = source.cursor()
data = scurs.execute("SELECT * FROM {acc}_genomes".format(acc=acc_name)).fetchall()
scurs.close()
source.close()
reformat = {}
for row in data:
genome, kmers = row[0], np.frombuffer(row[1], dtype=np.int32)
for k in kmers:
if k not in reformat:
reformat[k] = []
reformat[k].append(genome)
data = None
to_add = []
for k in reformat:
as_bytes = np.array(reformat[k], dtype = np.int32)
as_bytes = as_bytes.tobytes()
reformat[k] = None
to_add.append((int(k), as_bytes,))
my_acc_db = os.path.normpath(temp_dir + "/"+acc_name+".db")
if os.path.exists(my_acc_db):
os.remove(my_acc_db)
my_db = sqlite3.connect(my_acc_db)
curs = my_db.cursor()
curs.execute("CREATE TABLE {acc} (kmer INTEGER PRIMARY KEY, genomes array)".format(acc=acc_name))
my_db.commit()
curs.executemany("INSERT INTO {acc} VALUES (?, ?)".format(acc = acc_name), to_add)
my_db.commit()
to_add = None
curs.execute("CREATE INDEX {acc}_index ON {acc} (kmer)".format(acc=acc_name))
my_db.commit()
curs.close()
my_db.close()
return [my_acc_db, acc_name]
def build_db(genomes, proteins, hmms, db_name, output, threads, verbose, do_compress):
success = True
imported_files = fastaai_file_importer(genomes = genomes, proteins = proteins, hmms = hmms, output = output, compress = do_compress)
imported_files.determine_inputs()
if imported_files.error:
print("Exiting FastAAI due to input file error.")
quit()
good_to_go = prepare_directories(output, imported_files.status, "build")
db_path = os.path.normpath(output + "/database")
if not os.path.exists(db_path):
os.mkdir(db_path)
if not good_to_go:
print("Exiting FastAAI")
sys.exit()
print("")
hmm_path = find_hmm()
#Check if the db name contains path info, incl. Windows separators.
if "/" not in db_name and "\\" not in db_name:
final_database = os.path.normpath(output + "/database/" + db_name)
else:
#If the person insists that the db has a path, let them.
final_database = db_name
#If the database already exists, load its genome IDs so those genomes can be skipped later.
existing_genome_IDs = None
try:
if os.path.exists(final_database):
parent = sqlite3.connect(final_database)
curs = parent.cursor()
existing_genome_IDs = {}
sql_command = "SELECT genome, gen_id FROM genome_index"
for result in curs.execute(sql_command).fetchall():
genome = result[0]
id = int(result[1])
existing_genome_IDs[genome] = id
curs.close()
parent.close()
except:
print("You specified an existing file to be a database, but it does not appear to be a FastAAI database.")
print("FastAAI will not be able to continue. Please give FastAAI a different database name and continue.")
print("Exiting.")
success = False
if success:
hmm_file = find_hmm()
if existing_genome_IDs is not None:
genome_idx = max(list(existing_genome_IDs.values()))+1
else:
existing_genome_IDs = {}
genome_idx = 0
td = tempfile.mkdtemp()
#if not os.path.exists(td):
# os.mkdir(td)
temp_db = os.path.normpath(td+"/FastAAI_temp_db.db")
if os.path.exists(temp_db):
os.remove(temp_db)
sqlite3.register_converter("array", convert_array)
worker = sqlite3.connect(temp_db)
wcurs = worker.cursor()
wcurs.execute("CREATE TABLE genome_index (genome text, gen_id integer, protein_count integer)")
wcurs.execute("CREATE TABLE genome_acc_kmer_counts (genome integer, accession integer, count integer)")
ok = generate_accessions_index()
for t in ok:
wcurs.execute("CREATE TABLE " + t + "_genomes (genome INTEGER PRIMARY KEY, kmers array)")
worker.commit()
new_gens = []
new_gak = []
accs_seen = {}
if verbose:
tracker = progress_tracker(total = len(imported_files.in_files), message = "Processing inputs")
else:
print("Processing inputs")
#Only build_db makes a log.
if not os.path.exists(os.path.normpath(output + "/" + "logs")):
os.mkdir(os.path.normpath(output + "/" + "logs"))
logger = open(os.path.normpath(output+"/logs/"+"FastAAI_preprocessing_log.txt"), "a")
print("file", "start_date", "end_date", "starting_format",
"prot_prediction_time", "trans_table", "hmm_search_time", "besthits_time",
"errors", sep = "\t", file = logger)
pool = multiprocessing.Pool(threads, initializer = hmm_preproc_initializer,
initargs = (hmm_file, do_compress,))
for result in pool.imap(run_build, imported_files.in_files):
#log data, regardless of kind
print(result.basename, result.start_time, result.end_time, result.initial_state,
result.prot_pred_time, result.trans_table, result.hmm_search_time, result.besthits_time,
result.err_log, sep = "\t", file = logger)
if result.best_hits_kmers is not None:
genome_name = result.original_name
if genome_name in existing_genome_IDs:
print(genome_name, "is already present in the final database and will be skipped.")
print("")
else:
protein_count = result.protein_count
for acc_name in result.best_hits_kmers:
if acc_name not in accs_seen:
accs_seen[acc_name] = 0
acc_id = ok[acc_name]
kmer_ct = result.protein_kmer_count[acc_name]
kmers = result.best_hits_kmers[acc_name]
kmers = kmers.tobytes()
wcurs.execute("INSERT INTO {acc}_genomes VALUES (?, ?)".format(acc=acc_name), (genome_idx, kmers,))
new_gak.append((genome_idx, acc_id, kmer_ct,))
new_gens.append((genome_name, genome_idx, protein_count,))
genome_idx += 1
worker.commit()
if verbose:
tracker.update()
pool.close()
logger.close()
wcurs.executemany("INSERT INTO genome_index VALUES (?,?,?)", new_gens)
wcurs.executemany("INSERT INTO genome_acc_kmer_counts VALUES (?,?,?)", new_gak)
worker.commit()
wcurs.close()
worker.close()
accs_seen = list(accs_seen.keys())
parent = sqlite3.connect(final_database)
curs = parent.cursor()
curs.execute("attach '" + temp_db + "' as worker")
#initialize if needed.
curs.execute("CREATE TABLE IF NOT EXISTS genome_index (genome text, gen_id integer, protein_count integer)")
curs.execute("CREATE TABLE IF NOT EXISTS genome_acc_kmer_counts (genome integer, accession integer, count integer)")
curs.execute("INSERT INTO genome_index SELECT * FROM worker.genome_index")
curs.execute("INSERT INTO genome_acc_kmer_counts SELECT * FROM worker.genome_acc_kmer_counts")
curs.execute("CREATE INDEX IF NOT EXISTS kmer_acc ON genome_acc_kmer_counts (genome, accession);")
parent.commit()
if verbose:
tracker = progress_tracker(total = len(accs_seen), message = "Collecting results")
else:
print("Collecting results")
pool = multiprocessing.Pool(threads, initializer = acc_transformer_init,
initargs = (temp_db, td,))
for result in pool.imap_unordered(acc_transformer, accs_seen):
database, accession = result[0], result[1]
curs.execute("CREATE TABLE IF NOT EXISTS {acc} (kmer INTEGER PRIMARY KEY, genomes array)".format(acc=accession))
curs.execute("CREATE TABLE IF NOT EXISTS {acc}_genomes (genome INTEGER PRIMARY KEY, kmers array)".format(acc=accession))
curs.execute("CREATE INDEX IF NOT EXISTS {acc}_index ON {acc}(kmer)".format(acc=accession))
#Get the genomes from worker db.
curs.execute("INSERT INTO {acc}_genomes SELECT * FROM worker.{acc}_genomes".format(acc=accession))
parent.commit()
accdb = sqlite3.connect(database)
acc_curs = accdb.cursor()
to_update = acc_curs.execute("SELECT kmer, genomes, genomes FROM {acc}".format(acc=accession)).fetchall()
acc_curs.close()
accdb.close()
update_concat_sql = "INSERT INTO {acc} VALUES (?,?) ON CONFLICT(kmer) DO UPDATE SET genomes=genomes || (?)".format(acc=accession)
curs.executemany(update_concat_sql, to_update)
parent.commit()
os.remove(database)
if verbose:
tracker.update()
pool.close()
curs.execute("detach worker")
parent.commit()
curs.close()
parent.close()
os.remove(temp_db)
try:
if len(os.listdir(td)) == 0:
shutil.rmtree(td)
except:
pass
if success:
print("Database build complete!")
return success
def file_v_db_initializer(tgak, tgt_names, tgt_cts, hmm_file, do_compress, tgt_ct, sd, out, style, in_mem, build_q, tdb):
#num_tgts, self.do_sd, self.output, self.style, self.as_mem_db, self.do_db_build
global _tdb
_tdb = tdb
global _tgt_gak
_tgt_gak = tgak
global _tname
_tname = tgt_names
global _tct
_tct = tgt_cts
global hmm_manager
hmm_manager = pyhmmer_manager(do_compress)
hmm_manager.load_hmm_from_file(hmm_file)
global num_tgts
num_tgts = tgt_ct
global _do_sd
_do_sd = sd
global out_style
out_style = style
global out_base
out_base = out
global db_is_in_mem
db_is_in_mem = in_mem
global make_query_db
make_query_db = build_q
return _tdb, _tgt_gak, _tname, _tct, hmm_manager, num_tgts, _do_sd, out_base, out_style, db_is_in_mem, make_query_db
def file_v_db_worker(query_args):
#query info for this particular query
in_file = query_args[0]
in_file.preprocess()
qname = in_file.basename
do_sd = _do_sd
#std dev. calcs are not meaningful with matrix style output.
if out_style == "matrix":
do_sd = False
if do_sd:
results = []
shared_acc_counts = []
else:
results = np.zeros(shape = num_tgts, dtype = np.float64)
shared_acc_counts = np.zeros(shape = num_tgts, dtype = np.int32)
if db_is_in_mem:
#The connection is already given as MDB if the db is in mem
tconn = _tdb
else:
#db is on disk and the connection has to be established.
tconn = sqlite3.connect(_tdb)
tcurs = tconn.cursor()
#This is a difference from the DB-first method.
acc_idx = generate_accessions_index(forward = True)
genome_lists = {}
tcurs.row_factory = lambda cursor, row: row[0]
if make_query_db:
ret = [qname, None, []]
else:
ret = [qname, None, None]
#We need to purge accessions not in tgt.
for acc in in_file.best_hits_kmers:
one = in_file.best_hits_kmers[acc]
acc_id = acc_idx[acc]
if make_query_db:
ret[2].append((qname, acc_id, one.tobytes(),))
#Only process accessions present in the target database.
if acc_id in _tgt_gak:
kmer_ct = one.shape[0]
if do_sd:
hits = np.zeros(shape = num_tgts, dtype = np.int32)
hits[np.nonzero(_tgt_gak[acc_id])] = 1
shared_acc_counts.append(hits)
else:
shared_acc_counts[np.nonzero(_tgt_gak[acc_id])] += 1
#SQLite limits a single statement to 999 bound parameters by default (SQLITE_MAX_VARIABLE_NUMBER).
if kmer_ct > 998:
#Each kmer needs to be a tuple.
these_kmers = [(int(kmer),) for kmer in one]
temp_name = "_" + qname +"_" + acc
temp_name = temp_name.replace(".", "_")
tcurs.execute("CREATE TEMP TABLE " + temp_name + " (kmer INTEGER)")
tconn.commit()
insert_table = "INSERT INTO " + temp_name + " VALUES (?)"
tcurs.executemany(insert_table, these_kmers)
tconn.commit()
join_and_select_sql = "SELECT genomes FROM " + temp_name + " INNER JOIN " + acc + " ON "+ temp_name+".kmer = " + acc+".kmer;"
rows = tcurs.execute(join_and_select_sql).fetchall()
else:
#kmers must be a list, not a tuple.
these_kmers = [int(kmer) for kmer in one]
select = "SELECT genomes FROM " + acc + " WHERE kmer IN ({kmers})".format(kmers=','.join(['?']*len(these_kmers)))
rows = tcurs.execute(select, these_kmers).fetchall()
#join results into one bytestring.
rows = b''.join(rows)
these_intersections = np.bincount(np.frombuffer(rows, dtype = np.int32), minlength = num_tgts)
rows = None
#Add tgt kmer counts to query kmer counts, subtract the intersection size to get the union size, then calculate Jaccard
jacc = np.divide(these_intersections, np.subtract(np.add(_tgt_gak[acc_id], kmer_ct), these_intersections))
if do_sd:
results.append(jacc)
else:
results += jacc
tcurs.row_factory = None
tcurs.close()
if do_sd:
results = np.vstack(results)
has_accs = np.vstack(shared_acc_counts)
shared_acc_counts = np.sum(has_accs, axis = 0)
#final jacc_means
jaccard_averages = np.divide(np.sum(results, axis = 0), shared_acc_counts)
aai_ests = numpy_kaai_to_aai(jaccard_averages)
#find diffs from means; this includes indices corresponding to unshared SCPs that should not be included.
results = results - jaccard_averages
#zero out those indices so they don't contribute to the final SD.
results[np.nonzero(has_accs == 0)] = 0
#Square them
results = np.square(results)
#Sum squares and divide by shared acc. count, the sqrt to get SD.
jaccard_SDs = np.sqrt(np.divide(np.sum(results, axis = 0), shared_acc_counts))
jaccard_SDs = np.round(jaccard_SDs, 4).astype(str)
else:
#other condition.
jaccard_SDs = None
jaccard_averages = np.divide(results, shared_acc_counts)
#For matrix output, we don't want to pass char arrays back to main, so skip the conversion here and do it in main instead.
if out_style != "matrix":
aai_ests = numpy_kaai_to_aai(jaccard_averages)
del results
#Since the outputs go to separate files, it makes more sense to do them within the worker processes instead of in main.
if out_style == "tsv":
no_hit = np.where(shared_acc_counts == 0)
possible_hits = np.minimum(len(in_file.best_hits_kmers), _tct).astype(str)
jaccard_averages = np.round(jaccard_averages, 4).astype(str)
shared_acc_counts = shared_acc_counts.astype(str)
jaccard_averages[no_hit] = "N/A"
aai_ests[no_hit] = "N/A"
shared_acc_counts[no_hit] = "N/A"
possible_hits[no_hit] = "N/A"
output_name = os.path.normpath(out_base + "/"+qname+"_results.txt")
out = open(output_name, "w")
out.write("query\ttarget\tavg_jacc_sim\tjacc_SD\tnum_shared_SCPs\tposs_shared_SCPs\tAAI_estimate\n")
if do_sd:
jaccard_SDs[no_hit] = "N/A"
for i in range(0, len(aai_ests)):
out.write(qname+"\t"+_tname[i]+"\t"+jaccard_averages[i]+"\t"+jaccard_SDs[i]+"\t"+shared_acc_counts[i]+"\t"+possible_hits[i]+"\t"+aai_ests[i]+"\n")
else:
for i in range(0, len(aai_ests)):
out.write(qname+"\t"+_tname[i]+"\t"+jaccard_averages[i]+"\t"+"N/A"+"\t"+shared_acc_counts[i]+"\t"+possible_hits[i]+"\t"+aai_ests[i]+"\n")
out.close()
#We're just gonna pass this back to the main to print.
if out_style == "matrix":
ret[1] = jaccard_averages
return ret
#Handles both query and target types for a db vs db query
class file_vs_db_query:
def __init__(self, in_memory = False, input_file_objects = None,
target = None, threads = 1, do_sd = False, output_base = "FastAAI", output_style = "tsv",
build_db_from_queries = True, qdb_name = "Query_FastAAI_database.db", hmm_path = None,
do_comp = True, verbose = True):
#files to work with
self.queries = input_file_objects
self.do_db_build = build_db_from_queries
self.dbname = qdb_name
self.t = target
self.valids = None
#Originally written as a memory-database-only block of code, but a single if/else change makes it work on disk as well, so no redevelopment was needed.
self.as_mem_db = in_memory
self.t_conn = None
self.t_curs = None
self.threads = threads
self.do_sd = do_sd
self.output_base = output_base
self.output = os.path.normpath(output_base + "/results")
self.style = output_style
if hmm_path is not None:
self.hmm_path = hmm_path
else:
self.hmm_path = find_hmm()
self.do_comp = do_comp
self.verbose = verbose
'''
Workflow is:
load target db as mem (optional)
assess valid targets
create query db output (optional)
pass query args to workers
preproc query args
write results
fill query_db_out (optional)
'''
def open(self):
if self.as_mem_db:
self.t_conn = sqlite3.connect(':memory:')
else:
self.t_conn = sqlite3.connect(self.t)
self.t_curs = self.t_conn.cursor()
if self.as_mem_db:
self.t_curs.execute("attach '" + self.t + "' as targets")
self.t_curs.execute("CREATE TABLE genome_index AS SELECT * FROM targets.genome_index")
self.t_curs.execute("CREATE TABLE genome_acc_kmer_counts AS SELECT * FROM targets.genome_acc_kmer_counts")
self.t_curs.execute("CREATE INDEX t_gi ON genome_index (gen_id)")
self.t_curs.execute("CREATE INDEX t_gak ON genome_acc_kmer_counts (accession)")
if self.as_mem_db:
table_sql = "SELECT name FROM targets.sqlite_master"
else:
table_sql = "SELECT name FROM sqlite_master"
ok = generate_accessions_index()
ok_names = set(list(ok.keys()))
successful_tables = []
for name in self.t_curs.execute(table_sql).fetchall():
name = name[0]
if name in ok_names:
successful_tables.append(ok[name])
if self.as_mem_db:
self.t_curs.execute("CREATE TABLE " + name + " AS SELECT * FROM targets."+name)
self.t_curs.execute("CREATE INDEX "+name+"_index ON " + name+" (kmer)" )
if self.as_mem_db:
self.t_conn.commit()
self.t_curs.execute("detach targets")
self.valids = tuple(successful_tables)
def close(self):
self.t_curs.close()
self.t_curs = None
def clean_up(self):
self.t_conn.close()
self.t_conn = None
def sqlite_table_schema(self, conn, name):
"""Return a string representing the table's CREATE"""
cursor = conn.execute("SELECT sql FROM sqlite_master WHERE name=?;", [name])
sql = cursor.fetchone()[0]
cursor.close()
return sql
def execute(self):
print("FastAAI is running.")
tgt_id_res = self.t_curs.execute("SELECT * FROM genome_index ORDER BY gen_id").fetchall()
tgt_ids = []
tgt_naming = []
tgt_counts = []
for r in tgt_id_res:
genome, id, prot_ct = r[0], r[1], r[2]
tgt_ids.append(genome)
tgt_naming.append(genome)
tgt_counts.append(prot_ct)
num_tgts = len(tgt_ids)
tgt_counts = np.array(tgt_counts, dtype = np.int32)
tgts_gak = {}
gak_sql = "SELECT * FROM genome_acc_kmer_counts WHERE accession in ({accs})".format(accs=','.join(['?']*len(self.valids)))
for result in self.t_curs.execute(gak_sql, self.valids).fetchall():
genome, acc, ct = result[0], result[1], result[2]
if acc not in tgts_gak:
tgts_gak[acc] = np.zeros(num_tgts, dtype = np.int32)
tgts_gak[acc][genome] += ct
#If the DB is a memory DB, we need to maintain the connection, but the cursor does not need to stay open in main.
self.close()
query_groups = []
for query_input in self.queries:
query_groups.append((query_input,))
#And if it's a physical database, we do want to close it.
if not self.as_mem_db:
self.t_conn.close()
num_queries = len(query_groups)
if self.do_db_build:
sqlite3.register_converter("array", convert_array)
qdb_path = os.path.normpath(self.output_base + "/database/"+self.dbname)
if not os.path.exists(os.path.normpath(self.output_base + "/database")):
try:
os.mkdir(os.path.normpath(self.output_base + "/database"))
except OSError:
print("Couldn't make database at", qdb_path)
self.do_db_build = False
if os.path.exists(qdb_path):
print("Database for queries already exists. I can't make one at:", qdb_path)
self.do_db_build = False
else:
query_db_conn = sqlite3.connect(qdb_path)
q_curs = query_db_conn.cursor()
q_curs.execute("CREATE TABLE storage (genome INTEGER, accession INTEGER, kmers array)")
q_curs.execute("CREATE INDEX store_idx ON storage (genome, accession)")
query_genome_index = []
qgi_ct = 0
qg_gak = []
if self.verbose:
tracker = progress_tracker(total = num_queries, message = "Calculating AAI...", one_line = True)
if self.style == "matrix":
output_name = os.path.normpath(self.output + "/FastAAI_matrix.txt")
output = open(output_name, "w")
#needs target names.
print("query_genome", *tgt_ids, sep = "\t", file = output)
#Need to pass these
#both initializers will share this.
shared_args = [tgts_gak, tgt_naming, tgt_counts, self.hmm_path, self.do_comp, num_tgts, self.do_sd, self.output,
self.style, self.as_mem_db, self.do_db_build]
if self.as_mem_db:
shared_args.append(self.t_conn)
shared_args = tuple(shared_args)
pool = multiprocessing.Pool(self.threads, initializer = file_v_db_initializer,
initargs = shared_args)
else:
#db is on disk,
shared_args.append(self.t)
shared_args = tuple(shared_args)
pool = multiprocessing.Pool(self.threads, initializer = file_v_db_initializer,
initargs = shared_args)
for result in pool.imap(file_v_db_worker, query_groups):
if self.verbose:
tracker.update()
qname = result[0]
if self.style == "matrix":
printout = numpy_kaai_to_aai(result[1])
print(qname, *printout, sep = "\t", file = output)
if self.do_db_build:
query_genome_index.append((qname, qgi_ct, len(result[2]),))
for row in result[2]:
#Each kmer is stored as a 4-byte int32.
num_kmers = len(row[2]) // 4
qg_gak.append((qgi_ct, row[1], num_kmers,))
qgi_ct += 1
q_curs.executemany("INSERT INTO storage VALUES (?, ?, ?)", result[2])
query_db_conn.commit()
pool.close()
if self.style == "matrix":
output.close()
if self.do_db_build:
q_curs.execute("CREATE TABLE genome_index (genome text, gen_id integer, protein_count integer)")
q_curs.execute("CREATE TABLE genome_acc_kmer_counts (genome integer, accession integer, count integer)")
q_curs.executemany("INSERT INTO genome_index VALUES (?,?,?)", query_genome_index)
q_curs.executemany("INSERT INTO genome_acc_kmer_counts VALUES (?,?,?)", qg_gak)
query_db_conn.commit()
acc_id_to_name = generate_accessions_index(forward = False)
qgi_dict = {}
for tup in query_genome_index:
qgi_dict[tup[0]] = tup[1]
accs_in_db = q_curs.execute("SELECT DISTINCT(accession) FROM genome_acc_kmer_counts").fetchall()
if self.verbose:
tracker = progress_tracker(total = len(accs_in_db), message = "Crafting database from query outputs.", one_line = True)
for acc in accs_in_db:
acc = acc[0]
acc_name = acc_id_to_name[acc]
q_curs.execute("CREATE TABLE " + acc_name + " (kmer INTEGER PRIMARY KEY, genomes array)")
q_curs.execute("CREATE TABLE " + acc_name + "_genomes (genome INTEGER PRIMARY KEY, kmers array)")
data = q_curs.execute("SELECT genome, kmers FROM storage WHERE accession = ?", (acc,)).fetchall()
ins = []
#group by kmer
kmers_by_gen = {}
for row in data:
gen = row[0]
gen = qgi_dict[gen]
kmers = np.frombuffer(row[1], dtype = np.int32)
ins.append((gen, kmers,))
for k in kmers:
#typecast
k = int(k)
if k not in kmers_by_gen:
kmers_by_gen[k] = []
kmers_by_gen[k].append(gen)
data = None
q_curs.executemany("INSERT INTO "+ acc_name + "_genomes VALUES (?,?)", ins)
ins = []
for k in kmers_by_gen:
dat = kmers_by_gen[k]
dat = np.sort(np.array(dat, dtype = np.int32))
ins.append((k, dat.tobytes()))
q_curs.executemany("INSERT INTO "+ acc_name + " VALUES (?,?)", ins)
ins = None
query_db_conn.commit()
q_curs.execute("CREATE INDEX IF NOT EXISTS " + acc_name + "_index ON " + acc_name + " (kmer)")
if self.verbose:
tracker.update()
q_curs.execute("CREATE INDEX IF NOT EXISTS kmer_acc ON genome_acc_kmer_counts (genome, accession);")
q_curs.execute("DROP INDEX store_idx")
q_curs.execute("DROP TABLE storage")
query_db_conn.commit()
q_curs.execute("VACUUM")
query_db_conn.commit()
q_curs.close()
query_db_conn.close()
#Actually run the thing.
def run(self):
self.open()
self.execute()
#Clean up the db connections; free the mem.
self.clean_up()
def numpy_kaai_to_aai(kaai_array):
#aai_hat = (-0.3087057 + 1.810741 * (np.exp(-(-0.2607023 * np.log(kaai))**(1/3.435))))*100
#Protect the original jaccard averages memory item
aai_hat_array = kaai_array.copy()
non_zero = np.where(aai_hat_array > 0)
is_zero = np.where(aai_hat_array <= 0)
#I broke this down into its original components
#Avoid zeroes in log - still actually works, but it produces warnings I don't want to see.
aai_hat_array[non_zero] = np.log(aai_hat_array[non_zero])
aai_hat_array = np.multiply(np.subtract(np.multiply(np.exp(np.negative(np.power(np.multiply(aai_hat_array, -0.2607023), (1/3.435)))), 1.810741), 0.3087057), 100)
'''
Same as the above, broken down into easier-to-follow steps.
aai_hat_array = np.multiply(aai_hat_array, -0.2607023)
aai_hat_array = np.power(aai_hat_array, (1/3.435))
aai_hat_array = np.negative(aai_hat_array)
aai_hat_array = np.exp(aai_hat_array)
aai_hat_array = np.multiply(aai_hat_array, 1.810741)
aai_hat_array = np.subtract(aai_hat_array, 0.3087057)
aai_hat_array = np.multiply(aai_hat_array, 100)
'''
#<30 and >90 values
smol = np.where(aai_hat_array < 30)
big = np.where(aai_hat_array > 90)
aai_hat_array = np.round(aai_hat_array, 2)
#Convert to final printables
aai_hat_array = aai_hat_array.astype(str)
aai_hat_array[smol] = "<30%"
aai_hat_array[big] = ">90%"
#The math of the above ends up with zero values being big, so we fix those.
aai_hat_array[is_zero] = "<30%"
return aai_hat_array
#Also includes a multiply by 100 and type conversion compared to original - this is some silliness for saving memory.
def numpy_kaai_to_aai_just_nums(kaai_array, as_float = False):
#aai_hat = (-0.3087057 + 1.810741 * (np.exp(-(-0.2607023 * np.log(kaai))**(1/3.435))))*100
#Protect the original jaccard averages memory item
aai_hat_array = kaai_array.copy()
non_zero = np.where(aai_hat_array > 0)
is_zero = np.where(aai_hat_array <= 0)
#I broke this down into its original components
#Avoid zeroes in log - still actually works, but it produces warnings I don't want to see.
aai_hat_array[non_zero] = np.log(aai_hat_array[non_zero])
aai_hat_array = np.multiply(np.subtract(np.multiply(np.exp(np.negative(np.power(np.multiply(aai_hat_array, -0.2607023), (1/3.435)))), 1.810741), 0.3087057), 100)
'''
Same as the above, broken down into easier-to-follow steps.
aai_hat_array = np.multiply(aai_hat_array, -0.2607023)
aai_hat_array = np.power(aai_hat_array, (1/3.435))
aai_hat_array = np.negative(aai_hat_array)
aai_hat_array = np.exp(aai_hat_array)
aai_hat_array = np.multiply(aai_hat_array, 1.810741)
aai_hat_array = np.subtract(aai_hat_array, 0.3087057)
aai_hat_array = np.multiply(aai_hat_array, 100)
'''
aai_hat_array = np.round(aai_hat_array, 2)
#<30 and >90 values
smol = np.where(aai_hat_array < 30)
big = np.where(aai_hat_array > 90)
#We can find these later.
aai_hat_array[smol] = 15
aai_hat_array[big] = 95
if as_float:
aai_hat_array = np.round(aai_hat_array, 2)
else:
aai_hat_array = np.multiply(aai_hat_array, 100)
aai_hat_array = np.round(aai_hat_array, 2)
aai_hat_array = aai_hat_array.astype(np.int16)
return aai_hat_array
def curtime():
time_format = "%d/%m/%Y %H:%M:%S"
timer = datetime.datetime.now()
time = timer.strftime(time_format)
return time
#Perform a minimal-memory query of a target database from input files. Lighter-weight option for low-memory systems.
def sql_query_opts():
parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter,
description='''
This FastAAI module takes one or many genomes, proteins, or proteins and HMMs as a QUERY and searches them against an existing FastAAI database TARGET using SQL.
If you only have a few genomes - or not enough RAM to hold the entire target database in memory - this is probably the best option for you.
To provide files, supply either a directory containing only one type of file (e.g. only genomes in FASTA format), a file containing paths to files of a type, 1 per line,
or a comma-separated list of files of a single type (no spaces)
If you provide FastAAI with genomes or only proteins (not proteins and HMMs), this FastAAI module will produce the required protein and HMM files as needed
and place them in the output directory, just like it does while building a database.
Once these inputs are ready to be queried against the database (each has both a protein and HMM file), they will be processed independently, 1 per thread at a time.
Note: Protein and HMM files generated during this query can be supplied to build a FastAAI database from proteins and HMMs using the build_db module, without redoing preprocessing.
''')
parser.add_argument('-g', '--genomes', dest = 'genomes', default = None, help = 'Genomes in FASTA format.')
parser.add_argument('-p', '--proteins', dest = 'proteins', default = None, help = 'Protein amino acids in FASTA format.')
parser.add_argument('-m', '--hmms', dest = 'hmms', default = None, help = 'HMM search files produced by FastAAI on a set of proteins.')
parser.add_argument('--target', dest = 'target', default = None, help = 'A path to the FastAAI database you wish to use as the target')
parser.add_argument('-o', '--output', dest = 'output', default = "FastAAI", help = 'The directory where FastAAI will place the result of this query and any protein or HMM files it has to generate. By default, a directory named "FastAAI" will be created in the current working directory and results will be placed there.')
parser.add_argument('--output_style', dest = "style", default = 'tsv', help = "Either 'tsv' or 'matrix'. Matrix produces a simplified output of only AAI estimates.")
parser.add_argument('--do_stdev', dest = "do_stdev", action='store_true', help = 'Off by default. Calculate std. deviations on Jaccard indices. Increases memory usage and runtime slightly. Does NOT change estimated AAI values at all.')
parser.add_argument('--threads', dest = 'threads', type=int, default = 1, help = 'The number of processors to use. Default 1.')
parser.add_argument('--verbose', dest = 'verbose', action='store_true', help = 'Print minor updates to console. Major updates are printed regardless.')
parser.add_argument('--in_memory', dest = "in_mem", action = 'store_true', help = 'Load the target database into memory before querying. Consumes more RAM, but is faster and reduces file I/O substantially.')
parser.add_argument('--create_query_db', dest = "make_db", action = 'store_true', help = 'Create a query database from the genomes.')
parser.add_argument('--query_db_name', dest = "qdb_name", default = "Query_FastAAI_db.db", help = 'Name the query database. This file must not already exist.')
parser.add_argument('--compress', dest = "do_comp", action = 'store_true', help = 'Gzip compress generated proteins, HMMs. Off by default.')
args, unknown = parser.parse_known_args()
return parser, args
def sql_query_thread_starter(kmer_cts, protein_cts):
global target_kmer_cts
global target_protein_counts
target_kmer_cts = kmer_cts
target_protein_counts = protein_cts
#took a function from fastaai 2.0
class fastaai_file_importer:
def __init__(self, genomes = None, proteins = None, hmms = None, crystals = None,
output = "FastAAI", compress = False, crystalize = False):
#genomes, prots, hmms can be supplied as either directory, a file with paths 1/line, or comma-sep paths. Type is determined automatically.
self.genomes = genomes
self.proteins = proteins
self.hmms = hmms
self.crystals = crystals
self.genome_list = None
self.protein_list = None
self.hmm_list = None
self.crystal_list = None
self.crystalize = crystalize
#file base names.
self.identifiers = None
self.error = False
self.in_files = None
self.status = "genome"
self.output = output
self.do_comp = compress
def retrieve_files(self, arg):
done = False
files = []
names = []
#Case where a directory is supplied.
if os.path.isdir(arg):
for file in sorted(os.listdir(arg)):
#Retrieve file name
if file.endswith(".gz"):
name = os.path.splitext(os.path.basename(file[:-3]))[0]
else:
name = os.path.splitext(os.path.basename(file))[0]
names.append(name)
files.append(os.path.abspath(os.path.normpath(arg + '/' +file)))
done = True
#Case where a file containing paths is supplied.
if os.path.isfile(arg):
handle = agnostic_reader(arg)
for line in handle:
file = line.strip()
if os.path.exists(file):
if file.endswith(".gz"):
name = os.path.splitext(os.path.basename(file[:-3]))[0]
else:
name = os.path.splitext(os.path.basename(file))[0]
names.append(name)
files.append(os.path.abspath(os.path.normpath(file)))
handle.close()
done = True
if len(names) == 0 and len(files) == 0:
#Try interpreting the file as a singular path.
done = False
#Last check.
if not done:
for file in arg.split(","):
if os.path.exists(file):
if file.endswith(".gz"):
name = os.path.splitext(os.path.basename(file[:-3]))[0]
else:
name = os.path.splitext(os.path.basename(file))[0]
names.append(name)
files.append(os.path.abspath(os.path.normpath(file)))
return files, names
#Check if g/p/h
def determine_inputs(self):
if self.genomes is not None:
self.genome_list, self.identifiers = self.retrieve_files(self.genomes)
if self.proteins is not None or self.hmms is not None:
print("You can supply genomes or proteins or proteins and HMMs, but not genomes and anything else.")
self.error = True
#Proteins, but no HMMs
if self.proteins is not None and self.hmms is None:
self.protein_list, self.identifiers = self.retrieve_files(self.proteins)
if self.proteins is not None and self.hmms is not None:
self.protein_list, prot_names = self.retrieve_files(self.proteins)
self.hmm_list, hmm_names = self.retrieve_files(self.hmms)
if len(self.protein_list) != len(self.hmm_list):
print("Different number of proteins and HMMs supplied. You must supply the same number of each, and they must be matched pairs.")
self.error = True
else:
all_same = True
for p, h in zip(prot_names, hmm_names):
if p != h:
all_same = False
if all_same:
self.identifiers = prot_names
prot_names = None
hmm_names = None
else:
self.error = True
if self.crystals is not None:
self.crystal_list, self.identifiers = self.retrieve_files(self.crystals)
#The crystal naming scheme includes an identifier at the end. This removes it.
self.identifiers = [id[:-13] for id in self.identifiers]
if not self.error:
self.prep_input_files()
def prep_input_files(self):
self.in_files = []
if self.genome_list is not None:
self.status = "genome"
for g in self.genome_list:
f = input_file(g, output = self.output, do_compress = self.do_comp, make_crystal = self.crystalize)
f.set_genome(g)
self.in_files.append(f)
if self.protein_list is not None:
self.status = "protein"
for p in self.protein_list:
f = input_file(p, output = self.output, do_compress = self.do_comp, make_crystal = self.crystalize)
f.set_protein(p)
self.in_files.append(f)
if self.hmm_list is not None:
self.status = "protein+HMM"
for h, f in zip(self.hmm_list, self.in_files):
f.set_hmm(h)
def sql_query(genomes, proteins, hmms, db_name, output, threads, verbose, do_stdev, style, in_mem, make_db, qdb_name, do_comp):
if not os.path.exists(db_name):
print("")
print("FastAAI can't find your database:", db_name)
print("Are you sure that the path you've given to the database is correct and that the database exists?")
print("FastAAI exiting.")
print("")
sys.exit()
#importer opts
#genomes = None, proteins = None, hmms = None, crystals = None
imported_files = fastaai_file_importer(genomes = genomes, proteins = proteins, hmms = hmms, output = output)
imported_files.determine_inputs()
if imported_files.error:
print("Exiting FastAAI due to input file error.")
quit()
good_to_go = prepare_directories(output, imported_files.status, "query")
if not good_to_go:
print("Exiting FastAAI")
sys.exit()
print("")
'''
self, in_memory = False, input_file_objects = None,
target = None, threads = 1, do_sd = False, output_base = "FastAAI", output_style = "tsv",
build_db_from_queries = True, qdb_name = "Query_FastAAI_database.db", hmm_path = "00.Libraries/01.SCG_HMMs/Complete_SCG_DB.hmm",
do_comp = True, verbose = True
'''
hmm_path = find_hmm()
mdb = file_vs_db_query(in_memory = in_mem, input_file_objects = imported_files.in_files, target=db_name,
threads = threads, output_base = output, do_sd = do_stdev, output_style = style, do_comp = do_comp,
build_db_from_queries = make_db, qdb_name = qdb_name, verbose = verbose, hmm_path = hmm_path)
mdb.run()
#Here's where the querying db comes in
print("FastAAI query complete! Results at:", os.path.normpath(output + "/results"))
return None
#Manages the query process.
def db_query_opts():
parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter,
description='''
This FastAAI module takes two FastAAI databases and searches all of the genomes in the QUERY against all of the genomes in the TARGET
If you have many genomes (more than 1000), it will be faster to create the query database using FastAAI build_db,
then search it against an existing target using this module than it is to do the same thing with an SQL query.
If you give the same database as query and target, a special all vs. all search of the genomes in the database will be done.
''')
parser.add_argument('-q', '--query', dest = 'query', default = None, help = 'Path to the query database. The genomes FROM the query will be searched against the genomes in the target database')
parser.add_argument('-t', '--target', dest = 'target', default = None, help = 'Path to the target database.')
parser.add_argument('-o', '--output', dest = 'output', default = "FastAAI", help = 'The directory where FastAAI will place the result of this query. By default, a directory named "FastAAI" will be created in the current working directory and results will be placed there.')
parser.add_argument('--output_style', dest = "style", default = 'tsv', help = "Either 'tsv' or 'matrix'. Matrix produces a simplified output of only AAI estimates.")
parser.add_argument('--do_stdev', dest = "do_stdev", action='store_true', help = 'Off by default. Calculate std. deviations on Jaccard indices. Increases memory usage and runtime slightly. Does NOT change estimated AAI values at all.')
parser.add_argument('--threads', dest = 'threads', type=int, default = 1, help = 'The number of processors to use. Default 1.')
parser.add_argument('--verbose', dest = 'verbose', action='store_true', help = 'Print minor updates to console. Major updates are printed regardless.')
parser.add_argument('--in_memory', dest = "in_mem", action = 'store_true', help = 'Load both databases into memory before querying. Consumes more RAM, but is faster and reduces file I/O substantially. Consider reducing the number of threads.')
parser.add_argument('--store_results', dest = "storage", action = 'store_true', help = 'Keep partial results in memory. Only works with --in_memory. Fewer writes, but more RAM. Default off.')
args, unknown = parser.parse_known_args()
return parser, args
#db-db query; in-mem
def parse_db_init(query, target, outpath):
global qdb
qdb = query
global tdb
tdb = target
global output_path
output_path = outpath
return qdb, tdb, output_path
def parse_accession(acc):
tmp = sqlite3.connect(":memory:")
curs = tmp.cursor()
curs.execute("attach '" + qdb + "' as queries")
curs.execute("attach '" + tdb + "' as targets")
sql = '''
SELECT queries.{acc}.genomes, targets.{acc}.genomes
FROM queries.{acc} INNER JOIN targets.{acc}
ON queries.{acc}.kmer=targets.{acc}.kmer
'''.format(acc = acc)
res = curs.execute(sql).fetchall()
curs.execute("detach queries")
curs.execute("detach targets")
curs.close()
tmp.close()
tl = []
ql = {}
acc_id = generate_accessions_index()
acc_id = acc_id[acc]
indexer = 0
for r in res:
queries = np.frombuffer(r[0], dtype = np.int32)
tgt = np.frombuffer(r[1], dtype = np.int32)
tl.append(tgt)
for q in queries:
if q not in ql:
ql[q] = {}
if acc_id not in ql[q]:
ql[q][acc_id] = []
ql[q][acc_id].append(indexer)
indexer += 1
tl = np.array(tl, dtype = object)
for q in ql:
if acc_id in ql[q]:
ql[q][acc_id] = np.array(ql[q][acc_id], dtype=np.int32)
out_file = os.path.normpath(output_path+"/"+acc+".pickle")
with open(out_file, "wb") as out:
pickle.dump([ql, tl], out)
return([acc, out_file])
#all of this is exclusive to the in-mem approach for db db query
def one_init(ql, tl, num_tgt, qgak_queue, tgak, tpres, sd, sty, output_dir, store_results, progress_queue, qnames, tnames, temp_dir):
global _ql
_ql = ql
global _tl
_tl = tl
global _nt
_nt = num_tgt
qgak_data = qgak_queue.get()
global out_base
out_base = output_dir
global group_id
group_id = os.path.normpath(temp_dir + "/partial_results_group_" + str(qgak_data[0])+ ".txt")
global _qgak
_qgak = qgak_data[1]
global query_grouping
query_grouping = qgak_data[2]
qgak_data = None
global _tgak
_tgak = tgak
global _tpres
_tpres = tpres
global _tct
_tct = np.sum(_tpres, axis = 0)
global do_sd
do_sd = sd
global style
style = sty
#Suppress div by zero warning - it's handled.
np.seterr(divide='ignore')
global store
store = store_results
if store:
global holder
holder = []
else:
global outwriter
outwriter = open(group_id, "w")
global prog_queue
prog_queue = progress_queue
global _qnames
_qnames = qnames
global _tnames
_tnames = tnames
def one_work(placeholder):
for q in query_grouping:
results = []
#We also need to count accessions present in the query genome that are not part of the inner join.
for acc in _qgak[q][0]:
if acc in _ql[q]:
#the bincount is intersections.
these_intersections = np.bincount(np.concatenate(_tl[acc][_ql[q][acc]]), minlength = _nt)
else:
#there are no intersections even though this accession is shared with at least one target
#number of intersects is all zeros
these_intersections = np.zeros(_nt, dtype = np.int32)
#Append the counts or zeros, either way.
results.append(these_intersections)
results = np.vstack(results)
target_kmer_counts = _tgak[_qgak[q][0], :]
#unions = size(A) + size(B) - size(intersections(A, B))
#unions = target_kmer_counts + query_kmers_by_acc - intersections
unions = np.subtract(np.add(target_kmer_counts, _qgak[q][1][:, None]), results)
#These are now jaccards, not #intersections
results = np.divide(results, unions)
shared_acc_counts = np.sum(_tpres[_qgak[q][0], :], axis = 0)
no_hit = np.where(shared_acc_counts == 0)
jaccard_averages = np.divide(np.sum(results, axis = 0), shared_acc_counts)
#Skip SD if output is matrix
if style == "tsv":
aai_ests = numpy_kaai_to_aai(jaccard_averages)
if do_sd:
#find diffs from means; this includes indices corresponding to unshared SCPs that should not be included.
results = results - jaccard_averages
#fix those corresponding indices to not contribute to the final SD.
results[np.logical_not(_tpres[_qgak[q][0], :])] = 0
#results[np.nonzero(has_accs == 0)] = 0
#Square them; 0^2 = 0, so we don't have to think about the fixed indices any more.
results = np.square(results)
#Sum squares and divide by shared acc. count, the sqrt to get SD.
jaccard_SDs = np.sqrt(np.divide(np.sum(results, axis = 0), shared_acc_counts))
jaccard_SDs = np.round(jaccard_SDs, 4).astype(str)
no_hit = np.where(shared_acc_counts == 0)
#_qgak[q][0].shape[0] is the query acc count
possible_hits = np.minimum(_qgak[q][0].shape[0], _tct).astype(str)
jaccard_averages = np.round(jaccard_averages, 4).astype(str)
shared_acc_counts = shared_acc_counts.astype(str)
jaccard_averages[no_hit] = "N/A"
aai_ests[no_hit] = "N/A"
shared_acc_counts[no_hit] = "N/A"
possible_hits[no_hit] = "N/A"
qname = _qnames[q]
output_name = os.path.normpath(out_base + "/results/"+qname+"_results.txt")
out = open(output_name, "w")
out.write("query\ttarget\tavg_jacc_sim\tjacc_SD\tnum_shared_SCPs\tposs_shared_SCPs\tAAI_estimate\n")
if do_sd:
jaccard_SDs[no_hit] = "N/A"
for i in range(0, len(aai_ests)):
out.write(qname+"\t"+_tnames[i]+"\t"+jaccard_averages[i]+"\t"+jaccard_SDs[i]+"\t"+shared_acc_counts[i]+"\t"+possible_hits[i]+"\t"+aai_ests[i]+"\n")
else:
for i in range(0, len(aai_ests)):
out.write(qname+"\t"+_tnames[i]+"\t"+jaccard_averages[i]+"\t"+"N/A"+"\t"+shared_acc_counts[i]+"\t"+possible_hits[i]+"\t"+aai_ests[i]+"\n")
out.close()
else:
if store:
aai_ests = numpy_kaai_to_aai_just_nums(jaccard_averages, as_float = False)
aai_ests[no_hit] = 0
#add zeros at misses/NAs
holder.append(aai_ests)
else:
aai_ests = numpy_kaai_to_aai_just_nums(jaccard_averages, as_float = True)
aai_ests[no_hit] = 0
print(*aai_ests, sep = "\t", file = outwriter)
prog_queue.put(q)
prog_queue.put("done")
return None
def two_work(i):
if store:
hold_together = np.vstack(holder)
np.savetxt(group_id, hold_together, delimiter = "\t", fmt='%4d')
else:
outwriter.close()
return group_id
def on_disk_init(query_database_path, target_database_path, num_tgt, query_queue, target_gak, tpres, sd, sty, output_dir, progress_queue, qnames, tnames, valids, temp_dir):
global database
database = sqlite3.connect(":memory:")
curs = database.cursor()
curs.execute("attach '" + query_database_path + "' as queries")
curs.execute("attach '" + target_database_path + "' as targets")
curs.close()
global _nt
_nt = num_tgt
qgak_data = query_queue.get()
global out_base
out_base = output_dir
global group_id
group_id = os.path.normpath(temp_dir + "/partial_results_group_" + str(qgak_data[0])+ ".txt")
global _qgak
_qgak = qgak_data[1]
global query_grouping
query_grouping = qgak_data[2]
global _tgak
_tgak = target_gak
global _tpres
_tpres = tpres
global _tct
_tct = np.sum(_tpres, axis = 0)
global do_sd
do_sd = sd
global style
style = sty
#Suppress div by zero warning - it's handled.
np.seterr(divide='ignore')
if style == "matrix":
global outwriter
outwriter = open(group_id, "w")
global prog_queue
prog_queue = progress_queue
global _qnames
_qnames = qnames
global _tnames
_tnames = tnames
global acc_indexer
acc_indexer = generate_accessions_index(forward = False)
global _valids
_valids = valids
def on_disk_work_one(placeholder):
curs = database.cursor()
for q in query_grouping:
results = []
qname = _qnames[q]
for acc in _qgak[q][0]:
acc_name = acc_indexer[acc]
if acc_name in _valids:
one = curs.execute("SELECT kmers FROM queries."+acc_name+"_genomes WHERE genome=?", (str(q),)).fetchone()[0]
one = np.frombuffer(one, dtype = np.int32)
#SQLite's classic host-parameter limit is 999, so large kmer sets go through a temp-table join instead of an IN clause.
if one.shape[0] > 998:
#Each kmer needs to be a tuple.
these_kmers = [(int(kmer),) for kmer in one]
temp_name = "_" + qname +"_" + acc_name
temp_name = temp_name.replace(".", "_")
curs.execute("CREATE TEMP TABLE " + temp_name + " (kmer INTEGER)")
insert_table = "INSERT INTO " + temp_name + " VALUES (?)"
curs.executemany(insert_table, these_kmers)
join_and_select_sql = "SELECT genomes FROM " + temp_name + " INNER JOIN targets." + acc_name + " ON "+ temp_name+".kmer = targets." + acc_name + ".kmer;"
matches = curs.execute(join_and_select_sql).fetchall()
else:
#kmers must be a list, not a tuple.
these_kmers = [int(kmer) for kmer in one]
select = "SELECT genomes FROM targets." + acc_name + " WHERE kmer IN ({kmers})".format(kmers=','.join(['?']*len(these_kmers)))
matches = curs.execute(select, these_kmers).fetchall()
#Collect the matched genome-ID blobs; named to avoid shadowing the builtin set().
genome_blobs = []
for row in matches:
genome_blobs.append(row[0])
genome_blobs = b''.join(genome_blobs)
matches = None
these_intersections = np.bincount(np.frombuffer(genome_blobs, dtype = np.int32), minlength = _nt)
genome_blobs = None
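#Illustration of the bincount step above: if the concatenated blob decodes to
#genome IDs [0, 2, 2, 5], then np.bincount(..., minlength = _nt) yields
#[1, 0, 2, 0, 0, 1, 0, ...] of length _nt - the per-target-genome intersection counts.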
results.append(these_intersections)
else:
results.append(np.zeros(_nt, dtype=np.int32))
results = np.vstack(results)
target_kmer_counts = _tgak[_qgak[q][0], :]
#unions = size(A) + size(B) - size(intersections(A, B))
#unions = target_kmer_counts + query_kmers_by_acc - intersections
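#Toy numbers for the inclusion-exclusion step below: if a query SCP has 10 kmers,
#a target SCP has 8, and they share 4, then union = 10 + 8 - 4 = 14 and
#Jaccard = 4/14 ~= 0.2857. The add/subtract below does this for all targets at once.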
unions = np.subtract(np.add(target_kmer_counts, _qgak[q][1][:, None]), results)
#These are now jaccards, not #intersections
results = np.divide(results, unions)
shared_acc_counts = np.sum(_tpres[_qgak[q][0], :], axis = 0)
no_hit = np.where(shared_acc_counts == 0)
jaccard_averages = np.divide(np.sum(results, axis = 0), shared_acc_counts)
#Skip SD if output is matrix
if style == "tsv":
aai_ests = numpy_kaai_to_aai(jaccard_averages)
if do_sd:
#find diffs from means; this includes indices corresponding to unshared SCPs that should not be included.
results = results - jaccard_averages
#fix those corresponding indices so they do not contribute to the final SD.
results[np.logical_not(_tpres[_qgak[q][0], :])] = 0
#results[np.nonzero(has_accs == 0)] = 0
#Square them; 0^2 = 0, so we don't have to think about the fixed indices any more.
results = np.square(results)
#Sum squares, divide by shared acc. count, then sqrt to get SD.
jaccard_SDs = np.sqrt(np.divide(np.sum(results, axis = 0), shared_acc_counts))
jaccard_SDs = np.round(jaccard_SDs, 4).astype(str)
no_hit = np.where(shared_acc_counts == 0)
#_qgak[q][0] holds this query's accession IDs; its length is the query's accession count.
possible_hits = np.minimum(_qgak[q][0].shape[0], _tct).astype(str)
jaccard_averages = np.round(jaccard_averages, 4).astype(str)
shared_acc_counts = shared_acc_counts.astype(str)
jaccard_averages[no_hit] = "N/A"
aai_ests[no_hit] = "N/A"
shared_acc_counts[no_hit] = "N/A"
possible_hits[no_hit] = "N/A"
output_name = os.path.normpath(out_base + "/results/"+qname+"_results.txt")
out = open(output_name, "w")
out.write("query\ttarget\tavg_jacc_sim\tjacc_SD\tnum_shared_SCPs\tposs_shared_SCPs\tAAI_estimate\n")
if do_sd:
jaccard_SDs[no_hit] = "N/A"
for i in range(0, len(aai_ests)):
out.write(qname+"\t"+_tnames[i]+"\t"+jaccard_averages[i]+"\t"+jaccard_SDs[i]+"\t"+shared_acc_counts[i]+"\t"+possible_hits[i]+"\t"+aai_ests[i]+"\n")
else:
for i in range(0, len(aai_ests)):
out.write(qname+"\t"+_tnames[i]+"\t"+jaccard_averages[i]+"\t"+"N/A"+"\t"+shared_acc_counts[i]+"\t"+possible_hits[i]+"\t"+aai_ests[i]+"\n")
out.close()
else:
aai_ests = numpy_kaai_to_aai_just_nums(jaccard_averages, as_float = True)
aai_ests[no_hit] = 0
print(*aai_ests, sep = "\t", file = outwriter)
prog_queue.put(q)
curs.close()
prog_queue.put("done")
def on_disk_work_two(i):
outwriter.close()
return group_id
def sorted_nicely(l):
convert = lambda text: int(text) if text.isdigit() else text
alphanum_key = lambda key: [ convert(c) for c in re.split('([0-9]+)', key) ]
return sorted(l, key = alphanum_key)
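#Example: plain sorted() gives ["g1", "g10", "g2"], but
#sorted_nicely(["g10", "g1", "g2"]) returns ["g1", "g2", "g10"],
#since numeric runs are compared as integers rather than character by character.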
class db_db_remake:
def __init__(self, in_memory = False, store_mat_res = False,
query = None, target = None, threads = 1, do_sd = False,
output_base = "FastAAI", output_style = "tsv", verbose = True):
#databases to eat
self.q = query
self.t = target
#metadata
self.ok = generate_accessions_index(forward = True)
self.rev = generate_accessions_index(forward = False)
self.valids = None
#Originally written as an in-memory-only block of code, but a single if/else change makes it work on disk too, so no redevelopment is needed.
self.as_mem_db = in_memory
self.store_mat = store_mat_res
#in-mem stuff
self.conn = None
self.curs = None
self.threads = threads
self.do_sd = do_sd
self.output_base = output_base
self.output = os.path.normpath(output_base + "/results")
self.style = output_style
self.query_names = None
self.target_names = None
self.num_queries = None
self.num_targets = None
self.query_gak = None
self.target_gak = None
self.target_presence = None
self.query_dict = None
self.target_dict = None
self.verbose = verbose
#getting the db metadata happens the same way in every case
def open(self):
if self.verbose:
print("Perusing database metadata")
self.conn = sqlite3.connect(":memory:")
self.curs = self.conn.cursor()
self.curs.execute("attach '" + self.q + "' as queries")
self.curs.execute("attach '" + self.t + "' as targets")
#Find the shared accessions for these databases
shared_accs_sql = '''
SELECT queries.sqlite_master.name
FROM queries.sqlite_master INNER JOIN targets.sqlite_master
ON queries.sqlite_master.name = targets.sqlite_master.name
'''
self.valids = {}
for table in self.curs.execute(shared_accs_sql).fetchall():
table = table[0]
#Keep only shared tables that correspond to known SCP accessions.
if table in self.ok:
self.valids[table] = self.ok[table]
self.query_names = []
for r in self.curs.execute("SELECT genome FROM queries.genome_index ORDER BY gen_id").fetchall():
self.query_names.append(r[0])
self.target_names = []
for r in self.curs.execute("SELECT genome FROM targets.genome_index ORDER BY gen_id").fetchall():
self.target_names.append(r[0])
self.num_queries = len(self.query_names)
self.num_targets = len(self.target_names)
gak_sql = '''
SELECT * FROM {db}.genome_acc_kmer_counts
WHERE accession in ({accs})
ORDER BY genome
'''
acc_ids = list(self.valids.values())
acc_ids.sort()
acc_ids = tuple(acc_ids)
#query genome-acc-kmers (gak) is ordered by genome first, then accession
self.query_gak = {}
#for result in self.curs.execute(gak_sql.format(db = "queries", accs=','.join(['?']*len(self.valids))), acc_ids).fetchall():
for result in self.curs.execute("SELECT * FROM queries.genome_acc_kmer_counts ORDER BY genome").fetchall():
genome, accession, kmer_ct = result[0], result[1], result[2]
if genome not in self.query_gak:
self.query_gak[genome] = [[],[]]
self.query_gak[genome][0].append(accession)
self.query_gak[genome][1].append(kmer_ct)
#refigure into numpy arrays for quicker array access later.
for genome in self.query_gak:
self.query_gak[genome] = (np.array(self.query_gak[genome][0], dtype = np.int32), np.array(self.query_gak[genome][1], dtype = np.int32))
#Split these into ordered groups - this makes joining results at the end easier.
qgak_queue = multiprocessing.Queue()
groupings = split_seq_indices(np.arange(self.num_queries), self.threads)
group_id = 0
for group in groupings:
next_set = {}
for i in range(group[0], group[1]):
next_set[i] = self.query_gak[i]
self.query_gak[i] = None
#this ensures that the selection of qgak and the query index range match
qgak_queue.put((group_id, next_set, np.arange(group[0], group[1]),))
group_id += 1
self.query_gak = qgak_queue
qgak_queue = None
#tgt gak is organized by accession first, then genome
#122 is the total number of SCP accessions FastAAI tracks
self.target_gak = np.zeros(shape = (122, self.num_targets), dtype = np.int32)
for result in self.curs.execute(gak_sql.format(db = "targets", accs=','.join(['?']*len(self.valids))), acc_ids).fetchall():
genome, accession, kmer_ct = result[0], result[1], result[2]
self.target_gak[accession, genome] += kmer_ct
self.target_presence = self.target_gak > 0
self.target_presence = self.target_presence.astype(bool)
#This needs to have a TSV write method
def load_in_mem(self):
#tempdir_path = os.path.normpath(self.output_base+"/temp")
tempdir_path = tempfile.mkdtemp()
#if not os.path.exists(tempdir_path):
# os.mkdir(tempdir_path)
ql = {}
tl = {}
for t in self.valids.values():
tl[t] = None
for i in range(0, self.num_queries):
ql[i] = {}
if self.verbose:
tracker = progress_tracker(total = len(self.valids), message = "Loading data in memory.")
else:
print("\nLoading data in memory.")
pool = multiprocessing.Pool(self.threads, initializer = parse_db_init,
initargs = (self.q, #query
self.t, #target
tempdir_path,)) #outpath
for result in pool.imap_unordered(parse_accession, self.valids.keys()):
this_accession = result[0]
this_acc_id = self.ok[this_accession]
with open(result[1], "rb") as inp:
this_acc_data = pickle.load(inp)
os.remove(result[1])
tl[this_acc_id] = this_acc_data[1]
for q in this_acc_data[0]:
#We know that this acc must be in every ql for this loaded data.
ql[q][this_acc_id] = this_acc_data[0][q][this_acc_id]
if self.verbose:
tracker.update()
pool.close()
if self.verbose:
tracker = progress_tracker(total = self.num_queries, message = "Calculating AAI")
else:
print("\nCalculating AAI.")
query_groups = []
for grouping in split_seq_indices(np.arange(self.num_queries), self.threads):
query_groups.append(np.arange(grouping[0], grouping[1]))
result_queue = multiprocessing.Queue()
remaining_procs = self.threads
still_going = True
pool = multiprocessing.Pool(self.threads, initializer = one_init,
initargs = (ql, #ql
tl, #tl
self.num_targets, #num_tgt
self.query_gak, #qgak_queue
self.target_gak, #tgak
self.target_presence, #tpres
self.do_sd, #sd
self.style, #sty
self.output_base, #output_dir
self.store_mat, #store_results
result_queue, #progress_queue
self.query_names, #qnames
self.target_names, #tnames
tempdir_path,)) #temp_dir
some_results = pool.imap(one_work, query_groups)
while still_going:
item = result_queue.get()
if item == "done":
remaining_procs -= 1
if remaining_procs == 0:
still_going = False
else:
if self.verbose:
tracker.update()
else:
pass
if self.style == "matrix":
result_files = []
for result in pool.map(two_work, range(0, self.threads)):
result_files.append(result)
pool.close()
self.write_mat_from_files(result_files, tempdir_path)
else:
pool.close()
#This needs to be implemented from existing code.
def db_on_disk(self):
tempdir_path = tempfile.mkdtemp()
if self.style == "matrix":
self.store_mat = False
result_queue = multiprocessing.Queue()
remaining_procs = self.threads
still_going = True
if self.verbose:
tracker = progress_tracker(total = self.num_queries, message = "Calculating AAI")
else:
print("\nCalculating AAI")
query_groups = []
for grouping in split_seq_indices(np.arange(self.num_queries), self.threads):
query_groups.append(np.arange(grouping[0], grouping[1]))
#query_database_path, target_database_path, num_tgt, query_queue, target_gak, tpres, sd,
#sty, output_dir, progress_queue, qnames, tnames, valids, temp_dir
pool = multiprocessing.Pool(self.threads, initializer = on_disk_init,
initargs = (self.q, #query_database_path
self.t, #target_database_path
self.num_targets, #num_tgt
self.query_gak, #query_queue
self.target_gak, #target_gak
self.target_presence, #tpres
self.do_sd, #sd
self.style, #sty
self.output_base, #output_dir
result_queue, #progress_queue
self.query_names, #qnames
self.target_names, #tnames
self.valids, #valids
tempdir_path,)) #temp_dir
some_results = pool.imap(on_disk_work_one, query_groups)
while still_going:
item = result_queue.get()
if item == "done":
remaining_procs -= 1
if remaining_procs == 0:
still_going = False
else:
if self.verbose:
tracker.update()
else:
pass
if self.style == "matrix":
result_files = []
for result in pool.map(on_disk_work_two, range(0, self.threads)):
result_files.append(result)
pool.close()
if self.style == "matrix":
self.write_mat_from_files(result_files, tempdir_path)
def write_mat_from_files(self, result_files, tempdir_path):
#tempdir_path = os.path.normpath(self.output_base+"/temp")
result_files = sorted_nicely(result_files)
#print("Combining:")
#for f in result_files:
# print(f)
if self.verbose:
tracker = progress_tracker(total = self.threads, step_size = 2, message = "Finalizing results.")
else:
print("\nFinalizing results.")
output_file = os.path.normpath(self.output+"/FastAAI_matrix.txt")
final_outwriter = open(output_file, "w")
print("query_genome\t"+'\t'.join(self.target_names), file = final_outwriter)
row = 0
for f in result_files:
fh = open(f, "r")
cur = fh.readlines()
fh.close()
for i in range(0, len(cur)):
if self.store_mat:
#Add the decimals - we don't need to do this if we've been writing line-wise.
#Values are always exactly 4 digits in this method, so splitting into two 2-digit groups works.
cur[i] = re.sub(r"(\d{2})(\d{2})", r"\1.\2", cur[i])
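#For instance, a stored row "7512\t8025" becomes "75.12\t80.25" after the substitution.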
#Add in the query name to the row
cur[i] = self.query_names[row]+"\t"+cur[i]
row += 1
final_outwriter.write(''.join(cur))
cur = None
try:
os.remove(f)
except:
pass
if self.verbose:
tracker.update()
final_outwriter.close()
try:
if len(os.listdir(tempdir_path)) == 0:
shutil.rmtree(tempdir_path)
except:
pass
def close(self):
self.curs.close()
self.curs = None
def clean_up(self):
self.conn.close()
self.conn = None
def run(self):
self.open()
#work
if self.as_mem_db:
self.load_in_mem()
else:
self.db_on_disk()
self.close()
self.clean_up()
#Control the query process for any DB-first query.
def db_query(query, target, verbose, output, threads, do_stdev, style, in_mem, store_results):
print("")
#Sanity checks.
if target is None:
print("You need to supply a database for --target")
sys.exit()
if query is None:
print("You need to supply a database for --query")
sys.exit()
if not os.path.exists(target):
print("Target database not found. Exiting FastAAI")
sys.exit()
if not os.path.exists(query):
print("Query database not found. Exiting FastAAI")
sys.exit()
#status = "exists"
query_ok = assess_db(query)
target_ok = assess_db(target)
if query_ok != "exists":
print("Query database improperly formatted. Exiting FastAAI")
sys.exit()
if target_ok != "exists":
print("Query database improperly formatted. Exiting FastAAI")
sys.exit()
#Check if the database is querying against itself.
if query == target:
print("Performing an all vs. all query on", query)
#all_vs_all = True
else:
print("Querying", query, "against", target)
#all_vs_all = False
#Ready the output directories as needed.
#The databases are already created; the only state they can be in is protein and HMM.
good_to_go = prepare_directories(output, "protein and HMM", "query")
if not good_to_go:
print("Exiting FastAAI")
sys.exit()
#todo
mdb = db_db_remake(in_memory = in_mem, store_mat_res = store_results, query = query, target = target, threads = threads, do_sd = do_stdev, output_base = output, output_style = style, verbose = verbose)
mdb.run()
print("")
#Check to see if the file exists and is a valid fastAAI db
def assess_db(path):
status = None
if os.path.exists(path):
conn = sqlite3.connect(path)
curs = conn.cursor()
try:
sql = "SELECT name FROM sqlite_master WHERE type='table'"
curs.row_factory = lambda cursor, row: row[0]
tables = curs.execute(sql).fetchall()
curs.row_factory = None
curs.close()
conn.close()
if len(tables) > 2 and "genome_index" in tables and "genome_acc_kmer_counts" in tables:
status = "exists"
else:
status = "wrong format"
except:
status = "wrong format"
else:
try:
conn = sqlite3.connect(path)
conn.close()
status = "created"
except:
status = "unable to create"
return status
#Add one FastAAI DB to another FastAAI DB
def merge_db_opts():
parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter,
description='''
This FastAAI module allows you to add the contents of one or more FastAAI databases to another.
You must have at least two already-created FastAAI databases using the build_db module before this module can be used.
Supply a comma-separated list of at least one donor database and a single recipient database.
If the recipient already exists, then genomes in all the donors will be added to the recipient.
If the recipient does not already exist, a new database will be created, and the contents of all the donors will be added to it.
Example:
FastAAI.py merge_db --donors databases/db1.db,databases/db2.db --recipient databases/db3.db --threads 3
This command will create a new database called "db3.db", merge the data in db1.db and db2.db, and then add the merged data into db3.db
Only the recipient database will be modified; the donors will be left exactly as they were before running this module.
''')
parser.add_argument('-d', '--donors', dest = 'donors', default = None, help = 'Comma-separated string of paths to one or more donor databases. The genomes FROM the donors will be added TO the recipient and the donors will be unaltered')
parser.add_argument('--donor_file', dest = 'donor_file', default = None, help = 'File containing paths to one or more donor databases, one per line. Use EITHER this or --donors')
parser.add_argument('-r', '--recipient', dest = 'recipient', default = None, help = 'Path to the recipient database. Any genomes FROM the donor database not already in the recipient will be added to this database.')
parser.add_argument('--verbose', dest = 'verbose', action='store_true', help = 'Print minor updates to console. Major updates are printed regardless.')
parser.add_argument('--threads', dest = 'threads', type=int, default = 1, help = 'The number of processors to use. Default 1.')
args, unknown = parser.parse_known_args()
return parser, args
def merge_db_init(indexer, table_record, donor_dbs, tempdir):
global mgi
mgi = indexer
global accs_per_db
accs_per_db = table_record
global tdb_list
tdb_list = donor_dbs
global work_space
work_space = tempdir
def acc_transformer_merge(acc_name_genomes):
acc_name = acc_name_genomes.split("_genomes")[0]
my_acc_db = os.path.normpath(work_space + "/"+acc_name+".db")
if os.path.exists(my_acc_db):
os.remove(my_acc_db)
my_db = sqlite3.connect(my_acc_db)
curs = my_db.cursor()
curs.execute("CREATE TABLE {acc} (kmer INTEGER PRIMARY KEY, genomes array)".format(acc=acc_name))
curs.execute("CREATE TABLE {acc} (genome INTEGER PRIMARY KEY, kmers array)".format(acc=acc_name_genomes))
my_db.commit()
reformat = {}
for d in tdb_list:
simple_rows = []
#do nothing if the acc is not in the donor.
if acc_name_genomes in accs_per_db[d]:
donor_conn = sqlite3.connect(d)
dcurs = donor_conn.cursor()
data = dcurs.execute("SELECT * FROM {acc}".format(acc=acc_name_genomes)).fetchall()
dcurs.close()
donor_conn.close()
for row in data:
genome, kmers = row[0], row[1]
new_index = mgi[d][genome]
#-1 is the value indicating an already-seen genome that should not be added.
if new_index > -1:
simple_rows.append((new_index, kmers,))
kmers = np.frombuffer(kmers, dtype=np.int32)
for k in kmers:
if k not in reformat:
reformat[k] = []
reformat[k].append(new_index)
if len(simple_rows) > 0:
curs.executemany("INSERT INTO {acc} VALUES (?,?)".format(acc=acc_name_genomes), simple_rows)
my_db.commit()
simple_rows = None
data = None
to_add = []
for k in reformat:
as_bytes = np.array(reformat[k], dtype = np.int32)
as_bytes = as_bytes.tobytes()
reformat[k] = None
to_add.append((int(k), as_bytes,))
curs.executemany("INSERT INTO {acc} VALUES (?, ?)".format(acc = acc_name), to_add)
my_db.commit()
to_add = None
curs.execute("CREATE INDEX {acc}_index ON {acc} (kmer)".format(acc=acc_name))
my_db.commit()
curs.close()
my_db.close()
return [my_acc_db, acc_name]
def merge_db(recipient, donors, donor_file, verbose, threads):
#Prettier on the CLI
if (donors is None and donor_file is None) or recipient is None:
print("Either donor or target not given. FastAAI is exiting.")
return None
print("")
if donors is not None:
donors = donors.split(",")
if donor_file is not None:
try:
donors = []
fh = agnostic_reader(donor_file)
for line in fh:
line = line.strip()
donors.append(line)
fh.close()
except:
sys.exit("Could not parse your donor file.")
valid_donors = []
for d in donors:
if os.path.exists(d):
if d == recipient:
print("Donor database", d, "is the same as the recipient. This database will be skipped.")
else:
check = assess_db(d)
if check == "exists":
if d not in valid_donors:
valid_donors.append(d)
else:
print("It appears that database", d, "was already added to the list of donors. Did you type it twice in the list of donors? Skipping it.")
else:
if check == "created":
print("Donor database", d, "not found! Skipping.")
else:
print("Something was wrong with supplied database:", d+". A status check found:", check)
else:
print("Donor database", d, "not found! Are you sure the path is correct and this donor exists? This database will be skipped.")
if len(valid_donors) == 0:
print("None of the supplied donor databases were able to be accessed. FastAAI cannot continue if none of these databases are valid. Exiting.")
sys.exit()
recip_check = assess_db(recipient)
if recip_check == "created" or recip_check == "exists":
print("Donor databases:")
for donor in valid_donors:
print("\t", donor)
print("Will be added to recipient database:", recipient)
else:
print("I couldn't find or create the recipient database at", recipient+".", "Does the folder you're trying to place this database in exist, and do you have permission to write files to it? FastAAI exiting.")
sys.exit()
if recipient is None or len(valid_donors) == 0:
print("I require both a valid donor and a recipient database. FastAAI exiting.")
sys.exit()
gen_counter = 0
multi_gen_ids = {}
all_gens = {}
#Load recipient data, if any.
if recip_check == "exists":
conn = sqlite3.connect(recipient)
curs = conn.cursor()
data = curs.execute("SELECT genome, gen_id FROM genome_index").fetchall()
tabs = curs.execute("SELECT name FROM sqlite_master").fetchall()
curs.close()
conn.close()
multi_gen_ids[recipient] = {}
for row in data:
genome, index = row[0], row[1]
all_gens[genome] = 0
multi_gen_ids[recipient][genome] = index
gen_counter = max(list(multi_gen_ids[recipient].values())) + 1
genome_index_to_add = []
gak_to_add = []
tables = {}
#Donors should always exist, never be created.
for d in valid_donors:
#load
conn = sqlite3.connect(d)
curs = conn.cursor()
data = curs.execute("SELECT * FROM genome_index").fetchall()
tabs = curs.execute("SELECT name FROM sqlite_master").fetchall()
gak = curs.execute("SELECT * FROM genome_acc_kmer_counts").fetchall()
curs.close()
conn.close()
multi_gen_ids[d] = {}
for row in data:
genome, index, prot_ct = row[0], row[1], row[2]
if genome not in all_gens:
all_gens[genome] = 0
#Map the donor's old genome index to its new index in the recipient.
multi_gen_ids[d][index] = gen_counter
genome_index_to_add.append((genome, gen_counter, prot_ct,))
gen_counter += 1
else:
#This is a remove condition for later.
multi_gen_ids[d][index] = -1
data = None
for row in gak:
genome_id, acc_id, kmer_ct = row[0], row[1], row[2]
new_index = multi_gen_ids[d][genome_id]
if new_index > -1:
gak_to_add.append((new_index, acc_id, kmer_ct,))
tables[d] = []
for tab in tabs:
tab = tab[0]
if tab.endswith("_genomes"):
tables[d].append(tab)
tables[d] = set(tables[d])
all_tabs = set()
for t in tables:
all_tabs = all_tabs.union(tables[t])
all_tabs = list(all_tabs)
temp_dir = tempfile.mkdtemp()
try:
if verbose:
tracker = progress_tracker(len(all_tabs), message = "Formatting data to add to database")
else:
print("Formatting data to add to database")
conn = sqlite3.connect(recipient)
curs = conn.cursor()
#indexer, table_record, donor_dbs, tempdir
pool = multiprocessing.Pool(threads, initializer=merge_db_init, initargs = (multi_gen_ids, tables, valid_donors, temp_dir,))
for result in pool.imap_unordered(acc_transformer_merge, all_tabs):
db, accession = result[0], result[1]
curs.execute("CREATE TABLE IF NOT EXISTS {acc} (kmer INTEGER PRIMARY KEY, genomes array)".format(acc=accession))
curs.execute("CREATE TABLE IF NOT EXISTS {acc}_genomes (genome INTEGER PRIMARY KEY, kmers array)".format(acc=accession))
curs.execute("CREATE INDEX IF NOT EXISTS {acc}_index ON {acc}(kmer)".format(acc=accession))
conn.commit()
curs.execute("attach '" + db + "' as acc")
conn.commit()
#Get the genomes from worker db.
curs.execute("INSERT INTO {acc}_genomes SELECT * FROM acc.{acc}_genomes".format(acc=accession))
#genomes is selected twice on purpose: once for the INSERT values, once for the UPDATE concatenation.
to_update = curs.execute("SELECT kmer, genomes, genomes FROM acc.{acc}".format(acc=accession)).fetchall()
update_concat_sql = "INSERT INTO {acc} VALUES (?,?) ON CONFLICT(kmer) DO UPDATE SET genomes=genomes || (?)".format(acc=accession)
curs.executemany(update_concat_sql, to_update)
conn.commit()
curs.execute("detach acc")
conn.commit()
os.remove(db)
if verbose:
tracker.update()
pool.close()
pool.join()
curs.execute("CREATE TABLE IF NOT EXISTS genome_index (genome text, gen_id integer, protein_count integer)")
curs.execute("CREATE TABLE IF NOT EXISTS genome_acc_kmer_counts (genome integer, accession integer, count integer)")
curs.executemany("INSERT INTO genome_index VALUES (?,?,?)", genome_index_to_add)
curs.executemany("INSERT INTO genome_acc_kmer_counts VALUES (?,?,?)", gak_to_add)
curs.execute("CREATE INDEX IF NOT EXISTS kmer_acc ON genome_acc_kmer_counts (genome, accession);")
conn.commit()
except Exception as e:
curs.close()
conn.close()
#Error - clean up and exit so we don't falsely report success below.
shutil.rmtree(temp_dir)
if recip_check == "created":
print("Removing created database after failure.")
os.remove(recipient)
sys.exit("Merge failed: " + str(e))
try:
curs.close()
conn.close()
#Success
shutil.rmtree(temp_dir)
except:
pass
print("\nDatabases merged!")
return None
#Query 1 genome vs. 1 target using Carlos' method - just needs query, target, threads
def single_query_opts():
parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter,
description='''
This FastAAI module takes a single query genome, protein, or protein and HMM pair and a single target genome, protein, or protein and HMM pair as inputs and calculates AAI between the two.
If you supply a genome as either query or target, a protein and HMM file will be made for the genome.
If you supply a protein as either query or target, an HMM file will be made for it.
If you supply both an HMM and protein, the search will start right away. You cannot provide only an HMM.
No database will be built, and you cannot query multiple genomes with this module.
If you wish to query multiple genomes against themselves in an all vs. all AAI search, use aai_index instead.
If you wish to query multiple genomes against multiple targets, use multi_query instead.
''')
parser.add_argument('-qg', '--query_genome', dest = 'query_genome', default = None, help = 'Query genome')
parser.add_argument('-tg', '--target_genome', dest = 'target_genome', default = None, help = 'Target genome')
parser.add_argument('-qp', '--query_protein', dest = 'query_protein', default = None, help = 'Query protein')
parser.add_argument('-tp', '--target_protein', dest = 'target_protein', default = None, help = 'Target protein')
parser.add_argument('-qh', '--query_hmm', dest = 'query_hmm', default = None, help = 'Query HMM')
parser.add_argument('-th', '--target_hmm', dest = 'target_hmm', default = None, help = 'Target HMM')
parser.add_argument('-o', '--output', dest = 'output', default = "FastAAI", help = 'The directory where FastAAI will place the result of this query. By default, a directory named "FastAAI" will be created in the current working directory and results will be placed there.')
parser.add_argument('--threads', dest = 'threads', type=int, default = 1, help = 'The number of processors to use. Default 1.')
parser.add_argument('--verbose', dest = 'verbose', action='store_true', help = 'Print minor updates to console. Major updates are printed regardless.')
parser.add_argument('--compress', dest = "do_comp", action = 'store_true', help = 'Gzip compress generated proteins, HMMs. Off by default.')
args, unknown = parser.parse_known_args()
return parser, args
def kaai_to_aai(kaai):
# Transform the kAAI into estimated AAI values
aai_hat = (-0.3087057 + 1.810741 * (np.exp(-(-0.2607023 * np.log(kaai))**(1/3.435))))*100
return aai_hat
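#Note: the fitted curve is only meaningful in the mid-range. At kaai = 1.0 the
#inner term is exp(-0) = 1, giving (-0.3087057 + 1.810741) * 100 ~= 150.2,
#which is why callers cap reported estimates at ">90%" and "<30%".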
#This one's unique: it never touches a database, so it needs nothing outside the input_file class. It just advances a pair of inputs in parallel and intersects their kmer sets.
def single_query(qf, tf, output, verbose, threads, do_compress):
if qf.identifiers[0] == tf.identifiers[0]:
print("You've selected the same query and target genome. The AAI is 100%.")
print("FastAAI exiting.")
return None
statuses = ["genome", "protein", "protein and hmm"]
query_stat = statuses.index(qf.status)
target_stat = statuses.index(tf.status)
minimum_status = statuses[min(query_stat, target_stat)]
start_printouts = ["[Genome] Protein Protein+HMM", " Genome [Protein] Protein+HMM", "Genome Protein [Protein+HMM]"]
print("")
print("Query start: ", start_printouts[query_stat])
print("Target start:", start_printouts[target_stat])
print("")
qname = qf.identifiers[0]
tname = tf.identifiers[0]
name = os.path.normpath(output + "/results/" + qname + "_vs_" + tname + ".aai.txt")
print("Output will be located at", name)
advance_me = [qf.in_files[0], tf.in_files[0]]
#All we need to do this.
hmm_file = find_hmm()
pool = multiprocessing.Pool(min(threads, 2), initializer = hmm_preproc_initializer, initargs = (hmm_file, do_compress,))
results = pool.map(run_build, advance_me)
pool.close()
pool.join()
query = results[0]
target = results[1]
print(query.partial_timings())
print(target.partial_timings())
accs_to_view = set(query.best_hits_kmers.keys()).intersection(set(target.best_hits_kmers.keys()))
results = []
for acc in accs_to_view:
intersect = np.intersect1d(query.best_hits_kmers[acc], target.best_hits_kmers[acc])
intersect = intersect.shape[0]
union = query.best_hits_kmers[acc].shape[0] + target.best_hits_kmers[acc].shape[0] - intersect
jacc = intersect/union
results.append(jacc)
results = np.array(results, dtype = np.float_)
jacc_mean = np.mean(results)
jacc_std = np.std(results)
actual_prots = len(results)
poss_prots = max(len(query.best_hits_kmers), len(target.best_hits_kmers))
aai_est = round(kaai_to_aai(jacc_mean), 2)
if aai_est > 90:
aai_est = ">90%"
elif aai_est < 30:
aai_est = "<30%"
output = open(name, "w")
print("query\ttarget\tavg_jacc_sim\tjacc_SD\tnum_shared_SCPs\tposs_shared_SCPs\tAAI_estimate", file = output)
print(qname, tname, round(jacc_mean, 4), round(jacc_std, 4), actual_prots, poss_prots, aai_est, sep = "\t", file = output)
output.close()
print("query\ttarget\tavg_jacc_sim\tjacc_SD\tnum_shared_SCPs\tposs_shared_SCPs\tAAI_estimate")
print(qname, tname, round(jacc_mean, 4), round(jacc_std, 4), actual_prots, poss_prots, aai_est, sep = "\t")
print("FastAAI single query done! Estimated AAI:", aai_est)
def miga_merge_opts():
parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter,
description='''
Hello, Miguel.
Give one genome in nt, aa, or aa+hmm format and a database to create or add to.
It'll add the genome as efficiently as possible.
The normal merge command creates parallel processes and gathers data in
one-SCP databases to add to the main DB. Great for many genomes. A lot of extra
work for just one.
This version skips the creation of subordinate DBs and just directly adds the genome.
Faster, fewer writes, no parallel overhead.
''')
parser.add_argument('--genome', dest = 'gen', default = None, help = 'Path to one genome, FASTA format')
parser.add_argument('--protein', dest = 'prot', default = None, help = 'Path to one protein, AA FASTA format')
parser.add_argument('--hmm', dest = 'hmm', default = None, help = 'Path to one HMM file as predicted by FastAAI')
parser.add_argument('--output', dest = 'output', default = "FastAAI", help = 'Place the partial output files into a directory with this base. Default "FastAAI"')
parser.add_argument('--target', dest = 'database', default = None, help = 'Path to the target database. The genome supplied will be added to this. The DB will be created if needed.')
parser.add_argument('--verbose', dest = 'verbose', action='store_true', help = 'Print minor updates to console. Major updates are printed regardless.')
parser.add_argument('--compress', dest = 'compress', action='store_true', help = 'Compress generated file output')
args, unknown = parser.parse_known_args()
return parser, args
def miga_merge(infile, target_db, verbose, do_compress):
status = assess_db(target_db)
if status == "wrong format":
print("The database", target_db, "exists, but appears to not be a FastAAI database.")
print("FastAAI will not alter this file. Quitting.")
return None
if status == "unable to create":
print("The database", target_db, "could not be created.")
print("Are you sure that the path you gave is valid? Quitting.")
return None
if verbose:
print("Processing genome")
next_id = 0
exist_gens = {}
conn = sqlite3.connect(target_db)
curs = conn.cursor()
if status == 'exists':
for row in curs.execute("SELECT * FROM genome_index ORDER BY gen_id").fetchall():
#g_id rather than id, to avoid shadowing the builtin.
genome, g_id, prot_ct = row[0], row[1], row[2]
exist_gens[genome] = g_id
next_id += 1
if infile.basename in exist_gens:
print("It looks like the file you're trying to add already exists in the database.")
print("Adding it is too likely to corrupt the database. Quitting.")
return None
hmm_file = find_hmm()
global hmm_manager
hmm_manager = pyhmmer_manager(do_compress)
hmm_manager.load_hmm_from_file(hmm_file)
infile.preprocess()
if len(infile.best_hits_kmers) > 0:
ok = generate_accessions_index()
gak_to_add = []
gen_id = np.zeros(1, dtype = np.int32)
gen_id[0] = next_id
gen_id = gen_id.tobytes()
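#Note (explanatory comment): gen_id is now the raw, native-endian 4-byte form of a
#single int32. Storing the ID as bytes lets the ON CONFLICT upsert below append it
#directly onto an accession's existing "genomes" blob with SQLite's || operator.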
for accession in infile.best_hits_kmers:
acc_id = ok[accession]
gak_to_add.append((next_id, acc_id, infile.best_hits_kmers[accession].shape[0],))
curs.execute("CREATE TABLE IF NOT EXISTS {acc} (kmer INTEGER PRIMARY KEY, genomes array)".format(acc=accession))
curs.execute("CREATE TABLE IF NOT EXISTS {acc}_genomes (genome INTEGER PRIMARY KEY, kmers array)".format(acc=accession))
curs.execute("CREATE INDEX IF NOT EXISTS {acc}_index ON {acc}(kmer)".format(acc=accession))
gen_first = (next_id, infile.best_hits_kmers[accession].tobytes(),)
curs.execute("INSERT INTO {acc}_genomes VALUES (?,?)".format(acc=accession), gen_first)
kmers_first = []
for k in infile.best_hits_kmers[accession]:
#we know there's only one genome in these cases.
kmers_first.append((int(k), gen_id, gen_id, ))
update_concat_sql = "INSERT INTO {acc} VALUES (?,?) ON CONFLICT(kmer) DO UPDATE SET genomes=genomes || (?)".format(acc=accession)
curs.executemany(update_concat_sql, kmers_first)
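#Illustration of the upsert pattern above (comment only): if kmer 1234 already maps
#to the packed blob for genomes [0, 2], inserting genome 7 triggers ON CONFLICT and
#the || operator concatenates the new 4-byte ID, yielding the blob for [0, 2, 7].
#Each row's "genomes" column therefore always decodes as a packed int32 array.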
#Safety checks.
curs.execute("CREATE TABLE IF NOT EXISTS genome_index (genome text, gen_id integer, protein_count integer)")
curs.execute("CREATE TABLE IF NOT EXISTS genome_acc_kmer_counts (genome integer, accession integer, count integer)")
gen_idx_to_add = (infile.basename, next_id, len(infile.best_hits_kmers))
curs.execute("INSERT INTO genome_index VALUES (?, ?, ?)", gen_idx_to_add)
#gak was made over the loops.
curs.executemany("INSERT INTO genome_acc_kmer_counts VALUES (?,?,?)", gak_to_add)
curs.execute("CREATE INDEX IF NOT EXISTS kmer_acc ON genome_acc_kmer_counts (genome, accession);")
conn.commit()
else:
print("No proteins to add for this genome:", infile.basename + ". Database will be unaltered. Exiting.")
curs.close()
conn.close()
def miga_dirs(output, subdir):
preparation_successful = True
if not os.path.exists(output):
try:
os.mkdir(output)
except Exception:
print("")
print("FastAAI tried to make output directory: '"+ output + "' but failed.")
print("")
print("Troubleshooting:")
print("")
print(" (1) Do you have permission to create directories in the location you specified?")
print(" (2) Did you make sure that all directories other than", os.path.basename(output), "already exist?")
print("")
preparation_successful = False
if preparation_successful:
try:
if not os.path.exists(os.path.normpath(output + "/" + subdir)):
os.mkdir(os.path.normpath(output + "/" + subdir))
except Exception:
print("FastAAI was able to create or find", output, "but couldn't make directories there.")
print("")
print("This shouldn't happen. Do you have permission to write to that directory?")
return preparation_successful
def miga_preproc_opts():
parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter,
description='''Build module intended for use by MiGA.
Performs protein prediction, HMM searching, and best hit identification, but does NOT
build a database. Instead, it produces "crystals": tab-separated files containing the
protein name, HMM accession, and original protein sequence for each best hit. These
crystals can be passed to the "miga_db_from_crystals" action later on to rapidly create a DB from many genomes.
''')
parser.add_argument('-g', '--genomes', dest = 'genomes', default = None, help = 'A directory containing genomes in FASTA format.')
parser.add_argument('-p', '--proteins', dest = 'proteins', default = None, help = 'A directory containing protein amino acids in FASTA format.')
parser.add_argument('-m', '--hmms', dest = 'hmms', default = None, help = 'A directory containing the results of an HMM search on a set of proteins.')
parser.add_argument('-o', '--output', dest = 'output', default = "FastAAI", help = 'The directory to place the database and any protein or HMM files FastAAI creates. By default, a directory named "FastAAI" will be created in the current working directory and results will be placed there.')
parser.add_argument('--threads', dest = 'threads', type=int, default = 1, help = 'The number of processors to use. Default 1.')
parser.add_argument('--verbose', dest = 'verbose', action='store_true', help = 'Print minor updates to console. Major updates are printed regardless.')
parser.add_argument('--compress', dest = "do_comp", action = 'store_true', help = 'Gzip compress generated proteins, HMMs. Off by default.')
args, unknown = parser.parse_known_args()
return parser, args
def run_miga_preproc(input_file):
input_file.crystalize = True
input_file.preprocess()
if len(input_file.best_hits_kmers) < 1:
input_file.best_hits_kmers = None
input_file.err_log += " This file did not successfully complete. No SCPs could be found."
return input_file
#Produce FastAAI preprocessed files containing HMM accession and associated protein sequence
def miga_preproc(genomes, proteins, hmms, output, threads, verbose, do_compress):
success = True
imported_files = fastaai_file_importer(genomes = genomes, proteins = proteins, hmms = hmms, output = output, compress = do_compress)
imported_files.determine_inputs()
if imported_files.error:
print("Exiting FastAAI due to input file error.")
quit()
#file make checks
p, h, c, l = True, True, True, True
if imported_files.status == "genome":
p = miga_dirs(output, "predicted_proteins")
h = miga_dirs(output, "hmms")
c = miga_dirs(output, "crystals")
if imported_files.status == "protein":
h = miga_dirs(output, "hmms")
c = miga_dirs(output, "crystals")
if imported_files.status == "protein+HMM":
c = miga_dirs(output, "crystals")
#We always want this one.
l = miga_dirs(output, "logs")
print("")
#Check if all created directories were successful.
success = p and h and c and l
if success:
hmm_file = find_hmm()
if verbose:
tracker = progress_tracker(total = len(imported_files.in_files), message = "Processing inputs")
else:
print("Processing inputs")
#Like build_db, this module writes logs.
logger = open(os.path.normpath(output+"/logs/"+"FastAAI_preprocessing_log.txt"), "a")
print("file", "start_date", "end_date", "starting_format",
"prot_prediction_time", "trans_table", "hmm_search_time", "besthits_time",
"errors", sep = "\t", file = logger)
fail_log = open(os.path.normpath(output+"/logs/"+"FastAAI_genome_failures.txt"), "a")
pool = multiprocessing.Pool(threads, initializer = hmm_preproc_initializer, initargs = (hmm_file, do_compress,))
for result in pool.imap(run_miga_preproc, imported_files.in_files):
#log data, regardless of kind
print(result.basename, result.start_time, result.end_time, result.initial_state,
result.prot_pred_time, result.trans_table, result.hmm_search_time, result.besthits_time,
result.err_log, sep = "\t", file = logger)
#best_hits_kmers is set to None by run_miga_preproc when no SCPs were found.
if result.best_hits_kmers is None:
print(result.basename, file = fail_log)
if verbose:
tracker.update()
pool.close()
logger.close()
fail_log.close()
print("FastAAI preprocessing complete!")
return success
def miga_db_from_crystals_opts():
parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter,
description='''Takes a set of crystals produced with miga_preproc and makes a database from them.
Supply --crystals with a directory, file of paths, or list of paths just like --genomes in a build command.''')
parser.add_argument('-c', '--crystals', dest = 'crystals', default = None, help = 'A directory containing FastAAI crystals, a file of crystal paths, or a comma-separated list of crystal paths.')
parser.add_argument('-d', '--database', dest = 'db_name', default = "FastAAI_database.sqlite.db", help = 'The name of the database you wish to create or add to. The database will be created if it doesn\'t already exist and placed in the output directory. FastAAI_database.sqlite.db by default.')
parser.add_argument('-o', '--output', dest = 'output', default = "FastAAI", help = 'The directory to place the database and any protein or HMM files FastAAI creates. By default, a directory named "FastAAI" will be created in the current working directory and results will be placed there.')
parser.add_argument('--threads', dest = 'threads', type=int, default = 1, help = 'The number of processors to use. Default 1.')
parser.add_argument('--verbose', dest = 'verbose', action='store_true', help = 'Print minor updates to console. Major updates are printed regardless.')
args, unknown = parser.parse_known_args()
return parser, args
#This is basically a copied function, but I'm going to ignore that for now.
def unique_kmer_miga(seq):
#num tetramers = len(seq) - 4 + 1, just make it -3.
n_kmers = len(seq) - 3
#Converts the characters in a sequence into their ascii int value
as_ints = np.array([ord(i) for i in seq], dtype = np.int32)
#create seq like 0,1,2,3; 1,2,3,4; 2,3,4,5... for each tetramer that needs a value
kmers = np.arange(4*n_kmers)
kmers = kmers % 4 + kmers // 4
#Select the characters (as ints) corresponding to each tetramer all at once and reshape into rows of 4,
#each row corresp. to a successive tetramer
kmers = as_ints[kmers].reshape((n_kmers, 4))
#Given four 2-digit numbers, these multipliers work as offsets so that all digits are preserved in order when summed
mult = np.array([1000000, 10000, 100, 1], dtype = np.int32)
#the fixed values effectively offset the successive chars of the tetramer by 2 positions each time;
#practically, this is concatenation of numbers
#Matrix mult does this for all values at once.
return np.unique(np.dot(kmers, mult))
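#Worked example of the encoding above (comment only, assuming uppercase amino acid
#letters so every ord() value fits in two digits): for the tetramer "MKVL", ord()
#gives [77, 75, 86, 76], and the dot product with [1000000, 10000, 100, 1] is
#77758676, i.e. the four 2-digit codes concatenated. Because each code fits in two
#digits, distinct tetramers always map to distinct integers.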
def para_crystal_init(tdb_queue):
global tdb
global td_name
tdb = tdb_queue.get()
td_name = tdb
tdb = initialize_blank_db(tdb)
global ok
ok = generate_accessions_index()
def initialize_blank_db(path):
sqlite3.register_converter("array", convert_array)
worker = sqlite3.connect(path)
wcurs = worker.cursor()
wcurs.execute("CREATE TABLE genome_index (genome text, gen_id integer, protein_count integer)")
wcurs.execute("CREATE TABLE genome_acc_kmer_counts (genome integer, accession integer, count integer)")
ok = generate_accessions_index()
for t in ok:
wcurs.execute("CREATE TABLE " + t + "_genomes (genome INTEGER PRIMARY KEY, kmers array)")
wcurs.execute("CREATE TABLE " + t + " (kmer INTEGER PRIMARY KEY, genomes array)")
worker.commit()
wcurs.close()
return worker
def para_crystals_to_dbs(args):
path, name, num = args[0], args[1], args[2]
my_gak = []
my_qgi = []
num_prots = 0
curs = tdb.cursor()
fh = agnostic_reader(path)
for line in fh:
segs = line.strip().split("\t")
#prot_name = segs[0]
acc_name = segs[1]
prot_seq = segs[2]
acc_id = ok[acc_name]
tetramers = unique_kmer_miga(prot_seq)
my_gak.append((num, acc_id, tetramers.shape[0]))
tetramers = tetramers.tobytes()
curs.execute("INSERT INTO " + acc_name + "_genomes VALUES (?,?)", (num, tetramers,))
num_prots += 1
fh.close()
curs.execute("INSERT INTO genome_index VALUES (?, ?, ?)", (name, num, num_prots,))
curs.executemany("INSERT INTO genome_acc_kmer_counts VALUES (?, ?, ?)", my_gak)
tdb.commit()
curs.close()
return None
def group_by_kmer(placeholder):
curs = tdb.cursor()
surviving_tables = []
for acc in ok:
collected_data = curs.execute("SELECT * FROM {acc}_genomes".format(acc=acc)).fetchall()
rearrange = {}
if len(collected_data) > 0:
surviving_tables.append(acc)
for row in collected_data:
genome, tetramers = row[0], np.frombuffer(row[1], dtype = np.int32)
for t in tetramers:
if t not in rearrange:
rearrange[t] = [genome]
else:
rearrange[t].append(genome)
to_add = []
for tetra in rearrange:
as_bytes = np.array(rearrange[tetra], dtype = np.int32).tobytes()
rearrange[tetra] = None
to_add.append((int(tetra), as_bytes,))
curs.executemany("INSERT INTO {acc} VALUES (?, ?)".format(acc=acc), to_add)
to_add = None
else:
#Empty table/no genomes contained the relevant SCP
curs.execute("DROP TABLE {acc}".format(acc = acc))
curs.execute("DROP TABLE {acc}_genomes".format(acc = acc))
tdb.commit()
curs.close()
tdb.close()
return [td_name, surviving_tables]
#Merge one or many crystals into a DB.
def miga_db_from_crystals(crystals, output, db_name, threads, verbose):
success = True
imported_files = fastaai_file_importer(genomes = None, proteins = None,
hmms = None, crystals = crystals, output = output, compress = False)
imported_files.determine_inputs()
if imported_files.error:
print("Exiting FastAAI due to input file error.")
quit()
#We'll skip trying this if the file already exists.
existing_genome_IDs = None
final_db_path = None
try:
if os.path.exists(db_name):
if os.path.isfile(db_name):
final_db_path = db_name
else:
success = miga_dirs(output, "database")
final_db_path = os.path.normpath(output+ "/database/" + db_name)
else:
success = miga_dirs(output, "database")
final_db_path = os.path.normpath(output+ "/database/" + db_name)
except Exception:
print("FastAAI could not resolve the database path you specified.")
print("Please give FastAAI a different database name or output location and try again.")
print("Exiting.")
success = False
if final_db_path is not None and os.path.exists(final_db_path):
if os.path.isfile(final_db_path):
parent = sqlite3.connect(final_db_path)
curs = parent.cursor()
existing_genome_IDs = {}
sql_command = "SELECT genome, gen_id FROM genome_index"
for result in curs.execute(sql_command).fetchall():
genome = result[0]
id = int(result[1])
existing_genome_IDs[genome] = id
curs.close()
parent.close()
if success:
if existing_genome_IDs is not None:
genome_idx = max(list(existing_genome_IDs.values()))+1
else:
existing_genome_IDs = {}
genome_idx = 0
cryst_args = []
for crystal_path, crystal_name in zip(imported_files.crystal_list, imported_files.identifiers):
#the genome is implicitly dropped if it's already in the target
if crystal_name not in existing_genome_IDs:
existing_genome_IDs[crystal_name] = genome_idx
cryst_args.append((crystal_path, crystal_name, genome_idx,))
genome_idx += 1
final_conn = sqlite3.connect(final_db_path)
final_curs = final_conn.cursor()
final_curs.execute("CREATE TABLE IF NOT EXISTS genome_index (genome text, gen_id integer, protein_count integer)")
final_curs.execute("CREATE TABLE IF NOT EXISTS genome_acc_kmer_counts (genome integer, accession integer, count integer)")
final_curs.execute("CREATE INDEX IF NOT EXISTS kmer_acc ON genome_acc_kmer_counts (genome, accession);")
final_conn.commit()
temp_dir = tempfile.mkdtemp()
temp_db_queue = multiprocessing.Queue()
for i in range(0, threads):
tdb_name = os.path.normpath(temp_dir + "/temp_db_" + str(i) + ".db")
temp_db_queue.put(tdb_name)
placeholder = [i for i in range(0, threads)]
pool = multiprocessing.Pool(threads, initializer = para_crystal_init, initargs = (temp_db_queue,))
if verbose:
tracker = progress_tracker(total = len(cryst_args), message = "Importing data")
else:
print("Importing data")
for result in pool.imap_unordered(para_crystals_to_dbs, cryst_args):
if verbose:
tracker.update()
if verbose:
tracker = progress_tracker(total = threads, message = "Formatting data")
else:
print("Formatting data")
for result in pool.imap_unordered(group_by_kmer, placeholder):
dbname, surviving_tables = result[0], result[1]
new_conn = sqlite3.connect(dbname)
new_curs = new_conn.cursor()
ngak = new_curs.execute("SELECT * FROM genome_acc_kmer_counts").fetchall()
ngi = new_curs.execute("SELECT * FROM genome_index").fetchall()
final_curs.executemany("INSERT INTO genome_index VALUES (?, ?, ?)", ngi)
final_curs.executemany("INSERT INTO genome_acc_kmer_counts VALUES (?, ?, ?)", ngak)
final_conn.commit()
ngak = None
ngi = None
for acc in surviving_tables:
final_curs.execute("CREATE TABLE IF NOT EXISTS {acc}_genomes (genome INTEGER PRIMARY KEY, kmers array)".format(acc=acc))
final_curs.execute("CREATE TABLE IF NOT EXISTS {acc} (kmer INTEGER PRIMARY KEY, genomes array)".format(acc=acc))
final_curs.execute("CREATE INDEX IF NOT EXISTS {acc}_index ON {acc}(kmer)".format(acc=acc))
curag = new_curs.execute("SELECT * FROM {acc}_genomes".format(acc=acc)).fetchall()
final_curs.executemany("INSERT INTO {acc}_genomes VALUES (?, ?)".format(acc=acc), curag)
curag = None
curaac = new_curs.execute("SELECT kmer, genomes, genomes FROM {acc}".format(acc=acc)).fetchall()
update_concat_sql = "INSERT INTO {acc} VALUES (?,?) ON CONFLICT(kmer) DO UPDATE SET genomes=genomes || (?)".format(acc=acc)
final_curs.executemany(update_concat_sql, curaac)
curaac = None
final_conn.commit()
new_curs.close()
new_conn.close()
if verbose:
tracker.update()
pool.close()
final_curs.close()
final_conn.close()
shutil.rmtree(temp_dir)
'''
Main
'''
#Preprocess genomes, build DB, query all vs all to self.
def aai_index_opts():
parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter,
description='''FastAAI module for preprocessing a set of genomes, proteins, or proteins+HMMs
into a database, and then querying the database against itself.
Equivalent to running build_db and db_query in sequence. Check these modules for additional
details on inputs.''')
parser.add_argument('-o', '--output', dest = 'output', default = "FastAAI", help = 'The directory to place the database and any protein or HMM files FastAAI creates. By default, a directory named "FastAAI" will be created in the current working directory and results will be placed there.')
parser.add_argument('-g', '--genomes', dest = 'genomes', default = None, help = 'A directory containing genomes in FASTA format.')
parser.add_argument('-p', '--proteins', dest = 'proteins', default = None, help = 'A directory containing protein amino acids in FASTA format.')
parser.add_argument('-m', '--hmms', dest = 'hmms', default = None, help = 'A directory containing the results of an HMM search on a set of proteins.')
parser.add_argument('-d', '--database', dest = 'db_name', default = "FastAAI_database.sqlite.db", help = 'The name of the database you wish to create or add to. The database will be created if it doesn\'t already exist and placed in the output directory. FastAAI_database.sqlite.db by default.')
parser.add_argument('--output_style', dest = "style", default = 'tsv', help = "Either 'tsv' or 'matrix'. Matrix produces a simplified output of only AAI estimates.")
parser.add_argument('--do_stdev', dest = "do_stdev", action='store_true', help = 'Off by default. Calculate std. deviations on Jaccard indices. Increases memory usage and runtime slightly. Does NOT change estimated AAI values at all.')
parser.add_argument('--in_memory', dest = "in_mem", action = 'store_true', help = 'Load both databases into memory before querying. Consumes more RAM, but is faster and reduces file I/O substantially. Consider reducing the number of threads.')
parser.add_argument('--store_results', dest = "storage", action = 'store_true', help = 'Keep partial results in memory. Only works with --in_memory. Fewer writes, but more RAM. Default off.')
parser.add_argument('--compress', dest = "do_comp", action = 'store_true', help = 'Gzip compress generated proteins, HMMs. Off by default.')
parser.add_argument('--threads', dest = 'threads', type=int, default = 1, help = 'The number of processors to use. Default 1.')
parser.add_argument('--verbose', dest = 'verbose', action='store_true', help = 'Print minor updates to console. Major updates are printed regardless.')
args, unknown = parser.parse_known_args()
return parser, args
#Preprocess two sets of genomes A and B into two distinct databases Adb and Bdb, then query Adb against Bdb
def multi_query_opts():
parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter,
description='''FastAAI module for preprocessing two sets of input files into two separate DBs,
then querying the DBs against each other. Not for use with already-made FastAAI databases.
See "build_db" action for details on file inputs.
See "db_query" action for details on querying options.''')
parser.add_argument('--query_output', dest = 'qoutput', default = "FastAAI_query", help = 'Output directory for query files. Default "FastAAI_query." FastAAI will work if this directory is the same as --target_output, but this is NOT a good idea.')
parser.add_argument('--target_output', dest = 'toutput', default = "FastAAI_target", help = 'Output directory for target files. Default "FastAAI_target." AAI results will be placed in this directory')
parser.add_argument('--query_genomes', dest = 'qgenomes', default = None, help = 'Query genomes')
parser.add_argument('--target_genomes', dest = 'tgenomes', default = None, help = 'Target genomes')
parser.add_argument('--query_proteins', dest = 'qproteins', default = None, help = 'Query proteins')
parser.add_argument('--target_proteins', dest = 'tproteins', default = None, help = 'Target proteins')
parser.add_argument('--query_hmms', dest = 'qhmms', default = None, help = 'Query HMMs')
parser.add_argument('--target_hmms', dest = 'thmms', default = None, help = 'Target HMMs')
parser.add_argument('--query_database', dest = 'qdb_name', default = "FastAAI_query_database.sqlite.db", help = 'Query database name. Default "FastAAI_query_database.sqlite.db"')
parser.add_argument('--target_database', dest = 'tdb_name', default = "FastAAI_target_database.sqlite.db", help ='Target database name. Default "FastAAI_target_database.sqlite.db"')
parser.add_argument('--output_style', dest = "style", default = 'tsv', help = "Either 'tsv' or 'matrix'. Matrix produces a simplified output of only AAI estimates.")
parser.add_argument('--do_stdev', dest = "do_stdev", action='store_true', help = 'Off by default. Calculate std. deviations on Jaccard indices. Increases memory usage and runtime slightly. Does NOT change estimated AAI values at all.')
parser.add_argument('--in_memory', dest = "in_mem", action = 'store_true', help = 'Load both databases into memory before querying. Consumes more RAM, but is faster and reduces file I/O substantially. Consider reducing the number of threads.')
parser.add_argument('--store_results', dest = "storage", action = 'store_true', help = 'Keep partial results in memory. Only works with --in_memory. Fewer writes, but more RAM. Default off.')
parser.add_argument('--compress', dest = "do_comp", action = 'store_true', help = 'Gzip compress generated proteins, HMMs. Off by default.')
parser.add_argument('--threads', dest = 'threads', type=int, default = 1, help = 'The number of processors to use. Default 1.')
parser.add_argument('--verbose', dest = 'verbose', action='store_true', help = 'Print minor updates to console. Major updates are printed regardless.')
args, unknown = parser.parse_known_args()
return parser, args
def main():
#The currently supported modules.
modules = ["build_db", "merge_db", "simple_query", "db_query", "single_query", "aai_index", "multi_query", "miga_merge", "miga_preproc", "miga_db_from_crystals"]
#Print modules if someone just types FastAAI
if len(sys.argv) < 2:
print("")
print(" No module was specified. Please select one of the following modules:")
print("")
print("-------------------------------------- Database Construction Options --------------------------------------")
print("")
print(" build_db |" + " Create or add to a FastAAI database from genomes, proteins, or proteins and HMMs")
print(" merge_db |" + " Add the contents of one FastAAI DB to another")
print("")
print("---------------------------------------------- Query Options ----------------------------------------------")
print("")
print(" simple_query |" + " Query a genome or protein (one or many) against an existing FastAAI database")
print(" db_query |" + " Query the genomes in one FastAAI database against the genomes in another FastAAI database")
print("")
print("------------------------------------------- Other Options -------------------------------------------")
print("")
print(" single_query |" + " Query ONE query genome against ONE target genome")
print(" multi_query |" + " Create a query DB and a target DB, then calculate query vs. target AAI")
print(" aai_index |" + " Create a database from multiple genomes and do an all vs. all AAI index of the genomes")
print("")
print("-----------------------------------------------------------------------------------------------------------")
print(" To select a module, enter 'FastAAI [module]' into the command line!")
print("")
sys.exit()
#This is the module selection
selection = sys.argv[1]
if selection == "version":
sys.exit("FastAAI version=0.1.17")
if selection not in modules:
print("")
print(" I couldn't find the module you specified. Please select one of the following modules:")
print("")
print("-------------------------------------- Database Construction Options --------------------------------------")
print("")
print(" build_db |" + " Create or add to a FastAAI database from genomes, proteins, or proteins and HMMs")
print(" merge_db |" + " Add the contents of one FastAAI DB to another")
print("")
print("---------------------------------------------- Query Options ----------------------------------------------")
print("")
print(" simple_query |" + " Query a genome or protein (one or many) against an existing FastAAI database")
print(" db_query |" + " Query the genomes in one FastAAI database against the genomes in another FastAAI database")
print("")
print("------------------------------------------- Other Options -------------------------------------------")
print("")
print(" single_query |" + " Query ONE query genome against ONE target genome")
print(" multi_query |" + " Create a query DB and a target DB, then calculate query vs. target AAI")
print(" aai_index |" + " Create a database from multiple genomes and do an all vs. all AAI index of the genomes")
print("")
print("-----------------------------------------------------------------------------------------------------------")
print(" To select a module, enter 'FastAAI [module]' into the command line!")
print("")
sys.exit()
#################### Database build or add ########################
if selection == "build_db":
parser, opts = build_db_opts()
#module name only
if len(sys.argv) < 3:
parser.print_help()
sys.exit()
#Directory based
genomes, proteins, hmms = opts.genomes, opts.proteins, opts.hmms
output = os.path.normpath(opts.output)
threads = opts.threads
verbose = opts.verbose
#Database handle
db_name = opts.db_name
do_comp = opts.do_comp
build_db(genomes, proteins, hmms, db_name, output, threads, verbose, do_comp)
#################### Add two DBs ########################
if selection == "merge_db":
parser, opts = merge_db_opts()
if len(sys.argv) < 3:
parser.print_help()
sys.exit()
recipient = opts.recipient
donors = opts.donors
donor_file = opts.donor_file
verbose = opts.verbose
threads = opts.threads
if donors is not None and donor_file is not None:
sys.exit("You cannot specify both --donors and --donor_file.")
merge_db(recipient, donors, donor_file, verbose, threads)
#################### Query files vs DB ########################
if selection == "simple_query":
parser, opts = sql_query_opts()
if len(sys.argv) < 3:
parser.print_help()
sys.exit()
genomes, proteins, hmms = opts.genomes, opts.proteins, opts.hmms
db_name = opts.target
output = opts.output
threads = opts.threads
verbose = opts.verbose
do_stdev = opts.do_stdev
style, in_mem, make_db, qdb_name, do_comp = opts.style, opts.in_mem, opts.make_db, opts.qdb_name, opts.do_comp
sql_query(genomes, proteins, hmms, db_name, output, threads, verbose, do_stdev, style, in_mem, make_db, qdb_name, do_comp)
#################### Query DB vs DB ###########################
if selection == "db_query":
parser, opts = db_query_opts()
#module name only
if len(sys.argv) < 3:
parser.print_help()
sys.exit()
query = opts.query
target = opts.target
verbose = opts.verbose
do_stdev = opts.do_stdev
output = opts.output
threads = opts.threads
style, in_mem, store = opts.style, opts.in_mem, opts.storage
db_query(query, target, verbose, output, threads, do_stdev, style, in_mem, store)
#################### One-pass functions #######################
if selection == "single_query":
parser, opts = single_query_opts()
#module name only
if len(sys.argv) < 3:
parser.print_help()
sys.exit()
output = os.path.normpath(opts.output)
try:
threads = int(opts.threads)
except (TypeError, ValueError):
print("Couldn't interpret your threads. Defaulting to 1.")
threads = 1
verbose = opts.verbose
do_compress = opts.do_comp
query_genome = opts.query_genome
query_protein = opts.query_protein
query_hmm = opts.query_hmm
query_file = fastaai_file_importer(genomes = query_genome, proteins = query_protein, hmms = query_hmm, output = output, compress = do_compress)
query_file.determine_inputs()
target_genome = opts.target_genome
target_protein = opts.target_protein
target_hmm = opts.target_hmm
target_file = fastaai_file_importer(genomes = target_genome, proteins = target_protein, hmms = target_hmm, output = output, compress = do_compress)
target_file.determine_inputs()
is_ok = True
if len(query_file.in_files) != 1:
print("Query genome unacceptable. Check your inputs.")
is_ok = False
if len(target_file.in_files) != 1:
print("Target genome unacceptable. Check your inputs.")
is_ok = False
if is_ok:
good_to_go = prepare_directories(output, query_file.status, "query")
if good_to_go:
good_to_go = prepare_directories(output, target_file.status, "query")
if good_to_go:
single_query(query_file, target_file, output, verbose, threads, do_compress)
if selection == "aai_index":
parser, opts = aai_index_opts()
if len(sys.argv) < 3:
parser.print_help()
sys.exit()
genomes, proteins, hmms = opts.genomes, opts.proteins, opts.hmms
output = os.path.normpath(opts.output)
threads = opts.threads
verbose = opts.verbose
#Database handle
db_name = opts.db_name
do_comp = opts.do_comp
do_stdev = opts.do_stdev
style, in_mem, store = opts.style, opts.in_mem, opts.storage
#This is the same logic from the build_db section and it's what we need for getting the DB name.
#Check if the db contains path info. Incl. windows version.
if "/" not in db_name and "\\" not in db_name:
final_database = os.path.normpath(output + "/database/" + db_name)
else:
#If the person insists that the db has a path, let them.
final_database = db_name
build_db(genomes, proteins, hmms, db_name, output, threads, verbose, do_comp)
query, target = final_database, final_database
db_query(query, target, verbose, output, threads, do_stdev, style, in_mem, store)
if selection == "multi_query":
parser, opts = multi_query_opts()
if len(sys.argv) < 3:
parser.print_help()
sys.exit()
#Shared options
threads = opts.threads
verbose = opts.verbose
#query options
do_comp = opts.do_comp
do_stdev = opts.do_stdev
style, in_mem, store = opts.style, opts.in_mem, opts.storage
#query inputs
qgenomes, qproteins, qhmms = opts.qgenomes, opts.qproteins, opts.qhmms
qoutput = os.path.normpath(opts.qoutput)
qdb_name = opts.qdb_name
#This is the same logic from the build_db section and it's what we need for getting the DB name.
#Check if the db contains path info. Incl. windows version.
if "/" not in qdb_name and "\\" not in qdb_name:
final_qdb = os.path.normpath(qoutput + "/database/" + qdb_name)
else:
#If the person insists that the db has a path, let them.
final_qdb = qdb_name
#target inputs
tgenomes, tproteins, thmms = opts.tgenomes, opts.tproteins, opts.thmms
toutput = os.path.normpath(opts.toutput)
tdb_name = opts.tdb_name
#This is the same logic from the build_db section and it's what we need for getting the DB name.
#Check if the db contains path info. Incl. windows version.
if "/" not in tdb_name and "\\" not in tdb_name:
final_tdb = os.path.normpath(toutput + "/database/" + tdb_name)
else:
#If the person insists that the db has a path other than output/database, let them.
final_tdb = tdb_name
#run query build
build_db(qgenomes, qproteins, qhmms, qdb_name, qoutput, threads, verbose, do_comp)
#run target build
build_db(tgenomes, tproteins, thmms, tdb_name, toutput, threads, verbose, do_comp)
#run query db against target db
db_query(final_qdb, final_tdb, verbose, toutput, threads, do_stdev, style, in_mem, store)
############## MiGA module #################
if selection == "miga_merge":
parser, opts = miga_merge_opts()
#module name only
if len(sys.argv) < 3:
print(parser.print_help())
sys.exit()
g,p,h = opts.gen, opts.prot, opts.hmm
target = opts.database
verbose = opts.verbose
output_path = opts.output
if target == None:
target = os.path.normpath(output_path + "/database/FastAAI_database.sqlite.db")
do_compress = opts.compress
imported_files = fastaai_file_importer(genomes = g, proteins = p, hmms = h,
output = output_path, compress = do_compress)
imported_files.determine_inputs()
if len(imported_files.in_files) == 0:
print("Something was wrong with your input file.")
else:
input_genome = imported_files.in_files[0]
good_to_go = prepare_directories(output_path, imported_files.status, "build")
miga_merge(input_genome, target, verbose, do_compress)
#This is where a new db would normally be created,
#which is not what happens when the supplied target is some other sort of path.
output_default = os.path.normpath(output_path + "/database")
if len(os.listdir(output_default)) == 0:
os.rmdir(output_default)
if selection == "miga_preproc":
parser, opts = miga_preproc_opts()
#module name only
if len(sys.argv) < 3:
print(parser.print_help())
sys.exit()
#Directory based
genomes, proteins, hmms = opts.genomes, opts.proteins, opts.hmms
output = os.path.normpath(opts.output)
threads = opts.threads
verbose = opts.verbose
do_comp = opts.do_comp
miga_preproc(genomes, proteins, hmms, output, threads, verbose, do_comp)
if selection == "miga_db_from_crystals":
parser, opts = miga_db_from_crystals_opts()
#module name only
if len(sys.argv) < 3:
print(parser.print_help())
sys.exit()
crystals = opts.crystals
if crystals is None:
print("I need to be given crystals to proceed!")
quit()
db_name = opts.db_name
try:
threads = int(opts.threads)
except:
threads = 1
print("Can't recognize threads param:", str(opts.threads), "defaulting to 1.")
verbose = opts.verbose
output_path = opts.output
miga_db_from_crystals(crystals, output_path, db_name, threads, verbose)
return None
if __name__ == "__main__":
main() | PypiClean |
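The query/target database-name resolution above repeats the same "does the DB name already carry path info?" check in three places. A minimal helper capturing that logic could look like this (a sketch — `resolve_db_path` is not part of FastAAI, the name is illustrative):

```python
import os

def resolve_db_path(db_name, output):
    # Hypothetical helper factoring out the repeated check above:
    # a bare name is placed under <output>/database/, while a name
    # that already contains a path separator is used as given.
    if "/" not in db_name and "\\" not in db_name:
        return os.path.normpath(output + "/database/" + db_name)
    # If the person insists that the db has a path, let them.
    return db_name
```

Using one helper for both the query and target databases would also have avoided the copy-paste risk of assigning the wrong `*_db_name` variable in one branch.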
/Flask-CKEditor-0.4.6.tar.gz/Flask-CKEditor-0.4.6/flask_ckeditor/static/basic/lang/af.js | /*
Copyright (c) 2003-2020, CKSource - Frederico Knabben. All rights reserved.
For licensing, see LICENSE.md or https://ckeditor.com/license
*/
CKEDITOR.lang['af']={"editor":"Woordverwerker","editorPanel":"Woordverwerkerpaneel","common":{"editorHelp":"Druk op ALT 0 vir hulp","browseServer":"Blaai op bediener","url":"URL","protocol":"Protokol","upload":"Oplaai","uploadSubmit":"Stuur aan die bediener","image":"Beeld","flash":"Flash","form":"Vorm","checkbox":"Merkhokkie","radio":"Radioknoppie","textField":"Teksveld","textarea":"Teksarea","hiddenField":"Versteekteveld","button":"Knop","select":"Keuseveld","imageButton":"Beeldknop","notSet":"<geen instelling>","id":"Id","name":"Naam","langDir":"Skryfrigting","langDirLtr":"Links na regs (LTR)","langDirRtl":"Regs na links (RTL)","langCode":"Taalkode","longDescr":"Lang beskrywing URL","cssClass":"CSS klasse","advisoryTitle":"Aanbevole titel","cssStyle":"Styl","ok":"OK","cancel":"Kanselleer","close":"Sluit","preview":"Voorbeeld","resize":"Skalierung","generalTab":"Algemeen","advancedTab":"Gevorderd","validateNumberFailed":"Hierdie waarde is nie 'n nommer nie.","confirmNewPage":"Alle wysiginge sal verlore gaan. Is jy seker dat jy 'n nuwe bladsy wil laai?","confirmCancel":"Sommige opsies is gewysig. Is jy seker dat jy hierdie dialoogvenster wil sluit?","options":"Opsies","target":"Teiken","targetNew":"Nuwe venster (_blank)","targetTop":"Boonste venster (_top)","targetSelf":"Selfde venster (_self)","targetParent":"Oorspronklike venster (_parent)","langDirLTR":"Links na Regs (LTR)","langDirRTL":"Regs na Links (RTL)","styles":"Styl","cssClasses":"CSS klasse","width":"Breedte","height":"Hoogte","align":"Orienteerung","left":"Links","right":"Regs","center":"Middel","justify":"Eweredig","alignLeft":"Links oplyn","alignRight":"Regs oplyn","alignCenter":"Middel oplyn","alignTop":"Bo","alignMiddle":"Middel","alignBottom":"Onder","alignNone":"Geen","invalidValue":"Ongeldige waarde","invalidHeight":"Hoogte moet 'n getal wees","invalidWidth":"Breedte moet 'n getal wees.","invalidLength":"Die waarde vir die veld \"%1\" moet 'n posetiewe nommer wees met of sonder die meeteenheid (%2).","invalidCssLength":"Die waarde vir die \"%1\" veld moet 'n posetiewe getal wees met of sonder 'n geldige CSS eenheid (px, %, in, cm, mm, em, ex, pt, of pc).","invalidHtmlLength":"Die waarde vir die \"%1\" veld moet 'n posetiewe getal wees met of sonder 'n geldige HTML eenheid (px of %).","invalidInlineStyle":"Ongeldige CSS. Formaat is een of meer sleutel-wert paare, \"naam : wert\" met kommapunte gesky.","cssLengthTooltip":"Voeg 'n getal wert in pixel in, of 'n waarde met geldige CSS eenheid (px, %, in, cm, mm, em, ex, pt, of pc).","unavailable":"%1<span class=\"cke_accessibility\">, nie beskikbaar nie</span>","keyboard":{"8":"Backspace","13":"Enter","16":"Skuif","17":"Ctrl","18":"Alt","32":"Spasie","35":"Einde","36":"Tuis","46":"Verwyder","112":"F1","113":"F2","114":"F3","115":"F4","116":"F5","117":"F6","118":"F7","119":"F8","120":"F9","121":"F10","122":"F11","123":"F12","124":"F13","125":"F14","126":"F15","127":"F16","128":"F17","129":"F18","130":"F19","131":"F20","132":"F21","133":"F22","134":"F23","135":"F24","224":"Bevel"},"keyboardShortcut":"Sleutel kombenasie","optionDefault":"Verstek"},"about":{"copy":"Kopiereg © $1. Alle regte voorbehou.","dlgTitle":"Meer oor CKEditor 4","moreInfo":"Vir lisensie-informasie, besoek asb. ons webwerf:"},"basicstyles":{"bold":"Vet","italic":"Skuins","strike":"Deurgestreep","subscript":"Onderskrif","superscript":"Bo-skrif","underline":"Onderstreep"},"notification":{"closed":"Notification closed."},"toolbar":{"toolbarCollapse":"Verklein werkbalk","toolbarExpand":"Vergroot werkbalk","toolbarGroups":{"document":"Dokument","clipboard":"Knipbord/Undo","editing":"Verander","forms":"Vorms","basicstyles":"Eenvoudige Styl","paragraph":"Paragraaf","links":"Skakels","insert":"Toevoeg","styles":"Style","colors":"Kleure","tools":"Gereedskap"},"toolbars":"Werkbalke"},"clipboard":{"copy":"Kopiëer","copyError":"U leser se sekuriteitsinstelling belet die kopiëringsaksie. Gebruik die sleutelbordkombinasie (Ctrl/Cmd+C).","cut":"Uitsnei","cutError":"U leser se sekuriteitsinstelling belet die outomatiese uitsnei-aksie. Gebruik die sleutelbordkombinasie (Ctrl/Cmd+X).","paste":"Byvoeg","pasteNotification":"Druk %1 om by te voeg. You leser ondersteun nie die toolbar knoppie of inoud kieslysie opsie nie. ","pasteArea":"Area byvoeg","pasteMsg":"Voeg jou inhoud in die gebied onder by en druk OK"},"indent":{"indent":"Vergroot inspring","outdent":"Verklein inspring"},"fakeobjects":{"anchor":"Anker","flash":"Flash animasie","hiddenfield":"Verborge veld","iframe":"IFrame","unknown":"Onbekende objek"},"link":{"acccessKey":"Toegangsleutel","advanced":"Gevorderd","advisoryContentType":"Aanbevole inhoudstipe","advisoryTitle":"Aanbevole titel","anchor":{"toolbar":"Anker byvoeg/verander","menu":"Anker-eienskappe","title":"Anker-eienskappe","name":"Ankernaam","errorName":"Voltooi die ankernaam asseblief","remove":"Remove Anchor"},"anchorId":"Op element Id","anchorName":"Op ankernaam","charset":"Karakterstel van geskakelde bron","cssClasses":"CSS klasse","download":"Force Download","displayText":"Display Text","emailAddress":"E-posadres","emailBody":"Berig-inhoud","emailSubject":"Berig-onderwerp","id":"Id","info":"Skakel informasie","langCode":"Taalkode","langDir":"Skryfrigting","langDirLTR":"Links na regs (LTR)","langDirRTL":"Regs na links (RTL)","menu":"Wysig skakel","name":"Naam","noAnchors":"(Geen ankers beskikbaar in dokument)","noEmail":"Gee die e-posadres","noUrl":"Gee die skakel se URL","noTel":"Please type the phone number","other":"<ander>","phoneNumber":"Phone number","popupDependent":"Afhanklik (Netscape)","popupFeatures":"Eienskappe van opspringvenster","popupFullScreen":"Volskerm (IE)","popupLeft":"Posisie links","popupLocationBar":"Adresbalk","popupMenuBar":"Spyskaartbalk","popupResizable":"Herskaalbaar","popupScrollBars":"Skuifbalke","popupStatusBar":"Statusbalk","popupToolbar":"Werkbalk","popupTop":"Posisie bo","rel":"Relationship","selectAnchor":"Kies 'n anker","styles":"Styl","tabIndex":"Tab indeks","target":"Doel","targetFrame":"<raam>","targetFrameName":"Naam van doelraam","targetPopup":"<opspringvenster>","targetPopupName":"Naam van opspringvenster","title":"Skakel","toAnchor":"Anker in bladsy","toEmail":"E-pos","toUrl":"URL","toPhone":"Phone","toolbar":"Skakel invoeg/wysig","type":"Skakelsoort","unlink":"Verwyder skakel","upload":"Oplaai"},"list":{"bulletedlist":"Ongenommerde lys","numberedlist":"Genommerde lys"},"undo":{"redo":"Oordoen","undo":"Ontdoen"}};
/DJModels-0.0.6-py3-none-any.whl/djmodels/db/backends/base/creation.py | import os
import sys
from io import StringIO

from djmodels.apps import apps
from djmodels.conf import settings
from djmodels.core import serializers
from djmodels.db import router

# The prefix to put on the default database name when creating
# the test database.
TEST_DATABASE_PREFIX = 'test_'


class BaseDatabaseCreation:
    """
    Encapsulate backend-specific differences pertaining to creation and
    destruction of the test database.
    """
    def __init__(self, connection):
        self.connection = connection

    @property
    def _nodb_connection(self):
        """
        Used to be defined here, now moved to DatabaseWrapper.
        """
        return self.connection._nodb_connection

    def log(self, msg):
        sys.stderr.write(msg + os.linesep)

    def create_test_db(self, verbosity=1, autoclobber=False, serialize=True, keepdb=False):
        """
        Create a test database, prompting the user for confirmation if the
        database already exists. Return the name of the test database created.
        """
        # Don't import djmodels.core.management if it isn't needed.
        from djmodels.core.management import call_command

        test_database_name = self._get_test_db_name()

        if verbosity >= 1:
            action = 'Creating'
            if keepdb:
                action = "Using existing"

            self.log('%s test database for alias %s...' % (
                action,
                self._get_database_display_str(verbosity, test_database_name),
            ))

        # We could skip this call if keepdb is True, but we instead
        # give it the keepdb param. This is to handle the case
        # where the test DB doesn't exist, in which case we need to
        # create it, then just not destroy it. If we instead skip
        # this, we will get an exception.
        self._create_test_db(verbosity, autoclobber, keepdb)

        self.connection.close()
        settings.DATABASES[self.connection.alias]["NAME"] = test_database_name
        self.connection.settings_dict["NAME"] = test_database_name

        # We report migrate messages at one level lower than that requested.
        # This ensures we don't get flooded with messages during testing
        # (unless you really ask to be flooded).
        call_command(
            'migrate',
            verbosity=max(verbosity - 1, 0),
            interactive=False,
            database=self.connection.alias,
            run_syncdb=True,
        )

        # We then serialize the current state of the database into a string
        # and store it on the connection. This slightly horrific process is so people
        # who are testing on databases without transactions or who are using
        # a TransactionTestCase still get a clean database on every test run.
        if serialize:
            self.connection._test_serialized_contents = self.serialize_db_to_string()

        call_command('createcachetable', database=self.connection.alias)

        # Ensure a connection for the side effect of initializing the test database.
        self.connection.ensure_connection()

        return test_database_name

    def set_as_test_mirror(self, primary_settings_dict):
        """
        Set this database up to be used in testing as a mirror of a primary
        database whose settings are given.
        """
        self.connection.settings_dict['NAME'] = primary_settings_dict['NAME']

    def serialize_db_to_string(self):
        """
        Serialize all data in the database into a JSON string.

        Designed only for test runner usage; will not handle large
        amounts of data.
        """
        # Build list of all apps to serialize
        from djmodels.db.migrations.loader import MigrationLoader
        loader = MigrationLoader(self.connection)
        app_list = []
        for app_config in apps.get_app_configs():
            if (
                app_config.models_module is not None and
                app_config.label in loader.migrated_apps and
                app_config.name not in settings.TEST_NON_SERIALIZED_APPS
            ):
                app_list.append((app_config, None))

        # Make a function to iteratively return every object
        def get_objects():
            for model in serializers.sort_dependencies(app_list):
                if (model._meta.can_migrate(self.connection) and
                        router.allow_migrate_model(self.connection.alias, model)):
                    queryset = model._default_manager.using(self.connection.alias).order_by(model._meta.pk.name)
                    yield from queryset.iterator()

        # Serialize to a string
        out = StringIO()
        serializers.serialize("json", get_objects(), indent=None, stream=out)
        return out.getvalue()

    def deserialize_db_from_string(self, data):
        """
        Reload the database with data from a string generated by
        the serialize_db_to_string() method.
        """
        data = StringIO(data)
        for obj in serializers.deserialize("json", data, using=self.connection.alias):
            obj.save()

    def _get_database_display_str(self, verbosity, database_name):
        """
        Return display string for a database for use in various actions.
        """
        return "'%s'%s" % (
            self.connection.alias,
            (" ('%s')" % database_name) if verbosity >= 2 else '',
        )

    def _get_test_db_name(self):
        """
        Internal implementation - return the name of the test DB that will be
        created. Only useful when called from create_test_db() and
        _create_test_db() and when no external munging is done with the 'NAME'
        settings.
        """
        if self.connection.settings_dict['TEST']['NAME']:
            return self.connection.settings_dict['TEST']['NAME']
        return TEST_DATABASE_PREFIX + self.connection.settings_dict['NAME']

    def _execute_create_test_db(self, cursor, parameters, keepdb=False):
        cursor.execute('CREATE DATABASE %(dbname)s %(suffix)s' % parameters)

    def _create_test_db(self, verbosity, autoclobber, keepdb=False):
        """
        Internal implementation - create the test db tables.
        """
        test_database_name = self._get_test_db_name()
        test_db_params = {
            'dbname': self.connection.ops.quote_name(test_database_name),
            'suffix': self.sql_table_creation_suffix(),
        }
        # Create the test database and connect to it.
        with self._nodb_connection.cursor() as cursor:
            try:
                self._execute_create_test_db(cursor, test_db_params, keepdb)
            except Exception as e:
                # if we want to keep the db, then no need to do any of the below,
                # just return and skip it all.
                if keepdb:
                    return test_database_name

                self.log('Got an error creating the test database: %s' % e)
                if not autoclobber:
                    confirm = input(
                        "Type 'yes' if you would like to try deleting the test "
                        "database '%s', or 'no' to cancel: " % test_database_name)
                if autoclobber or confirm == 'yes':
                    try:
                        if verbosity >= 1:
                            self.log('Destroying old test database for alias %s...' % (
                                self._get_database_display_str(verbosity, test_database_name),
                            ))
                        cursor.execute('DROP DATABASE %(dbname)s' % test_db_params)
                        self._execute_create_test_db(cursor, test_db_params, keepdb)
                    except Exception as e:
                        self.log('Got an error recreating the test database: %s' % e)
                        sys.exit(2)
                else:
                    self.log('Tests cancelled.')
                    sys.exit(1)

        return test_database_name

    def clone_test_db(self, suffix, verbosity=1, autoclobber=False, keepdb=False):
        """
        Clone a test database.
        """
        source_database_name = self.connection.settings_dict['NAME']

        if verbosity >= 1:
            action = 'Cloning test database'
            if keepdb:
                action = 'Using existing clone'
            self.log('%s for alias %s...' % (
                action,
                self._get_database_display_str(verbosity, source_database_name),
            ))

        # We could skip this call if keepdb is True, but we instead
        # give it the keepdb param. See create_test_db for details.
        self._clone_test_db(suffix, verbosity, keepdb)

    def get_test_db_clone_settings(self, suffix):
        """
        Return a modified connection settings dict for the n-th clone of a DB.
        """
        # When this function is called, the test database has been created
        # already and its name has been copied to settings_dict['NAME'] so
        # we don't need to call _get_test_db_name.
        orig_settings_dict = self.connection.settings_dict
        return {**orig_settings_dict, 'NAME': '{}_{}'.format(orig_settings_dict['NAME'], suffix)}

    def _clone_test_db(self, suffix, verbosity, keepdb=False):
        """
        Internal implementation - duplicate the test db tables.
        """
        raise NotImplementedError(
            "The database backend doesn't support cloning databases. "
            "Disable the option to run tests in parallel processes.")

    def destroy_test_db(self, old_database_name=None, verbosity=1, keepdb=False, suffix=None):
        """
        Destroy a test database, prompting the user for confirmation if the
        database already exists.
        """
        self.connection.close()
        if suffix is None:
            test_database_name = self.connection.settings_dict['NAME']
        else:
            test_database_name = self.get_test_db_clone_settings(suffix)['NAME']

        if verbosity >= 1:
            action = 'Destroying'
            if keepdb:
                action = 'Preserving'
            self.log('%s test database for alias %s...' % (
                action,
                self._get_database_display_str(verbosity, test_database_name),
            ))

        # if we want to preserve the database
        # skip the actual destroying piece.
        if not keepdb:
            self._destroy_test_db(test_database_name, verbosity)

        # Restore the original database name
        if old_database_name is not None:
            settings.DATABASES[self.connection.alias]["NAME"] = old_database_name
            self.connection.settings_dict["NAME"] = old_database_name

    def _destroy_test_db(self, test_database_name, verbosity):
        """
        Internal implementation - remove the test db tables.
        """
        # Remove the test database to clean up after
        # ourselves. Connect to the previous database (not the test database)
        # to do so, because it's not allowed to delete a database while being
        # connected to it.
        with self.connection._nodb_connection.cursor() as cursor:
            cursor.execute("DROP DATABASE %s"
                           % self.connection.ops.quote_name(test_database_name))

    def sql_table_creation_suffix(self):
        """
        SQL to append to the end of the test table creation statements.
        """
        return ''

    def test_db_signature(self):
        """
        Return a tuple with elements of self.connection.settings_dict (a
        DATABASES setting value) that uniquely identify a database
        accordingly to the RDBMS particularities.
        """
        settings_dict = self.connection.settings_dict
        return (
            settings_dict['HOST'],
            settings_dict['PORT'],
            settings_dict['ENGINE'],
            self._get_test_db_name(),
        )
/Happypanda-1.1.zip/Happypanda-1.1/CHANGELOG.rst | *Happypanda v1.1*
- Fixes
- Fixed HP settings unusable without internet connection
*Happypanda v1.0*
- New stuff
- New GUI look
- New helpful color widgets added to ``Settings -> Visual``
[rachmadaniHaryono]
- Gallery Contextmenu:
- Added ``Set rating`` to quickly set gallery rating
- Added ``Lookup Artist`` to open a new tab on the preferred site
with the artist's galleries
- Added ``Reset read count`` under ``Advanced``
- Gallery Lists are now included when exporting gallery data
- New sorting option: Gallery Rating
- It is now possible to also append tags to galleries instead of
replacing them when editing
- New gallery download source: ``asmhentai.com`` [rachmadaniHaryono]
- New `special namespaced
tag <https://github.com/Pewpews/happypanda/wiki/Gallery-Searching#special-namespaced-tags>`__:
``URL``
- Use like this: ``url:none`` or ``url:g.e-hentai``
- Many quality of life changes
- Changed stuff
- ``g.e-hentai`` to ``e-hentai``
- Old URLs will automatically be converted to new on metadata
fetch
- Displaying rating on galleries is now optional
- Improved search history
- Improved gallery downloader (now very reliable)
[rachmadaniHaryono]
- Galleries will automatically be rated 5 stars on favorite
- Gallery List edit popup will now appear in the middle of the
application
- Added a way to relogin into website
- Fixes
- E-Hentai login & gallery downloading
- ``date added`` metadata wasn't included when exporting gallery
data
- ``last read`` metadata wasn't included when importing gallery data
- Backup database name would get unusually long [rachmadaniHaryono]
- Fixed HDoujin ``info.txt`` parsing
- Newly downloaded galleries would sometimes cause a crash
- Attempting to download exhentai links without being logged in
would cause a crash
- Using the random gallery opener would in rare cases cause a crash
- Moving a gallery would cause a crash from a raised PermissionError
exception
- Fetching metadata for galleries would return multiple unrelated
galleries to choose among
- Fetching metadata for galleries with a colored cover whose gallery
source is an archive would sometimes cause a crash
- Galleries with an empty tag field wouldn't show up on a
``tag:none`` filter search
- Gallery Deleted popup would appear when deleting gallery files
from the application
- Attempting to download removed galleries would cause a crash
- Some gallery importing issues
*Happypanda v0.30*
- Someone finally convinced me into adding star ratings
- *Note:* Ratings won't be fetched from EH since I find them
useless... Though I might make it an option later on.
- External viewer icon on galleries has been removed in favor of
this
- Visual make-over
- Improved how thumbnails are loaded in gridview
- Moving files into a monitored folder will now automatically add the
galleries into your inbox
- Added the following special namespaced tags:
- ``path:none`` to filter galleries that have a broken source
- ``rating:integer`` to filter galleries that have been rated
``integer``
- read more about them
`here <https://github.com/Pewpews/happypanda/wiki/Gallery-Searching>`__
- Updated DB to version 0.26
- Added ``Rating`` metadata
- Fixed bugs:
- Attempting to add galleries on a fresh install was causing an
exception
- Moving files into a monitored folder, and then accepting the
pop-up would cause an exception
*Happypanda v0.29*
- Improved stability and performance
- Shortened startup time
- Galleries are now added dynamically
- New feature: Tabs
- New inbox tab for new gallery additions
- Checking for duplicates will also make a new tab
- Gallery deletion will now process smoothly
- It is now possible to edit multiple galleries
- Type and Langauge in metadata popup window are now clickable to issue
a search
- Updated DB to version 0.25
- Added views in series table
- Visual changes in gridview
- Added a recently added indicator
- Gallery filetypes will now be displayed with text
- Added new options in settings
- Removed option: autoadd scanned galleries
- Fixed bugs:
- Fixed some metadata fetchings bugs
- Fixed database import and export issues
- Closing gallery metadata popup window caused an exception
- Fetching metadata with no internet connection caused an exception
- Invalid folders/archives were being picked up by the monitor
- Fixed default language issues
- Metadata would sometimes fail when doing a filesearch
- Thumbnail cache dir was not being cleared
- Adding from directory was not possible with single gallery add
method
*Happypanda v0.28.1*
- Fixed bugs:
- Fixed typo in external viewer args
- Fixed regex not working when in namespace
- Fixed thumbnail generation causing an unhandled exception
- Moved directories kept their old path
- Fixed auto metadata fetcher failing when mixing galleries with
colored covers and galleries with greyscale covers
- Gallery Metadata window wouldn't stay open
- Fixed a DB bug causing all kinds of errors, including:
- Editing a gallery while fetching its metadata would cause an
exception
- Closing the gallery dialog while fetching metadata would cause an
exception
*Happypanda v0.28*
- Improved perfomance of grid view significantly
- Galleries are now draggable
- It is now possible to add galleries to a list by dragging them to
the list
- Improved metadata fetching accuracy from EH
- Improved gallery lists with new options
- It is now possible to enable several search options on a per
gallery-list basis
- A new *Enforce* option to enforce the gallerylist filter
- Improved gallery search
- New special namespaced tags: ``read_count``, ``date_added`` and
``last_read``
- Read more about them in the gallery searching guide
- New ``<`` less than and ``>`` greater than operator to be used
with some special namespaced tags
- Read about it in the special namespaced tags section in the
gallery searching guide
- Brought back the old way of gaining access to EX
- Only to be used if you can't gain access to EX by logging in
normally
- Added ability to specify arguments sent to viewer in settings (in the
``Advanced`` section)
- If when opening a gallery only the first image was viewable by
your viewer, try change the arguments sent to the viewer
- Updated the database to version 0.24 with the addition of new
gallerylist fields
- Moved regex search option to searchbar
- Added grid spacing option in settings (``Visual->Grid View``)
- Added folder and file extensions ignoring in settings
(``Application->Ignore``)
- Folder and file extensions ignoring will work for the directory
monitor and *Add gallery..* and *Populate from folder* gallery
adding methods
- Added new default language: Chinese
- Improved and fixed URL parser in gallery-downloader
- Custom languages will now be parsed from filenames along with the
default languages
- Tags are now sorted alphabetically everywhere
- Gallerylists in contextmenu are also now sorted
- The reason why metadata fetching failed is now shown in the
failed-to-get-metadata-gallery popup
- The current search term will now be kept when switching
between views
- Disabled some tray messages on linux to prevent crash
- The current gallerylist context will now be shown on the statusbar
- The keys ``del`` and ``shift + del`` are now bound to gallery
deletion
- Added *exclude/include in auto metadata fetcher* in contextmenu for
selection
- Bug fixes:
- No thumbnails were created for images with incorrect extensions
(namely png images with .jpg extension)
- Only accounts with access to EX were able to login
- Some filesystem events were not being detected
- Name parser was not parsing languages
- Some gallery attributes were not being added to the db on initial
gallery creation
- Attempting to fetch metadata while an instance of auto metadata
fetcher was already running caused an exception
- Gallery wasn't removed in view when removing from the
duplicate-galleries popup
- Other minor bugs
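The ``<`` less-than and ``>`` greater-than operators described in the v0.28 notes above amount to numeric comparisons against a gallery field. A hypothetical sketch of how such a term could be evaluated (this is an illustration of the documented behaviour, not Happypanda's actual search implementation):

```python
import operator

def term_matches(term, value):
    # Hypothetical evaluator for special namespaced terms such as
    # "read_count:<5" or "read_count:>10"; plain "read_count:3"
    # falls back to an equality check.
    ops = {'<': operator.lt, '>': operator.gt}
    _, _, rhs = term.partition(':')
    if rhs[:1] in ops:
        return ops[rhs[0]](value, int(rhs[1:]))
    return value == int(rhs)
```

For example, a gallery read 3 times matches ``read_count:<5`` but not ``read_count:>10``.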
*Happypanda v0.27*
- Many visual changes
- Including new ribbon indicating gallery type in gridview
- New sidebar widget:
- New feature: Gallery lists
- New feature: Artists list
- Moved *NS & Tags* treelist from settings to sidebar widget
- Metadata fetcher:
- Galleries with multiple hits found will now come last in the
fetching process
- Added fallback system to fetch metadata from other sources than EH
- Currently supports panda.chaika.moe
- Gallery downloader should now be more tolerant of mistakes in URLs
- Added a "gallery source is missing" indicator in grid view
- Removed EH member\_id and pass\_hash in favor for EH login method
- Added new sort option: *last read*
- Added option to exclude/include gallery from auto metadata fetcher in
the contextmenu
- Added general key shortcuts (read about the not so obvious shortcuts
`here <https://github.com/Pewpews/happypanda/wiki/Keyboard-Shortcuts>`__)
- Added support for new metafile: *HDoujin downloader*'s default
``info.txt`` file
- Added support for panda.chaika.moe URLs when fetching metadata
- Updated database to version 0.23:
- Gallery lists addition
- New unique indexes in some tables
- Thumbnail paths are now relative (removing the need to rebuild
thumbs when moving Happypanda folder)
- Settings:
- Added option to force support for high DPI displays
- Added option to control the gallery size in grid view
- Enabled most *Gallery* options in the *Visual* section for OSX
- Added options to customize gallery type ribbon colors
- Added options to set default gallery values
- Added a way to add custom languages in settings
- Added option to send deleted files to recycle bin
- Added option to hide the sidebar widget on startup
- Bug fixes:
- Fixed a bug causing some external viewers to only be able to view
the first image
- Fixed metadata disappearance bug (hopefully, for real this time!)
- Fixed decoding issues preventing some galleries from getting
imported
- Fixed lots of critical database issues requiring a rebuild for
updating users
- Fixed gallery downloading from g.e-hentai
- Fixed bug causing "Show in library" to not work properly
- Fixed a bug causing a hang while fetching metadata
- Fixed a bug causing autometadata fetcher to sometimes fail
fetching for some galleries
- Fixed hang when checking for duplicates
- Fixed database rebuild issues
- Potentially fixed a bug preventing archives from being imported,
courtesy of KuroiKitsu
- Many other minor bugs
*Happypanda v0.26*
- Startup is now slightly faster
- New redesigned gallery metadata window!
- New chapter view in the metadata window
- Artist field is now clickable to issue a search for galleries with
same artist
- Some GUI changes
- New advanced gallery search **(make sure to read the search guide
found in ``Settings -> About -> Search Guide``)**
- Case sensitive searching
- Whole terms match searching
- Terms excluding
- New special namespaced tags (Read about them in
``Settings -> About -> Search Guide``)
- New import/export database feature found in
``Settings -> About -> Database``
- Added new column in ``Skipped paths`` window to show what reason
caused a file to be skipped
- Gallery downloader
- Added new batch URLs window to gallery downloader
- Gallery downloading from ``panda.chaika.moe`` is now using its new
API
- Added context menu's to download items
- Added download progress on download items
- Doubleclicking on finished download items will open its containing
folder
- Added autocomplete on the artist field in gallery edit dialog
- Activated the ``last read`` attribute on galleries
- Improved hash generation
- Introducing metafiles:
- Files containing gallery metadata in the same folder/archive are now
detected on import
- Only supports `eze <https://github.com/dnsev-h/eze>`__'s
``info.json`` files for now
- Settings
- Moved a lot of options around. **Note: Some options will be reset**
- Reworded some options and fixed typos
- Enabled the ``Database`` tab in *About* section with import/export
database feature
- Updated the database to version 0.22
- Database will now be backed up before upgrading
- Clicking on the tray icon ballon will now activate Happypanda
- Thumbnail regenerating
- Added confirmation popup when about to regenerate thumbnails
- Application restart is no longer required after regenerating
thumbnails
- Added confirmation popup asking about if the thumbnail cache
should be cleaned before regenerating
- Renamed ``Random Gallery Opener`` to ``Open random gallery`` and also
moved it to the Gallery menu on the toolbar
- ``Open random gallery`` will now only pick a random gallery in
current view.
- *E.g. switching to the favorite view will make it pick a random
gallery among the favorites*
- Fixed bugs:
- Fixed a bug causing archives downloaded from g.e/ex to fail when
trying to add to library
- Fixed a bug where fetching galleries from the database would
sometimes throw an exception
- Fixed a bug causing people running from source to never see the
new update notification
- Fixed some popup blur bug
- Fixed an annoyance where the text cursor would always move to the
end when searching
- Fixed a bug where ``Show in Folder`` and ``Open folder/archive``
in gallery context menu was doing the same thing
- Fixed a bug where tags fetched from chaika included underscores
- Fixed bug where the notification widget would sometimes not show
messages
- Fixed bug where chapters added to gallery with directory source
would not open correctly
*Happypanda v0.25*
- Added *Show in folder* entry in gallery contextmenu
- Gallery popups
- A contextmenu will now be shown when you rightclick a gallery
- Added *Skip* button in the metadata gallery chooser popup (the one
asking you which gallery you want to extract metadata from)
- The text in metadata gallery chooser popups will now wrap
- Added tooltips displaying title and artist when hovering galleries
in some popups
- Settings
- A new button allowing you to recreate your thumbnail cache is now
in *Settings* -> *Advanced* -> *Gallery*
- Added new tab *Downloader* in *Web* section
- Renamed *General* tab in *Web* section to *Metadata*
- Some options in settings will now show a tooltip explaining the
option on hover
- You can now go back to previous or to next search terms with the two
new buttons beside the search bar (hidden until you actually search
something)
    - Back and Forward keys have been bound to these two buttons (very OS
dependent but something like ``ALT + LEFT ARROW`` etc.) Back and
Forward buttons on your mouse should also probably work (*shrugs*)
- Added *Use current gallery link* checkbox option in *Web* section
- Toolbar
- Renamed *Misc* to *Tools*
- New *Scan for new galleries* entry in *Gallery*
- New *Gallery Downloader* entry in *Tools*
- Gallery downloading
- Supports archive and torrent downloading
- archives will be automatically imported while torrents will be
sent to your torrent client
- Currently supports ex/g.e gallery urls and panda.chaika.moe
gallery/archive urls
- Note: downloading archives from ex/g.e will be handled the same
way as if you did it in your browser, i.e. it will cost you
GP/credits.
- Tray icon
- You can now manually check for a new update by right clicking on
the tray icon
    - Triggering the tray icon, i.e. clicking on it, will now activate
      (show) the Happypanda window
- Fixed bugs:
- Fixed a bug where skipped galleries/paths would get moved
- Fixed a bug where gallery archives/folders containing images with
``.jpeg`` and/or capitalized (``.JPG``, etc.) extensions were
treated as invalid gallery sources, or causing the program to
behave very weird if they managed to get imported somehow
- Fixed a bug where you couldn't search with the Regex option turned
on
- Fixed a bug where changing gallery covers would fail if the
previous cover was not deleted or found.
- Fixed a bug where non-existent monitored folders were not detected
- Fixed a bug in the user settings (*settings.ini*) parsing, hence
the reset
- Fixed other minor misc. bugs
*Happypanda v0.24.1*
- Fixed bugs:
- Removing a gallery and its files should now work
    - Popups were staying on top of all windows
*Happypanda v0.24*
- Mostly gui fixes/improvements
- Changed toolbar style and icons
- Added new native spinners
- Added spinner for the metadata fetching process
- Added spinner for initial load
- Added spinner for DB activity
- Removed sort contextmenu and added it to the toolbar
- Removed some space around galleries in grid view
- Added kinetic scrolling when scrolling with middlemouse button
- New DB Overview window and tab in settings dialog
- you can now see all namespaces and tags in the
``Namespace and Tags`` tab
- Pressing the return-key will now open selected galleries
- New options in settings dialog
- Make extracting archives before opening optional in
``Application -> General``
- Open chapters sequentially or all at once in
``Application -> General``
- Added a confirmation when closing while there is still DB activity to
avoid data loss
- Added log file rotation
- When happypanda.log reaches ``10 mb`` a new file will be made
(rotating between 3 files)
- Fixed bugs:
- Temporarily fixed a critical bug where galleries wouldn't load
- Fixed a bug where the tray icon would stay even after closing the
application
- Fixed a bug where clicking on a tag with no namespace in the
      Gallery Metadata Popup would search the tag with a blank namespace
- Fixed a minor bug where when opening the settings dialog a small
window would appear first in a split second
*Happypanda v0.23*
- Stability and performance increase for very large libraries
- Instant startup: Galleries are now lazily loaded
- Application now supports very large galleries (tested with 10k
galleries)
    - Gallery searching will now scale with the amount of galleries
      (meaning no freezes when searching)
- Same with adding new galleries.
- The gallery window appearing when you click on a gallery is now
interactable
- Clicking on a link will open it in your default browser
- Clicking on a tag will search for the tag
- Added some animation and a spinner
- Fixed bugs:
    - Fixed critical bug where selected galleries were not mapped
properly. (Which sometimes resulted in wrong galleries being
removed)
    - Fixed a bug where pressing CTRL + A to select all galleries would
      report that it had selected three times the total amount of
      galleries
    - Fixed a bug where the notificationbar would sometimes not hide
itself
- & other minor bugs
*Happypanda v0.22*
- Added support for .rar files.
- To enable rar support, specify the path to unrar in Settings ->
Application -> General. Follow the instructions for your OS.
- Fixed most (if not all) gallery importing issues
- Added a way to populate from archive.
- Note: Subfolders will always be treated as galleries when
populating from an archive.
- Fixed a bug where users who try Happypanda for the first time would
see the 'rebuilding galleries' dialog.
- & other misc. changes
*Happypanda v0.21*
- The application will now ask if you want to view skipped paths after
searching for galleries
- Added 'delete successful' in the notificationbar
- Bugfixes:
- Fixed critical bug: Could not open chapters
- If your gallery still won't open then please try re-adding the
gallery.
- Fixed bug: Covers for archives with no folder in-between were not
being found
- & other minor bugs
*Happypanda v0.20*
- Added support for recursive importing of galleries (applies to
archives)
- Directories in archives will now be noticed when importing
- Directories with archives as chapters will now be properly
imported
- Added drag and drop feature for directories and archives
- Galleries that were unsuccessful during gallery fetching will now be
displayed in a popup
- Added support for directory or archive ignoring
- Added support for changing gallery covers
- Added: move imported galleries to a specified folder feature
- Increased speed of Populate from folder and Add galleries...
- Improved title parser to now remove unnecessary whitespaces
- Improved gallery hashing to avoid unnecessary hashing
- Added 'Add archive' button in chapter dialog
- Popups will now center on parent window correctly
- It is now possible to move popups by leftclicking and dragging
- Added background blur effect when popups are shown
- The rebuild galleries popup will now show real progress
- Settings:
- Added new option: Treat subfolders as galleries
- Added new option: Move imported galleries
- Added new option: Scroll to new galleries (disabled)
- Added new option: Open random gallery chapters
- Added new option: Rename gallery source (disabled)
- Added new tab in Advanced section: Gallery
- Added new options: Gallery renamer (disabled)
- Added new tab in Application section: Ignore
- Enabled General tab in Application section
- Reenabled Display on gallery options
- Contextmenu:
    - When selecting multiple galleries, only options that apply to the
      selected galleries will be shown
    - It is now possible to favourite/unfavourite selected galleries
- Reenabled removing of selected galleries
- Added: Advanced and Change cover
- Updated database to version 0.2
- Bugfixes:
- Fixed critical bug: not being able to add chapters
- Fixed bug: removing a chapter would always remove the first
chapter
- Fixed bug: fetched metadata title and artist would not be
formatted correctly
- & other minor bugs
*Happypanda v0.19*
- Improved stability
- Updated and fixed auto metadata fetcher:
- Now twice as fast
- No more need to restart application because it froze
- Updated to support namespace fetching directly from the official
API
- Improved tag autocompletion in gallery dialog
- Added a system tray to notify you about events such as auto metadata
fetcher being done
- Sorting:
- Added a new sort option: Publication Date
- Added an indicator to the current sort option.
- Your current sort option will now be saved
- Increased precision of date added
- Settings:
- Added new options:
- Continue auto metadata fetcher from where it left off
        - Use Japanese title
- Enabled option:
- Auto add new galleries on startup
- Removed options:
- HTML Parsing or API
- Bugfixes:
- Fixed critical bug: Fetching metadata from exhentai not working
- Fixed critical bug: Duplicates were being created in database
- Fixed a bug causing the update checker to always fail.
*Happypanda v0.18*
- Greatly improved stability
- Added numbers to show how many galleries are left when fetching for
metadata
- Possibly fixed a bug causing the *"big changes are about to occur"*
popup to never disappear
- Fixed auto metadata fetcher (did not work before)
*Happypanda v0.17*
- Improved UI
- Improved stability
- Improved the toolbar
- Added a way to find duplicate galleries
- Added a random gallery opener
- Added a way to fetch metadata for all your galleries
- Added a way to automagically fetch metadata from g.e-/exhentai
- Fetching metadata is now safer, and should not get you banned
- Added a new sort option: Date added
- Added a place for gallery hashes in the database
- Added folder monitoring support
- You will now be informed when you rename, remove or add a gallery
source in one of your monitored folders
- The application will scan for new galleries in all of your
monitored folders on startup
- Added a new section in settings dialog: Application
- Added new options in settings dialog
- Enabled the 'General' tab in the Web section
- Bugfixes:
- Fixed a bug where you could only open the first chapter of a
gallery
- Fixed a bug causing the application to crash when populating new
galleries
    - Fixed some issues occurring when adding archive files
    - Fixed some issues occurring when editing galleries
- other small bugfixes
- Disabled gallery source type and external program viewer icons
because of memory leak (will be reenabled in a later version)
- Cleaned up some code
*Happypanda v0.16*
- A more proper way to search for namespace and tags is now available
- Added support for external image viewers
- Added support for CBZ
- The settings button will now open up a real settings dialog
- Tons of new options are now available in the settings dialog
- Restyled the grid view
- Restyled the tooltip to now show other metadata in grid view
- Added troubleshoot, regex and search guides
- Fixed bugs:
- Application crashing when adding a gallery
- Application crashing when refreshing
- Namespace & tags not being shown correctly
- & other small bugs
*Happypanda v0.15*
- More options are now available in contextmenu when rightclicking a
gallery
- It's now possible to add and remove chapters from a gallery
- Added a way to select multiple galleries
- More options are now available in contextmenu for selected
galleries
- Added more columns to tableview
- Language
- Link
- Chapters
- Tweaked the grid view to reduce the lag when scrolling
- Added 1 more way to add galleries
    - Already existing galleries will now be ignored
- Database will now try to auto update to newest version
- Updated Database to version 0.16 (breaking previous versions)
- Bugfixes
*Happypanda v0.14*
- New tableview. Switch easily between grid view and table view with
the new button beside the searchbar
- Now able to add and read ZIP archives (You don't need to extract
anymore).
- Added temp folder for when opening a chapter
- Changed icons to white icons
- Added tag autocomplete in series dialog
- Searchbar is now enabled for searching
- Autocomplete will complete series' titles
- Search for title or author
- Tag searching is only partially supported.
- Added sort options in contextmenu
- Title of series is now included in the 'Opening chapter' string
- Happypanda will now check for new version on startup
- Happypanda will now log errors.
- Added a --debug or -d option to create a detailed log
- Updated Database version to 0.15 (supports 0.14 & 0.13)
- Now with unique tag mappings
- A new metadata: times\_read
*Happypanda v0.13*
- First public release
| PypiClean |
/Instrumental-lib-0.7.zip/Instrumental-lib-0.7/instrumental/drivers/multimeters/hp.py
from enum import Enum
from . import Multimeter
from ..util import as_enum, check_enums
from ... import u, Q_
_INST_PARAMS = ['visa_address']
_INST_VISA_INFO = {
'HPMultimeter': ('HEWLETT-PACKARD', ['34401A'])
}
class TriggerSource(Enum):
bus = 0
immediate = imm = 1
ext = 2
external = 2
class MultimeterError(Exception):
pass
class HPMultimeter(Multimeter):
def _initialize(self):
self._meas_units = None
self._rsrc.write_termination = '\r\n'
self._rsrc.write('system:remote')
self._rsrc.write('trig:count 512')
def _handle_err_info(self, err_info):
code, msg = err_info.strip().split(',')
code = int(code)
if code != 0:
raise MultimeterError(msg[1:-1])
def _query(self, message):
resp, err_info = self._rsrc.query(message + ';:syst:err?').split(';')
self._handle_err_info(err_info)
return resp
def _write(self, message):
err_info = self._rsrc.query(message + ';:syst:err?')
self._handle_err_info(err_info)
def _make_rr_str(self, val):
if val in ('min', 'max', 'def'):
return val
else:
q = Q_(val)
return str(q.m_as(self._meas_units))
def _make_range_res_str(self, range, res):
if range is None and res is not None:
raise ValueError('You may only specify `res` if `range` is given')
strs = [self._make_rr_str(r) for r in (range, res) if r is not None]
return ','.join(strs)
def config_voltage_dc(self, range=None, resolution=None):
""" Configure the multimeter to perform a DC voltage measurement.
Parameters
----------
range : Quantity, 'min', 'max', or 'def'
Expected value of the input signal
resolution : Quantity, 'min', 'max', or 'def'
Resolution of the measurement, in measurement units
"""
self._meas_units = u.volts
self._write('func "volt:dc"')
if range is not None:
self._write('volt:range {}'.format(self._make_rr_str(range)))
if resolution is not None:
self._write('volt:res {}'.format(self._make_rr_str(resolution)))
def initiate(self):
self._write('init')
def fetch(self):
val_str = self._query('fetch?')
if ',' in val_str:
val = [float(s) for s in val_str.strip('\r\n,').split(',')]
else:
val = float(val_str.strip())
return Q_(val, self._meas_units)
def trigger(self):
"""Emit the software trigger
To use this function, the device must be in the 'wait-for-trigger' state and its
`trigger_source` must be set to 'bus'.
"""
self._write('*TRG')
def clear(self):
self._rsrc.write('\x03')
@property
def trigger_source(self):
src = self._query('trigger:source?')
return as_enum(TriggerSource, src.rstrip().lower())
@trigger_source.setter
@check_enums(src=TriggerSource)
def trigger_source(self, src):
self._write('trigger:source {}'.format(src.name))
@property
def npl_cycles(self):
func = self._query('func?').strip('\r\n"')
resp = self._query('{}:nplcycles?'.format(func))
return float(resp)
@npl_cycles.setter
def npl_cycles(self, value):
if value not in (0.02, 0.2, 1, 10, 100):
raise ValueError("npl_cycles can only be 0.02, 0.2, 1, 10, or 100")
func = self._query('func?').strip('\r\n"')
self._write('{}:nplcycles {}'.format(func, value)) | PypiClean |
/DigiCore-1.1.3.tar.gz/DigiCore-1.1.3/digiCore/utils.py | import datetime
import time
from datetime import date
from dateutil.relativedelta import relativedelta
from loguru import logger
import pandas as pd
import chardet
from Crypto.Cipher import AES
import base64
import hashlib
BLOCK_SIZE = 16 # Bytes
class DateTool:
"""
    Date-related helper methods:
    1. get_month_delta_date: get the date range between now and a given number of months back
    2. get_date_list: get the list of dates between two dates
    3. get_date_cycle_list: get a list of date ranges between two dates, split by a given interval
"""
@classmethod
def get_month_delta_date(cls, month_delta=1, date_format="%Y%m%d"):
"""
Get the query period (start and end dates).
Defaults to the previous month.
:param month_delta: number of months back; 1 for last month, -1 for next month
:param date_format: date format string
:return: ("20230101", "20230201")
"""
last_month_today = date.today() - relativedelta(months=month_delta)
start_date = date(last_month_today.year, last_month_today.month, 1).strftime(date_format)
this_month_today = date.today() - relativedelta(months=month_delta - 1)
end_date = date(this_month_today.year, this_month_today.month, 1).strftime(date_format)
return start_date, end_date
@classmethod
def get_date_list(cls, begin_date: str, end_date: str, date_format="%Y%m%d") -> list:
"""
Get the list of every date between begin_date and end_date, inclusive.
:param begin_date: start date
:param end_date: end date
:param date_format: date format string
:return: list of date strings
"""
# Both endpoints inclusive
date_list = []
start_date = datetime.datetime.strptime(begin_date, date_format)
end_date = datetime.datetime.strptime(end_date, date_format)
while start_date <= end_date:
date_str = start_date.strftime(date_format)
date_list.append(date_str)
start_date += datetime.timedelta(days=1)
return date_list
@classmethod
def get_date_cycle_list(cls, start_date: str, end_date: str, cycle: int, date_format="%Y-%m-%d") -> list:
"""
Get a list of {start, end} date ranges.
:param start_date: start date
:param end_date: end date
:param cycle: interval length in days
:param date_format: date format string
:return: list of date-range dicts
"""
date_list = []
start_date = datetime.datetime.strptime(start_date, date_format)
end_date = datetime.datetime.strptime(end_date, date_format)
while True:
date = {}
middle_date = start_date + datetime.timedelta(days=cycle)
# Reached the end date
if middle_date >= end_date:
date["start"] = start_date
date["end"] = end_date
date_list.append(date)
break
else:
date["start"] = start_date
date["end"] = middle_date
date_list.append(date)
start_date = middle_date
return date_list
@classmethod
def get_task_date(cls, days, date_format="%Y-%m-%d"):
"""
Get today's date and the date a given number of days before today.
:param days: number of days back
:param date_format: date format string
"""
end_date = time.strftime(date_format, time.localtime())
offset = datetime.timedelta(days=-int(days))
start_date = (datetime.datetime.now() + offset).strftime(date_format)
return start_date, end_date
@classmethod
def get_last_month_date(self, date_format='%Y-%m-%d'):
"""
Get the first and last day of the previous month.
:return: (first_day, last_day) as formatted date strings
"""
# Today's date
today = datetime.date.today()
# First day of the previous month
first_day_of_last_month = today.replace(day=1) - datetime.timedelta(days=1)
first_day_of_last_month = first_day_of_last_month.replace(day=1)
# Last day of the previous month
last_day_of_last_month = today.replace(day=1) - datetime.timedelta(days=1)
begin_date, end_date = first_day_of_last_month.strftime(date_format), last_day_of_last_month.strftime(
date_format)
return begin_date, end_date
class TableTool:
"""
Utilities for reading and writing Excel/CSV data with pandas.
"""
@classmethod
def load_excel_data(cls, file_type: str, file_path: str, sheet=None, skiprows: int = 0):
"""
:param file_type: file type, either 'excel' or 'csv'
:param file_path: file path
:param sheet: sheet name
:param skiprows: number of rows to skip at the start of the file
:return: DataFrame with the loaded data
"""
with open(file_path, 'rb') as f:
data = chardet.detect(f.read())
encoding = data['encoding']
try:
if file_type == 'excel':
df = pd.read_excel(file_path, sheet, skiprows=skiprows)
elif file_type == 'csv':
df = pd.read_csv(file_path, encoding=encoding, skiprows=skiprows)
else:
logger.error(f"Invalid file type: {file_type}. Expected 'excel' or 'csv'.")
return
except Exception as e:
logger.error(f'error file:{e.__traceback__.tb_frame.f_globals["__file__"]}')
logger.error(f'error line:{e.__traceback__.tb_lineno}')
logger.error(f'error message:{e.args}')
return
return df
class EncryptTool:
"""
Encryption utilities.
"""
@classmethod
def do_pad(cls, text):
return text + (BLOCK_SIZE - len(text) % BLOCK_SIZE) * \
chr(BLOCK_SIZE - len(text) % BLOCK_SIZE)
@classmethod
def aes_encrypt(cls, key, data):
"""
AES encryption in ECB mode.
:param key: encryption key
:param data: string to encrypt (plaintext)
:return: ciphertext, Base64-encoded
"""
key = key.encode('utf-8')
# Pad the string to a multiple of the block size
data = cls.do_pad(data)
cipher = AES.new(key, AES.MODE_ECB)
# Encryption yields bytes; Base64-encode them and return as a string
result = cipher.encrypt(data.encode())
encode_str = base64.b64encode(result)
enc_text = encode_str.decode('utf-8')
return enc_text
@classmethod
def md5_encrypt(cls, text: str):
md = hashlib.md5()
md.update(text.encode('utf-8'))
return md.hexdigest()
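The padding rule in `do_pad` is PKCS#7-style: the pad character itself encodes the pad length. A stdlib-only sketch of the padding and MD5 parts (the AES call itself needs the third-party `Crypto` package, so it is left out here):

```python
import hashlib

BLOCK_SIZE = 16

def do_pad(text):
    # Append n copies of chr(n), where n pads the text to a block multiple
    n = BLOCK_SIZE - len(text) % BLOCK_SIZE
    return text + n * chr(n)

padded = do_pad("hello")
print(len(padded), ord(padded[-1]))  # 16 11: padded to one block, pad char is chr(11)

# md5_encrypt boils down to hashlib's hex digest
digest = hashlib.md5("abc".encode("utf-8")).hexdigest()
```

Note that a text whose length is already a multiple of the block size still gains a full block of padding, which is what lets a decryptor strip it unambiguously.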
class MsgTool:
@classmethod
def format_log_msg(cls, e):
"""
Format error information from an exception.
:param e: the caught exception
:return: dict with the error file, line and message
"""
logger.error(f'error file:{e.__traceback__.tb_frame.f_globals["__file__"]}')
logger.error(f'error line:{e.__traceback__.tb_lineno}')
logger.error(f'error message:{e.args}')
error_msg = {'error_file': e.__traceback__.tb_frame.f_globals["__file__"],
'error_line': e.__traceback__.tb_lineno,
'error_message': e.args}
return error_msg
@classmethod
def format_api_msg(cls, enum):
"""
Format an API response message.
:param enum: (code, msg) pair
:return: dict with 'code' and 'msg' keys
"""
return {"code": enum[0], "msg": enum[1]}
/Nikola-8.2.4-py3-none-any.whl/nikola/plugins/task/copy_files.py
# Copyright © 2012-2023 Roberto Alsina and others.
# Permission is hereby granted, free of charge, to any
# person obtaining a copy of this software and associated
# documentation files (the "Software"), to deal in the
# Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the
# Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice
# shall be included in all copies or substantial portions of
# the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""Copy static files into the output folder."""
import os
from nikola.plugin_categories import Task
from nikola import utils
class CopyFiles(Task):
"""Copy static files into the output folder."""
name = "copy_files"
def gen_tasks(self):
"""Copy static files into the output folder."""
kw = {
'files_folders': self.site.config['FILES_FOLDERS'],
'output_folder': self.site.config['OUTPUT_FOLDER'],
'filters': self.site.config['FILTERS'],
}
yield self.group_task()
for src in kw['files_folders']:
dst = kw['output_folder']
filters = kw['filters']
real_dst = os.path.join(dst, kw['files_folders'][src])
for task in utils.copy_tree(src, real_dst, link_cutoff=dst):
task['basename'] = self.name
task['uptodate'] = [utils.config_changed(kw, 'nikola.plugins.task.copy_files')]
yield utils.apply_filters(task, filters, skip_ext=['.html'])
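`gen_tasks` yields one doit task dict per file discovered under the source folders. A self-contained sketch of that generator pattern, without Nikola's `utils` helpers (the dict keys shown are the usual doit fields; the real `utils.copy_tree` also handles symlinks and filters):

```python
import os
import tempfile

def gen_copy_tasks(src, dst, basename="copy_files"):
    # Yield one doit-style task dict per file under src, mirroring how
    # gen_tasks above tags each task with a basename before yielding it
    for dirpath, _dirnames, filenames in os.walk(src):
        for name in filenames:
            source = os.path.join(dirpath, name)
            target = os.path.join(dst, os.path.relpath(source, src))
            yield {"basename": basename, "name": target,
                   "file_dep": [source], "targets": [target]}

src, dst = tempfile.mkdtemp(), tempfile.mkdtemp()
open(os.path.join(src, "style.css"), "w").close()
tasks = list(gen_copy_tasks(src, dst))
```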
/Dero-0.15.0-py3-none-any.whl/dero/manager/config/models/section.py
from typing import List, Union
import os
import warnings
from dero.mixins.repr import ReprMixin
from dero.manager.basemodels.container import Container
from dero.manager.config.models.config import ActiveFunctionConfig
from dero.manager.sectionpath.sectionpath import _strip_py
class ConfigSection(Container, ReprMixin):
repr_cols = ['name', 'config', 'items']
def __init__(self, configs: List[Union[ActiveFunctionConfig, 'ConfigSection']],
section_config: ActiveFunctionConfig=None, name: str=None):
self.config = section_config
self.items = configs
self.name = name
def __getattr__(self, item):
# Handle if trying to get this section's config
if item == self.config.name:
return self.config
return self.config_map[item]
def __dir__(self):
return self.config_map.keys()
@property
def items(self):
return self._items
@items.setter
def items(self, items):
self._items = items
self._set_config_map() # recreate config map on setting items
@property
def config_map(self):
return self._config_map
def _set_config_map(self):
config_map = {}
for config in self:
if config.name is None:
warnings.warn(f"Couldn't determine name of config {config}. Can't add to mapping.")
continue
config_map[config.name] = config
self._config_map = config_map
def update(self, d: dict=None, **kwargs):
if d is not None:
self.config.update(d)
self.config.update(kwargs)
@classmethod
def from_files(cls, basepath: str):
# Get section name by name of folder
name = os.path.basename(os.path.abspath(basepath))
# Config folder doesn't exist, return empty config
if not os.path.exists(basepath):
return cls([], name=name)
config_file_list = [file for file in next(os.walk(basepath))[2] if file.endswith('.py')]
# Ignore folders starting with . or _
config_section_list = [
folder for folder in next(os.walk(basepath))[1] if not any(
[folder.startswith(item) for item in ('_','.')]
)
]
# Special handling for section config
try:
config_file_list.remove('section.py')
section_config = ActiveFunctionConfig.from_file(os.path.join(basepath, 'section.py'), name=name)
except ValueError:
# Didn't find section config
section_config = None
configs = [
ActiveFunctionConfig.from_file(
os.path.join(basepath, file),
name=_strip_py(file)
) for file in config_file_list
]
# Recursively calling section creation to create individual config files
config_sections = [ConfigSection.from_files(os.path.join(basepath, folder)) for folder in config_section_list]
configs += config_sections
return cls(configs, section_config=section_config, name=name)
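`from_files` treats top-level `.py` files as configs and recurses into subfolders, skipping those whose names start with `_` or `.`. A minimal stdlib sketch of just that discovery step:

```python
import os
import tempfile

def discover(basepath):
    # Mirror the filtering in ConfigSection.from_files: .py files become
    # configs; folders starting with '_' or '.' are ignored
    files = [f for f in next(os.walk(basepath))[2] if f.endswith('.py')]
    sections = [
        d for d in next(os.walk(basepath))[1]
        if not any(d.startswith(c) for c in ('_', '.'))
    ]
    return files, sections

root = tempfile.mkdtemp()
open(os.path.join(root, "section.py"), "w").close()
open(os.path.join(root, "notes.txt"), "w").close()
os.mkdir(os.path.join(root, "sub"))
os.mkdir(os.path.join(root, "_hidden"))
files, sections = discover(root)
```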
/MindsDB-23.8.3.0.tar.gz/MindsDB-23.8.3.0/mindsdb/integrations/handlers/lightwood_handler/lightwood_handler/functions.py
import os
import json
import requests
import tempfile
import traceback
import dataclasses
from pathlib import Path
from datetime import datetime
import pandas as pd
from pandas.core.frame import DataFrame
import lightwood
from lightwood.api.types import ProblemDefinition, JsonAI
from mindsdb.utilities import log
from mindsdb.utilities.functions import mark_process
from mindsdb.integrations.libs.const import PREDICTOR_STATUS
from mindsdb.integrations.utilities.utils import format_exception_error
from mindsdb.interfaces.model.functions import (
get_model_records
)
from mindsdb.interfaces.storage import db
from mindsdb.interfaces.storage.fs import FileStorage, RESOURCE_GROUP
from mindsdb.interfaces.storage.json import get_json_storage
import mindsdb.utilities.profiler as profiler
from .utils import rep_recur, brack_to_mod, unpack_jsonai_old_args
def create_learn_mark():
if os.name == 'posix':
p = Path(tempfile.gettempdir()).joinpath('mindsdb/learn_processes/')
p.mkdir(parents=True, exist_ok=True)
p.joinpath(f'{os.getpid()}').touch()
def delete_learn_mark():
if os.name == 'posix':
p = Path(tempfile.gettempdir()).joinpath('mindsdb/learn_processes/').joinpath(f'{os.getpid()}')
if p.exists():
p.unlink()
@mark_process(name='learn')
@profiler.profile()
def run_generate(df: DataFrame, predictor_id: int, model_storage, args: dict = None):
model_storage.training_state_set(current_state_num=1, total_states=5, state_name='Generating problem definition')
json_ai_override = args.pop('using', {})
if 'dtype_dict' in json_ai_override:
args['dtype_dict'] = json_ai_override.pop('dtype_dict')
if 'problem_definition' in json_ai_override:
args = {**args, **json_ai_override['problem_definition']}
if 'timeseries_settings' in args:
for tss_key in [f.name for f in dataclasses.fields(lightwood.api.TimeseriesSettings)]:
k = f'timeseries_settings.{tss_key}'
if k in json_ai_override:
args['timeseries_settings'][tss_key] = json_ai_override.pop(k)
problem_definition = lightwood.ProblemDefinition.from_dict(args)
model_storage.training_state_set(current_state_num=2, total_states=5, state_name='Generating JsonAI')
json_ai = lightwood.json_ai_from_problem(df, problem_definition)
json_ai = json_ai.to_dict()
unpack_jsonai_old_args(json_ai_override)
json_ai_override = brack_to_mod(json_ai_override)
rep_recur(json_ai, json_ai_override)
json_ai = JsonAI.from_dict(json_ai)
model_storage.training_state_set(current_state_num=3, total_states=5, state_name='Generating code')
code = lightwood.code_from_json_ai(json_ai)
predictor_record = db.Predictor.query.with_for_update().get(predictor_id)
predictor_record.code = code
db.session.commit()
json_storage = get_json_storage(
resource_id=predictor_id
)
json_storage.set('json_ai', json_ai.to_dict())
@mark_process(name='learn')
@profiler.profile()
def run_fit(predictor_id: int, df: pd.DataFrame, model_storage) -> None:
try:
predictor_record = db.Predictor.query.with_for_update().get(predictor_id)
assert predictor_record is not None
predictor_record.data = {'training_log': 'training'}
predictor_record.status = PREDICTOR_STATUS.TRAINING
db.session.commit()
model_storage.training_state_set(current_state_num=4, total_states=5, state_name='Training model')
predictor: lightwood.PredictorInterface = lightwood.predictor_from_code(predictor_record.code)
predictor.learn(df)
db.session.refresh(predictor_record)
fs = FileStorage(
resource_group=RESOURCE_GROUP.PREDICTOR,
resource_id=predictor_id,
sync=True
)
predictor.save(fs.folder_path / fs.folder_name)
fs.push(compression_level=0)
predictor_record.data = predictor.model_analysis.to_dict()
# Get the training time for each tried model; this is only possible
# after training
fit_mixers = list(predictor.runtime_log[x] for x in predictor.runtime_log
if isinstance(x, tuple) and x[0] == "fit_mixer")
submodel_data = predictor_record.data.get("submodel_data", [])
# add training time to other mixers info
if submodel_data and fit_mixers and len(submodel_data) == len(fit_mixers):
for i, tr_time in enumerate(fit_mixers):
submodel_data[i]["training_time"] = tr_time
predictor_record.data["submodel_data"] = submodel_data
model_storage.training_state_set(current_state_num=5, total_states=5, state_name='Complete')
predictor_record.dtype_dict = predictor.dtype_dict
db.session.commit()
except Exception as e:
db.session.refresh(predictor_record)
predictor_record.data = {'error': f'{traceback.format_exc()}\nMain error: {e}'}
db.session.commit()
raise e
@mark_process(name='learn')
def run_learn_remote(df: DataFrame, predictor_id: int) -> None:
try:
serialized_df = json.dumps(df.to_dict())
predictor_record = db.Predictor.query.with_for_update().get(predictor_id)
resp = requests.post(predictor_record.data['train_url'],
json={'df': serialized_df, 'target': predictor_record.to_predict[0]})
assert resp.status_code == 200
predictor_record.data['status'] = 'complete'
except Exception as e:
predictor_record.data['status'] = 'error'
# `resp` is undefined when the request itself raised; fall back to the exception
predictor_record.data['error'] = str(resp.text) if 'resp' in locals() else str(e)
db.session.commit()
@mark_process(name='learn')
def run_learn(df: DataFrame, args: dict, model_storage) -> None:
# FIXME
predictor_id = model_storage.predictor_id
predictor_record = db.Predictor.query.with_for_update().get(predictor_id)
predictor_record.training_start_at = datetime.now()
db.session.commit()
run_generate(df, predictor_id, model_storage, args)
run_fit(predictor_id, df, model_storage)
predictor_record.status = PREDICTOR_STATUS.COMPLETE
predictor_record.training_stop_at = datetime.now()
db.session.commit()
@mark_process(name='finetune')
def run_finetune(df: DataFrame, args: dict, model_storage):
try:
base_predictor_id = args['base_model_id']
base_predictor_record = db.Predictor.query.get(base_predictor_id)
if base_predictor_record.status != PREDICTOR_STATUS.COMPLETE:
raise Exception("Base model must be in status 'complete'")
predictor_id = model_storage.predictor_id
predictor_record = db.Predictor.query.get(predictor_id)
# TODO move this to ModelStorage (don't work with database directly)
predictor_record.data = {'training_log': 'training'}
predictor_record.training_start_at = datetime.now()
predictor_record.status = PREDICTOR_STATUS.FINETUNING # TODO: parallel execution block
db.session.commit()
base_fs = FileStorage(
resource_group=RESOURCE_GROUP.PREDICTOR,
resource_id=base_predictor_id,
sync=True
)
predictor = lightwood.predictor_from_state(base_fs.folder_path / base_fs.folder_name,
base_predictor_record.code)
predictor.adjust(df, adjust_args=args.get('using', {}))
fs = FileStorage(
resource_group=RESOURCE_GROUP.PREDICTOR,
resource_id=predictor_id,
sync=True
)
predictor.save(fs.folder_path / fs.folder_name)
fs.push(compression_level=0)
predictor_record.data = predictor.model_analysis.to_dict() # todo: update accuracy in LW as post-finetune hook
predictor_record.code = base_predictor_record.code
predictor_record.update_status = 'up_to_date'
predictor_record.status = PREDICTOR_STATUS.COMPLETE
predictor_record.training_stop_at = datetime.now()
db.session.commit()
except Exception as e:
log.logger.error(e)
predictor_id = model_storage.predictor_id
predictor_record = db.Predictor.query.with_for_update().get(predictor_id)
print(traceback.format_exc())
error_message = format_exception_error(e)
predictor_record.data = {"error": error_message}
predictor_record.status = PREDICTOR_STATUS.ERROR
db.session.commit()
raise
finally:
if predictor_record.training_stop_at is None:
predictor_record.training_stop_at = datetime.now()
            db.session.commit()
# /EARL-pytorch-0.5.1.tar.gz/EARL-pytorch-0.5.1/earl_pytorch/dataset/dataset.py
import gc
import os
import pickle
import numpy as np
import torch
import tqdm
from torch.utils import data as data
from torch.utils.data.dataset import T_co, TensorDataset, ConcatDataset, IterableDataset
from .create_dataset import replay_to_dfs, convert_dfs, normalize, swap_teams, swap_left_right, get_base_features
class ReplayDataset(IterableDataset):
def __init__(self, replay_folder, cache_folder, limit=np.inf, batch_size=512, buffer_size=262144):
self.replay_folder = replay_folder
self.cache_folder = cache_folder
self.limit = limit
self.batch_size = batch_size
self.buffer_size = buffer_size
self.buffer_x, self.buffer_y = get_base_features(buffer_size, 6)
self.n = 0
def __iter__(self):
worker_info = torch.utils.data.get_worker_info()
if self.replay_folder is None:
files = [(dp, f) for dp, dn, fn in os.walk(self.cache_folder) for f in fn if f.endswith(".pickle")]
else:
files = [(dp, f) for dp, dn, fn in os.walk(self.replay_folder) for f in fn if f.endswith(".replay")]
if worker_info is None:
start = 0
step = 1
else:
start = worker_info.id
step = worker_info.num_workers
for dp, f in files[start::step]:
try:
if self.replay_folder is None:
out_path = os.path.join(dp, f)
with open(out_path, "rb") as handle:
dfs = pickle.load(handle)
else:
in_path = os.path.join(dp, f)
out_path = os.path.join(self.cache_folder, f[:-7] + ".pickle")
if os.path.exists(out_path):
with open(out_path, "rb") as handle:
dfs = pickle.load(handle)
else:
dfs = replay_to_dfs(in_path)
with open(out_path, "wb") as handle:
pickle.dump(dfs, handle)
x_n, y_n = convert_dfs(dfs)
assert x_n[2].shape == x_n[3].shape
normalize(x_n)
x_s, y_s = [np.copy(v) for v in x_n], [np.copy(v) for v in y_n]
swap_teams(x_n, y_n)
x_m, y_m = [np.copy(v) for v in x_n], [np.copy(v) for v in y_n]
swap_left_right(x_m, y_m)
x_sm, y_sm = [np.copy(v) for v in x_s], [np.copy(v) for v in y_s]
swap_left_right(x_sm, y_sm)
added_size = len(x_n[0]) * 4
if self.n + added_size >= self.buffer_size:
indices = np.random.permutation(self.n)
self.buffer_x = [v[indices] for v in self.buffer_x]
self.buffer_y = [v[indices] for v in self.buffer_y]
for i in range(0, self.n, self.batch_size):
batch_x = tuple(torch.from_numpy(t[i:i + self.batch_size]).cuda() for t in self.buffer_x)
batch_y = tuple(torch.from_numpy(t[i:i + self.batch_size]).cuda() for t in self.buffer_y)
yield batch_x, batch_y
self.buffer_x, self.buffer_y = get_base_features(self.buffer_size, 6)
self.n = 0
self._fill_buffer(self.buffer_x, self.n, x_n, x_s, x_m, x_sm)
self._fill_buffer(self.buffer_y, self.n, y_n, y_s, y_m, y_sm)
self.n += added_size
except Exception as e:
# print(e)
pass
    def _fill_buffer(self, buffer, start, *arrs):
        # Append the four augmented variants (normal, team-swapped, mirrored,
        # team-swapped + mirrored) of each feature array into the shared buffer.
        # Note: callers pass (buffer, start), so the signature must match that order.
        for ds, v, v_s, v_m, v_sm in zip(buffer, *arrs):
            j = start
            ds[j:j + len(v)] = v
            j += len(v)
            ds[j:j + len(v_s)] = v_s
            j += len(v_s)
            ds[j:j + len(v_m)] = v_m
            j += len(v_m)
            ds[j:j + len(v_sm)] = v_sm
            j += len(v_sm)
def __len__(self):
return self.limit
class ReplayDatasetFull(data.Dataset):
def __init__(self, replay_folder, cache_folder, limit=None, name=None):
self.replay_folder = replay_folder
self.cache_folder = cache_folder
self.limit = limit
if name is not None:
save_path = os.path.join(cache_folder, f"{name}.npz")
if os.path.exists(save_path):
arrs = np.load(save_path)
arrs = list(arrs.values())
self.x, self.y = arrs[:4], arrs[4:]
swap_teams(self.x, self.y, slice(None, None, 2))
return
if self.replay_folder is None:
files = [(dp, f) for dp, dn, fn in os.walk(self.cache_folder) for f in fn if f.endswith(".pickle")]
else:
files = [(dp, f) for dp, dn, fn in os.walk(self.replay_folder) for f in fn if f.endswith(".replay")]
file_iter = tqdm.tqdm(enumerate(files[:limit]),
desc="Load",
total=limit,
bar_format="{l_bar}{r_bar}")
arrays = []
for i, (dp, f) in file_iter:
try:
if self.replay_folder is None:
out_path = os.path.join(dp, f)
with open(out_path, "rb") as handle:
dfs = pickle.load(handle)
else:
in_path = os.path.join(dp, f)
out_path = os.path.join(self.cache_folder, f[:-7] + ".pickle")
if os.path.exists(out_path):
with open(out_path, "rb") as handle:
dfs = pickle.load(handle)
else:
dfs = replay_to_dfs(in_path)
with open(out_path, "wb") as handle:
pickle.dump(dfs, handle)
x_n, y_n = convert_dfs(dfs)
assert x_n[2].shape == x_n[3].shape
normalize(x_n)
arrays.append((x_n, y_n))
# x_s, y_s = [v.copy() for v in x_n], [v.copy() for v in y_n]
# swap_teams(x_s, y_s)
# arrays.append((x_s, y_s))
except Exception as e:
print(e)
pass
print("Concatenating...")
self.x = []
for i in range(len(arrays[0][0])):
conc = []
for row in arrays:
conc.append(row[0][i])
row[0][i] = None
self.x.append(np.concatenate(conc))
conc.clear()
gc.collect()
self.y = []
for i in range(len(arrays[0][1])):
conc = []
for row in arrays:
conc.append(row[1][i])
row[1][i] = None
self.y.append(np.concatenate(conc))
conc.clear()
gc.collect()
if name is not None:
np.savez_compressed(save_path, *self.x, *self.y)
swap_teams(self.x, self.y, slice(None, None, 2))
def __getitem__(self, index) -> T_co:
return [v[index] for v in self.x], [v[index] for v in self.y]
def __len__(self):
return self.x[0].shape[0]
def get_dataset(replay_folder, cache_folder, limit, name=None):
if name is not None:
name = os.path.join(cache_folder, name)
if os.path.exists(name):
return torch.load(name)
if replay_folder is None:
files = [(dp, f) for dp, dn, fn in os.walk(cache_folder) for f in fn if f.endswith(".pickle")]
else:
files = [(dp, f) for dp, dn, fn in os.walk(replay_folder) for f in fn if f.endswith(".replay")]
file_iter = tqdm.tqdm(enumerate(files[:limit]),
desc="Load",
total=limit,
bar_format="{l_bar}{r_bar}")
datasets = []
for i, (dp, f) in file_iter:
try:
if replay_folder is None:
out_path = os.path.join(dp, f)
with open(out_path, "rb") as handle:
dfs = pickle.load(handle)
else:
in_path = os.path.join(dp, f)
out_path = os.path.join(cache_folder, f[:-7] + ".pickle")
if os.path.exists(out_path):
with open(out_path, "rb") as handle:
dfs = pickle.load(handle)
else:
dfs = replay_to_dfs(in_path)
with open(out_path, "wb") as handle:
pickle.dump(dfs, handle)
x_n, y_n = convert_dfs(dfs, tensors=True)
assert x_n[2].shape == x_n[3].shape
normalize(x_n)
swap_teams(x_n, y_n, slice(i % 2, None, 2))
datasets.append(TensorDataset(*x_n, *y_n))
# x_s, y_s = [v.copy() for v in x_n], [v.copy() for v in y_n]
# swap_teams(x_s, y_s)
# arrays.append((x_s, y_s))
except Exception as e:
print(e)
pass
ds = ConcatDataset(datasets)
if name is not None:
torch.save(ds, name)
    return ds
# /Nuitka-1.8.tar.gz/Nuitka-1.8/nuitka/build/inline_copy/colorama/colorama/win32.py
# from winbase.h
STDOUT = -11
STDERR = -12
try:
import ctypes
from ctypes import LibraryLoader
windll = LibraryLoader(ctypes.WinDLL)
from ctypes import wintypes
except (AttributeError, ImportError):
windll = None
SetConsoleTextAttribute = lambda *_: None
winapi_test = lambda *_: None
else:
from ctypes import byref, Structure, c_char, POINTER
COORD = wintypes._COORD
class CONSOLE_SCREEN_BUFFER_INFO(Structure):
"""struct in wincon.h."""
_fields_ = [
("dwSize", COORD),
("dwCursorPosition", COORD),
("wAttributes", wintypes.WORD),
("srWindow", wintypes.SMALL_RECT),
("dwMaximumWindowSize", COORD),
]
def __str__(self):
return '(%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d)' % (
self.dwSize.Y, self.dwSize.X
, self.dwCursorPosition.Y, self.dwCursorPosition.X
, self.wAttributes
, self.srWindow.Top, self.srWindow.Left, self.srWindow.Bottom, self.srWindow.Right
, self.dwMaximumWindowSize.Y, self.dwMaximumWindowSize.X
)
_GetStdHandle = windll.kernel32.GetStdHandle
_GetStdHandle.argtypes = [
wintypes.DWORD,
]
_GetStdHandle.restype = wintypes.HANDLE
_GetConsoleScreenBufferInfo = windll.kernel32.GetConsoleScreenBufferInfo
_GetConsoleScreenBufferInfo.argtypes = [
wintypes.HANDLE,
POINTER(CONSOLE_SCREEN_BUFFER_INFO),
]
_GetConsoleScreenBufferInfo.restype = wintypes.BOOL
_SetConsoleTextAttribute = windll.kernel32.SetConsoleTextAttribute
_SetConsoleTextAttribute.argtypes = [
wintypes.HANDLE,
wintypes.WORD,
]
_SetConsoleTextAttribute.restype = wintypes.BOOL
_SetConsoleCursorPosition = windll.kernel32.SetConsoleCursorPosition
_SetConsoleCursorPosition.argtypes = [
wintypes.HANDLE,
COORD,
]
_SetConsoleCursorPosition.restype = wintypes.BOOL
_FillConsoleOutputCharacterA = windll.kernel32.FillConsoleOutputCharacterA
_FillConsoleOutputCharacterA.argtypes = [
wintypes.HANDLE,
c_char,
wintypes.DWORD,
COORD,
POINTER(wintypes.DWORD),
]
_FillConsoleOutputCharacterA.restype = wintypes.BOOL
_FillConsoleOutputAttribute = windll.kernel32.FillConsoleOutputAttribute
_FillConsoleOutputAttribute.argtypes = [
wintypes.HANDLE,
wintypes.WORD,
wintypes.DWORD,
COORD,
POINTER(wintypes.DWORD),
]
_FillConsoleOutputAttribute.restype = wintypes.BOOL
_SetConsoleTitleW = windll.kernel32.SetConsoleTitleW
_SetConsoleTitleW.argtypes = [
wintypes.LPCWSTR
]
_SetConsoleTitleW.restype = wintypes.BOOL
def _winapi_test(handle):
csbi = CONSOLE_SCREEN_BUFFER_INFO()
success = _GetConsoleScreenBufferInfo(
handle, byref(csbi))
return bool(success)
def winapi_test():
return any(_winapi_test(h) for h in
(_GetStdHandle(STDOUT), _GetStdHandle(STDERR)))
def GetConsoleScreenBufferInfo(stream_id=STDOUT):
handle = _GetStdHandle(stream_id)
csbi = CONSOLE_SCREEN_BUFFER_INFO()
success = _GetConsoleScreenBufferInfo(
handle, byref(csbi))
return csbi
def SetConsoleTextAttribute(stream_id, attrs):
handle = _GetStdHandle(stream_id)
return _SetConsoleTextAttribute(handle, attrs)
def SetConsoleCursorPosition(stream_id, position, adjust=True):
position = COORD(*position)
# If the position is out of range, do nothing.
if position.Y <= 0 or position.X <= 0:
return
# Adjust for Windows' SetConsoleCursorPosition:
# 1. being 0-based, while ANSI is 1-based.
# 2. expecting (x,y), while ANSI uses (y,x).
adjusted_position = COORD(position.Y - 1, position.X - 1)
if adjust:
# Adjust for viewport's scroll position
sr = GetConsoleScreenBufferInfo(STDOUT).srWindow
adjusted_position.Y += sr.Top
adjusted_position.X += sr.Left
# Resume normal processing
handle = _GetStdHandle(stream_id)
return _SetConsoleCursorPosition(handle, adjusted_position)
def FillConsoleOutputCharacter(stream_id, char, length, start):
handle = _GetStdHandle(stream_id)
char = c_char(char.encode())
length = wintypes.DWORD(length)
num_written = wintypes.DWORD(0)
# Note that this is hard-coded for ANSI (vs wide) bytes.
success = _FillConsoleOutputCharacterA(
handle, char, length, start, byref(num_written))
return num_written.value
def FillConsoleOutputAttribute(stream_id, attr, length, start):
''' FillConsoleOutputAttribute( hConsole, csbi.wAttributes, dwConSize, coordScreen, &cCharsWritten )'''
handle = _GetStdHandle(stream_id)
attribute = wintypes.WORD(attr)
length = wintypes.DWORD(length)
num_written = wintypes.DWORD(0)
# Note that this is hard-coded for ANSI (vs wide) bytes.
return _FillConsoleOutputAttribute(
handle, attribute, length, start, byref(num_written))
def SetConsoleTitle(title):
    return _SetConsoleTitleW(title)
/LinkPython-0.1.1.tar.gz/LinkPython-0.1.1/modules/link/CONTRIBUTING.md

Bug Reports
===========
If you've found a bug in Link itself, then please file a new issue here at GitHub. If you
have found a bug in a Link-enabled app, it might be wiser to reach out to the developer of
the app before filing an issue here.
Any and all information that you can provide regarding the bug will help us track it
down. Specifically, that could include:
- Stacktraces, in the event of a crash
- Versions of the software used, and the underlying operating system
- Steps to reproduce
- Screenshots, in the case of a bug which results in a visual error
Pull Requests
=============
We are happy to accept pull requests from the GitHub community, assuming that they meet
the following criteria:
- You have signed and returned Ableton's [CLA][cla]
- The [tests pass](#testing)
- The PR passes all CI service checks
- The code is [well-formatted](#code-formatting)
- The git commit messages comply with [the commonly accepted standards][git-commit-msgs]
Testing
-------
Link ships with unit tests that are run on [Travis CI][travis] and [AppVeyor][appveyor] for
all PRs. There are two test suites: `LinkCoreTest`, which tests the core Link
functionality, and `LinkDiscoverTest`, which tests the network discovery feature of Link.
A third virtual target, `LinkAllTest` is provided by the CMake project as a convenience
to run all tests at once.
The unit tests are run on every platform on which Link is officially supported, and are
also run through [Valgrind][valgrind] on Linux to check for memory corruption and leaks. If
Valgrind detects any memory errors when running the tests, it will fail the build.
If you are submitting a PR which fixes a bug or introduces new functionality, please add a
test which helps to verify the correctness of the code in the PR.
Code Formatting
---------------
Link uses [clang-format][clang-format] to enforce our preferred code style. At the moment,
we use **clang-format version 6.0**. Note that other versions may format code differently.
Any PRs submitted to Link are also checked with clang-format by the Travis CI service. If
you get a build failure, then you can format your code by running the following command:
```
clang-format -style=file -i (filename)
```
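To format the whole tree rather than a single file, a find/xargs pipeline works; the `include` and `src` directory names below are assumptions about the repository layout:

```shell
# Reformat every C++ header and source file in place.
find include src \( -name '*.hpp' -o -name '*.cpp' \) -print0 \
  | xargs -0 clang-format -style=file -i
```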
[cla]: http://ableton.github.io/cla/
[git-commit-msgs]: http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html
[clang-format]: http://llvm.org/builds
[valgrind]: http://valgrind.org
[travis]: http://travis-ci.com
[appveyor]: http://appveyor.com
| PypiClean |
/Augusta-1.0.4.tar.gz/Augusta-1.0.4/docs/Installation.rst

Installation
------------
We highly recommend installing and using Augusta in a virtual environment.
A virtual environment can be created and activated using `conda <https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html>`_:
.. code-block::
> conda create -n venv_Augusta python=3.7 anaconda
> conda activate venv_Augusta
Dependencies
=====================
Docker
^^^^^^^^
Docker installation can be checked via:
.. code-block::
> docker info
See `Docker <https://docs.docker.com/get-docker/>`_ for more information.
Python
^^^^^^^^^
Python is already installed in the virtual environment venv_Augusta.
If you are not using venv_Augusta, the Augusta package requires **Python version 3.7 or 3.8**.
Python version can be checked via:
.. code-block::
> python3 --version
See `Python <https://www.python.org/>`_ for more information.
Install Augusta
==================
from PyPi / pip
^^^^^^^^^^^^^^^^
.. code-block:: console
> pip install Augusta
from GitHub
^^^^^^^^^^^
.. code-block:: console
> git clone https://github.com/JanaMus/Augusta.git
> cd Augusta
> python setup.py install
*Linux non-root user:*
Augusta uses the MEME Suite **Docker** image to search for motifs.
Therefore, access to Docker needs to be set up via the terminal:
1. create the docker group
.. code-block:: console
> sudo groupadd docker
2. add your user to the docker group
.. code-block:: console
> sudo usermod -aG docker $USER
See `Docker documentation <https://docs.docker.com/engine/install/linux-postinstall/>`_ for more information.
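After adding yourself to the group (you may need to log out and back in for the change to take effect), you can verify non-root access with Docker's standard smoke test; pulling the image requires network access:

```shell
# Should print "Hello from Docker!" without requiring sudo.
docker run --rm hello-world
```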
| PypiClean |