id (stringlengths 1-265) | text (stringlengths 6-5.19M) | dataset_id (stringclasses, 7 values)
---|---|---|
/Auxjad-1.0.0.tar.gz/Auxjad-1.0.0/auxjad/get/selections_are_identical.py | from collections.abc import Iterable
from typing import Union
import abjad
def selections_are_identical(selections: Union[Iterable[abjad.Component],
Iterable[abjad.Selection],
],
*,
include_indicators: bool = True,
) -> bool:
r"""Returns a :obj:`bool` representing whether two or more selections are
identical or not. Input argument must be an iterable made of two or more
|abjad.Selection|'s.
Basic usage:
When the pitches and effective durations of all leaves in all
selections are identical, this function returns ``True``:
>>> container1 = abjad.Staff(r"c'4 d'4 e'4 f'4 <g' a'>2 r2")
>>> container2 = abjad.Staff(r"c'4 d'4 e'4 f'4 <g' a'>2 r2")
>>> selections = [container1[:], container2[:]]
>>> auxjad.get.selections_are_identical(selections)
True
.. note::
Auxjad automatically adds this function as an extension function to
|abjad.get|. It can thus be used from either |auxjad.get|_ or
|abjad.get| namespaces. Therefore, the two lines below are equivalent:
>>> container1 = abjad.Staff(r"c'4 d'4 e'4 f'4 <g' a'>2 r2")
>>> container2 = abjad.Staff(r"c'4 d'4 e'4 f'4 <g' a'>2 r2")
>>> selections = [container1[:], container2[:]]
>>> auxjad.get.selections_are_identical(selections)
True
>>> abjad.get.selections_are_identical(selections)
True
Effective durations:
Even if all leaves of both selections are identical in relation to both
pitches and written durations, the function considers the effective
durations. This means that situations like the one below do not yield a
false positive:
>>> container1 = abjad.Staff(r"c'4 d'4 e'4 f'4 <g' a'>2 r2")
>>> container2 = abjad.Staff(
... r"\times 3/2 {c'4 d'4 e'4} f'4 <g' a'>2 r2"
... )
>>> selections = [container1[:], container2[:]]
>>> auxjad.get.selections_are_identical(selections)
False
``include_indicators``:
By default, this function includes indicators in the comparison, so the
containers in the example below are understood to be different:
>>> container1 = abjad.Staff(r"c'4\pp d'4 e'4-. f'4 <g' a'>2-> r2")
>>> container2 = abjad.Staff(r"c'4 d'4 e'4 f'4 <g' a'>2 r2")
>>> selections = [container1[:], container2[:]]
>>> auxjad.get.selections_are_identical(selections)
False
Set the argument ``include_indicators`` to ``False`` to ignore
indicators when comparing selections. In that case, the containers in
the example above are then considered identical:
>>> container1 = abjad.Staff(r"c'4\pp d'4 e'4-. f'4 <g' a'>2-> r2")
>>> container2 = abjad.Staff(r"c'4 d'4 e'4 f'4 <g' a'>2 r2")
>>> selections = [container1[:], container2[:]]
>>> auxjad.get.selections_are_identical(
... selections,
... include_indicators=False,
... )
True
Grace notes:
This function also handles grace notes.
>>> container1 = abjad.Staff(r"c'4 d'4 e'4 f'4")
>>> container2 = abjad.Staff(r"c'4 \grace{d'4} d'4 e'4 f'4")
>>> selection1 = abjad.select(container1)
>>> selection2 = abjad.select(container2)
>>> selections = [selection1, selection2]
>>> auxjad.get.selections_are_identical(selections)
False
>>> container1 = abjad.Staff(r"c'4 d'4 e'4 f'4 <g' a'>2 r2")
>>> container2 = abjad.Staff(
... r"c'4 \grace{c''4} d'4 e'4 f'4 <g' a'>2 r2"
... )
>>> selection1 = abjad.select(container1)
>>> selection2 = abjad.select(container2)
>>> selections = [selection1, selection2]
>>> auxjad.get.selections_are_identical(selections)
False
>>> container1 = abjad.Staff(
... r"c'4 \grace{c''4} d'4 e'4 f'4 <g' a'>2 r2"
... )
>>> container2 = abjad.Staff(
... r"c'4 \grace{c''8} d'4 e'4 f'4 <g' a'>2 r2"
... )
>>> selection1 = abjad.select(container1)
>>> selection2 = abjad.select(container2)
>>> selections = [selection1, selection2]
>>> auxjad.get.selections_are_identical(selections)
False
>>> container1 = abjad.Staff(
... r"c'4 \grace{c''16} d'4 e'4 f'4 <g' a'>2 r2"
... )
>>> container2 = abjad.Staff(
... r"c'4 \grace{c''16} d'4 e'4 f'4 <g' a'>2 r2"
... )
>>> selection1 = abjad.select(container1)
>>> selection2 = abjad.select(container2)
>>> selections = [selection1, selection2]
>>> auxjad.get.selections_are_identical(selections)
True
.. warning::
It is important though to create selections using |abjad.select()| as
shown in the example above instead of using the syntax
``container[:]``, since the latter selects only leaves which are not
grace notes.
.. note::
It is important to note that it is the contents of the containers which are
compared, so containers of different classes can still return a
``True`` value.
>>> container1 = abjad.Container(r"c'4 d'4 e'4 f'4")
>>> container2 = abjad.Staff(r"c'4 d'4 e'4 f'4")
>>> selections = [container1[:], container2[:]]
>>> auxjad.get.selections_are_identical(selections)
True
"""
if not isinstance(selections, Iterable):
raise TypeError("argument must be an iterable of 'abjad.Selection's "
"or 'abjad.Component's")
for selection in selections:
if not isinstance(selection, (abjad.Component, abjad.Selection)):
raise TypeError("argument must be an iterable of "
"'abjad.Selection's or 'abjad.Component's")
if not isinstance(include_indicators, bool):
raise TypeError("'include_indicators' must be 'bool'")
for index, selection1 in enumerate(selections[:-1]):
for selection2 in selections[index + 1:]:
leaves1 = [leaf for leaf in selection1.leaves()]
leaves2 = [leaf for leaf in selection2.leaves()]
if len(leaves1) != len(leaves2):
return False
for leaf1, leaf2 in zip(leaves1, leaves2):
if not isinstance(leaf1, type(leaf2)):
return False
if abjad.get.duration(leaf1) != abjad.get.duration(leaf2):
return False
if (isinstance(leaf1, abjad.Note)
and leaf1.written_pitch != leaf2.written_pitch):
return False
if (isinstance(leaf1, abjad.Chord)
and leaf1.written_pitches != leaf2.written_pitches):
return False
leaf1_graces = abjad.get.before_grace_container(leaf1)
leaf2_graces = abjad.get.before_grace_container(leaf2)
if not isinstance(leaf1_graces, type(leaf2_graces)):
return False
if include_indicators:
indicators1 = [format(indicator) for indicator
in abjad.get.indicators(leaf1)]
indicators2 = [format(indicator) for indicator
in abjad.get.indicators(leaf2)]
for indicator1 in indicators1:
if indicator1 not in indicators2:
return False
return True | PypiClean |
/Flask-CKEditor-0.4.6.tar.gz/Flask-CKEditor-0.4.6/flask_ckeditor/static/full/plugins/forms/dialogs/textfield.js | /*
Copyright (c) 2003-2020, CKSource - Frederico Knabben. All rights reserved.
For licensing, see LICENSE.md or https://ckeditor.com/legal/ckeditor-oss-license
*/
CKEDITOR.dialog.add("textfield",function(b){function e(a){a=a.element;var b=this.getValue();b?a.setAttribute(this.id,b):a.removeAttribute(this.id)}function f(a){a=a.hasAttribute(this.id)&&a.getAttribute(this.id);this.setValue(a||"")}var g={email:1,password:1,search:1,tel:1,text:1,url:1};return{title:b.lang.forms.textfield.title,minWidth:350,minHeight:150,getModel:function(a){a=a.getSelection().getSelectedElement();return!a||"input"!=a.getName()||!g[a.getAttribute("type")]&&a.getAttribute("type")?
null:a},onShow:function(){var a=this.getModel(this.getParentEditor());a&&this.setupContent(a)},onOk:function(){var a=this.getParentEditor(),b=this.getModel(a),c=this.getMode(a)==CKEDITOR.dialog.CREATION_MODE;c&&(b=a.document.createElement("input"),b.setAttribute("type","text"));b={element:b};c&&a.insertElement(b.element);this.commitContent(b);c||a.getSelection().selectElement(b.element)},onLoad:function(){this.foreach(function(a){a.getValue&&(a.setup||(a.setup=f),a.commit||(a.commit=e))})},contents:[{id:"info",
label:b.lang.forms.textfield.title,title:b.lang.forms.textfield.title,elements:[{type:"hbox",widths:["50%","50%"],children:[{id:"_cke_saved_name",type:"text",label:b.lang.forms.textfield.name,"default":"",accessKey:"N",setup:function(a){this.setValue(a.data("cke-saved-name")||a.getAttribute("name")||"")},commit:function(a){a=a.element;this.getValue()?a.data("cke-saved-name",this.getValue()):(a.data("cke-saved-name",!1),a.removeAttribute("name"))}},{id:"value",type:"text",label:b.lang.forms.textfield.value,
"default":"",accessKey:"V",commit:function(a){if(CKEDITOR.env.ie&&!this.getValue()){var d=a.element,c=new CKEDITOR.dom.element("input",b.document);d.copyAttributes(c,{value:1});c.replace(d);a.element=c}else e.call(this,a)}}]},{type:"hbox",widths:["50%","50%"],children:[{id:"size",type:"text",label:b.lang.forms.textfield.charWidth,"default":"",accessKey:"C",style:"width:50px",validate:CKEDITOR.dialog.validate.integer(b.lang.common.validateNumberFailed)},{id:"maxLength",type:"text",label:b.lang.forms.textfield.maxChars,
"default":"",accessKey:"M",style:"width:50px",validate:CKEDITOR.dialog.validate.integer(b.lang.common.validateNumberFailed)}],onLoad:function(){CKEDITOR.env.ie7Compat&&this.getElement().setStyle("zoom","100%")}},{id:"type",type:"select",label:b.lang.forms.textfield.type,"default":"text",accessKey:"M",items:[[b.lang.forms.textfield.typeEmail,"email"],[b.lang.forms.textfield.typePass,"password"],[b.lang.forms.textfield.typeSearch,"search"],[b.lang.forms.textfield.typeTel,"tel"],[b.lang.forms.textfield.typeText,
"text"],[b.lang.forms.textfield.typeUrl,"url"]],setup:function(a){this.setValue(a.getAttribute("type"))},commit:function(a){var d=a.element;if(CKEDITOR.env.ie){var c=d.getAttribute("type"),e=this.getValue();c!=e&&(c=CKEDITOR.dom.element.createFromHtml('\x3cinput type\x3d"'+e+'"\x3e\x3c/input\x3e',b.document),d.copyAttributes(c,{type:1}),c.replace(d),a.element=c)}else d.setAttribute("type",this.getValue())}},{id:"required",type:"checkbox",label:b.lang.forms.textfield.required,"default":"",accessKey:"Q",
value:"required",setup:CKEDITOR.plugins.forms._setupRequiredAttribute,commit:function(a){a=a.element;this.getValue()?a.setAttribute("required","required"):a.removeAttribute("required")}}]}]}}); | PypiClean |
/Create-Python-Project-0.1.0.tar.gz/Create-Python-Project-0.1.0/create_python_project/info.py | import re
from collections import OrderedDict
class FieldDescriptor:
"""Base Field Descriptor from which every Info inherit from"""
def __init__(self, name=None, default=None):
self._name = name
self.default = default
def __set__(self, instance, value):
value = value if value is not None else self.default() if callable(self.default) else self.default
if value is not None:
self.validate(instance, value)
instance.__dict__[self._name] = value
def __get__(self, instance, owner):
return instance.__dict__.get(self._name, None)
def set_name(self, name):
self._name = name
def attr_name(self, instance):
return '{class_name}.{name}'.format(class_name=instance.__class__.__name__,
name=self._name)
def validate(self, instance, value):
for klass in reversed(type(self).__mro__):
if hasattr(klass, 'is_valid') and not klass.is_valid(self, value):
klass.raise_error(klass.error_message(self, attr=self.attr_name(instance), value=value))
def error_message(self, attr, value):
raise NotImplementedError
@staticmethod
def raise_error(message):
raise NotImplementedError
def is_valid(self, value):
return True
class InfoMeta(type):
"""Meta class for Info"""
@classmethod
def __prepare__(mcs, name, bases):
return OrderedDict()
def __new__(mcs, name, bases, ns):
fields = []
for field, info in ns.items():
if isinstance(info, FieldDescriptor):
info.set_name(field)
fields.append(field)
for base in bases:
if hasattr(base, '_fields'):
for field in base._fields:
fields.append(field)
ns.setdefault(field, base.__dict__[field])
cls = super().__new__(mcs, name, bases, dict(ns))
cls._fields = tuple(set(fields))
return cls
class BaseInfo(FieldDescriptor, metaclass=InfoMeta):
"""BaseInfo class"""
def __init__(self, _name=None, default=None, **kwargs):
super().__init__(name=_name, default=default)
for field in self._fields:
setattr(self, field, kwargs.get(field, None))
def validate_info(self, info=None, **kwargs):
if info is not None:
assert isinstance(info, type(self)), '{0} must be updated to {0} but you passed {1}'.format(type(self),
info)
return info
else:
return self.copy(**kwargs)
def copy(self, **kwargs):
kwargs = {k: v for k, v in kwargs.items() if v is not None}
copy_kwargs = {
field: info.copy(**kwargs) if isinstance(info, BaseInfo) else info
for field, info in zip(self._fields, [getattr(self, field) for field in self._fields])
}
copy_kwargs.update(kwargs)
return type(self)(**copy_kwargs)
def transform_lines(self, new_info, lines):
pass
def update_info(self, new_info):
"""Update the current info with the new info"""
for field in self._fields:
current, new = getattr(self, field), getattr(new_info, field, None)
if isinstance(current, BaseInfo) and isinstance(new, BaseInfo):
current.update_info(new)
setattr(self, field, new)
def update(self, new_info, lines=None, **kwargs):
"""Perform transformation on lines corresponding to the new provided info and
update current info with new info"""
new_info = self.validate_info(new_info, **kwargs)
if lines is not None: # pragma: no branch
self.transform_lines(new_info, lines)
for field in self._fields:
current, new = getattr(self, field), getattr(new_info, field, None)
if isinstance(current, BaseInfo) and isinstance(new, BaseInfo):
current.update(new, lines, **kwargs)
else:
try:
iterator = iter(current)
except TypeError:
continue
else:
for i, info in enumerate(iterator):
if isinstance(info, BaseInfo):
info.update(new[i], lines)
self.update_info(new_info)
def __eq__(self, info):
if not isinstance(self, type(info)):
return False
for field in self._fields:
if getattr(self, field) != getattr(info, field):
return False
return True
class BaseTypeInfo(BaseInfo):
"""Base type info validating against a type"""
_type = object
def is_valid(self, value):
return isinstance(value, self._type)
def error_message(self, attr, value):
return '{attr} must be an instance of {_type} but you passed {value}'.format(_type=self._type,
attr=attr,
value=value)
@staticmethod
def raise_error(message):
raise TypeError(message)
class IntInfo(BaseTypeInfo):
"""Base info validating against int"""
_type = int
class StrInfo(BaseTypeInfo):
"""Base info validating against str"""
_type = str
class BoolInfo(BaseTypeInfo):
"""Base bool info validating against bool"""
_type = bool
class TupleInfo(BaseTypeInfo):
"""Base list info validating against list"""
_type = tuple
class ItemTupleInfo(TupleInfo):
_item_type = BaseInfo
def is_valid(self, value):
for val in value:
if not isinstance(val, self._item_type):
return False
return True
def error_message(self, attr, value):
return '{attr} elements must be instance of {type} but you passed {value}'.format(type=self._item_type,
attr=attr,
value=value)
class IntTupleInfo(ItemTupleInfo):
_item_type = IntInfo
class NonNullStrInfo(StrInfo):
"""Non Null Str Info"""
def is_valid(self, value):
return len(value) > 0
def error_message(self, attr, value):
return '{attr} must be a non null string'.format(attr=attr)
@staticmethod
def raise_error(message):
raise AssertionError(message)
class SingleLineStrInfo(StrInfo):
"""Single Line Str Info"""
def is_valid(self, value):
return len(value.split('\n')) == 1
def error_message(self, attr, value):
return '{attr} must be a one line string'.format(attr=attr)
class NonNullSingleLineStrInfo(NonNullStrInfo, SingleLineStrInfo):
"""Non Null Single Line Str info"""
class RSTSymbolInfo(NonNullStrInfo):
""".rst underline symbol info"""
_symbols = '=-`:\'"~^_*+#<>'
def is_valid(self, value):
return len(value) == 1 and value in self._symbols
def error_message(self, attr, value):
return '{attr} must be one of {symbols} but you passed {value}'.format(symbols=self._symbols,
attr=attr,
value=value)
class ComplexInfo(BaseInfo):
"""Info validating against its class"""
def is_valid(self, value):
return isinstance(value, type(self))
def error_message(self, attr, value):
return '{attr} must be an instance of {type} but you passed {value}'.format(type=type(self),
attr=attr,
value=value)
@staticmethod
def raise_error(message):
raise TypeError(message)
class TextInfo(ComplexInfo):
"""Text Info"""
text = StrInfo()
lineno = IntInfo()
def transform_lines(self, new_info, lines):
lines[self.lineno] = lines[self.lineno].replace(self.text, new_info.text.strip())
super().transform_lines(new_info, lines)
class NonNullTextInfo(TextInfo):
"""Text Info"""
text = NonNullStrInfo()
class RSTTitleInfo(TextInfo):
"""Info for an .rst section title"""
text = NonNullSingleLineStrInfo(default='<title>')
symbol = RSTSymbolInfo(default='=')
has_overline = BoolInfo(default=False)
def transform_lines(self, new_info, lines):
lines[self.lineno + 1] = len(new_info.text) * new_info.symbol
if self.has_overline:
lines[self.lineno - 1] = len(new_info.text) * new_info.symbol
super().transform_lines(new_info, lines)
class RSTScriptInfo(ComplexInfo):
"""Info of an .rst script"""
title = RSTTitleInfo()
def __init__(self, title=None, **kwargs):
if isinstance(title, str):
title = type(type(self).__dict__['title'])(text=title)
super().__init__(title=title, **kwargs)
class SingleLineTextInfo(TextInfo):
text = NonNullSingleLineStrInfo()
class PyDocstringInfo(RSTScriptInfo):
"""Info of a python docstring"""
copyright = SingleLineTextInfo()
license = SingleLineTextInfo()
def __init__(self, **kwargs):
for arg in self._fields:
if isinstance(kwargs.get(arg, None), str) and isinstance(type(self).__dict__[arg], TextInfo):
kwargs[arg] = type(type(self).__dict__[arg])(text=kwargs.get(arg))
super().__init__(**kwargs)
class CodeInfo(BaseInfo):
"""Info for python script code"""
class PyInfo(ComplexInfo):
"""Info of a python script"""
docstring = PyDocstringInfo()
code = CodeInfo(default=CodeInfo)
docstring_lineno = IntInfo(default=0)
class VarInfo(ComplexInfo):
"""Info for variable info"""
var = NonNullSingleLineStrInfo()
value = SingleLineStrInfo()
lineno = IntInfo()
def transform_lines(self, new_info, lines):
pattern = re.compile(
r'(?P<var>{var}\s?=\s?)(?P<quote>[\'"])(?P<value>{value})[\'"]'.format(var=self.var, value=self.value))
lines[self.lineno] = pattern.sub(r'\g<var>\g<quote>{value}\g<quote>'.format(value=new_info.value),
lines[self.lineno])
super().transform_lines(new_info, lines)
class KwargInfo(ComplexInfo):
"""Info for kwarg argument of a python function"""
arg = NonNullSingleLineStrInfo()
value = SingleLineStrInfo()
lineno = IntInfo()
def transform_lines(self, new_info, lines):
pattern = re.compile(
r'(?P<arg>{arg}\s?=\s?)?(?P<quote>[\'"])(?P<value>{value})[\'"]'.format(arg=self.arg, value=self.value))
lines[self.lineno] = pattern.sub(r'\g<arg>\g<quote>{value}\g<quote>'.format(value=new_info.value),
lines[self.lineno])
super().transform_lines(new_info, lines)
class KwargTupleInfo(ItemTupleInfo):
"""Info for setup packages"""
_item_type = KwargInfo
class SetupKwargsInfo(ComplexInfo):
"""Info contained in a setuptools setup call"""
name = KwargInfo()
version = KwargInfo()
url = KwargInfo()
author = KwargInfo()
author_email = KwargInfo()
description = KwargInfo()
packages = KwargTupleInfo()
def __init__(self, **kwargs):
for arg in self._fields:
if isinstance(kwargs.get(arg, None), str) and isinstance(type(self).__dict__[arg], KwargInfo):
kwargs[arg] = type(type(self).__dict__[arg])(value=kwargs.get(arg))
super().__init__(**kwargs)
class SetupInfo(CodeInfo):
setup = SetupKwargsInfo()
class InitInfo(CodeInfo):
version = VarInfo()
class PyInitInfo(PyInfo):
code = InitInfo(default=InitInfo)
class PySetupInfo(PyInfo):
code = SetupInfo(default=SetupInfo) | PypiClean |
/NVDA-addonTemplate-0.5.2.zip/NVDA-addonTemplate-0.5.2/NVDAAddonTemplate/data/{{cookiecutter.project_slug}}/scons-local-2.5.0/SCons/Tool/gcc.py |
__revision__ = "src/engine/SCons/Tool/gcc.py rel_2.5.0:3543:937e55cd78f7 2016/04/09 11:29:54 bdbaddog"
import cc
import os
import re
import subprocess
import SCons.Util
compilers = ['gcc', 'cc']
def generate(env):
"""Add Builders and construction variables for gcc to an Environment."""
if 'CC' not in env:
env['CC'] = env.Detect(compilers) or compilers[0]
cc.generate(env)
if env['PLATFORM'] in ['cygwin', 'win32']:
env['SHCCFLAGS'] = SCons.Util.CLVar('$CCFLAGS')
else:
env['SHCCFLAGS'] = SCons.Util.CLVar('$CCFLAGS -fPIC')
# determine compiler version
version = detect_version(env, env['CC'])
if version:
env['CCVERSION'] = version
def exists(env):
# is executable, and is a GNU compiler (or accepts '--version' at least)
return detect_version(env, env.Detect(env.get('CC', compilers)))
def detect_version(env, cc):
"""Return the version of the GNU compiler, or None if it is not a GNU compiler."""
cc = env.subst(cc)
if not cc:
return None
version = None
#pipe = SCons.Action._subproc(env, SCons.Util.CLVar(cc) + ['-dumpversion'],
pipe = SCons.Action._subproc(env, SCons.Util.CLVar(cc) + ['--version'],
stdin = 'devnull',
stderr = 'devnull',
stdout = subprocess.PIPE)
# -dumpversion was added in GCC 3.0. As long as we're supporting
# GCC versions older than that, we should use --version and a
# regular expression.
#line = pipe.stdout.read().strip()
#if line:
# version = line
line = pipe.stdout.readline()
match = re.search(r'[0-9]+(\.[0-9]+)+', line)
if match:
version = match.group(0)
# Non-GNU compiler's output (like AIX xlc's) may exceed the stdout buffer:
# So continue with reading to let the child process actually terminate.
while pipe.stdout.readline():
pass
ret = pipe.wait()
if ret != 0:
return None
return version
# Local Variables:
# tab-width:4
# indent-tabs-mode:nil
# End:
# vim: set expandtab tabstop=4 shiftwidth=4: | PypiClean |
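`detect_version` above needs a live SCons `Environment`; as a standalone illustration of the same idea (run the compiler with `--version` and pull the first dotted number out of the first output line with a regular expression), here is a plain Python 3 sketch. The `gcc` executable name is an assumption, and the function returns `None` when no compiler is found.

```python
import re
import subprocess

def sketch_detect_version(cc="gcc"):
    """Return the first dotted version number printed by '<cc> --version', or None."""
    try:
        out = subprocess.run([cc, "--version"], capture_output=True, text=True)
    except OSError:
        return None  # compiler not installed / not on PATH
    first_line = out.stdout.splitlines()[0] if out.stdout else ""
    match = re.search(r"[0-9]+(\.[0-9]+)+", first_line)
    return match.group(0) if match else None

print(sketch_detect_version())  # e.g. '11.4.0', or None
```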
/knics_jupyter_frontend-0.1.0.tar.gz/knics_jupyter_frontend-0.1.0/KNICS_Jupyter_frontend/labextension/static/remoteEntry.a910049d2e28ec9b8a93.js | var _JUPYTERLAB;
/******/ (() => { // webpackBootstrap
/******/ "use strict";
/******/ var __webpack_modules__ = ({
/***/ "webpack/container/entry/KNICS_Jupyter_frontend":
/*!***********************!*\
!*** container entry ***!
\***********************/
/***/ ((__unused_webpack_module, exports, __webpack_require__) => {
var moduleMap = {
"./index": () => {
return __webpack_require__.e("lib_index_js").then(() => (() => ((__webpack_require__(/*! ./lib/index.js */ "./lib/index.js")))));
},
"./extension": () => {
return __webpack_require__.e("lib_index_js").then(() => (() => ((__webpack_require__(/*! ./lib/index.js */ "./lib/index.js")))));
},
"./style": () => {
return Promise.all([__webpack_require__.e("vendors-node_modules_css-loader_dist_runtime_api_js-node_modules_css-loader_dist_runtime_cssW-72eba1"), __webpack_require__.e("style_index_js")]).then(() => (() => ((__webpack_require__(/*! ./style/index.js */ "./style/index.js")))));
}
};
var get = (module, getScope) => {
__webpack_require__.R = getScope;
getScope = (
__webpack_require__.o(moduleMap, module)
? moduleMap[module]()
: Promise.resolve().then(() => {
throw new Error('Module "' + module + '" does not exist in container.');
})
);
__webpack_require__.R = undefined;
return getScope;
};
var init = (shareScope, initScope) => {
if (!__webpack_require__.S) return;
var name = "default"
var oldScope = __webpack_require__.S[name];
if(oldScope && oldScope !== shareScope) throw new Error("Container initialization failed as it has already been initialized with a different share scope");
__webpack_require__.S[name] = shareScope;
return __webpack_require__.I(name, initScope);
};
// This exports getters to disallow modifications
__webpack_require__.d(exports, {
get: () => (get),
init: () => (init)
});
/***/ })
/******/ });
/************************************************************************/
/******/ // The module cache
/******/ var __webpack_module_cache__ = {};
/******/
/******/ // The require function
/******/ function __webpack_require__(moduleId) {
/******/ // Check if module is in cache
/******/ var cachedModule = __webpack_module_cache__[moduleId];
/******/ if (cachedModule !== undefined) {
/******/ return cachedModule.exports;
/******/ }
/******/ // Create a new module (and put it into the cache)
/******/ var module = __webpack_module_cache__[moduleId] = {
/******/ id: moduleId,
/******/ // no module.loaded needed
/******/ exports: {}
/******/ };
/******/
/******/ // Execute the module function
/******/ __webpack_modules__[moduleId](module, module.exports, __webpack_require__);
/******/
/******/ // Return the exports of the module
/******/ return module.exports;
/******/ }
/******/
/******/ // expose the modules object (__webpack_modules__)
/******/ __webpack_require__.m = __webpack_modules__;
/******/
/******/ // expose the module cache
/******/ __webpack_require__.c = __webpack_module_cache__;
/******/
/************************************************************************/
/******/ /* webpack/runtime/compat get default export */
/******/ (() => {
/******/ // getDefaultExport function for compatibility with non-harmony modules
/******/ __webpack_require__.n = (module) => {
/******/ var getter = module && module.__esModule ?
/******/ () => (module['default']) :
/******/ () => (module);
/******/ __webpack_require__.d(getter, { a: getter });
/******/ return getter;
/******/ };
/******/ })();
/******/
/******/ /* webpack/runtime/define property getters */
/******/ (() => {
/******/ // define getter functions for harmony exports
/******/ __webpack_require__.d = (exports, definition) => {
/******/ for(var key in definition) {
/******/ if(__webpack_require__.o(definition, key) && !__webpack_require__.o(exports, key)) {
/******/ Object.defineProperty(exports, key, { enumerable: true, get: definition[key] });
/******/ }
/******/ }
/******/ };
/******/ })();
/******/
/******/ /* webpack/runtime/ensure chunk */
/******/ (() => {
/******/ __webpack_require__.f = {};
/******/ // This file contains only the entry chunk.
/******/ // The chunk loading function for additional chunks
/******/ __webpack_require__.e = (chunkId) => {
/******/ return Promise.all(Object.keys(__webpack_require__.f).reduce((promises, key) => {
/******/ __webpack_require__.f[key](chunkId, promises);
/******/ return promises;
/******/ }, []));
/******/ };
/******/ })();
/******/
/******/ /* webpack/runtime/get javascript chunk filename */
/******/ (() => {
/******/ // This function allow to reference async chunks
/******/ __webpack_require__.u = (chunkId) => {
/******/ // return url for filenames based on template
/******/ return "" + chunkId + "." + {"lib_index_js":"dd61c965c457f092bd9a","vendors-node_modules_css-loader_dist_runtime_api_js-node_modules_css-loader_dist_runtime_cssW-72eba1":"416973763e56b24852f8","style_index_js":"2b67ef2180e84a104942"}[chunkId] + ".js";
/******/ };
/******/ })();
/******/
/******/ /* webpack/runtime/global */
/******/ (() => {
/******/ __webpack_require__.g = (function() {
/******/ if (typeof globalThis === 'object') return globalThis;
/******/ try {
/******/ return this || new Function('return this')();
/******/ } catch (e) {
/******/ if (typeof window === 'object') return window;
/******/ }
/******/ })();
/******/ })();
/******/
/******/ /* webpack/runtime/hasOwnProperty shorthand */
/******/ (() => {
/******/ __webpack_require__.o = (obj, prop) => (Object.prototype.hasOwnProperty.call(obj, prop))
/******/ })();
/******/
/******/ /* webpack/runtime/load script */
/******/ (() => {
/******/ var inProgress = {};
/******/ var dataWebpackPrefix = "KNICS_Jupyter_frontend:";
/******/ // loadScript function to load a script via script tag
/******/ __webpack_require__.l = (url, done, key, chunkId) => {
/******/ if(inProgress[url]) { inProgress[url].push(done); return; }
/******/ var script, needAttach;
/******/ if(key !== undefined) {
/******/ var scripts = document.getElementsByTagName("script");
/******/ for(var i = 0; i < scripts.length; i++) {
/******/ var s = scripts[i];
/******/ if(s.getAttribute("src") == url || s.getAttribute("data-webpack") == dataWebpackPrefix + key) { script = s; break; }
/******/ }
/******/ }
/******/ if(!script) {
/******/ needAttach = true;
/******/ script = document.createElement('script');
/******/
/******/ script.charset = 'utf-8';
/******/ script.timeout = 120;
/******/ if (__webpack_require__.nc) {
/******/ script.setAttribute("nonce", __webpack_require__.nc);
/******/ }
/******/ script.setAttribute("data-webpack", dataWebpackPrefix + key);
/******/ script.src = url;
/******/ }
/******/ inProgress[url] = [done];
/******/ var onScriptComplete = (prev, event) => {
/******/ // avoid mem leaks in IE.
/******/ script.onerror = script.onload = null;
/******/ clearTimeout(timeout);
/******/ var doneFns = inProgress[url];
/******/ delete inProgress[url];
/******/ script.parentNode && script.parentNode.removeChild(script);
/******/ doneFns && doneFns.forEach((fn) => (fn(event)));
/******/ if(prev) return prev(event);
/******/ }
/******/ ;
/******/ var timeout = setTimeout(onScriptComplete.bind(null, undefined, { type: 'timeout', target: script }), 120000);
/******/ script.onerror = onScriptComplete.bind(null, script.onerror);
/******/ script.onload = onScriptComplete.bind(null, script.onload);
/******/ needAttach && document.head.appendChild(script);
/******/ };
/******/ })();
/******/
/******/ /* webpack/runtime/make namespace object */
/******/ (() => {
/******/ // define __esModule on exports
/******/ __webpack_require__.r = (exports) => {
/******/ if(typeof Symbol !== 'undefined' && Symbol.toStringTag) {
/******/ Object.defineProperty(exports, Symbol.toStringTag, { value: 'Module' });
/******/ }
/******/ Object.defineProperty(exports, '__esModule', { value: true });
/******/ };
/******/ })();
/******/
/******/ /* webpack/runtime/sharing */
/******/ (() => {
/******/ __webpack_require__.S = {};
/******/ var initPromises = {};
/******/ var initTokens = {};
/******/ __webpack_require__.I = (name, initScope) => {
/******/ if(!initScope) initScope = [];
/******/ // handling circular init calls
/******/ var initToken = initTokens[name];
/******/ if(!initToken) initToken = initTokens[name] = {};
/******/ if(initScope.indexOf(initToken) >= 0) return;
/******/ initScope.push(initToken);
/******/ // only runs once
/******/ if(initPromises[name]) return initPromises[name];
/******/ // creates a new share scope if needed
/******/ if(!__webpack_require__.o(__webpack_require__.S, name)) __webpack_require__.S[name] = {};
/******/ // runs all init snippets from all modules reachable
/******/ var scope = __webpack_require__.S[name];
/******/ var warn = (msg) => (typeof console !== "undefined" && console.warn && console.warn(msg));
/******/ var uniqueName = "KNICS_Jupyter_frontend";
/******/ var register = (name, version, factory, eager) => {
/******/ var versions = scope[name] = scope[name] || {};
/******/ var activeVersion = versions[version];
/******/ if(!activeVersion || (!activeVersion.loaded && (!eager != !activeVersion.eager ? eager : uniqueName > activeVersion.from))) versions[version] = { get: factory, from: uniqueName, eager: !!eager };
/******/ };
/******/ var initExternal = (id) => {
/******/ var handleError = (err) => (warn("Initialization of sharing external failed: " + err));
/******/ try {
/******/ var module = __webpack_require__(id);
/******/ if(!module) return;
/******/ var initFn = (module) => (module && module.init && module.init(__webpack_require__.S[name], initScope))
/******/ if(module.then) return promises.push(module.then(initFn, handleError));
/******/ var initResult = initFn(module);
/******/ if(initResult && initResult.then) return promises.push(initResult['catch'](handleError));
/******/ } catch(err) { handleError(err); }
/******/ }
/******/ var promises = [];
/******/ switch(name) {
/******/ case "default": {
/******/ register("KNICS_Jupyter_frontend", "0.1.0", () => (__webpack_require__.e("lib_index_js").then(() => (() => (__webpack_require__(/*! ./lib/index.js */ "./lib/index.js"))))));
/******/ }
/******/ break;
/******/ }
/******/ if(!promises.length) return initPromises[name] = 1;
/******/ return initPromises[name] = Promise.all(promises).then(() => (initPromises[name] = 1));
/******/ };
/******/ })();
/******/
/******/ /* webpack/runtime/publicPath */
/******/ (() => {
/******/ var scriptUrl;
/******/ if (__webpack_require__.g.importScripts) scriptUrl = __webpack_require__.g.location + "";
/******/ var document = __webpack_require__.g.document;
/******/ if (!scriptUrl && document) {
/******/ if (document.currentScript)
/******/ scriptUrl = document.currentScript.src
/******/ if (!scriptUrl) {
/******/ var scripts = document.getElementsByTagName("script");
/******/ if(scripts.length) scriptUrl = scripts[scripts.length - 1].src
/******/ }
/******/ }
/******/ // When supporting browsers where an automatic publicPath is not supported you must specify an output.publicPath manually via configuration
/******/ // or pass an empty string ("") and set the __webpack_public_path__ variable from your code to use your own logic.
/******/ if (!scriptUrl) throw new Error("Automatic publicPath is not supported in this browser");
/******/ scriptUrl = scriptUrl.replace(/#.*$/, "").replace(/\?.*$/, "").replace(/\/[^\/]+$/, "/");
/******/ __webpack_require__.p = scriptUrl;
/******/ })();
/******/
/******/ /* webpack/runtime/jsonp chunk loading */
/******/ (() => {
/******/ // no baseURI
/******/
/******/ // object to store loaded and loading chunks
/******/ // undefined = chunk not loaded, null = chunk preloaded/prefetched
/******/ // [resolve, reject, Promise] = chunk loading, 0 = chunk loaded
/******/ var installedChunks = {
/******/ "KNICS_Jupyter_frontend": 0
/******/ };
/******/
/******/ __webpack_require__.f.j = (chunkId, promises) => {
/******/ // JSONP chunk loading for javascript
/******/ var installedChunkData = __webpack_require__.o(installedChunks, chunkId) ? installedChunks[chunkId] : undefined;
/******/ if(installedChunkData !== 0) { // 0 means "already installed".
/******/
/******/ // a Promise means "currently loading".
/******/ if(installedChunkData) {
/******/ promises.push(installedChunkData[2]);
/******/ } else {
/******/ if(true) { // all chunks have JS
/******/ // setup Promise in chunk cache
/******/ var promise = new Promise((resolve, reject) => (installedChunkData = installedChunks[chunkId] = [resolve, reject]));
/******/ promises.push(installedChunkData[2] = promise);
/******/
/******/ // start chunk loading
/******/ var url = __webpack_require__.p + __webpack_require__.u(chunkId);
/******/ // create error before stack unwound to get useful stacktrace later
/******/ var error = new Error();
/******/ var loadingEnded = (event) => {
/******/ if(__webpack_require__.o(installedChunks, chunkId)) {
/******/ installedChunkData = installedChunks[chunkId];
/******/ if(installedChunkData !== 0) installedChunks[chunkId] = undefined;
/******/ if(installedChunkData) {
/******/ var errorType = event && (event.type === 'load' ? 'missing' : event.type);
/******/ var realSrc = event && event.target && event.target.src;
/******/ error.message = 'Loading chunk ' + chunkId + ' failed.\n(' + errorType + ': ' + realSrc + ')';
/******/ error.name = 'ChunkLoadError';
/******/ error.type = errorType;
/******/ error.request = realSrc;
/******/ installedChunkData[1](error);
/******/ }
/******/ }
/******/ };
/******/ __webpack_require__.l(url, loadingEnded, "chunk-" + chunkId, chunkId);
/******/ } else installedChunks[chunkId] = 0;
/******/ }
/******/ }
/******/ };
/******/
/******/ // no prefetching
/******/
/******/ // no preloaded
/******/
/******/ // no HMR
/******/
/******/ // no HMR manifest
/******/
/******/ // no on chunks loaded
/******/
/******/ // install a JSONP callback for chunk loading
/******/ var webpackJsonpCallback = (parentChunkLoadingFunction, data) => {
/******/ var [chunkIds, moreModules, runtime] = data;
/******/ // add "moreModules" to the modules object,
/******/ // then flag all "chunkIds" as loaded and fire callback
/******/ var moduleId, chunkId, i = 0;
/******/ if(chunkIds.some((id) => (installedChunks[id] !== 0))) {
/******/ for(moduleId in moreModules) {
/******/ if(__webpack_require__.o(moreModules, moduleId)) {
/******/ __webpack_require__.m[moduleId] = moreModules[moduleId];
/******/ }
/******/ }
/******/ if(runtime) var result = runtime(__webpack_require__);
/******/ }
/******/ if(parentChunkLoadingFunction) parentChunkLoadingFunction(data);
/******/ for(;i < chunkIds.length; i++) {
/******/ chunkId = chunkIds[i];
/******/ if(__webpack_require__.o(installedChunks, chunkId) && installedChunks[chunkId]) {
/******/ installedChunks[chunkId][0]();
/******/ }
/******/ installedChunks[chunkId] = 0;
/******/ }
/******/
/******/ }
/******/
/******/ var chunkLoadingGlobal = self["webpackChunkKNICS_Jupyter_frontend"] = self["webpackChunkKNICS_Jupyter_frontend"] || [];
/******/ chunkLoadingGlobal.forEach(webpackJsonpCallback.bind(null, 0));
/******/ chunkLoadingGlobal.push = webpackJsonpCallback.bind(null, chunkLoadingGlobal.push.bind(chunkLoadingGlobal));
/******/ })();
/******/
/******/ /* webpack/runtime/nonce */
/******/ (() => {
/******/ __webpack_require__.nc = undefined;
/******/ })();
/******/
/************************************************************************/
/******/
/******/ // module cache are used so entry inlining is disabled
/******/ // startup
/******/ // Load entry module and return exports
/******/ var __webpack_exports__ = __webpack_require__("webpack/container/entry/KNICS_Jupyter_frontend");
/******/ (_JUPYTERLAB = typeof _JUPYTERLAB === "undefined" ? {} : _JUPYTERLAB).KNICS_Jupyter_frontend = __webpack_exports__;
/******/
/******/ })()
;
//# sourceMappingURL=remoteEntry.a910049d2e28ec9b8a93.js.map | PypiClean |
/Odoo_API_Library-1.1.4.tar.gz/Odoo_API_Library-1.1.4/Odoo_API_Library/Validator.py | import logging
import jwt
import re
import datetime
import traceback
import os
from odoo import http, service, registry, SUPERUSER_ID
from odoo.http import request
from odoo.tools import DEFAULT_SERVER_DATETIME_FORMAT
_logger = logging.getLogger(__name__)
regex = r"^[a-z0-9!#$%&'*+\/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+\/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?$"
class Validator:
def is_valid_email(self, email):
return re.search(regex, email)
def key(self):
# return os.environ.get('ODOO_JWT_KEY')
ICPSudo = request.env['ir.config_parameter'].sudo()
return ICPSudo.get_param('jwt_secret_key')
def create_token(self, user):
try:
ICPSudo = request.env['ir.config_parameter'].sudo()
exp_day = ICPSudo.get_param("access_token_expires_in")
exp = datetime.datetime.utcnow() + datetime.timedelta(days=float(exp_day))
payload = {
'exp': exp,
'iat': datetime.datetime.utcnow(),
'sub': user['id'],
'lgn': user['login'],
}
token = jwt.encode(
payload,
self.key(),
algorithm=ICPSudo.get_param('jwt_algorithm')
)
self.save_token(token, user['id'], exp)
# return token.decode('utf-8')
return token
except Exception as ex:
_logger.error(ex)
raise
def save_token(self, token, uid, exp):
request.env['jwt_provider.access_token'].sudo().create({
'user_id': uid,
'expires': exp.strftime(DEFAULT_SERVER_DATETIME_FORMAT),
'token': token,
})
def verify(self, token):
record = request.env['jwt_provider.access_token'].sudo().search([
('token', '=', token)
])
if len(record) != 1:
_logger.info('not found %s' % token)
return False
if record.is_expired:
return False
return record.user_id
def verify_token(self, token):
try:
result = {
'status': False,
'message': None,
}
payload = jwt.decode(token, self.key(), algorithms='HS256')
if not self.verify(token):
result['message'] = 'Token invalid or expired'
result['code'] = 498
_logger.info('11111')
return result
uid = request.session.authenticate(
request.session.db, login=payload['lgn'], password=token)
if not uid:
result['message'] = 'Token invalid or expired'
result['code'] = 498
_logger.info('2222')
return result
result['status'] = True
return result
except (jwt.ExpiredSignatureError, jwt.InvalidTokenError, Exception) as e:
result['code'] = 498
result['message'] = 'Token invalid or expired'
_logger.error(traceback.format_exc())
return result
validator = Validator() | PypiClean |
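The `Validator` above only works inside a running Odoo instance (it reads `ir.config_parameter` options and writes `jwt_provider.access_token` records). The snippet below is a standalone sketch of just the PyJWT round trip it performs; the secret, expiry and payload values are made-up stand-ins for the `jwt_secret_key`, `access_token_expires_in` and user fields.

```python
import datetime
import jwt  # PyJWT

secret = "change-me"  # stand-in for the jwt_secret_key config parameter

payload = {
    "exp": datetime.datetime.utcnow() + datetime.timedelta(days=1),
    "iat": datetime.datetime.utcnow(),
    "sub": 2,         # user id
    "lgn": "admin",   # user login
}

token = jwt.encode(payload, secret, algorithm="HS256")
decoded = jwt.decode(token, secret, algorithms=["HS256"])
print(decoded["lgn"])  # -> admin
```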
/LabExT_pkg-2.2.0.tar.gz/LabExT_pkg-2.2.0/LabExT/View/ExperimentWizard/Components/MeasurementWindow.py | import logging
from tkinter import Tk, Frame, Label, Button, messagebox
from LabExT.View.Controls.CustomTable import CustomTable
from LabExT.View.TooltipMenu import CreateToolTip
class MeasurementWindow(Frame):
"""Shows all possible measurements in a table and lets the user
decide on the order in which the measurements will be performed.
Called by the ExperimentWizard.
"""
def __init__(self, parent: Tk, experiment_manager, callback=None):
"""Constructor.
Parameters
----------
parent : Tk
Tkinter window parent.
experiment_manager : ExperimentManager
Instance of the current ExperimentManager.
"""
super(MeasurementWindow,
self).__init__(parent) # call parent constructor
self.logger = logging.getLogger()
self.logger.debug('Initialised MeasurementWindow with parent: %s, experiment_manager: %s', parent, experiment_manager )
self._root = parent
self.callback = callback
self._experiment_manager = experiment_manager
self._root.title('Measurement Overview')
self._root.geometry('{}x{}'.format(500, 250))
# all possible measurements
self._meas = self._experiment_manager.exp.measurement_list
self.logger.debug('All possible measurements: %s', self._meas)
# selected measurements
self._selection = list()
# if the user aborts, this is set to true, used by the ExperimentWizard
self._abort = False
parent.protocol("WM_DELETE_WINDOW", self.__on_close__)
self.grid(row=0, column=0) # place window in root element
self.__setup__() # setup the window content
def __on_close__(self):
"""Asks the user if (s)he wants to quit, since this class is
part of the ExperimentWizard.
"""
m = messagebox.askyesno('Quit',
'Do you want to quit the ExperimentWizard?')
if m:
self._root.destroy()
self._abort = True
self.logger.debug('User aborted MeasurementWindow')
def __setup__(self):
"""Sets up the measurement table and the buttons.
"""
# create the rows and columns for the table
columns = ["Order", "Name"]
rows = list()
self._meas = list(self._meas)
self._meas.sort()
for meas in self._meas:
tup = (0, meas)
rows.append(tup)
# create table
self._meas_table = CustomTable(self._root, columns, rows)
# insert the measurements to the table and add event when user selects a measurement
for i, item in enumerate(self._meas_table._tree.get_children('')):
self._meas_table._tree.item(item=item, tags=(str(i)))
self._meas_table._tree.tag_bind(str(i), '<ButtonRelease-1>', self.select_item)
CreateToolTip(experiment_manager=self._experiment_manager,
widget=self._meas_table._tree,
stringvar=i,
is_treeview=True,
item=item)
# set up buttons and label with information for the user
self._select_all_button = Button(
self._root, text="Select all", command=self.select_all)
self._select_all_button.grid(column=0, row=3, sticky='w')
self._info_label = Label(
self._root,
text='Order 0 means that the measurement is not selected.\nRight click on measurement for info.')
self._info_label.grid(column=0, row=3, sticky='')
self._continue_button = Button(
self._root, text="Continue", command=self._continue)
self._continue_button.grid(column=0, row=3, sticky='e')
def select_item(self, a):
"""Called when the user selects a measurement in the table.
Sets the order of the measurements.
Parameters
----------
a : Tkinter Event Object
Python object instance with attributes about the event.
"""
# do nothing to the selection, if the header is clicked
region = self._meas_table._tree.identify("region", a.x, a.y)
if region == "heading":
return
# get the item, that was clicked on
curMeas = self._meas_table._tree.focus()
self.logger.debug('Client clicked on: %s', curMeas)
meas = self._meas_table._tree.set(curMeas, 1)
self.logger.debug('Measurement: %s', meas)
order = int(self._meas_table._tree.set(curMeas, 0))
self.logger.debug('Order: %s', order)
# determine if item should be selected or deselected
if meas in self._selection:
self.logger.debug('Measurement was removed from selection.')
self._selection.remove(meas)
# update order of all selected measurements
# get all measurements
for item in self._meas_table._tree.get_children(''):
othermeas = self._meas_table._tree.set(item, 1)
# only regard the selected measurement
if othermeas in self._selection:
# the order is the index in the selection list,
# because the deselected measurement is removed from self.selection
self._meas_table._tree.set(
item=item,
column=0,
value=self._selection.index(othermeas) + 1)
# set order of selected measurement to 0 = deselected
self._meas_table._tree.set(curMeas, 0, 0)
else:
self.logger.debug('Measurement was added to selection.')
self._selection.append(meas)
self._meas_table._tree.set(curMeas, 0, len(self._selection))
def select_all(self):
"""Selects all measurements or deselects all, if all are
selected.
Called when user presses 'Select All' button.
"""
# determine whether or not all measurements are already selected
all_selected = False
if len(self._selection) == len(self._meas):
all_selected = True
self.logger.debug('Currently all measurements are selected: %s', all_selected)
# if all measurements are selected, deselect all, by giving them order 0
if all_selected:
self._selection.clear()
for item in self._meas_table._tree.get_children(''):
self._meas_table._tree.set(item=item, column=0, value=0)
# else select all in ascending order
else:
self._selection.clear()
for meas in self._meas:
self._selection.append(meas)
for i, item in enumerate(self._meas_table._tree.get_children('')):
self._meas_table._tree.set(item=item, column=0, value=i + 1)
def _continue(self):
"""Called when user presses on 'Continue' button.
Calls Experiment to import measurements then closes to return
to ExperimentWizard.
"""
# if the user doesn't select any measurement, we don't do anything
if not self._selection:
messagebox.showinfo('Warning',
'Please select at least one measurement')
return
self.logger.debug('Will now import all measurements...')
for meas in self._selection:
self._experiment_manager.exp.create_measurement_object(meas)
self._root.destroy()
if self.callback is not None:
self.callback() | PypiClean |
/GhettoRecorder-3.0-py3-none-any.whl/ghettorecorder/cmd.py | import os
import sys
import time
import signal
import multiprocessing as mp
from pathlib import Path
import ghettorecorder.ghetto_menu as menu
import ghettorecorder.ghetto_procenv as procenv
import ghettorecorder.ghetto_blacklist as ghetto_blacklist
import ghettorecorder.ghetto_container as container
from ghettorecorder.ghetto_api import ghettoApi
mp.set_start_method('spawn', force=True) # http server process
class Entry:
def __init__(self):
# file system config
self.dir_name = os.path.dirname(__file__) # absolute dir path
self.config_dir = '' # where settings ini is located
self.config_name = "settings.ini"
self.blacklist_name = "blacklist.json"
self.radios_parent_dir = '' # changed if settings GLOBAL 'save_to_dir' changes, blacklist_dir is also that dir
# radio dicts, lists
self.runs_meta = True
self.runs_record = True
self.runs_listen = True
self.radio_name_list = []
self.config_file_radio_url_dict = {} # all {name: url}
self.config_file_settings_dict = {} # blacklist, folders
self.radio_selection_dict = {} # selection to rec
# can be useful on the command line, if you want to start an HTTP server to stream one of the radio instances locally
self.no_err_radios = [] # started radios without errors in err dict
entry = Entry()
def init_path():
"""File system basic info to find the configuration file.
| Container creates folders in places where writing is allowed.
"""
config_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)))
container_dir = container.container_setup()
if container_dir:
config_dir = container_dir
print('container config_dir ', config_dir)
entry.config_dir = config_dir
ghettoApi.path.config_dir = config_dir
ghettoApi.path.config_name = entry.config_name
def run_radios(radio_dict):
"""
Each instance can have its own configuration. Use a Database or json file.
- instantiate radios in a dict, start instances
- failed radios are canceled
- first radio of the ini list starts a http server to listen local buffered sound
:params: radio_base_dir: parent dir
:params: radio_dict: radios with url from menu
"""
for radio, url in radio_dict.items():
procenv.radio_instance_create(radio, url, **entry.__dict__)
url_timeout = 15
start = time.perf_counter()
while 1: # minimize wait time
done = all([True if instance.init_done else False for instance in ghettoApi.radio_inst_dict.values()])
if done or (round((time.perf_counter() - start)) >= url_timeout):
break
def radios_error_get():
"""Useful for terminal, where we must start
all instances at the same time.
"""
instance_err_dict = {}
for radio, inst in ghettoApi.radio_inst_dict.items():
if ghettoApi.radio_inst_dict[radio].error_dict:
instance_err_dict[radio] = ghettoApi.radio_inst_dict[radio].error_dict
ghettoApi.radio_inst_dict[radio].cancel()
print(f' ### cancel radio {radio} ###')
if len(instance_err_dict):
print('\n\n --- errors ---\n\n')
[print(k, v) for k, v in instance_err_dict.items()]
print('\n\n --- end ---\n\n')
entry.no_err_radios = [radio for radio in ghettoApi.radio_inst_dict.keys() if radio not in instance_err_dict.keys()]
return entry.no_err_radios
def show_radios_urls_formatted():
"""Print formatted urls to be able to click listen.
"""
for radio, url in entry.config_file_radio_url_dict.items():
print(f'* {radio:<20} {url}')
print('\n\t---')
def signal_handler(sig, frame):
""" Terminal: catch Keyboard Interrupt ctrl + c, "signal.signal()" instances listen.
:params: sig: SIGTERM
:params: frame: SIGINT
"""
ghettoApi.blacklist.stop_blacklist_writer = True
shutdown()
print('\nThank you for using the GhettoRecorder module.')
sys.exit(0)
signal.signal(signal.SIGTERM, signal_handler)
signal.signal(signal.SIGINT, signal_handler)
def shutdown():
"""Trigger shutdown of radio instances.
"""
radio_lst = procenv.radio_instances_get()
for radio in radio_lst:
procenv.del_radio_instance(radio)
def run_ghetto(frontend=None):
"""
| [STATIONS] *config_file_radio_url_dict* {radio: url} from ini; radio = url
| [GLOBAL] *config_file_settings_dict* {'blacklist_enable': 'True', 'save_to_dir': 'f:\\012345'}
| *radio_selection_dict* user selection command line, bulk start radio instances later
| *radios_parent_dir* is the folder for all the radio dirs
| HTTP server can use Ajax, radio buttons and a dict to switch radio instances on/off
:methods: init_path: collect path variables in an instance and API
:params: frontend: switch options to avoid input() loops and forced parallel start of instances, unlike cmd
"""
init_path()
# show main menu and collect radios or update config file
menu.record() if frontend else menu.menu_main() # ini file to internal dict or show terminal selection
entry.config_file_radio_url_dict = menu.settings_ini_to_dict()
for radio in entry.config_file_radio_url_dict.keys():
entry.radio_name_list.append(radio)
entry.config_file_settings_dict = menu.settings_ini_global()
# dict for html radio buttons or terminal menu input() loop
entry.radio_selection_dict = menu.radio_url_dict_create() if frontend else menu.record_read_radios()
remote_dir = ghettoApi.path.save_to_dir # settings.ini [GLOBAL] section path option for custom folder
if remote_dir:
entry.radios_parent_dir = Path(ghettoApi.path.save_to_dir)
else:
entry.radios_parent_dir = Path(ghettoApi.path.config_dir)
ghetto_blacklist.init(**entry.__dict__) # checks start option on/off itself
def main():
""""""
run_ghetto()
entry.runs_listen = False # use frontend for listen
run_radios(entry.radio_selection_dict)
show_radios_urls_formatted()
while 1:
# names_list = [thread.name for thread in threading.enumerate()]
# print(names_list)
time.sleep(10) # interval to show list; exit via signal_handler and keyboard
if __name__ == "__main__":
main() | PypiClean |
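A stripped-down illustration of the shutdown pattern `cmd.py` uses above: one handler registered for both SIGINT and SIGTERM flips a stop flag, runs the cleanup, and exits. The `cleanup()` body is a made-up stand-in for `procenv.del_radio_instance()` and the blacklist-writer flag.

```python
import signal
import sys
import time

stop_requested = False

def cleanup():
    print("stopping all recorder instances ...")  # stand-in for the real teardown

def handler(sig, frame):
    global stop_requested
    stop_requested = True
    cleanup()
    sys.exit(0)

signal.signal(signal.SIGINT, handler)
signal.signal(signal.SIGTERM, handler)

while not stop_requested:  # stand-in for the 10-second status loop in main()
    time.sleep(1)
```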
/CSUMMDET-1.0.23.tar.gz/CSUMMDET-1.0.23/mmdet/models/bbox_heads/double_bbox_head.py | import torch.nn as nn
from mmcv.cnn.weight_init import normal_init, xavier_init
from ..backbones.resnet import Bottleneck
from ..registry import HEADS
from ..utils import ConvModule
from .bbox_head import BBoxHead
class BasicResBlock(nn.Module):
"""Basic residual block.
This block is a little different from the block in the ResNet backbone.
The kernel size of conv1 is 1 in this block while 3 in ResNet BasicBlock.
Args:
in_channels (int): Channels of the input feature map.
out_channels (int): Channels of the output feature map.
conv_cfg (dict): The config dict for convolution layers.
norm_cfg (dict): The config dict for normalization layers.
"""
def __init__(self,
in_channels,
out_channels,
conv_cfg=None,
norm_cfg=dict(type='BN')):
super(BasicResBlock, self).__init__()
# main path
self.conv1 = ConvModule(
in_channels,
in_channels,
kernel_size=3,
padding=1,
bias=False,
conv_cfg=conv_cfg,
norm_cfg=norm_cfg)
self.conv2 = ConvModule(
in_channels,
out_channels,
kernel_size=1,
bias=False,
activation=None,
conv_cfg=conv_cfg,
norm_cfg=norm_cfg)
# identity path
self.conv_identity = ConvModule(
in_channels,
out_channels,
kernel_size=1,
conv_cfg=conv_cfg,
norm_cfg=norm_cfg,
activation=None)
self.relu = nn.ReLU(inplace=True)
def forward(self, x):
identity = x
x = self.conv1(x)
x = self.conv2(x)
identity = self.conv_identity(identity)
out = x + identity
out = self.relu(out)
return out
@HEADS.register_module
class DoubleConvFCBBoxHead(BBoxHead):
r"""Bbox head used in Double-Head R-CNN
/-> cls
/-> shared convs ->
\-> reg
roi features
/-> cls
\-> shared fc ->
\-> reg
""" # noqa: W605
def __init__(self,
num_convs=0,
num_fcs=0,
conv_out_channels=1024,
fc_out_channels=1024,
conv_cfg=None,
norm_cfg=dict(type='BN'),
**kwargs):
kwargs.setdefault('with_avg_pool', True)
super(DoubleConvFCBBoxHead, self).__init__(**kwargs)
assert self.with_avg_pool
assert num_convs > 0
assert num_fcs > 0
self.num_convs = num_convs
self.num_fcs = num_fcs
self.conv_out_channels = conv_out_channels
self.fc_out_channels = fc_out_channels
self.conv_cfg = conv_cfg
self.norm_cfg = norm_cfg
# increase the channel of input features
self.res_block = BasicResBlock(self.in_channels,
self.conv_out_channels)
# add conv heads
self.conv_branch = self._add_conv_branch()
# add fc heads
self.fc_branch = self._add_fc_branch()
out_dim_reg = 4 if self.reg_class_agnostic else 4 * self.num_classes
self.fc_reg = nn.Linear(self.conv_out_channels, out_dim_reg)
self.fc_cls = nn.Linear(self.fc_out_channels, self.num_classes)
self.relu = nn.ReLU(inplace=True)
def _add_conv_branch(self):
"""Add the fc branch which consists of a sequential of conv layers"""
branch_convs = nn.ModuleList()
for i in range(self.num_convs):
branch_convs.append(
Bottleneck(
inplanes=self.conv_out_channels,
planes=self.conv_out_channels // 4,
conv_cfg=self.conv_cfg,
norm_cfg=self.norm_cfg))
return branch_convs
def _add_fc_branch(self):
"""Add the fc branch which consists of a sequential of fc layers"""
branch_fcs = nn.ModuleList()
for i in range(self.num_fcs):
fc_in_channels = (
self.in_channels *
self.roi_feat_area if i == 0 else self.fc_out_channels)
branch_fcs.append(nn.Linear(fc_in_channels, self.fc_out_channels))
return branch_fcs
def init_weights(self):
normal_init(self.fc_cls, std=0.01)
normal_init(self.fc_reg, std=0.001)
for m in self.fc_branch.modules():
if isinstance(m, nn.Linear):
xavier_init(m, distribution='uniform')
def forward(self, x_cls, x_reg):
# conv head
x_conv = self.res_block(x_reg)
for conv in self.conv_branch:
x_conv = conv(x_conv)
if self.with_avg_pool:
x_conv = self.avg_pool(x_conv)
x_conv = x_conv.view(x_conv.size(0), -1)
bbox_pred = self.fc_reg(x_conv)
# fc head
x_fc = x_cls.view(x_cls.size(0), -1)
for fc in self.fc_branch:
x_fc = self.relu(fc(x_fc))
cls_score = self.fc_cls(x_fc)
return cls_score, bbox_pred | PypiClean |
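Running `DoubleConvFCBBoxHead` itself requires this (older) mmdet code base and its registry; the snippet below is a plain-PyTorch stand-in, with made-up channel sizes, that only illustrates the data flow of `forward(x_cls, x_reg)`: a convolutional branch feeding the box regressor and a fully connected branch feeding the classifier.

```python
import torch
import torch.nn as nn

num_classes, roi = 81, 7  # assumed values, e.g. COCO classes + background, 7x7 RoI features

conv_branch = nn.Sequential(nn.Conv2d(256, 1024, 1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
fc_branch = nn.Sequential(nn.Flatten(), nn.Linear(256 * roi * roi, 1024), nn.ReLU())
fc_reg = nn.Linear(1024, 4 * num_classes)   # class-aware box deltas
fc_cls = nn.Linear(1024, num_classes)       # classification scores

rois = torch.rand(8, 256, roi, roi)         # 8 RoI feature maps
bbox_pred = fc_reg(conv_branch(rois).flatten(1))
cls_score = fc_cls(fc_branch(rois))
print(cls_score.shape, bbox_pred.shape)     # torch.Size([8, 81]) torch.Size([8, 324])
```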
/BMI160_i2c-0.6.tar.gz/BMI160_i2c-0.6/README.md | # BMI160-i2c
I2C library to use the Inertial Measurement Unit BMI160. Heavily inspired by @serioeseGmbH's code [serioeseGmbH/BMI160](https://github.com/serioeseGmbH/BMI160); in fact I just adapted his code.
This library was tested successfully on a Raspberry Pi 3 B
## Installation
The package is [available on pypi.org](https://pypi.org/project/BMI160-i2c).
You can install this package using this command
`python3 -m pip install BMI160-i2c`
**This library requires [smbus2](https://github.com/kplindegaard/smbus2)**
Install smbus2 using the following command:
`python3 -m pip install smbus2`
## Usage
Wire the breakout board with these lines: GND, 3V3, SAO (to GND), SDA, SCL
Make sure that the device is available at `0x68` or `0x69` i2c address by running this command:
`i2cdetect -y 1`
Example: A little Python script to fetch all 6 values from the sensor:
```python
from time import sleep
from BMI160_i2c import Driver
print('Trying to initialize the sensor...')
sensor = Driver(0x68) # change address if needed
print('Initialization done')
while True:
data = sensor.getMotion6()
# fetch all gyro and accelerometer values
print({
'gx': data[0],
'gy': data[1],
'gz': data[2],
'ax': data[3],
'ay': data[4],
'az': data[5]
})
sleep(0.1)
```
## Documentation
There are many methods available to do whatever you want to do with a sensor of this kind.
Look at all the methods available [here](https://github.com/lefuturiste/BMI160-i2c/blob/master/BMI160_i2c/__init__.py).
## Credits & Related links
- [hanyazou/BMI160-Arduino](https://github.com/hanyazou/BMI160-Arduino/)
- [serioeseGmbH/BMI160](https://github.com/serioeseGmbH/BMI160)
- [IMU BMI160 Bosch product page](https://www.bosch-sensortec.com/products/motion-sensors/imus/bmi160.html)
- [BMI160 Datasheet](https://www.bosch-sensortec.com/media/boschsensortec/downloads/datasheets/bst-bmi160-ds000.pdf)
- [smbus2 docs](https://smbus2.readthedocs.io/en/latest/)
## Contributions
Feel free to open an issue or a pull request; I will be happy to answer any questions or to help you with this library.
You can also use these alternative methods to contact me:
- Twitter: [@_le_futuriste](https://twitter.com/_le_futuriste)
- Discord: `lefuturiste#5297`
- Discord server: [https://discord.gg/9M4vVsX](https://discord.gg/9M4vVsX)
## Maintenance
- Increment the version used in `setup.py`
- Build the package: `python3 setup.py sdist bdist_wheel`
- Publish the package: `python3 -m twine upload dist/*`
- Enter `__token__` for the username
- Enter `pypi-{....}` for the password
- And tada!
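Taken together, a release pass looks roughly like this (a sketch, not a tested script; run it from the repository root after bumping the version in `setup.py`):
```bash
python3 setup.py sdist bdist_wheel
python3 -m twine upload dist/*
# twine will prompt for the username (__token__) and the pypi-{....} token
```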
| PypiClean |
/Deeplodocus-0.3.0-py3-none-any.whl/deeplodocus/data/load/pipeline_entry.py | from typing import Optional
from typing import List
from typing import Any
import weakref
# Deeplodocus imports
from deeplodocus.data.load.formatter import Formatter
from deeplodocus.utils.flag import Flag
class PipelineEntry(object):
"""
AUTHORS:
--------
:author: Alix Leroy
DESCRIPTION:
------------
    A PipelineEntry class.
    It is the entry point to the model, losses and metrics.
    It formats the data before handing it over.
"""
def __init__(self,
index: int,
dataset: weakref,
entry_type: Flag,
entry_type_index: int,
convert_to: Optional[List[int]] = None,
move_axis: Optional[List[int]] = None):
# Index of the PipelineEntry instance
self.index = index
# Entry type (Input, Label, Additional Data)
self.entry_type = entry_type
# Index of the entry type in the Dataset
self.entry_type_index = entry_type_index
# Dataset
self.dataset = dataset
# Data formatter
self.formatter = Formatter(pipeline_entry=weakref.ref(self),
convert_to=convert_to,
move_axis=move_axis)
def format(self, data: Any) -> Any:
"""
AUTHORS:
--------
:author: Alix Leroy
DESCRIPTION:
------------
Call the Formatter instance to format the data
1) Format the data type
2) Move the axis to a define sequence
PARAMETERS:
-----------
:param data (Any): The data to format
RETURN:
-------
:return (Any): The formatted data
"""
return self.formatter.format(data=data, entry_type=self.entry_type)
###########
# GETTERS #
###########
def get_index(self) -> int:
"""
AUTHORS:
--------
:author: Alix Leroy
DESCRIPTION:
------------
Get the PipelineEntry index
PARAMETERS:
-----------
None
RETURN:
-------
:return self.index(int): The PipelineEntry index
"""
return self.index
def get_dataset(self):
"""
AUTHORS:
--------
:author: Alix Leroy
DESCRIPTION:
------------
        Get the Dataset to which this PipelineEntry belongs (the weakref is dereferenced)
PARAMETERS:
-----------
None
RETURN:
-------
        :return self.dataset() (Dataset): The dereferenced Dataset instance
"""
return self.dataset()
def get_entry_type(self) -> Flag:
"""
AUTHORS:
--------
:author: Alix Leroy
DESCRIPTION:
------------
Get the entry type
PARAMETERS:
-----------
None
RETURN:
-------
:return self.entry_type(Flag): The entry type
"""
return self.entry_type
def get_entry_type_index(self) -> int:
"""
AUTHORS:
--------
:author: Alix Leroy
DESCRIPTION:
------------
        Get the PipelineEntry index for the type it belongs to (input, label, additional_data)
PARAMETERS:
-----------
None
RETURN:
-------
        :return self.entry_type_index(int): The PipelineEntry index within its entry type
"""
return self.entry_type_index | PypiClean |
/GeoNode-3.2.0-py3-none-any.whl/geonode/static/geonode/js/ol-2.13/lib/OpenLayers/Lang/be-tarask.js | * @requires OpenLayers/Lang.js
*/
/**
* Namespace: OpenLayers.Lang["be-tarask"]
* Dictionary for Беларуская (тарашкевіца). Keys for entries are used in calls to
* <OpenLayers.Lang.translate>. Entry bodies are normal strings or
* strings formatted for use with <OpenLayers.String.format> calls.
*/
OpenLayers.Lang["be-tarask"] = OpenLayers.Util.applyDefaults({
'unhandledRequest': "Неапрацаваны вынік запыту ${statusText}",
'Permalink': "Сталая спасылка",
'Overlays': "Слаі",
'Base Layer': "Базавы слой",
'noFID': "Немагчыма абнавіць магчымасьць, для якога не існуе FID.",
'browserNotSupported': "Ваш браўзэр не падтрымлівае вэктарную графіку. У цяперашні момант падтрымліваюцца: ${renderers}",
'minZoomLevelError': "Уласьцівасьць minZoomLevel прызначана толькі для выкарыстаньня са слаямі вытворнымі ад FixedZoomLevels. Тое, што гэты wfs-слой правяраецца на minZoomLevel — рэха прошлага. Але мы ня можам выдаліць гэтую магчымасьць, таму што ад яе залежаць некаторыя заснаваныя на OL дастасаваньні. Тым ня менш, праверка minZoomLevel будзе выдаленая ў вэрсіі 3.0. Калі ласка, выкарыстоўваеце замест яе ўстаноўкі мінімальнага/максымальнага памераў, як апісана тут: http://trac.openlayers.org/wiki/SettingZoomLevels",
'commitSuccess': "WFS-транзакцыя: ПОСЬПЕХ ${response}",
'commitFailed': "WFS-транзакцыя: ПАМЫЛКА ${response}",
'googleWarning': "Не атрымалася загрузіць слой Google. \x3cbr\x3e\x3cbr\x3eКаб пазбавіцца гэтага паведамленьня, выберыце новы базавы слой у сьпісе ў верхнім правым куце.\x3cbr\x3e\x3cbr\x3e Хутчэй за ўсё, прычына ў тым, што скрыпт бібліятэкі Google Maps ня быў уключаныя альбо не ўтрымлівае слушны API-ключ для Вашага сайта.\x3cbr\x3e\x3cbr\x3eРаспрацоўшчыкам: Для таго, каб даведацца як зрабіць так, каб усё працавала, \x3ca href=\'http://trac.openlayers.org/wiki/Google\' target=\'_blank\'\x3eнацісьніце тут\x3c/a\x3e",
'getLayerWarning': "Немагчыма загрузіць слой ${layerType}.\x3cbr\x3e\x3cbr\x3eКаб пазбавіцца гэтага паведамленьня, выберыце новы базавы слой у сьпісе ў верхнім правым куце.\x3cbr\x3e\x3cbr\x3eХутчэй за ўсё, прычына ў тым, што скрыпт бібліятэкі ${layerLib} ня быў слушна ўключаны.\x3cbr\x3e\x3cbr\x3eРаспрацоўшчыкам: Для таго, каб даведацца як зрабіць так, каб усё працавала, \x3ca href=\'http://trac.openlayers.org/wiki/${layerLib}\' target=\'_blank\'\x3eнацісьніце тут\x3c/a\x3e",
'Scale = 1 : ${scaleDenom}': "Маштаб = 1 : ${scaleDenom}",
'W': "З",
'E': "У",
'N': "Пн",
'S': "Пд",
'reprojectDeprecated': "Вы выкарыстоўваеце ўстаноўку \'reproject\' для слоя ${layerName}. Гэтая ўстаноўка зьяўляецца састарэлай: яна выкарыстоўвалася для падтрымкі паказу зьвестак на камэрцыйных базавых мапах, але гэта функцыя цяпер рэалізаваная ў убудаванай падтрымцы сфэрычнай праекцыі Мэркатара. Дадатковая інфармацыя ёсьць на http://trac.openlayers.org/wiki/SphericalMercator.",
'methodDeprecated': "Гэты мэтад састарэлы і будзе выдалены ў вэрсіі 3.0. Калі ласка, замест яго выкарыстоўвайце ${newMethod}."
}); | PypiClean |
/Box2D-2.3.2.tar.gz/Box2D-2.3.2/examples/backends/pyqt4_framework.py | import string
import sys
import re
from PyQt4 import (QtGui, QtCore)
from PyQt4.QtGui import (QTableWidgetItem, QColor)
from PyQt4.QtCore import Qt
from Box2D import (b2AABB, b2CircleShape, b2Color, b2DistanceJoint,
b2EdgeShape, b2LoopShape, b2MouseJoint, b2Mul,
b2PolygonShape, b2PulleyJoint, b2Vec2)
from Box2D import (b2_pi, b2_staticBody, b2_kinematicBody)
from ..framework import (fwQueryCallback, FrameworkBase, Keys)
from .. import settings
from .pyqt4_gui import Ui_MainWindow
class Pyqt4Draw(object):
"""
This debug drawing class differs from the other frameworks. It provides an
example of how to iterate through all the objects in the world and
associate (in PyQt4's case) QGraphicsItems with them.
While DrawPolygon and DrawSolidPolygon are not used for the core shapes in
the world (DrawPolygonShape is), they are left in for compatibility with
other frameworks and the tests.
world_coordinate parameters are also left in for compatibility. Screen
coordinates cannot be used, as PyQt4 does the scaling and rotating for us.
If you utilize this framework and need to add more items to the
QGraphicsScene for a single step, be sure to add them to the temp_items
array to be deleted on the next draw.
"""
MAX_TIMES = 20
axisScale = 0.4
def __init__(self, test):
self.test = test
self.window = self.test.window
self.scene = self.window.scene
self.view = self.window.graphicsView
self.item_cache = {}
self.temp_items = []
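        # item_cache maps hash(shape) -> persistent QGraphicsItems that are
        # updated in place on later draws, while temp_items holds items that
        # are recreated every frame and removed again in StartDraw().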
self.status_font = QtGui.QFont("Times", 10, QtGui.QFont.Bold)
self.font_spacing = QtGui.QFontMetrics(self.status_font).lineSpacing()
self.draw_idx = 0
def StartDraw(self):
for item in self.temp_items:
self.scene.removeItem(item)
self.temp_items = []
def EndDraw(self):
pass
def SetFlags(self, **kwargs):
"""
For compatibility with other debug drawing classes.
"""
pass
def DrawStringAt(self, x, y, str, color=None):
item = QtGui.QGraphicsSimpleTextItem(str)
if color is None:
color = (255, 255, 255, 255)
brush = QtGui.QBrush(QColor(255, 255, 255, 255))
item.setFont(self.status_font)
item.setBrush(brush)
item.setPos(self.view.mapToScene(x, y))
item.scale(1. / self.test._viewZoom, -1. / self.test._viewZoom)
self.temp_items.append(item)
self.scene.addItem(item)
def DrawPoint(self, p, size, color):
"""
Draw a single point at point p given a pixel size and color.
"""
self.DrawCircle(p, size / self.test.viewZoom, color, drawwidth=0)
def DrawAABB(self, aabb, color):
"""
Draw a wireframe around the AABB with the given color.
"""
line1 = self.scene.addLine(aabb.lowerBound.x, aabb.lowerBound.y,
aabb.upperBound.x, aabb.lowerBound.y,
pen=QtGui.QPen(QColor(*color.bytes)))
line2 = self.scene.addLine(aabb.upperBound.x, aabb.upperBound.y,
aabb.lowerBound.x, aabb.upperBound.y,
pen=QtGui.QPen(QColor(*color.bytes)))
self.temp_items.append(line1)
self.temp_items.append(line2)
def DrawSegment(self, p1, p2, color):
"""
Draw the line segment from p1-p2 with the specified color.
"""
line = self.scene.addLine(p1[0], p1[1], p2[0], p2[1],
pen=QtGui.QPen(QColor(*color.bytes)))
self.temp_items.append(line)
def DrawTransform(self, xf):
"""
Draw the transform xf on the screen
"""
p1 = xf.position
p2 = p1 + self.axisScale * xf.R.x_axis
p3 = p1 + self.axisScale * xf.R.y_axis
line1 = self.scene.addLine(p1[0], p1[1], p2[0], p2[1],
pen=QtGui.QPen(QColor(255, 0, 0)))
line2 = self.scene.addLine(p1[0], p1[1], p3[0], p3[1],
pen=QtGui.QPen(QColor(0, 255, 0)))
self.temp_items.append(line1)
self.temp_items.append(line2)
def DrawCircle(self, center, radius, color, drawwidth=1, shape=None):
"""
        Draw a wireframe circle given the center, radius and color.
"""
border_color = [c * 255 for c in color] + [255]
pen = QtGui.QPen(QtGui.QColor(*border_color))
ellipse = self.scene.addEllipse(center[0] - radius, center[1] - radius,
radius * 2, radius * 2, pen=pen)
self.temp_items.append(ellipse)
def DrawSolidCircle(self, center, radius, axis, color, shape=None):
"""
Draw a solid circle given the center, radius, axis of orientation and
color.
"""
border_color = color.bytes + [255]
inside_color = (color / 2).bytes + [127]
brush = QtGui.QBrush(QtGui.QColor(*inside_color))
pen = QtGui.QPen(QtGui.QColor(*border_color))
ellipse = self.scene.addEllipse(center[0] - radius, center[1] - radius,
radius * 2, radius * 2, brush=brush,
pen=pen)
line = self.scene.addLine(center[0], center[1],
(center[0] - radius * axis[0]),
(center[1] - radius * axis[1]),
pen=QtGui.QPen(QColor(255, 0, 0)))
self.temp_items.append(ellipse)
self.temp_items.append(line)
def DrawPolygon(self, vertices, color, shape=None):
"""
Draw a wireframe polygon given the world vertices vertices (tuples)
with the specified color.
"""
poly = QtGui.QPolygonF()
pen = QtGui.QPen(QtGui.QColor(*color.bytes))
for v in vertices:
poly += QtCore.QPointF(*v)
item = self.scene.addPolygon(poly, pen=pen)
self.temp_items.append(item)
def DrawSolidPolygon(self, vertices, color, shape=None):
"""
Draw a filled polygon given the world vertices vertices (tuples) with
the specified color.
"""
poly = QtGui.QPolygonF()
border_color = color.bytes + [255]
inside_color = (color / 2).bytes + [127]
brush = QtGui.QBrush(QtGui.QColor(*inside_color))
pen = QtGui.QPen(QtGui.QColor(*border_color))
for v in vertices:
poly += QtCore.QPointF(*v)
item = self.scene.addPolygon(poly, brush=brush, pen=pen)
self.temp_items.append(item)
def DrawCircleShape(self, shape, transform, color, temporary=False):
center = b2Mul(transform, shape.pos)
radius = shape.radius
axis = transform.R.x_axis
border_color = color.bytes + [255]
inside_color = (color / 2).bytes + [127]
brush = QtGui.QBrush(QtGui.QColor(*inside_color))
pen = QtGui.QPen(QtGui.QColor(*border_color))
ellipse = self.scene.addEllipse(-radius, -radius,
radius * 2, radius * 2, brush=brush,
pen=pen)
line = self.scene.addLine(center[0], center[1],
(center[0] - radius * axis[0]),
(center[1] - radius * axis[1]),
pen=QtGui.QPen(QColor(255, 0, 0)))
ellipse.setPos(*center)
ellipse.radius = radius
if temporary:
self.temp_items.append(ellipse)
self.temp_items.append(line)
else:
self.item_cache[hash(shape)] = [ellipse, line]
def DrawPolygonShape(self, shape, transform, color, temporary=False):
poly = QtGui.QPolygonF()
border_color = color.bytes + [255]
inside_color = (color / 2).bytes + [127]
brush = QtGui.QBrush(QtGui.QColor(*inside_color))
pen = QtGui.QPen(QtGui.QColor(*border_color))
for v in shape.vertices:
poly += QtCore.QPointF(*v)
item = self.scene.addPolygon(poly, brush=brush, pen=pen)
item.setRotation(transform.angle * 180.0 / b2_pi)
item.setPos(*transform.position)
if temporary:
self.temp_items.append(item)
else:
self.item_cache[hash(shape)] = [item]
def _remove_from_cache(self, shape):
items = self.item_cache[hash(shape)]
del self.item_cache[hash(shape)]
for item in items:
self.scene.removeItem(item)
def DrawShape(self, shape, transform, color, selected=False):
"""
Draw any type of shape
"""
cache_hit = False
if hash(shape) in self.item_cache:
cache_hit = True
items = self.item_cache[hash(shape)]
items[0].setRotation(transform.angle * 180.0 / b2_pi)
if isinstance(shape, b2CircleShape):
radius = shape.radius
if items[0].radius == radius:
center = b2Mul(transform, shape.pos)
items[0].setPos(*center)
line = items[1]
axis = transform.R.x_axis
line.setLine(center[0], center[1],
(center[0] - radius * axis[0]),
(center[1] - radius * axis[1]))
else:
self._remove_from_cache(shape)
cache_hit = False
else:
items[0].setPos(*transform.position)
if not selected or cache_hit:
return
if selected:
color = b2Color(1, 1, 1)
temporary = True
else:
temporary = False
if isinstance(shape, b2PolygonShape):
self.DrawPolygonShape(shape, transform, color, temporary)
elif isinstance(shape, b2EdgeShape):
v1 = b2Mul(transform, shape.vertex1)
v2 = b2Mul(transform, shape.vertex2)
self.DrawSegment(v1, v2, color)
elif isinstance(shape, b2CircleShape):
self.DrawCircleShape(shape, transform, color, temporary)
elif isinstance(shape, b2LoopShape):
vertices = shape.vertices
v1 = b2Mul(transform, vertices[-1])
for v2 in vertices:
v2 = b2Mul(transform, v2)
self.DrawSegment(v1, v2, color)
v1 = v2
def DrawJoint(self, joint):
"""
Draw any type of joint
"""
bodyA, bodyB = joint.bodyA, joint.bodyB
xf1, xf2 = bodyA.transform, bodyB.transform
x1, x2 = xf1.position, xf2.position
p1, p2 = joint.anchorA, joint.anchorB
color = b2Color(0.5, 0.8, 0.8)
if isinstance(joint, b2DistanceJoint):
self.DrawSegment(p1, p2, color)
elif isinstance(joint, b2PulleyJoint):
s1, s2 = joint.groundAnchorA, joint.groundAnchorB
self.DrawSegment(s1, p1, color)
self.DrawSegment(s2, p2, color)
self.DrawSegment(s1, s2, color)
elif isinstance(joint, b2MouseJoint):
pass # don't draw it here
else:
self.DrawSegment(x1, p1, color)
self.DrawSegment(p1, p2, color)
self.DrawSegment(x2, p2, color)
def ManualDraw(self):
"""
This implements code normally present in the C++ version, which calls
the callbacks that you see in this class (DrawSegment, DrawSolidCircle,
etc.).
This is implemented in Python as an example of how to do it, and also a
test.
"""
colors = {
'active': b2Color(0.5, 0.5, 0.3),
'static': b2Color(0.5, 0.9, 0.5),
'kinematic': b2Color(0.5, 0.5, 0.9),
'asleep': b2Color(0.6, 0.6, 0.6),
'default': b2Color(0.9, 0.7, 0.7),
}
settings = self.test.settings
world = self.test.world
if self.test.selected_shapebody:
sel_shape, sel_body = self.test.selected_shapebody
else:
sel_shape = None
if settings.drawShapes:
for body in world.bodies:
transform = body.transform
for fixture in body.fixtures:
shape = fixture.shape
if not body.active:
color = colors['active']
elif body.type == b2_staticBody:
color = colors['static']
elif body.type == b2_kinematicBody:
color = colors['kinematic']
elif not body.awake:
color = colors['asleep']
else:
color = colors['default']
self.DrawShape(fixture.shape, transform,
color, (sel_shape == shape))
if settings.drawJoints:
for joint in world.joints:
self.DrawJoint(joint)
# if settings.drawPairs
# pass
if settings.drawAABBs:
color = b2Color(0.9, 0.3, 0.9)
# cm = world.contactManager
for body in world.bodies:
if not body.active:
continue
transform = body.transform
for fixture in body.fixtures:
shape = fixture.shape
for childIndex in range(shape.childCount):
self.DrawAABB(shape.getAABB(
transform, childIndex), color)
def to_screen(self, point):
"""
In here for compatibility with other frameworks.
"""
return tuple(point)
class GraphicsScene (QtGui.QGraphicsScene):
def __init__(self, test, parent=None):
super(GraphicsScene, self).__init__(parent)
self.test = test
def keyPressEvent(self, event):
self.test._Keyboard_Event(event.key(), down=True)
def keyReleaseEvent(self, event):
self.test._Keyboard_Event(event.key(), down=False)
def mousePressEvent(self, event):
pos = self.test.ConvertScreenToWorld(
event.scenePos().x(), event.scenePos().y())
if event.button() == Qt.RightButton:
self.test.ShowProperties(pos)
elif event.button() == Qt.LeftButton:
if event.modifiers() == Qt.ShiftModifier:
self.test.ShiftMouseDown(pos)
else:
self.test.MouseDown(pos)
def mouseReleaseEvent(self, event):
pos = event.scenePos().x(), event.scenePos().y()
if event.button() == Qt.RightButton:
self.test.MouseUp(pos)
elif event.button() == Qt.LeftButton:
self.test.MouseUp(pos)
def mouseMoveEvent(self, event):
pos = event.scenePos().x(), event.scenePos().y()
self.test.MouseMove(self.test.ConvertScreenToWorld(*pos))
QtGui.QGraphicsScene.mouseMoveEvent(self, event)
class MainWindow(QtGui.QMainWindow, Ui_MainWindow):
def __init__(self, test, parent=None):
QtGui.QMainWindow.__init__(self)
self.setupUi(self)
self.scene = GraphicsScene(test)
self.test = test
self.scene.setBackgroundBrush(QtGui.QBrush(QtGui.QColor(0, 0, 0)))
self.graphicsView.setScene(self.scene)
self.graphicsView.scale(self.test.viewZoom, -self.test.viewZoom)
self.reset_properties_list()
self.restoreLayout()
def increase_font_size(amount=1.0):
self.setFontSize(app.font().pointSize() + amount)
def decrease_font_size(amount=1.0):
self.setFontSize(app.font().pointSize() - amount)
self.mnuExit.triggered.connect(self.close)
self.mnuIncreaseFontSize.triggered.connect(increase_font_size)
self.mnuDecreaseFontSize.triggered.connect(decrease_font_size)
self.add_settings_widgets()
def add_settings_widgets(self):
self.settings_widgets = {}
gb = self.gbOptions # the options groupbox
layout = QtGui.QVBoxLayout()
gb.setLayout(layout)
for text, variable in settings.checkboxes:
if variable:
widget = QtGui.QCheckBox('&' + text)
def state_changed(value, variable=variable, widget=widget):
setattr(self.test.settings, variable, widget.isChecked())
widget.stateChanged.connect(state_changed)
widget.setChecked(getattr(self.test.settings, variable))
self.settings_widgets[variable] = widget
else:
widget = QtGui.QLabel(text)
widget.setAlignment(Qt.AlignHCenter)
layout.addWidget(widget)
for slider in settings.sliders:
label = QtGui.QLabel(slider['text'])
label.setAlignment(Qt.AlignHCenter)
layout.addWidget(label)
widget = QtGui.QScrollBar(Qt.Horizontal)
widget.setRange(slider['min'], slider['max'])
var = slider['name']
def value_changed(value, slider=slider, label=label):
variable = slider['name']
text = slider['text']
setattr(self.test.settings, variable, value)
label.setText('%s (%d)' % (text, value))
widget.valueChanged.connect(value_changed)
self.settings_widgets[var] = widget
layout.addWidget(widget)
self.update_widgets_from_settings()
def update_widgets_from_settings(self, step_settings=None):
if step_settings is None:
step_settings = self.test.settings
for var, widget in list(self.settings_widgets.items()):
if isinstance(widget, QtGui.QCheckBox):
widget.setChecked(getattr(step_settings, var))
else:
widget.setValue(getattr(step_settings, var))
for slider in settings.sliders:
var = slider['name']
self.settings_widgets[var].setValue(getattr(step_settings, var))
def reset_properties_list(self):
self.twProperties.clear()
self.twProperties.setRowCount(0)
self.twProperties.setColumnCount(3)
self.twProperties.verticalHeader().hide() # don't show numbers on left
self.twProperties.setHorizontalHeaderLabels(['class', 'name', 'value'])
def keyPressEvent(self, event):
self.test._Keyboard_Event(event.key(), down=True)
def keyReleaseEvent(self, event):
self.test._Keyboard_Event(event.key(), down=False)
@property
def settings(self):
return QtCore.QSettings("pybox2d", "Framework")
def setFontSize(self, size):
"""
Update the global font size
"""
if size <= 0.0:
return
global app
font = app.font()
font.setPointSize(size)
app.setFont(font)
def restoreLayout(self):
"""
Restore the layout of each widget
"""
settings = self.settings
try:
self.restoreGeometry(settings.value("geometry").toByteArray())
self.restoreState(settings.value("windowState").toByteArray())
size = settings.value('fontSize').toFloat()[0]
self.setFontSize(size)
except:
pass
def saveLayout(self):
"""
Save the layout of each widget
"""
settings = self.settings
settings.setValue("geometry", self.saveGeometry())
settings.setValue("windowState", self.saveState())
settings.setValue("fontSize", app.font().pointSize())
def closeEvent(self, event):
QtGui.QMainWindow.closeEvent(self, event)
self.saveLayout()
app = None
class Pyqt4Framework(FrameworkBase):
TEXTLINE_START = 0
def setup_keys(self):
# Only basic keys are mapped for now: K_[a-z0-9], K_F[1-12] and
# K_COMMA.
for letter in string.ascii_uppercase:
setattr(Keys, 'K_' + letter.lower(),
getattr(Qt, 'Key_%s' % letter))
for i in range(0, 10):
setattr(Keys, 'K_%d' % i, getattr(Qt, 'Key_%d' % i))
for i in range(1, 13):
setattr(Keys, 'K_F%d' % i, getattr(Qt, 'Key_F%d' % i))
Keys.K_LEFT = Qt.Key_Left
Keys.K_RIGHT = Qt.Key_Right
Keys.K_UP = Qt.Key_Up
Keys.K_DOWN = Qt.Key_Down
Keys.K_HOME = Qt.Key_Home
Keys.K_PAGEUP = Qt.Key_PageUp
Keys.K_PAGEDOWN = Qt.Key_PageDown
Keys.K_COMMA = Qt.Key_Comma
Keys.K_SPACE = Qt.Key_Space
def __reset(self):
# Screen/rendering-related
self._viewZoom = 10.0
self._viewCenter = None
self._viewOffset = None
self.screenSize = None
self.textLine = 0
self.font = None
self.fps = 0
self.selected_shapebody = None, None
# GUI-related
self.window = None
self.setup_keys()
def __init__(self):
super(Pyqt4Framework, self).__init__()
self.__reset()
if settings.fwSettings.onlyInit: # testing mode doesn't initialize Pyqt4
return
global app
app = QtGui.QApplication(sys.argv)
print('Initializing Pyqt4 framework...')
# Pyqt4 Initialization
self.window = MainWindow(self)
self.window.show()
self.window.setWindowTitle("Python Box2D Testbed - " + self.name)
self.renderer = Pyqt4Draw(self)
# Note that in this framework, we override the draw debug data routine
# that occurs in Step(), and we implement the normal C++ code in
# Python.
self.world.DrawDebugData = lambda: self.renderer.ManualDraw()
self.screenSize = b2Vec2(0, 0)
self.viewCenter = (0, 10.0 * 20.0)
self.groundbody = self.world.CreateBody()
def setCenter(self, value):
"""
Updates the view offset based on the center of the screen.
Tells the debug draw to update its values also.
"""
self._viewCenter = b2Vec2(*value)
self._viewOffset = self._viewCenter - self.screenSize / 2
self.window.graphicsView.centerOn(*self._viewCenter)
def setZoom(self, zoom):
self._viewZoom = zoom
self.window.graphicsView.resetTransform()
self.window.graphicsView.scale(self._viewZoom, -self._viewZoom)
self.window.graphicsView.centerOn(*self._viewCenter)
viewZoom = property(lambda self: self._viewZoom, setZoom,
doc='Zoom factor for the display')
viewCenter = property(lambda self: self._viewCenter, setCenter,
doc='Screen center in camera coordinates')
viewOffset = property(lambda self: self._viewOffset,
doc='The offset of the top-left corner of the screen')
def run(self):
"""
What would be the main loop is instead a call to
app.exec_() for the event-driven pyqt4.
"""
global app
self.step_timer = QtCore.QTimer()
self.step_timer.timeout.connect(self.SimulationLoop)
self.window.twProperties.itemChanged.connect(self.prop_cell_changed)
self.step_timer.start(int((1000.0 / self.settings.hz)))
app.exec_()
self.step_timer.stop()
print('Cleaning up...')
self.world.contactListener = None
self.world.destructionListener = None
self.world.renderer = None
self.world = None
def _Keyboard_Event(self, key, down=True):
"""
Internal keyboard event, don't override this.
Checks for the initial keydown of the basic testbed keys. Passes the unused
ones onto the test via the Keyboard() function.
"""
if down:
if key == Keys.K_z: # Zoom in
self.viewZoom = min(1.10 * self.viewZoom, 50.0)
elif key == Keys.K_x: # Zoom out
self.viewZoom = max(0.9 * self.viewZoom, 0.02)
elif key == Keys.K_SPACE: # Launch a bomb
self.LaunchRandomBomb()
else: # Inform the test of the key press
self.Keyboard(key)
else:
self.KeyboardUp(key)
def CheckKeys(self):
pass
def _ShowProperties(self, obj):
self.selected_shapebody = None, None
class_ = obj.__class__
ignore_list = ('thisown',)
i = 0
twProperties = self.window.twProperties
# Get all of the members of the class
for prop in dir(class_):
# If they're properties and not to be ignored, add them to the
# table widget
if (isinstance(getattr(class_, prop), property)
and prop not in ignore_list):
try:
value = getattr(obj, prop)
except:
# Write-only?
continue
widget = None
# Attempt to determine whether it's read-only or not
try:
setattr(obj, prop, value)
except:
editable = False
else:
editable = True
# Increase the row count and insert the new item
twProperties.setRowCount(twProperties.rowCount() + 1)
i = twProperties.rowCount() - 1
self.item = QTableWidgetItem(class_.__name__)
twProperties.setItem(i, 0, QTableWidgetItem(
class_.__name__)) # class name
twProperties.item(i, 0).setFlags(Qt.ItemIsEnabled)
twProperties.setItem(
i, 1, QtGui.QTableWidgetItem(prop)) # prop name
twProperties.item(i, 1).setFlags(Qt.ItemIsEnabled)
# and finally, the property values
# booleans are checkboxes
if isinstance(value, bool):
def state_changed(value, prop=prop):
self.property_changed(prop, value == Qt.Checked)
widget = QtGui.QCheckBox('')
widget.stateChanged.connect(state_changed)
if value:
widget.setCheckState(Qt.Checked)
# ints, floats are spinboxes
elif isinstance(value, (int, float)):
def value_changed(value, prop=prop):
self.property_changed(prop, value)
widget = QtGui.QDoubleSpinBox()
widget.valueChanged.connect(value_changed)
widget.setValue(value)
# lists turn into -- lists
elif isinstance(value, list):
widget = QtGui.QListWidget()
for entry in value:
widget.addItem(str(entry))
if value:
# sz=widget.item(0).sizeHint()
# print(sz, sz.width(), sz.height())
# sz.setHeight(sz.height()*2)
# widget.setMinimumSize(sz)
# widget.setMinimumSize(QtCore.QSize(1,60))
pass # TODO
# vec2s will be shown as a textbox
elif isinstance(value, b2Vec2):
value = '(%.2f, %.2f)' % (tuple(value))
else:
pass
if widget:
twProperties.setCellWidget(i, 2, widget)
if hasattr(widget, 'setReadOnly'):
widget.setReadOnly(not editable)
elif hasattr(widget, 'setEnabled'):
widget.setEnabled(editable)
else:
# Just using the table widget, set the cell text
cell = QtGui.QTableWidgetItem(str(value))
if editable:
cell.setFlags(Qt.ItemIsEditable | Qt.ItemIsEnabled)
else:
cell.setFlags(Qt.ItemIsEnabled)
twProperties.setItem(i, 2, cell)
i += 1
# callback indicating a cell in the table widget was changed
def prop_cell_changed(self, twi):
if twi.column() != 2: # the data column
return
row = twi.row()
prop = str(self.window.twProperties.item(row, 1).text())
self.property_changed(prop, str(twi.text()))
# callback indicating one of the property widgets was modified
def property_changed(self, prop, value=None):
if not self.selected_shapebody[0]:
return
print('Trying to change %s to %s...' % (prop, value))
shape, body = self.selected_shapebody
for inst in (shape, body):
if hasattr(inst, prop):
try:
cur_value = getattr(inst, prop)
if isinstance(cur_value, b2Vec2):
m = re.search('\(?([\d\.]*)\s*,\s*([\d\.]*)\)?', value)
if m:
x, y = m.groups()
value = (float(x), float(y))
except:
raise
pass
try:
setattr(inst, prop, value)
except:
print('Failed - %s' % sys.exc_info()[1])
def ShowProperties(self, p):
aabb = b2AABB(lowerBound=p - (0.001, 0.001),
upperBound=p + (0.001, 0.001))
# Query the world for overlapping shapes.
query = fwQueryCallback(p)
self.world.QueryAABB(query, aabb)
if query.fixture:
self.window.reset_properties_list()
fixture = query.fixture
body = fixture.body
self._ShowProperties(body)
shape = fixture.shape
self._ShowProperties(shape)
self.selected_shapebody = (shape, body)
def Step(self, settings):
super(Pyqt4Framework, self).Step(settings)
def ConvertScreenToWorld(self, x, y):
"""
PyQt4 gives us transformed positions, so no need to convert
"""
return b2Vec2(x, y)
DrawStringAt = lambda self, *args: self.renderer.DrawStringAt(*args)
def Print(self, str, color=(229, 153, 153, 255)):
"""
Draw some text at the top status lines and advance to the next line.
"""
self.DrawStringAt(5, self.textLine, str, color)
self.textLine += self.renderer.font_spacing
def Keyboard(self, key):
"""
Callback indicating 'key' has been pressed down.
The keys are mapped after pygame's style.
from framework import Keys
if key == Keys.K_z:
...
"""
pass
def KeyboardUp(self, key):
"""
Callback indicating 'key' has been released.
See Keyboard() for key information
"""
pass
def FixtureDestroyed(self, fixture):
shape = fixture.shape
if shape == self.selected_shapebody[0]:
self.selected_shapebody = None, None
self.window.reset_properties_list()
if hash(shape) in self.renderer.item_cache:
scene_items = self.renderer.item_cache[hash(shape)]
for item in scene_items:
self.window.scene.removeItem(item)
del self.renderer.item_cache[hash(shape)] | PypiClean |
/Kr0nOs-3.4.1.tar.gz/Kr0nOs-3.4.1/kronbot/cogs/audio/databases.py | import asyncio
import concurrent.futures
import contextlib
import datetime
import json
import logging
import time
from dataclasses import dataclass, field
from typing import TYPE_CHECKING, Dict, List, Mapping, MutableMapping, Optional, Tuple, Union
import apsw
from kronbot.core import Config
from kronbot.core.bot import Kron
from kronbot.core.data_manager import cog_data_path
from .errors import InvalidTableError
from .sql_statements import *
from .utils import PlaylistScope
log = logging.getLogger("kron.audio.database")
if TYPE_CHECKING:
database_connection: apsw.Connection
_bot: Kron
_config: Config
else:
_config = None
_bot = None
database_connection = None
SCHEMA_VERSION = 3
SQLError = apsw.ExecutionCompleteError
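# Maps each logical table name to the SQL statements (defined in
# sql_statements.py) used for its inserts, updates and fetch queries.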
_PARSER: Mapping = {
"youtube": {
"insert": YOUTUBE_UPSERT,
"youtube_url": {"query": YOUTUBE_QUERY},
"update": YOUTUBE_UPDATE,
},
"spotify": {
"insert": SPOTIFY_UPSERT,
"track_info": {"query": SPOTIFY_QUERY},
"update": SPOTIFY_UPDATE,
},
"lavalink": {
"insert": LAVALINK_UPSERT,
"data": {"query": LAVALINK_QUERY, "played": LAVALINK_QUERY_LAST_FETCHED_RANDOM},
"update": LAVALINK_UPDATE,
},
}
def _pass_config_to_databases(config: Config, bot: Kron):
global _config, _bot, database_connection
if _config is None:
_config = config
if _bot is None:
_bot = bot
if database_connection is None:
database_connection = apsw.Connection(
str(cog_data_path(_bot.get_cog("Audio")) / "Audio.db")
)
@dataclass
class PlaylistFetchResult:
playlist_id: int
playlist_name: str
scope_id: int
author_id: int
playlist_url: Optional[str] = None
tracks: List[MutableMapping] = field(default_factory=lambda: [])
def __post_init__(self):
if isinstance(self.tracks, str):
self.tracks = json.loads(self.tracks)
@dataclass
class CacheFetchResult:
query: Optional[Union[str, MutableMapping]]
last_updated: int
def __post_init__(self):
if isinstance(self.last_updated, int):
self.updated_on: datetime.datetime = datetime.datetime.fromtimestamp(self.last_updated)
if isinstance(self.query, str) and all(
k in self.query for k in ["loadType", "playlistInfo", "isSeekable", "isStream"]
):
self.query = json.loads(self.query)
@dataclass
class CacheLastFetchResult:
tracks: List[MutableMapping] = field(default_factory=lambda: [])
def __post_init__(self):
if isinstance(self.tracks, str):
self.tracks = json.loads(self.tracks)
@dataclass
class CacheGetAllLavalink:
query: str
data: List[MutableMapping] = field(default_factory=lambda: [])
def __post_init__(self):
if isinstance(self.data, str):
self.data = json.loads(self.data)
class CacheInterface:
def __init__(self):
self.database = database_connection.cursor()
@staticmethod
def close():
with contextlib.suppress(Exception):
database_connection.close()
async def init(self):
self.database.execute(PRAGMA_SET_temp_store)
self.database.execute(PRAGMA_SET_journal_mode)
self.database.execute(PRAGMA_SET_read_uncommitted)
self.maybe_migrate()
self.database.execute(LAVALINK_CREATE_TABLE)
self.database.execute(LAVALINK_CREATE_INDEX)
self.database.execute(YOUTUBE_CREATE_TABLE)
self.database.execute(YOUTUBE_CREATE_INDEX)
self.database.execute(SPOTIFY_CREATE_TABLE)
self.database.execute(SPOTIFY_CREATE_INDEX)
await self.clean_up_old_entries()
async def clean_up_old_entries(self):
max_age = await _config.cache_age()
maxage = datetime.datetime.now(tz=datetime.timezone.utc) - datetime.timedelta(days=max_age)
maxage_int = int(time.mktime(maxage.timetuple()))
values = {"maxage": maxage_int}
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
executor.submit(self.database.execute, LAVALINK_DELETE_OLD_ENTRIES, values)
executor.submit(self.database.execute, YOUTUBE_DELETE_OLD_ENTRIES, values)
executor.submit(self.database.execute, SPOTIFY_DELETE_OLD_ENTRIES, values)
def maybe_migrate(self):
current_version = self.database.execute(PRAGMA_FETCH_user_version).fetchone()
if isinstance(current_version, tuple):
current_version = current_version[0]
if current_version == SCHEMA_VERSION:
return
self.database.execute(PRAGMA_SET_user_version, {"version": SCHEMA_VERSION})
async def insert(self, table: str, values: List[MutableMapping]):
try:
query = _PARSER.get(table, {}).get("insert")
if query is None:
raise InvalidTableError(f"{table} is not a valid table in the database.")
self.database.execute("BEGIN;")
self.database.executemany(query, values)
self.database.execute("COMMIT;")
except Exception as err:
log.debug("Error during audio db insert", exc_info=err)
async def update(self, table: str, values: Dict[str, Union[str, int]]):
try:
            table_def = _PARSER.get(table, {})
            sql_query = table_def.get("update")
            time_now = int(datetime.datetime.now(datetime.timezone.utc).timestamp())
            values["last_fetched"] = time_now
            if not table_def:
                raise InvalidTableError(f"{table} is not a valid table in the database.")
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
executor.submit(self.database.execute, sql_query, values)
except Exception as err:
log.debug("Error during audio db update", exc_info=err)
async def fetch_one(
self, table: str, query: str, values: Dict[str, Union[str, int]]
) -> Tuple[Optional[str], bool]:
        table_def = _PARSER.get(table, {})
        sql_query = table_def.get(query, {}).get("query")
        if not table_def:
            raise InvalidTableError(f"{table} is not a valid table in the database.")
max_age = await _config.cache_age()
maxage = datetime.datetime.now(tz=datetime.timezone.utc) - datetime.timedelta(days=max_age)
maxage_int = int(time.mktime(maxage.timetuple()))
values.update({"maxage": maxage_int})
output = self.database.execute(sql_query, values).fetchone() or (None, 0)
result = CacheFetchResult(*output)
return result.query, False
async def fetch_all(
self, table: str, query: str, values: Dict[str, Union[str, int]]
) -> List[CacheLastFetchResult]:
        table_def = _PARSER.get(table, {})
        sql_query = table_def.get(query, {}).get("played")
        if not table_def:
            raise InvalidTableError(f"{table} is not a valid table in the database.")
output = []
for index, row in enumerate(self.database.execute(sql_query, values), start=1):
if index % 50 == 0:
await asyncio.sleep(0.01)
output.append(CacheLastFetchResult(*row))
return output
async def fetch_random(
self, table: str, query: str, values: Dict[str, Union[str, int]]
) -> CacheLastFetchResult:
        table_def = _PARSER.get(table, {})
        sql_query = table_def.get(query, {}).get("played")
        if not table_def:
            raise InvalidTableError(f"{table} is not a valid table in the database.")
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
for future in concurrent.futures.as_completed(
[executor.submit(self.database.execute, sql_query, values)]
):
try:
row = future.result()
row = row.fetchone()
except Exception as exc:
log.debug(f"Failed to completed random fetch from database", exc_info=exc)
return CacheLastFetchResult(*row)
class PlaylistInterface:
def __init__(self):
self.cursor = database_connection.cursor()
self.cursor.execute(PRAGMA_SET_temp_store)
self.cursor.execute(PRAGMA_SET_journal_mode)
self.cursor.execute(PRAGMA_SET_read_uncommitted)
self.cursor.execute(PLAYLIST_CREATE_TABLE)
self.cursor.execute(PLAYLIST_CREATE_INDEX)
@staticmethod
def close():
with contextlib.suppress(Exception):
database_connection.close()
@staticmethod
def get_scope_type(scope: str) -> int:
if scope == PlaylistScope.GLOBAL.value:
table = 1
elif scope == PlaylistScope.USER.value:
table = 3
else:
table = 2
return table
def fetch(self, scope: str, playlist_id: int, scope_id: int) -> PlaylistFetchResult:
scope_type = self.get_scope_type(scope)
row = (
self.cursor.execute(
PLAYLIST_FETCH,
({"playlist_id": playlist_id, "scope_id": scope_id, "scope_type": scope_type}),
).fetchone()
or []
)
return PlaylistFetchResult(*row) if row else None
async def fetch_all(
self, scope: str, scope_id: int, author_id=None
) -> List[PlaylistFetchResult]:
scope_type = self.get_scope_type(scope)
if author_id is not None:
output = []
for index, row in enumerate(
self.cursor.execute(
PLAYLIST_FETCH_ALL_WITH_FILTER,
({"scope_type": scope_type, "scope_id": scope_id, "author_id": author_id}),
),
start=1,
):
if index % 50 == 0:
await asyncio.sleep(0.01)
output.append(row)
else:
output = []
for index, row in enumerate(
self.cursor.execute(
PLAYLIST_FETCH_ALL, ({"scope_type": scope_type, "scope_id": scope_id})
),
start=1,
):
if index % 50 == 0:
await asyncio.sleep(0.01)
output.append(row)
return [PlaylistFetchResult(*row) for row in output] if output else []
async def fetch_all_converter(
self, scope: str, playlist_name, playlist_id
) -> List[PlaylistFetchResult]:
scope_type = self.get_scope_type(scope)
try:
playlist_id = int(playlist_id)
except Exception:
playlist_id = -1
output = []
for index, row in enumerate(
self.cursor.execute(
PLAYLIST_FETCH_ALL_CONVERTER,
(
{
"scope_type": scope_type,
"playlist_name": playlist_name,
"playlist_id": playlist_id,
}
),
),
start=1,
):
if index % 50 == 0:
await asyncio.sleep(0.01)
output.append(row)
return [PlaylistFetchResult(*row) for row in output] if output else []
def delete(self, scope: str, playlist_id: int, scope_id: int):
scope_type = self.get_scope_type(scope)
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
executor.submit(
self.cursor.execute,
PLAYLIST_DELETE,
({"playlist_id": playlist_id, "scope_id": scope_id, "scope_type": scope_type}),
)
def delete_scheduled(self):
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
executor.submit(self.cursor.execute, PLAYLIST_DELETE_SCHEDULED)
def drop(self, scope: str):
scope_type = self.get_scope_type(scope)
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
executor.submit(
self.cursor.execute, PLAYLIST_DELETE_SCOPE, ({"scope_type": scope_type})
)
def create_table(self, scope: str):
scope_type = self.get_scope_type(scope)
return self.cursor.execute(PLAYLIST_CREATE_TABLE, ({"scope_type": scope_type}))
def upsert(
self,
scope: str,
playlist_id: int,
playlist_name: str,
scope_id: int,
author_id: int,
playlist_url: Optional[str],
tracks: List[MutableMapping],
):
scope_type = self.get_scope_type(scope)
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
executor.submit(
self.cursor.execute,
PLAYLIST_UPSERT,
{
"scope_type": str(scope_type),
"playlist_id": int(playlist_id),
"playlist_name": str(playlist_name),
"scope_id": int(scope_id),
"author_id": int(author_id),
"playlist_url": playlist_url,
"tracks": json.dumps(tracks),
},
) | PypiClean |
/MITunaX-0.1.tar.gz/MITunaX-0.1/tuna/utils/dupe_resolve.py |
from sqlalchemy.exc import IntegrityError, OperationalError
from tuna.dbBase.sql_alchemy import DbSession
from tuna.helper import handle_op_error
from tuna.utils.logger import setup_logger
LOGGER = setup_logger('dupe_resolve')
view_perf_cfg_rep = """
create or replace view perf_cfg_rep as
select cc2.id, cc3.id as cfg from
(select cpc.id, cpc.valid as cpc_valid, cc.spatial_dim, cc.batchsize, cc.pad_h,
cc.pad_w, cc.conv_stride_h, cc.conv_stride_w, cc.dilation_h, cc.dilation_w,
cc.group_count, cc.conv_mode, cc.pad_mode, cc.trans_output_pad_h,
cc.trans_output_pad_w, cc.input_tensor, cc.weight_tensor, cc.out_layout
from conv_perf_config as cpc inner join conv_config as cc on cpc.config=cc.id where cc.valid=0) as cc2
inner join conv_config as cc3
on cc2.spatial_dim=cc3.spatial_dim and cc2.batchsize=cc3.batchsize and cc2.pad_h=cc3.pad_h
and cc2.pad_w=cc3.pad_w and cc2.conv_stride_h=cc3.conv_stride_h and cc2.conv_stride_w=cc3.conv_stride_w
and cc2.dilation_h=cc3.dilation_h and cc2.dilation_w=cc3.dilation_w
and cc2.group_count=cc3.group_count and cc2.conv_mode=cc3.conv_mode
and cc2.pad_mode=cc3.pad_mode and cc2.trans_output_pad_h=cc3.trans_output_pad_h
and cc2.trans_output_pad_w=cc3.trans_output_pad_w and cc2.input_tensor=cc3.input_tensor
and cc2.weight_tensor=cc3.weight_tensor and cc2.out_layout=cc3.out_layout
where cc3.spatial_dim=2 and cc3.valid=1 and cc2.cpc_valid=1
group by cc2.id, cfg;
"""
view_perf_db_rep = """
create or replace view perf_db_rep as
select cpd2.theid, cpc3.id as mcfg from
(select cpd.id as theid, cpd.valid as cpd_valid, layout, data_type, direction, bias, config, cc.*
from conv_perf_db as cpd
inner join conv_perf_config as cpc on cpd.miopen_config=cpc.id
inner join conv_config as cc on cpc.config=cc.id
where cc.valid=0) as cpd2
inner join conv_perf_config as cpc3
on cpd2.layout=cpc3.layout and cpd2.data_type=cpc3.data_type
and cpd2.direction=cpc3.direction and cpd2.bias=cpc3.bias
inner join conv_config as cc3
on cc3.id=cpc3.config
and cpd2.spatial_dim=cc3.spatial_dim and cpd2.batchsize=cc3.batchsize and cpd2.pad_h=cc3.pad_h
and cpd2.pad_w=cc3.pad_w and cpd2.conv_stride_h=cc3.conv_stride_h and cpd2.conv_stride_w=cc3.conv_stride_w
and cpd2.dilation_h=cc3.dilation_h and cpd2.dilation_w=cc3.dilation_w
and cpd2.group_count=cc3.group_count and cpd2.conv_mode=cc3.conv_mode
and cpd2.pad_mode=cc3.pad_mode and cpd2.trans_output_pad_h=cc3.trans_output_pad_h
and cpd2.trans_output_pad_w=cc3.trans_output_pad_w and cpd2.input_tensor=cc3.input_tensor
and cpd2.weight_tensor=cc3.weight_tensor and cpd2.out_layout=cc3.out_layout
where cc3.valid=1 and cpc3.valid=1 and cpd2.cpd_valid=1
group by cpd2.theid, mcfg;
"""
def main():
"""main"""
with DbSession() as session:
session.execute(view_perf_cfg_rep)
session.commit()
res = session.execute("select id, cfg from perf_cfg_rep").all()
invalid = 0
for id, cfg in res:
try:
query = "update conv_perf_config set config={} where id={};".format(
cfg, id)
print(query)
#session.execute(query)
#session.commit()
except OperationalError as error:
handle_op_error(LOGGER, error)
except IntegrityError as error:
session.rollback()
LOGGER.warning('insert failed (%s)', error)
if "Duplicate entry" in "%s" % error:
query = "update conv_perf_config set valid=0 where id={};".format(id)
LOGGER.warning('Invalidating entry (%s)', query)
invalid += 1
session.execute(query)
session.commit()
if invalid:
LOGGER.warning('Invalidated %u perf_config entries', invalid)
session.execute(view_perf_db_rep)
session.commit()
res = session.execute("select theid, mcfg from perf_db_rep").all()
invalid = 0
for id, cfg in res:
try:
query = "update conv_perf_db set miopen_config={} where id={};".format(
cfg, id)
print(query)
session.execute(query)
session.commit()
except OperationalError as error:
handle_op_error(LOGGER, error)
except IntegrityError as error:
session.rollback()
LOGGER.warning('insert failed (%s)', error)
if "Duplicate entry" in "%s" % error:
query = "update conv_perf_db set valid=0 where id={};".format(id)
LOGGER.warning('Invalidating entry (%s)', query)
invalid += 1
session.execute(query)
session.commit()
if invalid:
LOGGER.warning('Invalidated %u perf_db entries', invalid)
if __name__ == '__main__':
main() | PypiClean |
/NEMO_billing-2.6.7-py3-none-any.whl/NEMO_billing/prepayments/admin.py | from NEMO.utilities import format_datetime
from django import forms
from django.contrib import admin
from django.contrib.admin import widgets
from NEMO_billing.invoices.models import BillableItemType
from NEMO_billing.prepayments.models import Fund, FundType, ProjectPrepaymentDetail
from NEMO_billing.utilities import IntMultipleChoiceField
class ProjectPrepaymentDetailAdminForm(forms.ModelForm):
charge_types = IntMultipleChoiceField(
choices=BillableItemType.choices_except(BillableItemType.CUSTOM_CHARGE, BillableItemType.CONSUMABLE),
required=True,
widget=widgets.FilteredSelectMultiple(verbose_name="Types", is_stacked=False),
)
class Meta:
model = ProjectPrepaymentDetail
fields = "__all__"
@admin.register(Fund)
class FundAdmin(admin.ModelAdmin):
list_display = [
"__str__",
"get_project",
"get_account",
"fund_type",
"amount",
"balance",
"get_balance_date_display",
"get_start_display",
"get_expiration_display",
"reference",
"balance_warning_percent",
]
list_filter = [("fund_type", admin.RelatedOnlyFieldListFilter)]
@admin.display(description="Project", ordering="project_prepayment__project")
def get_project(self, obj: Fund):
return obj.project_prepayment.project
@admin.display(description="Account", ordering="project_prepayment__project__account")
def get_account(self, obj: Fund):
return obj.project_prepayment.project.account
@admin.display(description="Start", ordering=["start_year", "start_month"])
def get_start_display(self, obj: Fund):
return format_datetime(obj.start_date, "F Y")
@admin.display(description="Expires", ordering=["expiration_year", "expiration_month"])
def get_expiration_display(self, obj: Fund):
if obj.expiration_date:
return format_datetime(obj.expiration_date, "F Y")
@admin.display(description="Balance date", ordering="balance_date")
def get_balance_date_display(self, obj: Fund):
return format_datetime(obj.project_prepayment.balance_last_updated, "SHORT_DATE_FORMAT")
@admin.register(ProjectPrepaymentDetail)
class ProjectPrepaymentDetailAdmin(admin.ModelAdmin):
list_display = ["project", "get_charge_types", "get_only_core_facilities"]
filter_horizontal = ["only_core_facilities"]
form = ProjectPrepaymentDetailAdminForm
@admin.display(description="Core facilities allowed")
def get_only_core_facilities(self, instance: ProjectPrepaymentDetail):
return instance.get_only_core_facilities_display()
@admin.display(description="Charge types allowed")
def get_charge_types(self, instance: ProjectPrepaymentDetail):
return instance.get_charge_types_display()
admin.site.register(FundType) | PypiClean |
/Agora-Scholar-0.2.0.tar.gz/Agora-Scholar-0.2.0/agora/scholar/actions/stream.py | import calendar
import logging
from datetime import datetime as dt, datetime
from redis.lock import Lock
from agora.scholar.actions import FragmentConsumerResponse
from agora.scholar.daemons.fragment import FragmentPlugin, map_variables, match_filter, is_fragment_synced, \
fragment_contexts
from agora.scholar.daemons.fragment import fragment_lock
from agora.stoa.actions.core import AGENT_ID
from agora.stoa.actions.core import STOA
from agora.stoa.actions.core.fragment import FragmentRequest, FragmentAction, FragmentSink
from agora.stoa.actions.core.utils import parse_bool, chunks
from agora.stoa.messaging.reply import reply
from agora.stoa.store import r
from agora.stoa.store.triples import load_stream_triples, fragments_cache
__author__ = 'Fernando Serena'
log = logging.getLogger('agora.scholar.actions.stream')
log.info("'Cleaning stream requests' locks...")
request_locks = r.keys('{}:requests:*:lock'.format(AGENT_ID))
for rlk in request_locks:
r.delete(rlk)
class StreamPlugin(FragmentPlugin):
@property
def sink_class(self):
return StreamSink
def consume(self, fid, (c, s, p, o), graph, *args):
sink = args[0]
sink.lock.acquire()
try:
# Prevent from consuming a triple when the delivery state says it was completely sent
# Anyway, this HAS TO BE REMOVED from here, because the stream flag should be enough
if sink.delivery == 'sent':
return
# Proceed only if the stream flag is enabled
if sink.stream:
# log.info('[{}] Streaming fragment triple...'.format(sink.request_id))
reply((c, s.n3(), p.n3(), o.n3()), headers={'source': 'stream', 'format': 'tuple', 'state': 'streaming',
'response_to': sink.message_id,
'submitted_on': calendar.timegm(datetime.utcnow().timetuple()),
'submitted_by': sink.submitted_by},
**sink.recipient)
finally:
sink.lock.release()
def complete(self, fid, *args):
sink = args[0]
sink.lock.acquire()
try:
# At this point, the stream flag is disabled, and the delivery state might need to be updated
sink.stream = False
if sink.delivery == 'streaming':
log.debug('Sending end stream signal after {}'.format(sink.delivery))
sink.delivery = 'sent'
reply((), headers={'state': 'end', 'format': 'tuple'}, **sink.recipient)
log.info('Stream of fragment {} for request {} is done'.format(fid, sink.request_id))
finally:
sink.lock.release()
FragmentPlugin.register(StreamPlugin)
class StreamRequest(FragmentRequest):
def __init__(self):
super(StreamRequest, self).__init__()
def _extract_content(self, request_type=STOA.StreamRequest):
"""
Parse streaming request data. For this operation, there is no additional data to extract.
"""
super(StreamRequest, self)._extract_content(request_type=request_type)
class StreamAction(FragmentAction):
def __init__(self, message):
"""
Prepare request and sink objects before starting initialization
"""
self.__request = StreamRequest()
self.__sink = StreamSink()
super(StreamAction, self).__init__(message)
@property
def sink(self):
return self.__sink
@classmethod
def response_class(cls):
return StreamResponse
@property
def request(self):
return self.__request
def submit(self):
super(StreamAction, self).submit()
# A stream request is ready just after its submission
self.sink.delivery = 'ready'
class StreamSink(FragmentSink):
"""
Extends FragmentSink by adding a new property that helps to manage the stream state
"""
def _remove(self, pipe):
try:
self.lock.acquire()
super(StreamSink, self)._remove(pipe)
pipe.delete('{}lock'.format(self._request_key))
except Exception as e:
log.warning(e.message)
def __init__(self):
super(StreamSink, self).__init__()
self.__lock = None
def _save(self, action, general=True):
super(StreamSink, self)._save(action, general)
def _load(self):
super(StreamSink, self)._load()
# Create the request lock
lock_key = '{}lock'.format(self._request_key)
self.__lock = r.lock(lock_key, lock_class=Lock)
@property
def stream(self):
return parse_bool(r.hget('{}'.format(self._request_key), '__stream'))
@stream.setter
def stream(self, value):
with r.pipeline(transaction=True) as p:
p.multi()
p.hset('{}'.format(self._request_key), '__stream', value)
p.execute()
log.info('Request {} stream state is now "{}"'.format(self._request_id, value))
@property
def lock(self):
"""
        Helps to manage the request stream and delivery status from both plugin events and response building
:return: A redis-based lock object for a given request
"""
return self.__lock
class StreamResponse(FragmentConsumerResponse):
def __init__(self, rid):
self.__sink = StreamSink()
self.__sink.load(rid)
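        # Per-fragment lock (from the fragment daemon module) held while the
        # cached fragment triples are read for this response.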
self.__fragment_lock = fragment_lock(self.__sink.fragment_id)
super(StreamResponse, self).__init__(rid)
@property
def sink(self):
return self.__sink
def _build(self):
"""
This function yields nothing only when the new state is 'streaming'
:return: Quads like (context, subject, predicate, object)
"""
timestamp = calendar.timegm(dt.utcnow().timetuple())
fragment = None
self.sink.lock.acquire()
try:
fragment, streaming = self.fragment(timestamp=timestamp)
if streaming:
self.sink.stream = True
if fragment:
self.sink.delivery = 'mixing'
else:
self.sink.delivery = 'streaming'
else:
self.sink.stream = False
if fragment:
self.sink.delivery = 'pushing'
log.debug('Fragment retrieved from cache for request number {}'.format(self._request_id))
else:
self.sink.delivery = 'sent'
log.debug('Sending end stream signal since there is no fragment and stream is disabled')
yield (), {'state': 'end', 'format': 'tuple'}
except Exception as e:
log.warning(e.message)
self.sink.stream = True
self.sink.delivery = 'streaming'
finally:
self.sink.lock.release()
if fragment:
log.info('Building a stream result from cache for request number {}...'.format(self._request_id))
filter_mapping = self.sink.filter_mapping
self.__fragment_lock.acquire()
try:
for ch in chunks(fragment, 1000):
if ch:
rows = []
for (c, s, p, o) in ch:
real_context = map_variables(c, self.sink.mapping, filter_mapping)
consume = True
if self.sink.map(c[2]) in filter_mapping:
consume = match_filter(o, real_context[2])
if consume and self.sink.map(c[0]) in filter_mapping:
consume = match_filter(s, real_context[0])
if consume:
rows.append((real_context, s.n3(), p.n3(), o.n3()))
yield rows, {'source': 'store', 'format': 'tuple',
'state': 'streaming',
'response_to': self.sink.message_id,
'submitted_on': calendar.timegm(
datetime.utcnow().timetuple()),
'submitted_by': self.sink.submitted_by}
finally:
self.__fragment_lock.release()
self.sink.lock.acquire()
try:
if self.sink.delivery == 'pushing' or (self.sink.delivery == 'mixing' and not self.sink.stream):
self.sink.delivery = 'sent'
log.info(
'The response stream of request {} is completed. Notifying...'.format(self.sink.request_id))
yield (), {'state': 'end', 'format': 'tuple'}
elif self.sink.delivery == 'mixing' and self.sink.stream:
self.sink.delivery = 'streaming'
finally:
self.sink.lock.release()
def fragment(self, timestamp):
def __load_contexts():
contexts = fragment_contexts(self.sink.fragment_id)
triple_patterns = {context: eval(context)[1] for context in contexts}
# Yield triples for each known triple pattern context
for context in contexts:
for (s, p, o) in fragments_cache.get_context(context):
yield triple_patterns[context], s, p, o
if timestamp is None:
timestamp = calendar.timegm(dt.utcnow().timetuple())
self.__fragment_lock.acquire()
try:
from_streaming = not is_fragment_synced(self.sink.fragment_id)
return (load_stream_triples(self.sink.fragment_id, timestamp), True) if from_streaming else (
__load_contexts(), False)
finally:
self.__fragment_lock.release() | PypiClean |
/MaterialDjango-0.2.5.tar.gz/MaterialDjango-0.2.5/materialdjango/static/materialdjango/components/bower_components/iron-flex-layout/README.md | [](https://travis-ci.org/PolymerElements/iron-flex-layout)
[](https://beta.webcomponents.org/element/PolymerElements/iron-flex-layout)
## <iron-flex-layout>
The `<iron-flex-layout>` component provides simple ways to use
[CSS flexible box layout](https://developer.mozilla.org/en-US/docs/Web/Guide/CSS/Flexible_boxes),
also known as flexbox. This component provides two different ways to use flexbox:
1. [Layout classes](https://github.com/PolymerElements/iron-flex-layout/tree/master/iron-flex-layout-classes.html).
The layout class stylesheet provides a simple set of class-based flexbox rules that
let you specify layout properties directly in markup. You must include this file
in every element that needs to use them.
Sample use:
<!--
```
<custom-element-demo>
<template>
<script src="../webcomponentsjs/webcomponents-lite.min.js"></script>
<link rel="import" href="iron-flex-layout-classes.html">
<dom-module id="demo-element">
<template>
<style is="custom-style" include="iron-flex iron-flex-alignment"></style>
<style>
.container, .layout {
background-color: #ccc;
padding: 4px;
}
.container div, .layout div {
background-color: white;
padding: 12px;
margin: 4px;
}
</style>
<next-code-block></next-code-block>
</template>
<script>Polymer({is: "demo-element"});</script>
</dom-module>
<demo-element></demo-element>
</template>
</custom-element-demo>
```
-->
```html
<div class="layout horizontal layout-start" style="height: 154px">
<div>cross axis start alignment</div>
</div>
```
1. [Custom CSS mixins](https://github.com/PolymerElements/iron-flex-layout/blob/master/iron-flex-layout.html).
The mixin stylesheet includes custom CSS mixins that can be applied inside a CSS rule using the `@apply` function.
Please note that the old [/deep/ layout classes](https://github.com/PolymerElements/iron-flex-layout/tree/master/classes)
are deprecated, and should not be used. To continue using layout properties
directly in markup, please switch to using the new `dom-module`-based
[layout classes](https://github.com/PolymerElements/iron-flex-layout/tree/master/iron-flex-layout-classes.html).
Please note that the new version does not use `/deep/`, and therefore requires you
to import the `dom-modules` in every element that needs to use them.
A complete [guide](https://elements.polymer-project.org/guides/flex-layout) to `<iron-flex-layout>` is available.
/Nuitka_fixed-1.1.2-cp310-cp310-win_amd64.whl/nuitka/build/inline_copy/yaml_27/yaml/serializer.py

__all__ = ['Serializer', 'SerializerError']
from error import YAMLError
from events import *
from nodes import *
class SerializerError(YAMLError):
pass
class Serializer(object):
ANCHOR_TEMPLATE = u'id%03d'
def __init__(self, encoding=None,
explicit_start=None, explicit_end=None, version=None, tags=None):
self.use_encoding = encoding
self.use_explicit_start = explicit_start
self.use_explicit_end = explicit_end
self.use_version = version
self.use_tags = tags
self.serialized_nodes = {}
self.anchors = {}
self.last_anchor_id = 0
self.closed = None
def open(self):
if self.closed is None:
self.emit(StreamStartEvent(encoding=self.use_encoding))
self.closed = False
elif self.closed:
raise SerializerError("serializer is closed")
else:
raise SerializerError("serializer is already opened")
def close(self):
if self.closed is None:
raise SerializerError("serializer is not opened")
elif not self.closed:
self.emit(StreamEndEvent())
self.closed = True
#def __del__(self):
# self.close()
def serialize(self, node):
if self.closed is None:
raise SerializerError("serializer is not opened")
elif self.closed:
raise SerializerError("serializer is closed")
self.emit(DocumentStartEvent(explicit=self.use_explicit_start,
version=self.use_version, tags=self.use_tags))
self.anchor_node(node)
self.serialize_node(node, None, None)
self.emit(DocumentEndEvent(explicit=self.use_explicit_end))
self.serialized_nodes = {}
self.anchors = {}
self.last_anchor_id = 0
def anchor_node(self, node):
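        # The first visit just registers the node (anchor pending); seeing the
        # same node again means it is shared, so an anchor is generated and the
        # later occurrences can be emitted as aliases in serialize_node().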
if node in self.anchors:
if self.anchors[node] is None:
self.anchors[node] = self.generate_anchor(node)
else:
self.anchors[node] = None
if isinstance(node, SequenceNode):
for item in node.value:
self.anchor_node(item)
elif isinstance(node, MappingNode):
for key, value in node.value:
self.anchor_node(key)
self.anchor_node(value)
def generate_anchor(self, node):
self.last_anchor_id += 1
return self.ANCHOR_TEMPLATE % self.last_anchor_id
def serialize_node(self, node, parent, index):
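        # Nodes that were already serialized are emitted as aliases; otherwise
        # the implicit-tag flags are resolved and the full event stream for the
        # scalar, sequence or mapping is emitted, recursing into collections.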
alias = self.anchors[node]
if node in self.serialized_nodes:
self.emit(AliasEvent(alias))
else:
self.serialized_nodes[node] = True
self.descend_resolver(parent, index)
if isinstance(node, ScalarNode):
detected_tag = self.resolve(ScalarNode, node.value, (True, False))
default_tag = self.resolve(ScalarNode, node.value, (False, True))
implicit = (node.tag == detected_tag), (node.tag == default_tag)
self.emit(ScalarEvent(alias, node.tag, implicit, node.value,
style=node.style))
elif isinstance(node, SequenceNode):
implicit = (node.tag
== self.resolve(SequenceNode, node.value, True))
self.emit(SequenceStartEvent(alias, node.tag, implicit,
flow_style=node.flow_style))
index = 0
for item in node.value:
self.serialize_node(item, node, index)
index += 1
self.emit(SequenceEndEvent())
elif isinstance(node, MappingNode):
implicit = (node.tag
== self.resolve(MappingNode, node.value, True))
self.emit(MappingStartEvent(alias, node.tag, implicit,
flow_style=node.flow_style))
for key, value in node.value:
self.serialize_node(key, node, None)
self.serialize_node(value, node, key)
self.emit(MappingEndEvent())
            self.ascend_resolver()
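
# ---------------------------------------------------------------------------
# Illustrative usage sketch (added for clarity; not part of the vendored file
# above). Serializer is a mixin: PyYAML combines it with an Emitter, a
# Representer and a Resolver into a Dumper, and the open()/serialize()/close()
# protocol implemented above drives multi-document output. Assuming a regular
# PyYAML install is available, the same protocol can be exercised through the
# public API roughly like this:

import io

import yaml

stream = io.StringIO()
dumper = yaml.Dumper(stream, explicit_start=True)
dumper.open()                                        # emits StreamStartEvent
dumper.serialize(dumper.represent_data({"a": 1}))    # first document
dumper.serialize(dumper.represent_data([1, 2, 3]))   # second document
dumper.close()                                       # emits StreamEndEvent
print(stream.getvalue())
# ---------------------------------------------------------------------------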
/GailBot-0.2a0-py3-none-any.whl/gailbot/services/organizer/source/source_manager.py

from typing import List, Dict, Union
import os
from .source_object import SourceObject
from ..settings import SettingObject
from gailbot.core.utils.general import get_name, is_file, is_directory, is_path
from gailbot.core.utils.logger import makelogger
from gailbot.configs import workspace_config_loader
OUTPUT_EXTENSION = workspace_config_loader().file_extension.output
logger = makelogger("source_manager")
class SourceManager:
"""
Holds and handles all functionality for managing all sources
"""
def __init__(self) -> None:
self.sources: Dict[str, SourceObject] = dict()
def add_source(self, source_path: str, output: str) -> Union[str, bool]:
"""
Adds a source to the source manager object
Args:
source_path: str: path to the source object to add
output: str: path to the output directory
Returns:
Name of the source if it is successfully added, false if it is not
successfully added
"""
try:
logger.info("in try")
name = get_name(source_path)
i = 0
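            # Resolve name collisions by appending a "---<n>" suffix and
            # incrementing <n> until the generated name is unique.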
while name in self.sources.keys():
print(name)
if i:
i += 1
name = name.replace("---" + str(i - 1), "---" + str(i))
else:
i += 1
name = f"{name}---{i}"
source = SourceObject(source_path, name, output)
self.sources[name] = source
except Exception as e:
logger.info("in exception")
logger.error(e, exc_info=e)
return False
else:
logger.info("in else")
return name
def remove_source(self, source_name: str) -> bool:
"""
Removes a given source from the source manager's sources
Args:
source_name: str: name to remove
Returns:
True if given source was successfully removed, false if given
source was not found
"""
logger.info(f"source {source_name} is removed")
if not self.is_source(source_name):
return False
self.sources.pop(source_name)
return True
def is_source(self, source: str) -> bool:
"""
Determines if a given source is currently in the source manager's sources
Args:
source: str: key of the source to search for
Returns:
True if given source was found, false if not
"""
source_name = get_name(source) if is_path(source) else source
if source_name in self.sources:
return True
else:
return False
def source_names(self) -> List[str]:
"""
Obtains all source names as a list
Returns:
List of strings containing all source names
"""
return list(self.sources.keys())
def get_source(self, source: str) -> Union[bool, SourceObject]:
"""
Gets the source associated with a given source name
Args:
source_name: str: string of name to search for
Returns:
Source object associated with the given name
Returns false if object with given name is not found
"""
source_name = get_name(source) if is_path(source) else source
if self.is_source(source_name):
return self.sources[source_name]
else:
return False
def get_source_outdir(self, source: str) -> Union[bool, str]:
"""
Gets the source output directory associated with a given source name
Args:
source_name: str: string of name to search for
Returns:
Source object associated with the given name
Returns false if object with given name is not found
"""
source_name = get_name(source) if is_path(source) else source
if self.is_source(source_name):
logger.info("is source")
return self.sources[source_name].output
else:
logger.error(source_name)
return False
def get_source_setting(self, source: str) -> SettingObject:
"""
        Gets the object's source settings
Args:
source: str: source object to look for
Returns:
SettingObject of the current source's settings
"""
source_name = get_name(source) if is_path(source) else source
if self.is_source(source_name):
return self.sources[source_name].source_setting()
else:
return False
def apply_setting_profile_to_source(
self, source: str, setting: SettingObject, overwrite: bool
):
"""
Applies the given settings to the given source
Args:
source: str: given source to update
setting: SettingObject: setting object to apply
overwrite: bool: whether or not to overwrite
Returns:
bool: True if successfully applied, false if not
"""
source_name = get_name(source) if is_path(source) else source
logger.info(f"apply setting {setting} to {source_name}")
if self.is_source(source_name):
self.sources[source_name].apply_setting(setting, overwrite)
return self.sources[source_name].configured
logger.error(f"not a valid source")
return False
def add_progress_display(self, source: str, displayer: callable) -> bool:
"""
Add function to display file progress
Args:
source (str): a string that identify the source
displayer (callable): the function that check for file progress
Returns:
bool: True if the displayer is applied, false otherwise
"""
source_name = get_name(source) if is_path(source) else source
if self.is_source(source_name):
return self.sources[source_name].add_progress_display(displayer)
return False
def get_sources_with_setting(self, setting_name: str) -> List[str]:
"""
Accesses all sources with a given settings profile
Args:
self
setting_name: string of the settings profile to look for
Returns:
list of strings of all source names with the settings profile
"""
return [
k for k, v in self.sources.items() if v.setting.get_name() == setting_name
]
def get_configured_sources(self, sources: List[str] = None) -> List[SourceObject]:
"""given the a list of source name, return a list of the sourceObject
that stores the source configured with setting
Args:
sources (List[str], optional): a list of source name, if not
given, return a list of configured source. Defaults to None.
Returns:
List[SourceObject]: a list of source object that stores the source data
"""
configured = []
if not sources:
for source in self.sources.values():
if source.setting != None:
configured.append(source)
return configured
        else:
            for source in sources:
                src = self.get_source(source)
                if src.setting != None:
                    configured.append(src)
            return configured
def is_source_configured(self, source: str) -> bool:
"""
Determines if given source has been configured with settings
Args:
self
source_name: string of the source name
Returns:
True if configured, false if not
"""
source_name = get_name(source) if is_path(source) else source
return self.sources[source_name].configured
    def __repr__(self) -> str:
        return f"Source manager with sources {self.source_names()}"
@staticmethod
def _is_path(source: str):
"""
Determines if a string is a path
Args:
source: str: string to determine if is a path
Returns:
bool: true if given string is a path, false if not
"""
        return is_file(source) or is_directory(source)
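
# ---------------------------------------------------------------------------
# Illustrative usage sketch (added for clarity; not part of the module above).
# It assumes the gailbot package is installed and that the paths below exist;
# the file and output locations are hypothetical placeholders.

from gailbot.services.organizer.source.source_manager import SourceManager

manager = SourceManager()

# Adding the same file twice shows the "---<n>" collision suffix at work.
first_name = manager.add_source("/data/interview.wav", "/data/out")
second_name = manager.add_source("/data/interview.wav", "/data/out")
print(first_name, second_name)        # e.g. "interview" and "interview---1"

print(manager.is_source(first_name))  # True
print(manager.source_names())         # e.g. ["interview", "interview---1"]

# Sources only show up in get_configured_sources() after a SettingObject has
# been applied, e.g.:
# manager.apply_setting_profile_to_source(first_name, some_setting, overwrite=True)
# ---------------------------------------------------------------------------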
/NLP_LIB_cpu-0.0.12.tar.gz/NLP_LIB_cpu-0.0.12/NLP_LIB/nlp_core/engine.py

import importlib
import random
import sys
import json
import numpy as np
import codecs
import os
import tensorflow as tf
import tensorflow.keras
import re
from tensorflow.keras.callbacks import TensorBoard, ModelCheckpoint
from tensorflow.keras import backend as K
from NLP_LIB.nlp_core.predefined import ConfigMapper
from NLP_LIB.federated.federated_data import FederatedData
# sys.stdout.reconfigure(encoding='utf-8') # Python 3.7 only
sys.stdout = codecs.getwriter("utf-8")(sys.stdout.detach())
np.random.seed(0)
sys.path.append('.')
from NLP_LIB.nlp_core.training_wrapper import TrainingWrapper
from NLP_LIB.nlp_core.dataset_wrapper import DatasetWrapper
from NLP_LIB.nlp_core.data_transform_wrapper import DataTransformWrapper
from NLP_LIB.nlp_core.callback_wrapper import CallbackWrapper
from NLP_LIB.nlp_core.model_wrapper import ModelWrapper
# Main class for NLP Engine
class NLPEngine:
def __init__(self):
self.callbacks_module = importlib.import_module('NLP_LIB.callbacks')
self.datasets_module = importlib.import_module('NLP_LIB.datasets')
self.models_module = importlib.import_module('NLP_LIB.models')
self.transforms_module = importlib.import_module('NLP_LIB.transforms')
def run_train(self, config):
# Detect if finetuing from multiple pretrain checkpoints.
multiple_init_checkpoint_names = None
multiple_init_checkpoints = None
if 'model' in config and 'config' in config['model'] and 'encoder_checkpoint' in config['model']['config']:
encoder_checkpoint = config['model']['config']['encoder_checkpoint']
if os.path.isdir(encoder_checkpoint):
multiple_init_checkpoint_names = os.listdir(encoder_checkpoint)
multiple_init_checkpoints = list(map(lambda x: os.path.join(encoder_checkpoint, x), multiple_init_checkpoint_names))
print('[INFO] Init from multiple checkpoints: ' + str(multiple_init_checkpoints))
if multiple_init_checkpoints is None:
dataset = config['dataset']
dataset_class = dataset['class']
dataset_config = dataset['config']
dataset_class = getattr(self.datasets_module, dataset_class)
dataset = dataset_class(dataset_config)
input_transform = config['input_transform']
input_transform_class = input_transform['class']
input_transform_config = input_transform['config']
input_transform_class = getattr(self.transforms_module, input_transform_class)
input_transform = input_transform_class(input_transform_config, dataset)
output_transform = config['output_transform']
output_transform_class = output_transform['class']
output_transform_config = output_transform['config']
output_transform_class = getattr(self.transforms_module, output_transform_class)
output_transform = output_transform_class(output_transform_config, dataset)
model = config['model']
model_class = model['class']
model_config = model['config']
model_class = getattr(self.models_module, model_class)
model = model_class(model_config, input_transform, output_transform)
execution = config['execution']
execution_config = execution['config']
callbacks_ = config['callbacks']
callbacks = []
for callback in callbacks_:
callback_class = callback['class']
callback_config = callback['config']
callback_class = getattr(self.callbacks_module, callback_class)
callback = callback_class(callback_config, execution_config, model, dataset, input_transform, output_transform)
callbacks.append(callback)
training = TrainingWrapper(model, input_transform, output_transform, callbacks, execution_config)
training.train(dataset)
else:
execution = config['execution']
execution_config = execution['config']
base_output_dir = os.path.join(*re.split('/|\\\\', execution_config['output_dir']))
for encoder_checkpoint, checkpoint_name in zip(multiple_init_checkpoints, multiple_init_checkpoint_names):
config['model']['config']['encoder_checkpoint'] = encoder_checkpoint
print('[INFO] Init from checkpoint: ' + str(encoder_checkpoint))
# Save output to separated directory
output_dir = os.path.join(base_output_dir, 'trials', checkpoint_name)
if not os.path.exists(output_dir):
os.makedirs(output_dir)
execution_config['output_dir'] = output_dir
dataset = config['dataset']
dataset_class = dataset['class']
dataset_config = dataset['config']
dataset_class = getattr(self.datasets_module, dataset_class)
dataset = dataset_class(dataset_config)
input_transform = config['input_transform']
input_transform_class = input_transform['class']
input_transform_config = input_transform['config']
input_transform_class = getattr(self.transforms_module, input_transform_class)
input_transform = input_transform_class(input_transform_config, dataset)
output_transform = config['output_transform']
output_transform_class = output_transform['class']
output_transform_config = output_transform['config']
output_transform_class = getattr(self.transforms_module, output_transform_class)
output_transform = output_transform_class(output_transform_config, dataset)
model = config['model']
model_class = model['class']
model_config = model['config']
model_class = getattr(self.models_module, model_class)
model = model_class(model_config, input_transform, output_transform)
callbacks_ = config['callbacks']
callbacks = []
for callback in callbacks_:
callback_class = callback['class']
callback_config = callback['config']
callback_class = getattr(self.callbacks_module, callback_class)
callback = callback_class(callback_config, execution_config, model, dataset, input_transform, output_transform)
callbacks.append(callback)
training = TrainingWrapper(model, input_transform, output_transform, callbacks, execution_config)
training.train(dataset)
def run_train_federated_simulation(self, config, node_count):
print('[INFO] Start running federated training simulation on ' + str(node_count) + ' node(s).')
# Load some parameters of execution_config as they may have to be instruments
execution = config['execution']
execution_config = execution['config']
base_output_dir = os.path.join(execution_config['output_dir'], 'ftrain_' + str(node_count))
if not os.path.exists(base_output_dir):
os.makedirs(base_output_dir)
base_epoch = execution_config['epochs']
# The process of federated simulation is that we will train model on each node epoch-by-epoch.
# After each epoch we will load trained model of each node and perform federated averaging on their weights.
# We then save averaged weights to latest checkpoint of model in each node and proceed to next epoch.
# Tensorboard log directory
dir_suffix = '' # We do not use gpu_count in save path anymore
tboard_log_dir = os.path.join(base_output_dir, 'tboard_log' + dir_suffix)
if not os.path.exists(tboard_log_dir):
os.makedirs(tboard_log_dir)
log_writer = tf.summary.FileWriter(tboard_log_dir)
for epoch in range(base_epoch):
print('[INFO] Federated training epoch: ' + str(epoch))
# Avoid memory leakage in Tensorflow / Keras
K.clear_session()
federated_weights_list = []
federated_model = None
x_valid_feed = None
y_valid_feed = None
metric_names = None
for node_id in range(node_count):
print('[INFO] Running epoch ' + str(epoch) + ' of node: ' + str(node_id))
dataset = config['dataset']
dataset_class = dataset['class']
dataset_config = dataset['config']
dataset_class = getattr(self.datasets_module, dataset_class)
dataset = dataset_class(dataset_config)
federated_dataset = FederatedData(config, dataset, node_count, node_id)
input_transform = config['input_transform']
input_transform_class = input_transform['class']
input_transform_config = input_transform['config']
input_transform_class = getattr(self.transforms_module, input_transform_class)
input_transform = input_transform_class(input_transform_config, federated_dataset)
output_transform = config['output_transform']
output_transform_class = output_transform['class']
output_transform_config = output_transform['config']
output_transform_class = getattr(self.transforms_module, output_transform_class)
output_transform = output_transform_class(output_transform_config, federated_dataset)
model = config['model']
model_class = model['class']
model_config = model['config']
model_class = getattr(self.models_module, model_class)
model = model_class(model_config, input_transform, output_transform)
# Change epoch to let each node train incrementally epoch-by-epoch
execution_config['epochs'] = (epoch + 1)
# Change output directory to be include node_id so we save model from each node separately
execution_config['output_dir'] = os.path.join(*re.split('/|\\\\', base_output_dir), 'federated_' + str(node_id))
callbacks_ = config['callbacks']
callbacks = []
for callback in callbacks_:
callback_class = callback['class']
callback_config = callback['config']
callback_class = getattr(self.callbacks_module, callback_class)
callback = callback_class(callback_config, execution_config, model, federated_dataset, input_transform, output_transform)
callbacks.append(callback)
training = TrainingWrapper(model, input_transform, output_transform, callbacks, execution_config)
federated_model, x_valid, y_valid = training.train(federated_dataset)
# Store validation data for used in federated evaluation
if x_valid_feed is None or y_valid_feed is None:
x_valid_feed = x_valid
y_valid_feed = y_valid
else:
for i, (x, xn) in enumerate(zip(x_valid_feed, x_valid)):
x_valid_feed[i] = np.append(x, xn, 0)
for i, (y, yn) in enumerate(zip(y_valid_feed, y_valid)):
y_valid_feed[i] = np.append(y, yn, 0)
metric_names = training.trainable_model.get_metric_names()
federated_weights = federated_model.get_weights()
federated_weights_list.append(federated_weights)
      # Perform federated averaging on the model weights collected from all nodes
print('[INFO] Finished federated training of epoch: ' + str(epoch))
new_weights = list()
print('[INFO] Perform federated averaging for epoch: ' + str(epoch))
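      # Simple federated averaging: for every layer, take the element-wise mean
      # of that layer's weights across all node models (equal weight per node).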
for weights_list_tuple in zip(*federated_weights_list):
new_weights.append([np.array(weights_).mean(axis=0) for weights_ in zip(*weights_list_tuple)])
federated_model.set_weights(new_weights)
# Save the averaged weight to center checkpoint
checkpoint_dir = os.path.join(base_output_dir, 'checkpoint')
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
dir_suffix = '' # We do not use gpu_count in save path anymore
last_checkpoint_filepath = os.path.join(checkpoint_dir, 'last_weight' + dir_suffix + '.h5')
print('[INFO] Saving averaged weight at: ' + last_checkpoint_filepath)
federated_model.save_weights(last_checkpoint_filepath)
# Perform averaged model evaluation here!
print('Evaluate with federated evaluation dataset of size: ' + str(x_valid_feed[0].shape))
metrics = federated_model.evaluate(
x=x_valid_feed, y=y_valid_feed,
batch_size=execution_config['batch_size']
)
summary_vals = [tf.Summary.Value(tag="loss", simple_value=metrics[0])]
for i in range(len(metric_names)):
summary_vals.append(tf.Summary.Value(tag=metric_names[i], simple_value=metrics[i + 1]))
summary = tf.Summary(value=summary_vals)
log_writer.add_summary(summary, epoch)
log_writer.flush()
      print('==== FEDERATED EVALUATION RESULTS ====')
print(metrics)
# Also save the averaged model to every federated node. This is equal to serialize and send updated model to every node.
for node_id in range(node_count):
output_dir = os.path.join(*re.split('/|\\\\', base_output_dir), 'federated_' + str(node_id))
checkpoint_dir = os.path.join(output_dir, 'checkpoint')
dir_suffix = '' # We do not use gpu_count in save path anymore
last_checkpoint_filepath = os.path.join(checkpoint_dir, 'last_weight' + dir_suffix + '.h5')
print('[INFO] Saving averaged weight at node ' + str(node_id) + ': ' + last_checkpoint_filepath)
federated_model.save_weights(last_checkpoint_filepath)
def run_prediction(self, mode, sampling_algorithm, generation_count, config, input_mode, input_path):
print('Running in ' + mode + ' mode for input_mode = ' + input_mode + ', input_path = ' + input_path)
dataset = config['dataset']
dataset_class = dataset['class']
dataset_config = dataset['config']
dataset_class = getattr(self.datasets_module, dataset_class)
dataset = dataset_class(dataset_config)
input_transform = config['input_transform']
input_transform_class = input_transform['class']
input_transform_config = input_transform['config']
input_transform_class = getattr(self.transforms_module, input_transform_class)
input_transform = input_transform_class(input_transform_config, dataset)
output_transform = config['output_transform']
output_transform_class = output_transform['class']
output_transform_config = output_transform['config']
output_transform_class = getattr(self.transforms_module, output_transform_class)
output_transform = output_transform_class(output_transform_config, dataset)
model = config['model']
model_class = model['class']
model_config = model['config']
model_class = getattr(self.models_module, model_class)
model = model_class(model_config, input_transform, output_transform)
execution = config['execution']
execution_config = execution['config']
callbacks_ = config['callbacks']
callbacks = []
for callback in callbacks_:
callback_class = callback['class']
callback_config = callback['config']
callback_class = getattr(self.callbacks_module, callback_class)
callback = callback_class(callback_config, execution_config, model, dataset, input_transform, output_transform)
callbacks.append(callback)
training = TrainingWrapper(model, input_transform, output_transform, callbacks, execution_config)
return training.predict(mode, sampling_algorithm, generation_count, input_mode, input_path)
def run_server(self, config):
print('Running server for model: ' + str(config))
dataset = config['dataset']
dataset_class = dataset['class']
dataset_config = dataset['config']
dataset_class = getattr(self.datasets_module, dataset_class)
dataset = dataset_class(dataset_config)
input_transform = config['input_transform']
input_transform_class = input_transform['class']
input_transform_config = input_transform['config']
input_transform_class = getattr(self.transforms_module, input_transform_class)
input_transform = input_transform_class(input_transform_config, dataset)
output_transform = config['output_transform']
output_transform_class = output_transform['class']
output_transform_config = output_transform['config']
output_transform_class = getattr(self.transforms_module, output_transform_class)
output_transform = output_transform_class(output_transform_config, dataset)
model = config['model']
model_class = model['class']
model_config = model['config']
model_class = getattr(self.models_module, model_class)
model = model_class(model_config, input_transform, output_transform)
execution = config['execution']
execution_config = execution['config']
callbacks_ = config['callbacks']
callbacks = []
for callback in callbacks_:
callback_class = callback['class']
callback_config = callback['config']
callback_class = getattr(self.callbacks_module, callback_class)
callback = callback_class(callback_config, execution_config, model, dataset, input_transform, output_transform)
callbacks.append(callback)
    session = tf.keras.backend.get_session()
graph = tf.get_default_graph()
training = TrainingWrapper(model, input_transform, output_transform, callbacks, execution_config)
serving_model = training.create_serving_model()
from NLP_LIB.nlp_core.serving import ModelServer
model_server = ModelServer(training, serving_model, graph, session, str(config))
model_server.start_server()
return 0
# return training.predict(mode, sampling_algorithm, generation_count, input_mode, input_path)
def main(argv):
if len(argv) < 2:
print("Usage: python3 <APP_NAME>.py <CONFIG_FILE_PATH> <optional: train | predict> <optional: str:XXX | file:XXX>")
exit(1)
mode = 'train'
if len(argv) > 2:
mode = argv[2]
print('mode = ' + mode)
generation_count = 0
sampling_algorithm = None
if mode.startswith('generate:'):
tokens = mode.split(':')
generation_count = int(tokens[1])
if len(tokens) > 2:
sampling_algorithm = tokens[2]
mode ='generate'
print('Running generating mode with N = ' + str(generation_count) + ' using sampling algorithm: ' + str(sampling_algorithm))
if (mode == 'predict' or mode == 'generate')and len(argv) < 4:
print('Prediction / Generation mode require data source input in format str:XXX or file:XXX')
exit(1)
input_mode = None
input_path = None
output_path = None
if mode == 'predict' or mode == 'generate':
input_arg = argv[3]
input_mode = input_arg[:input_arg.find(':')]
print('input_mode = ' + input_mode)
if input_mode != 'file' and input_mode != 'str':
print('Prediction / Generation mode require data source input in format str:XXX or file:XXX')
exit(1)
input_path = input_arg[input_arg.find(':') + 1 :]
if len(argv) > 4:
output_path = argv[4]
else:
output_path = '_outputs_/output.txt'
config_path = argv[1]
execution_config = None
# If config file is not found, then we look into predefined shortcut map for the config file
if not os.path.isfile(config_path):
config_path = ConfigMapper.get_config_path_for(config_path)
if config_path is None:
# Try to generate config as per shortcut text
execution_config = ConfigMapper.construct_json_config_for_shortcut(argv[1])
if execution_config is None:
print('Invalid run shortcut or JSON configure path.')
else:
dir_name = os.path.dirname(os.path.realpath(__file__))
config_path = dir_name + '/../' + config_path
if execution_config is None:
with open(config_path, 'r', encoding='utf8') as json_file:
execution_config = json.load(json_file)
engine = NLPEngine()
if mode == 'train':
engine.run_train(execution_config)
elif mode.startswith('ftrain:'):
node_count = int(mode[len('ftrain:'):])
print('[INFO] Perform Federated Training Simulation on ' + str(node_count) + ' node(s).')
engine.run_train_federated_simulation(execution_config, node_count)
elif mode == 'predict' or mode == 'generate':
(Y_output, Y_id_max, Y) = engine.run_prediction(mode, sampling_algorithm, generation_count, execution_config, input_mode, input_path)
print('==== PREDICTION OUTPUT ====')
print(Y_output)
# Save output to file
with open(output_path, 'w', encoding='utf-8') as fout:
for output_entry in Y_output:
fout.write(str(output_entry) + '\n')
print('Output is written to: ' + output_path)
elif mode == 'serve':
# Running model in serve mode
engine.run_server(execution_config)
print('Finish.')
# Main entry point
if __name__ == '__main__':
  main(sys.argv)
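
# ---------------------------------------------------------------------------
# Illustrative sketch (added for clarity; not part of the module above): the
# per-layer weight averaging performed in run_train_federated_simulation,
# isolated as a small self-contained numpy example. It is slightly simplified
# (whole layers are averaged at once rather than row by row) and the layer
# shapes below are made up.
import numpy as np

def federated_average(weights_per_node):
    """Element-wise mean of each layer's weights across all node models."""
    averaged = []
    for layer_weights in zip(*weights_per_node):
        averaged.append(np.mean(np.stack(layer_weights), axis=0))
    return averaged

# Three "nodes", each holding two layers of weights with identical shapes.
node_weights = [
    [np.full((2, 2), float(n)), np.full((3,), float(n))] for n in range(3)
]
avg = federated_average(node_weights)
print(avg[0])  # every entry is 1.0, the mean of 0.0, 1.0 and 2.0
print(avg[1])
# ---------------------------------------------------------------------------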
/GQCMS-0.0.4-py3-none-any.whl/build/lib/build/lib/gqcms/MPD.py

import numpy as np
from itertools import combinations
import pandas as pd
import scipy.linalg as sp
import heapq
from . import Hubbard
from gqcms.matrices import Determinant
class MPD:
"""
Will calculate the Maximum probability domains in a given basis set in the Hubbard model.
"""
def __init__(self, Hubbard_mol: Hubbard, coefs: np.ndarray):
"""
Constructor for the MPD class
Hubbard_mol, Hubbard object: the molecule you want to calculate the MPDs from
        coefs, nparray of floats: the coefficients of the wavefunction in ONV basis
sites, int: amount of sites in the Hubbard model.
circular, bool: are you looking at a cyclic molecule, default True
"""
self.basis = Hubbard_mol.basis
self.coefs = coefs
self.sites = Hubbard_mol.sites
self.circular = Hubbard_mol.circular
self.molecule = Hubbard_mol
def domainCalculator(sites):
"""
Will calculate all possible domains for the given molecule
Returns array of arrays, with all domains
"""
dom_list = []
sites_list = range(sites)
# creates all possible combinations of sites.
for count in sites_list:
dom_list += combinations(sites_list, count + 1)
return dom_list
self.domains = domainCalculator(self.sites)
def getCoefsPerONV(self):
"""returns dict of ONVs as keys with their respective coefficient"""
ziplist = zip(self.coefs, self.basis)
coefs_per_ONV = {str(ONV): coef for coef, ONV in ziplist}
return coefs_per_ONV
def probabilityCalculator(self, nu=1):
"""
Will calculate the probability of every possible domain
        returns two dicts: {domain: its probability} and {domain: its five most important ONVs}
nu, int: the amount of electrons in a domain
"""
# initiate probability list
prob_dict = []
# generate all possible domains, with or without symmetry
dom_list = self.domains
dom_dict = {}
# look at all the domains
for domain in dom_list:
probability = 0
# turn domain into bitstring
# can be compared via bitwise
domain_bits = Determinant.orbitals_to_onv(domain)
# we only want to save the 5 biggest ONVs for every domain
# We will use a priorityqueue for this
domain_parts = []
heapq.heapify(domain_parts)
# We will exclude any ONVs which are simply a swap of the
# alpha and beta parts
alpha_set, beta_set = set(), set()
for index, ONV in enumerate(self.basis):
# count the overlap of alpha and domain and beta and domain
alpha_count = len(
Determinant.onv_to_orbitals(ONV._alpha_onv & domain_bits)
)
beta_count = len(
Determinant.onv_to_orbitals(ONV._beta_onv & domain_bits)
)
# amount of electrons has to be equal to nu
if alpha_count + beta_count == nu:
# probability is linked to coef**2
probability += self.coefs[index] ** 2
# get the tuple with probability first for comparison
# then check if we save more then five
# if so, remove smallest element
candidate_pair = (self.coefs[index], str(ONV))
if not (ONV._alpha_onv in beta_set and ONV._beta_onv in alpha_set):
heapq.heappush(domain_parts, candidate_pair)
alpha_set.add(ONV._alpha_onv)
beta_set.add(ONV._beta_onv)
if len(domain_parts) > 5:
heapq.heappop(domain_parts)
            # we need all lists to have the same length (pad to five entries)
while len(domain_parts) != 5:
domain_parts.append((np.nan, np.nan))
            # we only keep the ONV strings; the coef of each is stored elsewhere.
dom_dict[str(domain)] = [ONV for prob, ONV in domain_parts]
prob_dict.append(round(probability, 8))
# zip together to link domain to probability
prob_dict = zip(dom_list, prob_dict)
prob_dict = {dom: prob for dom, prob in prob_dict}
# prob_dict = probability per domain
# dom_dict = 5 most important ONVs per domain
return prob_dict, dom_dict
@staticmethod
def domainInverter(domain, sites):
"""Will return a list of values that are not in the domain"""
inverted_domain = []
domain_set = set(domain)
for i in range(sites):
if i not in domain_set:
inverted_domain.append(i)
return inverted_domain
def getProbabilityDataFrame(self, prob_dict):
"""
Will generate a DataFrame containing several interesting values for every domain
prob_dict, dict: dict of domains and their probabilities
dataframe structure
        columns: 'domain' | 'probability' | 'bits' | 'unocc_list'
domain: list of sites in the domain
probability: the probability value of the domain
bits: the bitstring representation
unocc_list: list of unoccupied sites
"""
# The DataFrame allows for easy storage, manipulation and comparison of multiple values
prob_df = pd.DataFrame.from_dict(
prob_dict, orient="index", columns=["probability"]
)
prob_df.reset_index(inplace=True)
prob_df.rename(columns={"index": "domain"}, inplace=True)
# all domains can be represented as bitstrings.
prob_df["bits"] = prob_df["domain"].apply(Determinant.orbitals_to_onv)
# single site flips can be gotten from:
        # unoccupied: add 2**(site n°) for all sites that are not part of the domain
prob_df["unocc_list"] = prob_df["domain"].apply(
MPD.domainInverter, args=[self.sites]
)
# occupieds, subtract 2**(site n°) for all occupied sites => already stored in domain
return prob_df
def getSingleSiteFlips(self, domain, prob_dict=False, prob_df=False):
"""
Will generate all single site flips for a domain, and their corresponding probabilities.
domain, list of ints, the list representation of the domain.
prob_dict, dict: list of domains with their probability
        prob_df, pd.DataFrame: it is possible to pass the prob_df directly in order to save time.
"""
if type(prob_df) == bool:
prob_df = self.getProbabilityDataFrame(prob_dict)
domain_row = prob_df[prob_df["domain"] == domain]
single_flip_list = []
# domain_row is now a pandas series that can be searched like a DataFrame
# first we will look at the sites that are not in the domain
for value in domain_row["unocc_list"].array[0]:
single_flip = prob_df[
prob_df["bits"] == domain_row["bits"].array[0] + 2**value
]
single_flip_list.append(single_flip)
for value in domain_row["domain"].array[0]:
single_flip = prob_df[
prob_df["bits"] == domain_row["bits"].array[0] - 2**value
]
single_flip_list.append(single_flip)
single_flip_frame = pd.concat(
single_flip_list, ignore_index=True, axis=0, join="outer"
)
return single_flip_frame
def MPDFinder(self, prob_dict):
"""
        Will find the MPDs of a molecule at the U/t value of the Hubbard object;
        the probabilities of all domains are passed in through prob_dict.
        prob_dict, dict: dict of domains and their probabilities
"""
# list will store tuples of MPD with its probability
MPD_list = []
prob_df = self.getProbabilityDataFrame(prob_dict)
for index, domain in prob_df.iterrows():
if domain["probability"] > 1e-5:
single_flips = self.getSingleSiteFlips(
domain["domain"], prob_dict, prob_df=prob_df
)
if not np.any(single_flips["probability"] > domain["probability"]):
if np.any(single_flips["probability"] == domain["probability"]):
equals = single_flips[
single_flips["probability"] == domain["probability"]
]
with pd.option_context("mode.chained_assignment", None):
equals["size"] = equals["unocc_list"].apply(
len
) # reports SettingWithCopyWarning, but is a false positive (https://www.dataquest.io/blog/settingwithcopywarning/)
# We want to keep the MPDs as small as possible, meaning that the unocc_list is as large as possible
if not np.any(equals["size"] > len(domain["unocc_list"])):
MPD_list.append(domain["domain"])
else:
MPD_list.append(domain["domain"])
return MPD_list
def setGroundState(self, new_ground_state):
"""sets a new ground state"""
self.coefs = new_ground_state
def getDomainProbabilityDataFrame(
self, nu=1, U_max=20, stepsize=1, potdict=False, get_ONV_coefs=False
):
"""
Generates a dataframe with the domains as the columns and the U/t values as the rows
nu, int: the amount of electrons in the domain, default 1
        U_max, float: the maximum value of U, default = 20
stepsize, float: the stepsize, default 1
potdict, dict: {site:potential}, default False
get_ONV_coefs, bool: do you want to get the coefficients of the individual ONVs as well, default False
"""
# initialize dataframe
dom_prob, ONVs_per_domain = self.probabilityCalculator(nu=nu)
dom_prob_df = pd.DataFrame.from_dict([dom_prob])
# piggyback ONV_coefs
ONVcoefs = self.getCoefsPerONV()
ONV_coef_df = pd.DataFrame.from_dict([ONVcoefs])
# change U/t and add the coefficients to the dataframe
# we will start from the minimal value of U in the Hubbard object
ut_list = (
np.arange(self.molecule.U, U_max + stepsize, stepsize)
)
for U in ut_list[1:]:
self.molecule.U = U
self.molecule.onSiteRepulsionMatrix = self.molecule.OnSiteRepulsion()
ham = self.molecule.Hamiltonian()
E, C = sp.eigh(ham)
self.setGroundState(C[:, 0])
dom_prob, ONVs_per_domain = self.probabilityCalculator(nu=nu)
dom_prob = pd.DataFrame.from_dict([dom_prob])
dom_prob_df = pd.concat((dom_prob_df, dom_prob), axis=0, ignore_index=True)
ONVcoefs = self.getCoefsPerONV()
ONVcoefs = pd.DataFrame.from_dict([ONVcoefs])
ONV_coef_df = pd.concat((ONV_coef_df, ONVcoefs), axis=0, ignore_index=True)
dom_prob_df["U/t"] = ut_list
ONV_coef_df["U/t"] = ut_list
if get_ONV_coefs:
return dom_prob_df, ONV_coef_df
return dom_prob_df
def getMPDProbabilityDataFrame(self, dom_prob_df):
"""
Will generate a dataframe with the MPDs as the columns, and U/t as the rows
dom_prob_df pd.DataFrame: dataframe as generated by the getDomainProbabilityDataFrame method
"""
        # initialising indices to correct values
MPD_dict = {}
for row in dom_prob_df.iterrows():
prob_dict = row[1].to_dict()
prob_dict.pop("U/t")
MPD_list = self.MPDFinder(prob_dict)
# set for membership checks
MPD_set = set(MPD_list)
for domain in MPD_list:
if domain not in MPD_dict.keys():
# dict entry will hold beginning and end points of MPDs
MPD_dict[domain] = [row[1].name, "ongoing"]
for domain in MPD_dict.keys():
if domain not in MPD_set and MPD_dict[domain][1] == "ongoing":
MPD_dict[domain][1] = row[1].name
MPD_df = dom_prob_df.loc[:, MPD_dict.keys()]
MPD_df["U/t"] = dom_prob_df.loc[:, "U/t"]
for key in MPD_dict.keys():
if MPD_dict[key][0] != 0:
MPD_df[key][: MPD_dict[key][0]] = np.nan
if MPD_dict[key][-1] != "ongoing":
MPD_df[key][MPD_dict[key][-1] :] = np.nan
return MPD_df
# def MPDPlotter(sites, electrons, nu, U_max, domains, t=1.0, pot_dict={}, circular=True, step_size=1.0):
# """
# Support function to allow for the plotting of MPDs over a U/t range
# sites, int: the amount of sites in the molecule
# electrons, tuple of ints: the amount of alpha and beta electrons (alpha, beta)
# nu, int, amount of electrons in the domain
# U_max, int: the maximum U that needs to be plotted
# domains, list of tuples: the domains you want to study
# t, float: the hopping parameter, default 1.0
# pot_dict, dict{int:float}: dict that holds potentials that need top be applied in ionic Hubbard, default empty dict
# circular, bool: True if molecule is cyclic, default True
# check_con, bool: check whether the MPD is continuous or not, default=True
# step_size, float: the size of U/t steps required, default = 1
# """
# # dictionary will hold the probabilities for every domain as a list
# dom_dict = {}
# for domain in domains:
# dom_dict[domain] = []
# U_t_list = np.arange(0, U_max, step_size)
# # setting up some values for time saving
# benzene = Hubbard(sites, electrons, t, 0)
# hamiltonian = benzene.constructHubbardHamiltonian(benzene.constructAdjacencyMatrix(circular=circular))
# E, C = sp.eigh(hamiltonian)
# Hubbard_ground_state = C[:,0]
# benzene_MPD = MPD(benzene.detList, Hubbard_ground_state, benzene.sites)
# for U in U_t_list:
# benzene.setU(U)
# # we will apply a potential of 5 to site 0
# hamiltonian = benzene.constructHubbardHamiltonian(benzene.constructAdjacencyMatrix(circular=circular))
# if pot_dict:
# hamiltonian += benzene.applyPotential(pot_dict)
# E, C = sp.eigh(hamiltonian)
# Hubbard_ground_state = C[:,0]
# # calculate MPDs
# benzene_MPD.setGroundState(Hubbard_ground_state)
# # the implemented symmetry considerations do not work with ionic Hubbard models
# # as adding a potential breaks the molecular symmetry
# probabilities, coef_dict, ONV_per_domain = benzene_MPD.probabilityCalculator(nu=nu)
# MPDs = benzene_MPD.MPDFinder(probabilities)
# MPD_dict = {domain:probability for domain, probability in MPDs}
# # filling the dom_dict => will help us plot later
# for key in dom_dict.keys():
# if key in MPD_dict.keys():
# dom_dict[key].append(MPD_dict[key])
# else:
# dom_dict[key].append(np.nan)
    # return dom_dict
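
# ---------------------------------------------------------------------------
# Illustrative sketch (added for clarity; not part of the module above): the
# bitstring bookkeeping used by getProbabilityDataFrame and getSingleSiteFlips,
# shown with plain integers instead of the Determinant helper. A domain is a
# set of sites; its bitstring sets bit i for every site i in the domain, and a
# single-site flip adds 2**i for an unoccupied site (site enters the domain)
# or subtracts 2**i for an occupied one (site leaves the domain).

def orbitals_to_bits(sites):
    """Plain-integer stand-in for Determinant.orbitals_to_onv."""
    bits = 0
    for site in sites:
        bits += 2 ** site
    return bits

domain = (0, 2)             # sites 0 and 2 belong to the domain
bits = orbitals_to_bits(domain)
print(bin(bits))            # 0b101

print(bin(bits + 2 ** 1))   # 0b111 -> site 1 flipped into the domain
print(bin(bits - 2 ** 2))   # 0b1   -> site 2 flipped out of the domain
# ---------------------------------------------------------------------------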
/chatglm6bpkg-0.0.1.tar.gz/chatglm6bpkg-0.0.1/ptuning/trainer.py

import contextlib
import functools
import glob
import inspect
import math
import os
import random
import re
import shutil
import sys
import time
import warnings
from collections.abc import Mapping
from distutils.util import strtobool
from pathlib import Path
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Tuple, Union
from tqdm.auto import tqdm
# Integrations must be imported before ML frameworks:
# isort: off
from transformers.integrations import (
default_hp_search_backend,
get_reporting_integration_callbacks,
hp_params,
is_fairscale_available,
is_optuna_available,
is_ray_tune_available,
is_sigopt_available,
is_wandb_available,
run_hp_search_optuna,
run_hp_search_ray,
run_hp_search_sigopt,
run_hp_search_wandb,
)
# isort: on
import numpy as np
import torch
import torch.distributed as dist
from huggingface_hub import Repository, create_repo
from packaging import version
from torch import nn
from torch.utils.data import DataLoader, Dataset, RandomSampler, SequentialSampler
from torch.utils.data.distributed import DistributedSampler
from transformers import __version__
from transformers.configuration_utils import PretrainedConfig
from transformers.data.data_collator import DataCollator, DataCollatorWithPadding, default_data_collator
from transformers.debug_utils import DebugOption, DebugUnderflowOverflow
from transformers.deepspeed import deepspeed_init, is_deepspeed_zero3_enabled
from transformers.dependency_versions_check import dep_version_check
from transformers.modelcard import TrainingSummary
from transformers.modeling_utils import PreTrainedModel, load_sharded_checkpoint, unwrap_model
from transformers.models.auto.modeling_auto import MODEL_FOR_CAUSAL_LM_MAPPING_NAMES, MODEL_MAPPING_NAMES
from transformers.optimization import Adafactor, get_scheduler
from transformers.pytorch_utils import ALL_LAYERNORM_LAYERS, is_torch_greater_or_equal_than_1_10, is_torch_less_than_1_11
from transformers.tokenization_utils_base import PreTrainedTokenizerBase
from transformers.trainer_callback import (
CallbackHandler,
DefaultFlowCallback,
PrinterCallback,
ProgressCallback,
TrainerCallback,
TrainerControl,
TrainerState,
)
from transformers.trainer_pt_utils import (
DistributedLengthGroupedSampler,
DistributedSamplerWithLoop,
DistributedTensorGatherer,
IterableDatasetShard,
LabelSmoother,
LengthGroupedSampler,
SequentialDistributedSampler,
ShardSampler,
distributed_broadcast_scalars,
distributed_concat,
find_batch_size,
get_module_class_from_name,
get_parameter_names,
nested_concat,
nested_detach,
nested_numpify,
nested_truncate,
nested_xla_mesh_reduce,
reissue_pt_warnings,
)
from transformers.trainer_utils import (
PREFIX_CHECKPOINT_DIR,
BestRun,
EvalLoopOutput,
EvalPrediction,
FSDPOption,
HPSearchBackend,
HubStrategy,
IntervalStrategy,
PredictionOutput,
RemoveColumnsCollator,
ShardedDDPOption,
TrainerMemoryTracker,
TrainOutput,
default_compute_objective,
default_hp_space,
denumpify_detensorize,
enable_full_determinism,
find_executable_batch_size,
get_last_checkpoint,
has_length,
number_of_arguments,
seed_worker,
set_seed,
speed_metrics,
)
from transformers.training_args import OptimizerNames, ParallelMode, TrainingArguments
from transformers.utils import (
CONFIG_NAME,
WEIGHTS_INDEX_NAME,
WEIGHTS_NAME,
can_return_loss,
find_labels,
get_full_repo_name,
is_accelerate_available,
is_apex_available,
is_datasets_available,
is_in_notebook,
is_ipex_available,
is_sagemaker_dp_enabled,
is_sagemaker_mp_enabled,
is_torch_compile_available,
is_torch_neuroncore_available,
is_torch_tpu_available,
logging,
)
from transformers.utils.generic import ContextManagers
_is_native_cpu_amp_available = is_torch_greater_or_equal_than_1_10
DEFAULT_CALLBACKS = [DefaultFlowCallback]
DEFAULT_PROGRESS_CALLBACK = ProgressCallback
if is_in_notebook():
from transformers.utils.notebook import NotebookProgressCallback
DEFAULT_PROGRESS_CALLBACK = NotebookProgressCallback
if is_apex_available():
from apex import amp
if is_datasets_available():
import datasets
if is_torch_tpu_available(check_device=False):
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met
import torch_xla.distributed.parallel_loader as pl
if is_fairscale_available():
dep_version_check("fairscale")
import fairscale
from fairscale.nn.data_parallel import FullyShardedDataParallel as FullyShardedDDP
from fairscale.nn.data_parallel import ShardedDataParallel as ShardedDDP
from fairscale.nn.wrap import auto_wrap
from fairscale.optim import OSS
from fairscale.optim.grad_scaler import ShardedGradScaler
if is_sagemaker_mp_enabled():
import smdistributed.modelparallel.torch as smp
from smdistributed.modelparallel import __version__ as SMP_VERSION
IS_SAGEMAKER_MP_POST_1_10 = version.parse(SMP_VERSION) >= version.parse("1.10")
from transformers.trainer_pt_utils import smp_forward_backward, smp_forward_only, smp_gather, smp_nested_concat
else:
IS_SAGEMAKER_MP_POST_1_10 = False
skip_first_batches = None
if is_accelerate_available():
from accelerate import __version__ as accelerate_version
if version.parse(accelerate_version) >= version.parse("0.16"):
from accelerate import skip_first_batches
if TYPE_CHECKING:
import optuna
logger = logging.get_logger(__name__)
# Name of the files used for checkpointing
TRAINING_ARGS_NAME = "training_args.bin"
TRAINER_STATE_NAME = "trainer_state.json"
OPTIMIZER_NAME = "optimizer.pt"
SCHEDULER_NAME = "scheduler.pt"
SCALER_NAME = "scaler.pt"
class Trainer:
"""
Trainer is a simple but feature-complete training and eval loop for PyTorch, optimized for 🤗 Transformers.
Args:
model ([`PreTrainedModel`] or `torch.nn.Module`, *optional*):
The model to train, evaluate or use for predictions. If not provided, a `model_init` must be passed.
<Tip>
[`Trainer`] is optimized to work with the [`PreTrainedModel`] provided by the library. You can still use
your own models defined as `torch.nn.Module` as long as they work the same way as the 🤗 Transformers
models.
</Tip>
args ([`TrainingArguments`], *optional*):
The arguments to tweak for training. Will default to a basic instance of [`TrainingArguments`] with the
`output_dir` set to a directory named *tmp_trainer* in the current directory if not provided.
data_collator (`DataCollator`, *optional*):
The function to use to form a batch from a list of elements of `train_dataset` or `eval_dataset`. Will
default to [`default_data_collator`] if no `tokenizer` is provided, an instance of
[`DataCollatorWithPadding`] otherwise.
train_dataset (`torch.utils.data.Dataset` or `torch.utils.data.IterableDataset`, *optional*):
The dataset to use for training. If it is a [`~datasets.Dataset`], columns not accepted by the
`model.forward()` method are automatically removed.
Note that if it's a `torch.utils.data.IterableDataset` with some randomization and you are training in a
distributed fashion, your iterable dataset should either use a internal attribute `generator` that is a
`torch.Generator` for the randomization that must be identical on all processes (and the Trainer will
manually set the seed of this `generator` at each epoch) or have a `set_epoch()` method that internally
sets the seed of the RNGs used.
eval_dataset (Union[`torch.utils.data.Dataset`, Dict[str, `torch.utils.data.Dataset`]), *optional*):
The dataset to use for evaluation. If it is a [`~datasets.Dataset`], columns not accepted by the
`model.forward()` method are automatically removed. If it is a dictionary, it will evaluate on each
dataset prepending the dictionary key to the metric name.
tokenizer ([`PreTrainedTokenizerBase`], *optional*):
The tokenizer used to preprocess the data. If provided, will be used to automatically pad the inputs to the
maximum length when batching inputs, and it will be saved along the model to make it easier to rerun an
interrupted training or reuse the fine-tuned model.
model_init (`Callable[[], PreTrainedModel]`, *optional*):
A function that instantiates the model to be used. If provided, each call to [`~Trainer.train`] will start
from a new instance of the model as given by this function.
The function may have zero argument, or a single one containing the optuna/Ray Tune/SigOpt trial object, to
be able to choose different architectures according to hyper parameters (such as layer count, sizes of
inner layers, dropout probabilities etc).
compute_metrics (`Callable[[EvalPrediction], Dict]`, *optional*):
The function that will be used to compute metrics at evaluation. Must take a [`EvalPrediction`] and return
a dictionary string to metric values.
callbacks (List of [`TrainerCallback`], *optional*):
A list of callbacks to customize the training loop. Will add those to the list of default callbacks
detailed in [here](callback).
If you want to remove one of the default callbacks used, use the [`Trainer.remove_callback`] method.
optimizers (`Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR]`, *optional*): A tuple
containing the optimizer and the scheduler to use. Will default to an instance of [`AdamW`] on your model
and a scheduler given by [`get_linear_schedule_with_warmup`] controlled by `args`.
preprocess_logits_for_metrics (`Callable[[torch.Tensor, torch.Tensor], torch.Tensor]`, *optional*):
A function that preprocess the logits right before caching them at each evaluation step. Must take two
tensors, the logits and the labels, and return the logits once processed as desired. The modifications made
by this function will be reflected in the predictions received by `compute_metrics`.
Note that the labels (second parameter) will be `None` if the dataset does not have them.
Important attributes:
- **model** -- Always points to the core model. If using a transformers model, it will be a [`PreTrainedModel`]
subclass.
- **model_wrapped** -- Always points to the most external model in case one or more other modules wrap the
original model. This is the model that should be used for the forward pass. For example, under `DeepSpeed`,
the inner model is wrapped in `DeepSpeed` and then again in `torch.nn.DistributedDataParallel`. If the inner
model hasn't been wrapped, then `self.model_wrapped` is the same as `self.model`.
- **is_model_parallel** -- Whether or not a model has been switched to a model parallel mode (different from
data parallelism, this means some of the model layers are split on different GPUs).
- **place_model_on_device** -- Whether or not to automatically place the model on the device - it will be set
to `False` if model parallel or deepspeed is used, or if the default
`TrainingArguments.place_model_on_device` is overridden to return `False` .
- **is_in_train** -- Whether or not a model is currently running `train` (e.g. when `evaluate` is called while
in `train`)
"""
from transformers.trainer_pt_utils import _get_learning_rate, log_metrics, metrics_format, save_metrics, save_state
def __init__(
self,
model: Union[PreTrainedModel, nn.Module] = None,
args: TrainingArguments = None,
data_collator: Optional[DataCollator] = None,
train_dataset: Optional[Dataset] = None,
eval_dataset: Optional[Union[Dataset, Dict[str, Dataset]]] = None,
tokenizer: Optional[PreTrainedTokenizerBase] = None,
model_init: Optional[Callable[[], PreTrainedModel]] = None,
compute_metrics: Optional[Callable[[EvalPrediction], Dict]] = None,
callbacks: Optional[List[TrainerCallback]] = None,
optimizers: Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None),
preprocess_logits_for_metrics: Optional[Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] = None,
save_prefixencoder: bool = False,
):
self.save_prefixencoder = save_prefixencoder
if args is None:
output_dir = "tmp_trainer"
logger.info(f"No `TrainingArguments` passed, using `output_dir={output_dir}`.")
args = TrainingArguments(output_dir=output_dir)
self.args = args
# Seed must be set before instantiating the model when using model
enable_full_determinism(self.args.seed) if self.args.full_determinism else set_seed(self.args.seed)
self.hp_name = None
self.deepspeed = None
self.is_in_train = False
# memory metrics - must set up as early as possible
self._memory_tracker = TrainerMemoryTracker(self.args.skip_memory_metrics)
self._memory_tracker.start()
# set the correct log level depending on the node
log_level = args.get_process_log_level()
logging.set_verbosity(log_level)
# force device and distributed setup init explicitly
args._setup_devices
if model is None:
if model_init is not None:
self.model_init = model_init
model = self.call_model_init()
else:
raise RuntimeError("`Trainer` requires either a `model` or `model_init` argument")
else:
if model_init is not None:
warnings.warn(
"`Trainer` requires either a `model` or `model_init` argument, but not both. `model_init` will"
" overwrite your model when calling the `train` method. This will become a fatal error in the next"
" release.",
FutureWarning,
)
self.model_init = model_init
if model.__class__.__name__ in MODEL_MAPPING_NAMES:
raise ValueError(
f"The model you have picked ({model.__class__.__name__}) cannot be used as is for training: it only "
"computes hidden states and does not accept any labels. You should choose a model with a head "
"suitable for your task like any of the `AutoModelForXxx` listed at "
"https://huggingface.co/docs/transformers/model_doc/auto."
)
if hasattr(model, "is_parallelizable") and model.is_parallelizable and model.model_parallel:
self.is_model_parallel = True
else:
self.is_model_parallel = False
# At this stage the model is already loaded
if getattr(model, "is_loaded_in_8bit", False):
if getattr(model, "_is_int8_training_enabled", False):
logger.info(
"The model is loaded in 8-bit precision. To train this model you need to add additional modules"
" inside the model such as adapters using `peft` library and freeze the model weights. Please"
" check "
" the examples in https://github.com/huggingface/peft for more details."
)
else:
raise ValueError(
"The model you want to train is loaded in 8-bit precision. if you want to fine-tune an 8-bit"
" model, please make sure that you have installed `bitsandbytes>=0.37.0`. "
)
# Setup Sharded DDP training
self.sharded_ddp = None
if len(args.sharded_ddp) > 0:
if args.deepspeed:
raise ValueError(
"Using --sharded_ddp xxx together with --deepspeed is not possible, deactivate one of those flags."
)
if len(args.fsdp) > 0:
raise ValueError(
"Using --sharded_ddp xxx together with --fsdp is not possible, deactivate one of those flags."
)
if args.local_rank == -1:
raise ValueError("Using sharded DDP only works in distributed training.")
elif not is_fairscale_available():
raise ImportError("Sharded DDP training requires fairscale: `pip install fairscale`.")
elif ShardedDDPOption.SIMPLE not in args.sharded_ddp and FullyShardedDDP is None:
raise ImportError(
"Sharded DDP in a mode other than simple training requires fairscale version >= 0.3, found "
f"{fairscale.__version__}. Upgrade your fairscale library: `pip install --upgrade fairscale`."
)
elif ShardedDDPOption.SIMPLE in args.sharded_ddp:
self.sharded_ddp = ShardedDDPOption.SIMPLE
elif ShardedDDPOption.ZERO_DP_2 in args.sharded_ddp:
self.sharded_ddp = ShardedDDPOption.ZERO_DP_2
elif ShardedDDPOption.ZERO_DP_3 in args.sharded_ddp:
self.sharded_ddp = ShardedDDPOption.ZERO_DP_3
self.fsdp = None
if len(args.fsdp) > 0:
if args.deepspeed:
raise ValueError(
"Using --fsdp xxx together with --deepspeed is not possible, deactivate one of those flags."
)
if not args.fsdp_config["xla"] and args.local_rank == -1:
raise ValueError("Using fsdp only works in distributed training.")
# dep_version_check("torch>=1.12.0")
# Would have to update setup.py with torch>=1.12.0
# which isn't ideally given that it will force people not using FSDP to also use torch>=1.12.0
# below is the current alternative.
if version.parse(version.parse(torch.__version__).base_version) < version.parse("1.12.0"):
raise ValueError("FSDP requires PyTorch >= 1.12.0")
from torch.distributed.fsdp.fully_sharded_data_parallel import BackwardPrefetch, ShardingStrategy
if FSDPOption.FULL_SHARD in args.fsdp:
self.fsdp = ShardingStrategy.FULL_SHARD
elif FSDPOption.SHARD_GRAD_OP in args.fsdp:
self.fsdp = ShardingStrategy.SHARD_GRAD_OP
elif FSDPOption.NO_SHARD in args.fsdp:
self.fsdp = ShardingStrategy.NO_SHARD
self.backward_prefetch = BackwardPrefetch.BACKWARD_PRE
if "backward_prefetch" in self.args.fsdp_config and "backward_pos" not in self.backward_prefetch:
self.backward_prefetch = BackwardPrefetch.BACKWARD_POST
self.forword_prefetch = False
if self.args.fsdp_config.get("forword_prefect", False):
self.forword_prefetch = True
self.limit_all_gathers = False
if self.args.fsdp_config.get("limit_all_gathers", False):
self.limit_all_gathers = True
# one place to sort out whether to place the model on device or not
# postpone switching model to cuda when:
# 1. MP - since we are trying to fit a much bigger than 1 gpu model
# 2. fp16-enabled DeepSpeed loads the model in half the size and it doesn't need .to() anyway,
# and we only use deepspeed for training at the moment
# 3. full bf16 or fp16 eval - since the model needs to be cast to the right dtype first
# 4. Sharded DDP - same as MP
# 5. FSDP - same as MP
self.place_model_on_device = args.place_model_on_device
if (
self.is_model_parallel
or args.deepspeed
or ((args.fp16_full_eval or args.bf16_full_eval) and not args.do_train)
or (self.sharded_ddp in [ShardedDDPOption.ZERO_DP_2, ShardedDDPOption.ZERO_DP_3])
or (self.fsdp is not None)
):
self.place_model_on_device = False
default_collator = default_data_collator if tokenizer is None else DataCollatorWithPadding(tokenizer)
self.data_collator = data_collator if data_collator is not None else default_collator
self.train_dataset = train_dataset
self.eval_dataset = eval_dataset
self.tokenizer = tokenizer
if self.place_model_on_device and not getattr(model, "is_loaded_in_8bit", False):
self._move_model_to_device(model, args.device)
# Force n_gpu to 1 to avoid DataParallel as MP will manage the GPUs
if self.is_model_parallel:
self.args._n_gpu = 1
# later use `self.model is self.model_wrapped` to check if it's wrapped or not
self.model_wrapped = model
self.model = model
self.compute_metrics = compute_metrics
self.preprocess_logits_for_metrics = preprocess_logits_for_metrics
self.optimizer, self.lr_scheduler = optimizers
if model_init is not None and (self.optimizer is not None or self.lr_scheduler is not None):
raise RuntimeError(
"Passing a `model_init` is incompatible with providing the `optimizers` argument. "
"You should subclass `Trainer` and override the `create_optimizer_and_scheduler` method."
)
if is_torch_tpu_available() and self.optimizer is not None:
for param in self.model.parameters():
model_device = param.device
break
for param_group in self.optimizer.param_groups:
if len(param_group["params"]) > 0:
optimizer_device = param_group["params"][0].device
break
if model_device != optimizer_device:
raise ValueError(
"The model and the optimizer parameters are not on the same device, which probably means you"
" created an optimizer around your model **before** putting on the device and passing it to the"
" `Trainer`. Make sure the lines `import torch_xla.core.xla_model as xm` and"
" `model.to(xm.xla_device())` is performed before the optimizer creation in your script."
)
if ((self.sharded_ddp is not None) or args.deepspeed or (self.fsdp is not None)) and (
self.optimizer is not None or self.lr_scheduler is not None
):
raise RuntimeError(
"Passing `optimizers` is not allowed if Fairscale, Deepspeed or PyTorch FSDP is enabled."
"You should subclass `Trainer` and override the `create_optimizer_and_scheduler` method."
)
default_callbacks = DEFAULT_CALLBACKS + get_reporting_integration_callbacks(self.args.report_to)
callbacks = default_callbacks if callbacks is None else default_callbacks + callbacks
self.callback_handler = CallbackHandler(
callbacks, self.model, self.tokenizer, self.optimizer, self.lr_scheduler
)
self.add_callback(PrinterCallback if self.args.disable_tqdm else DEFAULT_PROGRESS_CALLBACK)
# Will be set to True by `self._setup_loggers()` on first call to `self.log()`.
self._loggers_initialized = False
# Create clone of distant repo and output directory if needed
if self.args.push_to_hub:
self.init_git_repo(at_init=True)
# In case of pull, we need to make sure every process has the latest.
if is_torch_tpu_available():
xm.rendezvous("init git repo")
elif args.local_rank != -1:
dist.barrier()
if self.args.should_save:
os.makedirs(self.args.output_dir, exist_ok=True)
if not callable(self.data_collator) and callable(getattr(self.data_collator, "collate_batch", None)):
raise ValueError("The `data_collator` should be a simple callable (function, class with `__call__`).")
if args.max_steps > 0:
logger.info("max_steps is given, it will override any value given in num_train_epochs")
if train_dataset is not None and not has_length(train_dataset) and args.max_steps <= 0:
raise ValueError("train_dataset does not implement __len__, max_steps has to be specified")
if (
train_dataset is not None
and isinstance(train_dataset, torch.utils.data.IterableDataset)
and args.group_by_length
):
raise ValueError("the `--group_by_length` option is only available for `Dataset`, not `IterableDataset")
self._signature_columns = None
# Mixed precision setup
self.use_apex = False
self.use_cuda_amp = False
self.use_cpu_amp = False
# Mixed precision setup for SageMaker Model Parallel
if is_sagemaker_mp_enabled():
# BF16 + model parallelism in SageMaker: currently not supported, raise an error
if args.bf16:
raise ValueError("SageMaker Model Parallelism does not support BF16 yet. Please use FP16 instead ")
if IS_SAGEMAKER_MP_POST_1_10:
# When there's a mismatch between the SMP config and the trainer argument, use the SMP config as the source of truth
if args.fp16 != smp.state.cfg.fp16:
logger.warning(
f"FP16 provided in SM_HP_MP_PARAMETERS is {smp.state.cfg.fp16},"
f"but FP16 provided in trainer argument is {args.fp16},"
f"setting to {smp.state.cfg.fp16}"
)
args.fp16 = smp.state.cfg.fp16
else:
# smp < 1.10 does not support fp16 in trainer.
if hasattr(smp.state.cfg, "fp16"):
logger.warning(
f"FP16 provided in SM_HP_MP_PARAMETERS is {smp.state.cfg.fp16}, "
"but SageMaker Model Parallelism < 1.10 does not support FP16 in trainer."
)
if args.fp16 or args.bf16:
if args.half_precision_backend == "auto":
if args.device == torch.device("cpu"):
if args.fp16:
raise ValueError("Tried to use `fp16` but it is not supported on cpu")
elif _is_native_cpu_amp_available:
args.half_precision_backend = "cpu_amp"
else:
raise ValueError("Tried to use cpu amp but native cpu amp is not available")
else:
args.half_precision_backend = "cuda_amp"
logger.info(f"Using {args.half_precision_backend} half precision backend")
self.do_grad_scaling = False
if (args.fp16 or args.bf16) and not (args.deepspeed or is_sagemaker_mp_enabled() or is_torch_tpu_available()):
# deepspeed and SageMaker Model Parallel manage their own half precision
if args.half_precision_backend == "cuda_amp":
self.use_cuda_amp = True
self.amp_dtype = torch.float16 if args.fp16 else torch.bfloat16
# bf16 does not need grad scaling
self.do_grad_scaling = self.amp_dtype == torch.float16
if self.do_grad_scaling:
if self.sharded_ddp is not None:
self.scaler = ShardedGradScaler()
elif self.fsdp is not None:
from torch.distributed.fsdp.sharded_grad_scaler import (
ShardedGradScaler as FSDPShardedGradScaler,
)
self.scaler = FSDPShardedGradScaler()
elif is_torch_tpu_available():
from torch_xla.amp import GradScaler
self.scaler = GradScaler()
else:
self.scaler = torch.cuda.amp.GradScaler()
elif args.half_precision_backend == "cpu_amp":
self.use_cpu_amp = True
self.amp_dtype = torch.bfloat16
else:
if not is_apex_available():
raise ImportError(
"Using FP16 with APEX but APEX is not installed, please refer to"
" https://www.github.com/nvidia/apex."
)
self.use_apex = True
# FP16 + model parallelism in SageMaker: gradient clipping does not work for now so we raise a helpful error.
if (
is_sagemaker_mp_enabled()
and self.use_cuda_amp
and args.max_grad_norm is not None
and args.max_grad_norm > 0
):
raise ValueError(
"SageMaker Model Parallelism in mixed precision mode does not support gradient clipping yet. Pass "
"along 'max_grad_norm': 0 in your hyperparameters."
)
# Label smoothing
if self.args.label_smoothing_factor != 0:
self.label_smoother = LabelSmoother(epsilon=self.args.label_smoothing_factor)
else:
self.label_smoother = None
self.state = TrainerState(
is_local_process_zero=self.is_local_process_zero(),
is_world_process_zero=self.is_world_process_zero(),
)
self.control = TrainerControl()
# Internal variable to count flos in each process, will be accumulated in `self.state.total_flos` then
# returned to 0 every time flos need to be logged
self.current_flos = 0
self.hp_search_backend = None
self.use_tune_checkpoints = False
default_label_names = find_labels(self.model.__class__)
self.label_names = default_label_names if self.args.label_names is None else self.args.label_names
self.can_return_loss = can_return_loss(self.model.__class__)
self.control = self.callback_handler.on_init_end(self.args, self.state, self.control)
# Internal variables to keep track of the original batch size
self._train_batch_size = args.train_batch_size
# very last
self._memory_tracker.stop_and_update_metrics()
# torch.compile
if args.torch_compile and not is_torch_compile_available():
raise RuntimeError("Using torch.compile requires PyTorch 2.0 or higher.")
def add_callback(self, callback):
"""
Add a callback to the current list of [`~transformers.TrainerCallback`].
Args:
callback (`type` or [`~transformers.TrainerCallback`]):
A [`~transformers.TrainerCallback`] class or an instance of a [`~transformers.TrainerCallback`]. In the
first case, will instantiate a member of that class.
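Example (a minimal sketch; assumes `trainer` is an already-constructed [`Trainer`] and uses the
built-in [`EarlyStoppingCallback`]):

```python
from transformers import EarlyStoppingCallback

# Either a callback class (instantiated with no arguments) or an instance can be passed.
trainer.add_callback(EarlyStoppingCallback(early_stopping_patience=3))
```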
"""
self.callback_handler.add_callback(callback)
def pop_callback(self, callback):
"""
Remove a callback from the current list of [`~transformers.TrainerCallback`] and return it.
If the callback is not found, returns `None` (and no error is raised).
Args:
callback (`type` or [`~transformers.TrainerCallback`]):
A [`~transformers.TrainerCallback`] class or an instance of a [`~transformers.TrainerCallback`]. In the
first case, will pop the first member of that class found in the list of callbacks.
Returns:
[`~transformers.TrainerCallback`]: The callback removed, if found.
"""
return self.callback_handler.pop_callback(callback)
def remove_callback(self, callback):
"""
Remove a callback from the current list of [`~transformers.TrainerCallback`].
Args:
callback (`type` or [`~transformers.TrainerCallback`]):
A [`~transformers.TrainerCallback`] class or an instance of a [`~transformers.TrainerCallback`]. In the
first case, will remove the first member of that class found in the list of callbacks.
"""
self.callback_handler.remove_callback(callback)
def _move_model_to_device(self, model, device):
model = model.to(device)
# Moving a model to an XLA device disconnects the tied weights, so we have to retie them.
if self.args.parallel_mode == ParallelMode.TPU and hasattr(model, "tie_weights"):
model.tie_weights()
def _set_signature_columns_if_needed(self):
if self._signature_columns is None:
# Inspect model forward signature to keep only the arguments it accepts.
signature = inspect.signature(self.model.forward)
self._signature_columns = list(signature.parameters.keys())
# Labels may be named label or label_ids, the default data collator handles that.
self._signature_columns += list(set(["label", "label_ids"] + self.label_names))
def _remove_unused_columns(self, dataset: "datasets.Dataset", description: Optional[str] = None):
if not self.args.remove_unused_columns:
return dataset
self._set_signature_columns_if_needed()
signature_columns = self._signature_columns
ignored_columns = list(set(dataset.column_names) - set(signature_columns))
if len(ignored_columns) > 0:
dset_description = "" if description is None else f"in the {description} set"
logger.info(
f"The following columns {dset_description} don't have a corresponding argument in "
f"`{self.model.__class__.__name__}.forward` and have been ignored: {', '.join(ignored_columns)}."
f" If {', '.join(ignored_columns)} are not expected by `{self.model.__class__.__name__}.forward`, "
" you can safely ignore this message."
)
columns = [k for k in signature_columns if k in dataset.column_names]
if version.parse(datasets.__version__) < version.parse("1.4.0"):
dataset.set_format(
type=dataset.format["type"], columns=columns, format_kwargs=dataset.format["format_kwargs"]
)
return dataset
else:
return dataset.remove_columns(ignored_columns)
def _get_collator_with_removed_columns(
self, data_collator: Callable, description: Optional[str] = None
) -> Callable:
"""Wrap the data collator in a callable removing unused columns."""
if not self.args.remove_unused_columns:
return data_collator
self._set_signature_columns_if_needed()
signature_columns = self._signature_columns
remove_columns_collator = RemoveColumnsCollator(
data_collator=data_collator,
signature_columns=signature_columns,
logger=logger,
description=description,
model_name=self.model.__class__.__name__,
)
return remove_columns_collator
def _get_train_sampler(self) -> Optional[torch.utils.data.Sampler]:
if self.train_dataset is None or not has_length(self.train_dataset):
return None
generator = None
if self.args.world_size <= 1:
generator = torch.Generator()
# for backwards compatibility, we generate a seed here (which is sampled from a generator seeded with
# `args.seed`) if data_seed isn't provided.
# Further on in this method, we default to `args.seed` instead.
if self.args.data_seed is None:
seed = int(torch.empty((), dtype=torch.int64).random_().item())
else:
seed = self.args.data_seed
generator.manual_seed(seed)
seed = self.args.data_seed if self.args.data_seed is not None else self.args.seed
# Build the sampler.
if self.args.group_by_length:
if is_datasets_available() and isinstance(self.train_dataset, datasets.Dataset):
lengths = (
self.train_dataset[self.args.length_column_name]
if self.args.length_column_name in self.train_dataset.column_names
else None
)
else:
lengths = None
model_input_name = self.tokenizer.model_input_names[0] if self.tokenizer is not None else None
if self.args.world_size <= 1:
return LengthGroupedSampler(
self.args.train_batch_size * self.args.gradient_accumulation_steps,
dataset=self.train_dataset,
lengths=lengths,
model_input_name=model_input_name,
generator=generator,
)
else:
return DistributedLengthGroupedSampler(
self.args.train_batch_size * self.args.gradient_accumulation_steps,
dataset=self.train_dataset,
num_replicas=self.args.world_size,
rank=self.args.process_index,
lengths=lengths,
model_input_name=model_input_name,
seed=seed,
)
else:
if self.args.world_size <= 1:
return RandomSampler(self.train_dataset, generator=generator)
elif (
self.args.parallel_mode in [ParallelMode.TPU, ParallelMode.SAGEMAKER_MODEL_PARALLEL]
and not self.args.dataloader_drop_last
):
# Use a loop for TPUs when drop_last is False to have all batches have the same size.
return DistributedSamplerWithLoop(
self.train_dataset,
batch_size=self.args.per_device_train_batch_size,
num_replicas=self.args.world_size,
rank=self.args.process_index,
seed=seed,
)
else:
return DistributedSampler(
self.train_dataset,
num_replicas=self.args.world_size,
rank=self.args.process_index,
seed=seed,
)
def get_train_dataloader(self) -> DataLoader:
"""
Returns the training [`~torch.utils.data.DataLoader`].
Will use no sampler if `train_dataset` does not implement `__len__`, a random sampler (adapted to distributed
training if necessary) otherwise.
Subclass and override this method if you want to inject some custom behavior.
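Example (an illustrative sketch of the override pattern; `MyTrainer` is a hypothetical subclass):

```python
class MyTrainer(Trainer):
    def get_train_dataloader(self):
        # Delegate to the default implementation, then inspect or wrap the result.
        dataloader = super().get_train_dataloader()
        print(f"Training with {self.args.dataloader_num_workers} dataloader workers")
        return dataloader
```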
"""
if self.train_dataset is None:
raise ValueError("Trainer: training requires a train_dataset.")
train_dataset = self.train_dataset
data_collator = self.data_collator
if is_datasets_available() and isinstance(train_dataset, datasets.Dataset):
train_dataset = self._remove_unused_columns(train_dataset, description="training")
else:
data_collator = self._get_collator_with_removed_columns(data_collator, description="training")
if isinstance(train_dataset, torch.utils.data.IterableDataset):
if self.args.world_size > 1:
train_dataset = IterableDatasetShard(
train_dataset,
batch_size=self._train_batch_size,
drop_last=self.args.dataloader_drop_last,
num_processes=self.args.world_size,
process_index=self.args.process_index,
)
return DataLoader(
train_dataset,
batch_size=self._train_batch_size,
collate_fn=data_collator,
num_workers=self.args.dataloader_num_workers,
pin_memory=self.args.dataloader_pin_memory,
)
train_sampler = self._get_train_sampler()
return DataLoader(
train_dataset,
batch_size=self._train_batch_size,
sampler=train_sampler,
collate_fn=data_collator,
drop_last=self.args.dataloader_drop_last,
num_workers=self.args.dataloader_num_workers,
pin_memory=self.args.dataloader_pin_memory,
worker_init_fn=seed_worker,
)
def _get_eval_sampler(self, eval_dataset: Dataset) -> Optional[torch.utils.data.Sampler]:
# Deprecated code
if self.args.use_legacy_prediction_loop:
if is_torch_tpu_available():
return SequentialDistributedSampler(
eval_dataset, num_replicas=xm.xrt_world_size(), rank=xm.get_ordinal()
)
elif is_sagemaker_mp_enabled():
return SequentialDistributedSampler(
eval_dataset,
num_replicas=smp.dp_size(),
rank=smp.dp_rank(),
batch_size=self.args.per_device_eval_batch_size,
)
elif self.args.local_rank != -1:
return SequentialDistributedSampler(eval_dataset)
else:
return SequentialSampler(eval_dataset)
if self.args.world_size <= 1:
return SequentialSampler(eval_dataset)
else:
return ShardSampler(
eval_dataset,
batch_size=self.args.per_device_eval_batch_size,
num_processes=self.args.world_size,
process_index=self.args.process_index,
)
def get_eval_dataloader(self, eval_dataset: Optional[Dataset] = None) -> DataLoader:
"""
Returns the evaluation [`~torch.utils.data.DataLoader`].
Subclass and override this method if you want to inject some custom behavior.
Args:
eval_dataset (`torch.utils.data.Dataset`, *optional*):
If provided, will override `self.eval_dataset`. If it is a [`~datasets.Dataset`], columns not accepted
by the `model.forward()` method are automatically removed. It must implement `__len__`.
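Example (a minimal sketch; `small_eval_set` is a placeholder dataset implementing `__len__`):

```python
eval_dataloader = trainer.get_eval_dataloader(eval_dataset=small_eval_set)
```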
"""
if eval_dataset is None and self.eval_dataset is None:
raise ValueError("Trainer: evaluation requires an eval_dataset.")
eval_dataset = eval_dataset if eval_dataset is not None else self.eval_dataset
data_collator = self.data_collator
if is_datasets_available() and isinstance(eval_dataset, datasets.Dataset):
eval_dataset = self._remove_unused_columns(eval_dataset, description="evaluation")
else:
data_collator = self._get_collator_with_removed_columns(data_collator, description="evaluation")
if isinstance(eval_dataset, torch.utils.data.IterableDataset):
if self.args.world_size > 1:
eval_dataset = IterableDatasetShard(
eval_dataset,
batch_size=self.args.per_device_eval_batch_size,
drop_last=self.args.dataloader_drop_last,
num_processes=self.args.world_size,
process_index=self.args.process_index,
)
return DataLoader(
eval_dataset,
batch_size=self.args.eval_batch_size,
collate_fn=data_collator,
num_workers=self.args.dataloader_num_workers,
pin_memory=self.args.dataloader_pin_memory,
)
eval_sampler = self._get_eval_sampler(eval_dataset)
return DataLoader(
eval_dataset,
sampler=eval_sampler,
batch_size=self.args.eval_batch_size,
collate_fn=data_collator,
drop_last=self.args.dataloader_drop_last,
num_workers=self.args.dataloader_num_workers,
pin_memory=self.args.dataloader_pin_memory,
)
def get_test_dataloader(self, test_dataset: Dataset) -> DataLoader:
"""
Returns the test [`~torch.utils.data.DataLoader`].
Subclass and override this method if you want to inject some custom behavior.
Args:
test_dataset (`torch.utils.data.Dataset`):
The test dataset to use. If it is a [`~datasets.Dataset`], columns not accepted by the
`model.forward()` method are automatically removed. It must implement `__len__`.
"""
data_collator = self.data_collator
if is_datasets_available() and isinstance(test_dataset, datasets.Dataset):
test_dataset = self._remove_unused_columns(test_dataset, description="test")
else:
data_collator = self._get_collator_with_removed_columns(data_collator, description="test")
if isinstance(test_dataset, torch.utils.data.IterableDataset):
if self.args.world_size > 1:
test_dataset = IterableDatasetShard(
test_dataset,
batch_size=self.args.eval_batch_size,
drop_last=self.args.dataloader_drop_last,
num_processes=self.args.world_size,
process_index=self.args.process_index,
)
return DataLoader(
test_dataset,
batch_size=self.args.eval_batch_size,
collate_fn=data_collator,
num_workers=self.args.dataloader_num_workers,
pin_memory=self.args.dataloader_pin_memory,
)
test_sampler = self._get_eval_sampler(test_dataset)
# We use the same batch_size as for eval.
return DataLoader(
test_dataset,
sampler=test_sampler,
batch_size=self.args.eval_batch_size,
collate_fn=data_collator,
drop_last=self.args.dataloader_drop_last,
num_workers=self.args.dataloader_num_workers,
pin_memory=self.args.dataloader_pin_memory,
)
def create_optimizer_and_scheduler(self, num_training_steps: int):
"""
Setup the optimizer and the learning rate scheduler.
We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the
Trainer's init through `optimizers`, or subclass and override this method (or `create_optimizer` and/or
`create_scheduler`).
"""
self.create_optimizer()
if IS_SAGEMAKER_MP_POST_1_10 and smp.state.cfg.fp16:
# If smp >= 1.10 and fp16 is enabled, we unwrap the optimizer
optimizer = self.optimizer.optimizer
else:
optimizer = self.optimizer
self.create_scheduler(num_training_steps=num_training_steps, optimizer=optimizer)
def create_optimizer(self):
"""
Setup the optimizer.
We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the
Trainer's init through `optimizers`, or subclass and override this method.
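Example (a minimal sketch of overriding; `MyTrainer` and the SGD hyper-parameters are arbitrary placeholders):

```python
import torch

class MyTrainer(Trainer):
    def create_optimizer(self):
        # Build a plain SGD optimizer over all model parameters instead of the default.
        if self.optimizer is None:
            self.optimizer = torch.optim.SGD(self.model.parameters(), lr=1e-3, momentum=0.9)
        return self.optimizer
```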
"""
opt_model = self.model_wrapped if is_sagemaker_mp_enabled() else self.model
if self.optimizer is None:
decay_parameters = get_parameter_names(opt_model, ALL_LAYERNORM_LAYERS)
decay_parameters = [name for name in decay_parameters if "bias" not in name]
optimizer_grouped_parameters = [
{
"params": [
p for n, p in opt_model.named_parameters() if (n in decay_parameters and p.requires_grad)
],
"weight_decay": self.args.weight_decay,
},
{
"params": [
p for n, p in opt_model.named_parameters() if (n not in decay_parameters and p.requires_grad)
],
"weight_decay": 0.0,
},
]
optimizer_cls, optimizer_kwargs = Trainer.get_optimizer_cls_and_kwargs(self.args)
if self.sharded_ddp == ShardedDDPOption.SIMPLE:
self.optimizer = OSS(
params=optimizer_grouped_parameters,
optim=optimizer_cls,
**optimizer_kwargs,
)
else:
self.optimizer = optimizer_cls(optimizer_grouped_parameters, **optimizer_kwargs)
if optimizer_cls.__name__ == "Adam8bit":
import bitsandbytes
manager = bitsandbytes.optim.GlobalOptimManager.get_instance()
skipped = 0
for module in opt_model.modules():
if isinstance(module, nn.Embedding):
skipped += sum({p.data_ptr(): p.numel() for p in module.parameters()}.values())
print(f"skipped {module}: {skipped/2**20}M params")
manager.register_module_override(module, "weight", {"optim_bits": 32})
logger.debug(f"bitsandbytes: will optimize {module} in fp32")
print(f"skipped: {skipped/2**20}M params")
if is_sagemaker_mp_enabled():
self.optimizer = smp.DistributedOptimizer(self.optimizer)
return self.optimizer
@staticmethod
def get_optimizer_cls_and_kwargs(args: TrainingArguments) -> Tuple[Any, Any]:
"""
Returns the optimizer class and optimizer parameters based on the training arguments.
Args:
args (`transformers.training_args.TrainingArguments`):
The training arguments for the training session.
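Example (a minimal sketch; the `TrainingArguments` values are placeholders):

```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(output_dir="out", optim="adamw_torch", learning_rate=5e-5)
optimizer_cls, optimizer_kwargs = Trainer.get_optimizer_cls_and_kwargs(args)
# optimizer_cls -> torch.optim.AdamW, optimizer_kwargs -> {"lr": 5e-5, "betas": ..., "eps": ...}
```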
"""
# parse args.optim_args
optim_args = {}
if args.optim_args:
for mapping in args.optim_args.replace(" ", "").split(","):
key, value = mapping.split("=")
optim_args[key] = value
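# e.g. `--optim_args "momentum_dtype=float32,use_kahan_summation=False"` yields
# {"momentum_dtype": "float32", "use_kahan_summation": "False"}; values stay strings here and are
# converted by the optimizer-specific branches below.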
optimizer_kwargs = {"lr": args.learning_rate}
adam_kwargs = {
"betas": (args.adam_beta1, args.adam_beta2),
"eps": args.adam_epsilon,
}
if args.optim == OptimizerNames.ADAFACTOR:
optimizer_cls = Adafactor
optimizer_kwargs.update({"scale_parameter": False, "relative_step": False})
elif args.optim == OptimizerNames.ADAMW_HF:
from transformers.optimization import AdamW
optimizer_cls = AdamW
optimizer_kwargs.update(adam_kwargs)
elif args.optim in [OptimizerNames.ADAMW_TORCH, OptimizerNames.ADAMW_TORCH_FUSED]:
from torch.optim import AdamW
optimizer_cls = AdamW
optimizer_kwargs.update(adam_kwargs)
if args.optim == OptimizerNames.ADAMW_TORCH_FUSED:
optimizer_kwargs.update({"fused": True})
elif args.optim == OptimizerNames.ADAMW_TORCH_XLA:
try:
from torch_xla.amp.syncfree import AdamW
optimizer_cls = AdamW
optimizer_kwargs.update(adam_kwargs)
except ImportError:
raise ValueError("Trainer failed to import syncfree AdamW from torch_xla.")
elif args.optim == OptimizerNames.ADAMW_APEX_FUSED:
try:
from apex.optimizers import FusedAdam
optimizer_cls = FusedAdam
optimizer_kwargs.update(adam_kwargs)
except ImportError:
raise ValueError("Trainer tried to instantiate apex FusedAdam but apex is not installed!")
elif args.optim == OptimizerNames.ADAMW_BNB:
try:
from bitsandbytes.optim import Adam8bit
optimizer_cls = Adam8bit
optimizer_kwargs.update(adam_kwargs)
except ImportError:
raise ValueError("Trainer tried to instantiate bnb Adam8bit but bnb is not installed!")
elif args.optim == OptimizerNames.ADAMW_ANYPRECISION:
try:
from torchdistx.optimizers import AnyPrecisionAdamW
optimizer_cls = AnyPrecisionAdamW
optimizer_kwargs.update(adam_kwargs)
# TODO Change dtypes back to M=FP32, Var = BF16, Kahan = False once they can be cast together in torchdistx.
optimizer_kwargs.update(
{
"use_kahan_summation": strtobool(optim_args.get("use_kahan_summation", "False")),
"momentum_dtype": getattr(torch, optim_args.get("momentum_dtype", "float32")),
"variance_dtype": getattr(torch, optim_args.get("variance_dtype", "float32")),
"compensation_buffer_dtype": getattr(
torch, optim_args.get("compensation_buffer_dtype", "bfloat16")
),
}
)
except ImportError:
raise ValueError("Please install https://github.com/pytorch/torchdistx")
elif args.optim == OptimizerNames.SGD:
optimizer_cls = torch.optim.SGD
elif args.optim == OptimizerNames.ADAGRAD:
optimizer_cls = torch.optim.Adagrad
else:
raise ValueError(f"Trainer cannot instantiate unsupported optimizer: {args.optim}")
return optimizer_cls, optimizer_kwargs
def create_scheduler(self, num_training_steps: int, optimizer: torch.optim.Optimizer = None):
"""
Setup the scheduler. The optimizer of the trainer must have been set up either before this method is called or
passed as an argument.
Args:
num_training_steps (int): The number of training steps to do.
"""
if self.lr_scheduler is None:
self.lr_scheduler = get_scheduler(
self.args.lr_scheduler_type,
optimizer=self.optimizer if optimizer is None else optimizer,
num_warmup_steps=self.args.get_warmup_steps(num_training_steps),
num_training_steps=num_training_steps,
)
return self.lr_scheduler
def num_examples(self, dataloader: DataLoader) -> int:
"""
Helper to get number of samples in a [`~torch.utils.data.DataLoader`] by accessing its dataset. When
`dataloader.dataset` does not exist or has no length, estimates as best it can.
"""
try:
dataset = dataloader.dataset
# Special case for IterableDatasetShard, we need to dig deeper
if isinstance(dataset, IterableDatasetShard):
return len(dataloader.dataset.dataset)
return len(dataloader.dataset)
except (NameError, AttributeError, TypeError): # no dataset or length, estimate by length of dataloader
return len(dataloader) * self.args.per_device_train_batch_size
def _hp_search_setup(self, trial: Union["optuna.Trial", Dict[str, Any]]):
"""HP search setup code"""
self._trial = trial
if self.hp_search_backend is None or trial is None:
return
if self.hp_search_backend == HPSearchBackend.OPTUNA:
params = self.hp_space(trial)
elif self.hp_search_backend == HPSearchBackend.RAY:
params = trial
params.pop("wandb", None)
elif self.hp_search_backend == HPSearchBackend.SIGOPT:
params = {k: int(v) if isinstance(v, str) else v for k, v in trial.assignments.items()}
elif self.hp_search_backend == HPSearchBackend.WANDB:
params = trial
for key, value in params.items():
if not hasattr(self.args, key):
logger.warning(
f"Trying to set {key} in the hyperparameter search but there is no corresponding field in"
" `TrainingArguments`."
)
continue
old_attr = getattr(self.args, key, None)
# Casting value to the proper type
if old_attr is not None:
value = type(old_attr)(value)
setattr(self.args, key, value)
if self.hp_search_backend == HPSearchBackend.OPTUNA:
logger.info(f"Trial: {trial.params}")
if self.hp_search_backend == HPSearchBackend.SIGOPT:
logger.info(f"SigOpt Assignments: {trial.assignments}")
if self.hp_search_backend == HPSearchBackend.WANDB:
logger.info(f"W&B Sweep parameters: {trial}")
if self.args.deepspeed:
# Rebuild the deepspeed config to reflect the updated training parameters
from transformers.deepspeed import HfTrainerDeepSpeedConfig
self.args.hf_deepspeed_config = HfTrainerDeepSpeedConfig(self.args.deepspeed)
self.args.hf_deepspeed_config.trainer_config_process(self.args)
def _report_to_hp_search(self, trial: Union["optuna.Trial", Dict[str, Any]], step: int, metrics: Dict[str, float]):
if self.hp_search_backend is None or trial is None:
return
self.objective = self.compute_objective(metrics.copy())
if self.hp_search_backend == HPSearchBackend.OPTUNA:
import optuna
trial.report(self.objective, step)
if trial.should_prune():
self.callback_handler.on_train_end(self.args, self.state, self.control)
raise optuna.TrialPruned()
elif self.hp_search_backend == HPSearchBackend.RAY:
from ray import tune
if self.control.should_save:
self._tune_save_checkpoint()
tune.report(objective=self.objective, **metrics)
def _tune_save_checkpoint(self):
from ray import tune
if not self.use_tune_checkpoints:
return
with tune.checkpoint_dir(step=self.state.global_step) as checkpoint_dir:
output_dir = os.path.join(checkpoint_dir, f"{PREFIX_CHECKPOINT_DIR}-{self.state.global_step}")
self.save_model(output_dir, _internal_call=True)
if self.args.should_save:
self.state.save_to_json(os.path.join(output_dir, TRAINER_STATE_NAME))
torch.save(self.optimizer.state_dict(), os.path.join(output_dir, OPTIMIZER_NAME))
torch.save(self.lr_scheduler.state_dict(), os.path.join(output_dir, SCHEDULER_NAME))
def call_model_init(self, trial=None):
model_init_argcount = number_of_arguments(self.model_init)
if model_init_argcount == 0:
model = self.model_init()
elif model_init_argcount == 1:
model = self.model_init(trial)
else:
raise RuntimeError("model_init should have 0 or 1 argument.")
if model is None:
raise RuntimeError("model_init should not return None.")
return model
def torch_jit_model_eval(self, model, dataloader, training=False):
if not training:
if dataloader is None:
logger.warning("failed to use PyTorch jit mode due to current dataloader is none.")
return model
example_batch = next(iter(dataloader))
example_batch = self._prepare_inputs(example_batch)
try:
jit_model = model.eval()
with ContextManagers([self.autocast_smart_context_manager(cache_enabled=False), torch.no_grad()]):
if version.parse(version.parse(torch.__version__).base_version) >= version.parse("1.14.0"):
if isinstance(example_batch, dict):
jit_model = torch.jit.trace(jit_model, example_kwarg_inputs=example_batch, strict=False)
else:
jit_model = torch.jit.trace(
jit_model,
example_kwarg_inputs={key: example_batch[key] for key in example_batch},
strict=False,
)
else:
jit_inputs = []
for key in example_batch:
example_tensor = torch.ones_like(example_batch[key])
jit_inputs.append(example_tensor)
jit_inputs = tuple(jit_inputs)
jit_model = torch.jit.trace(jit_model, jit_inputs, strict=False)
jit_model = torch.jit.freeze(jit_model)
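# Run the frozen module a couple of times under no_grad: the TorchScript profiling executor
# typically needs warm-up passes before it specializes and fuses the graph.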
with torch.no_grad():
jit_model(**example_batch)
jit_model(**example_batch)
model = jit_model
self.use_cpu_amp = False
self.use_cuda_amp = False
except (RuntimeError, TypeError, ValueError, NameError, IndexError) as e:
logger.warning(f"failed to use PyTorch jit mode due to: {e}.")
return model
def ipex_optimize_model(self, model, training=False, dtype=torch.float32):
if not is_ipex_available():
raise ImportError(
"Using IPEX but IPEX is not installed or IPEX's version does not match current PyTorch, please refer"
" to https://github.com/intel/intel-extension-for-pytorch."
)
import intel_extension_for_pytorch as ipex
if not training:
model.eval()
dtype = torch.bfloat16 if not self.is_in_train and self.args.bf16_full_eval else dtype
# conv_bn_folding is disabled as it fails in symbolic tracing, resulting in ipex warnings
model = ipex.optimize(model, dtype=dtype, level="O1", conv_bn_folding=False, inplace=not self.is_in_train)
else:
if not model.training:
model.train()
model, self.optimizer = ipex.optimize(
model, dtype=dtype, optimizer=self.optimizer, inplace=True, level="O1"
)
return model
def _wrap_model(self, model, training=True, dataloader=None):
if self.args.torch_compile:
model = torch.compile(model, backend=self.args.torch_compile_backend, mode=self.args.torch_compile_mode)
if self.args.use_ipex:
dtype = torch.bfloat16 if self.use_cpu_amp else torch.float32
model = self.ipex_optimize_model(model, training, dtype=dtype)
if is_sagemaker_mp_enabled():
# Wrapping the base model twice in a DistributedModel will raise an error.
if isinstance(self.model_wrapped, smp.model.DistributedModel):
return self.model_wrapped
return smp.DistributedModel(model, backward_passes_per_step=self.args.gradient_accumulation_steps)
# already initialized its own DDP and AMP
if self.deepspeed:
return self.deepspeed
# train/eval could be run multiple-times - if already wrapped, don't re-wrap it again
if unwrap_model(model) is not model:
return model
# Mixed precision training with apex (torch < 1.6)
if self.use_apex and training:
model, self.optimizer = amp.initialize(model, self.optimizer, opt_level=self.args.fp16_opt_level)
# Multi-gpu training (should be after apex fp16 initialization)
if self.args.n_gpu > 1:
model = nn.DataParallel(model)
if self.args.jit_mode_eval:
start_time = time.time()
model = self.torch_jit_model_eval(model, dataloader, training)
self.jit_compilation_time = round(time.time() - start_time, 4)
# Note: in torch.distributed mode, there's no point in wrapping the model
# inside a DistributedDataParallel as we'll be under `no_grad` anyways.
if not training:
return model
# Distributed training (should be after apex fp16 initialization)
if self.sharded_ddp is not None:
# Sharded DDP!
if self.sharded_ddp == ShardedDDPOption.SIMPLE:
model = ShardedDDP(model, self.optimizer)
else:
mixed_precision = self.args.fp16 or self.args.bf16
cpu_offload = ShardedDDPOption.OFFLOAD in self.args.sharded_ddp
zero_3 = self.sharded_ddp == ShardedDDPOption.ZERO_DP_3
# XXX: Breaking the self.model convention but I see no way around it for now.
if ShardedDDPOption.AUTO_WRAP in self.args.sharded_ddp:
model = auto_wrap(model)
self.model = model = FullyShardedDDP(
model,
mixed_precision=mixed_precision,
reshard_after_forward=zero_3,
cpu_offload=cpu_offload,
).to(self.args.device)
# Distributed training using PyTorch FSDP
elif self.fsdp is not None:
if not self.args.fsdp_config["xla"]:
# PyTorch FSDP!
from torch.distributed.fsdp.fully_sharded_data_parallel import CPUOffload, MixedPrecision
from torch.distributed.fsdp.fully_sharded_data_parallel import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.wrap import size_based_auto_wrap_policy, transformer_auto_wrap_policy
if FSDPOption.OFFLOAD in self.args.fsdp:
cpu_offload = CPUOffload(offload_params=True)
else:
cpu_offload = CPUOffload(offload_params=False)
auto_wrap_policy = None
if FSDPOption.AUTO_WRAP in self.args.fsdp:
if self.args.fsdp_config["fsdp_min_num_params"] > 0:
auto_wrap_policy = functools.partial(
size_based_auto_wrap_policy, min_num_params=self.args.fsdp_config["fsdp_min_num_params"]
)
elif self.args.fsdp_config.get("fsdp_transformer_layer_cls_to_wrap", None) is not None:
transformer_cls_to_wrap = set()
for layer_class in self.args.fsdp_config["fsdp_transformer_layer_cls_to_wrap"]:
transformer_cls = get_module_class_from_name(model, layer_class)
if transformer_cls is None:
raise Exception("Could not find the transformer layer class to wrap in the model.")
else:
transformer_cls_to_wrap.add(transformer_cls)
auto_wrap_policy = functools.partial(
transformer_auto_wrap_policy,
# Transformer layer class to wrap
transformer_layer_cls=transformer_cls_to_wrap,
)
mixed_precision_policy = None
dtype = None
if self.args.fp16:
dtype = torch.float16
elif self.args.bf16:
dtype = torch.bfloat16
if dtype is not None:
mixed_precision_policy = MixedPrecision(param_dtype=dtype, reduce_dtype=dtype, buffer_dtype=dtype)
if type(model) != FSDP:
# XXX: Breaking the self.model convention but I see no way around it for now.
self.model = model = FSDP(
model,
sharding_strategy=self.fsdp,
cpu_offload=cpu_offload,
auto_wrap_policy=auto_wrap_policy,
mixed_precision=mixed_precision_policy,
device_id=self.args.device,
backward_prefetch=self.backward_prefetch,
forward_prefetch=self.forward_prefetch,
limit_all_gathers=self.limit_all_gathers,
)
else:
try:
from torch_xla.distributed.fsdp import XlaFullyShardedDataParallel as FSDP
from torch_xla.distributed.fsdp import checkpoint_module
from torch_xla.distributed.fsdp.wrap import (
size_based_auto_wrap_policy,
transformer_auto_wrap_policy,
)
except ImportError:
raise ImportError("Missing XLA FSDP related module; please make sure to use torch-xla >= 2.0.")
auto_wrap_policy = None
auto_wrapper_callable = None
if self.args.fsdp_config["fsdp_min_num_params"] > 0:
auto_wrap_policy = functools.partial(
size_based_auto_wrap_policy, min_num_params=self.args.fsdp_config["fsdp_min_num_params"]
)
elif self.args.fsdp_config.get("fsdp_transformer_layer_cls_to_wrap", None) is not None:
transformer_cls_to_wrap = set()
for layer_class in self.args.fsdp_config["fsdp_transformer_layer_cls_to_wrap"]:
transformer_cls = get_module_class_from_name(model, layer_class)
if transformer_cls is None:
raise Exception("Could not find the transformer layer class to wrap in the model.")
else:
transformer_cls_to_wrap.add(transformer_cls)
auto_wrap_policy = functools.partial(
transformer_auto_wrap_policy,
# Transformer layer class to wrap
transformer_layer_cls=transformer_cls_to_wrap,
)
fsdp_kwargs = self.args.xla_fsdp_config
if self.args.fsdp_config["xla_fsdp_grad_ckpt"]:
# Apply gradient checkpointing to auto-wrapped sub-modules if specified
def auto_wrapper_callable(m, *args, **kwargs):
return FSDP(checkpoint_module(m), *args, **kwargs)
# Wrap the base model with an outer FSDP wrapper
self.model = model = FSDP(
model,
auto_wrap_policy=auto_wrap_policy,
auto_wrapper_callable=auto_wrapper_callable,
**fsdp_kwargs,
)
# Patch `xm.optimizer_step` so it does not reduce gradients in this case,
# as FSDP does not need gradient reduction over sharded parameters.
def patched_optimizer_step(optimizer, barrier=False, optimizer_args={}):
loss = optimizer.step(**optimizer_args)
if barrier:
xm.mark_step()
return loss
xm.optimizer_step = patched_optimizer_step
elif is_sagemaker_dp_enabled():
model = nn.parallel.DistributedDataParallel(
model, device_ids=[int(os.getenv("SMDATAPARALLEL_LOCAL_RANK"))]
)
elif self.args.local_rank != -1:
kwargs = {}
if self.args.ddp_find_unused_parameters is not None:
kwargs["find_unused_parameters"] = self.args.ddp_find_unused_parameters
elif isinstance(model, PreTrainedModel):
# find_unused_parameters breaks checkpointing as per
# https://github.com/huggingface/transformers/pull/4659#issuecomment-643356021
kwargs["find_unused_parameters"] = not model.is_gradient_checkpointing
else:
kwargs["find_unused_parameters"] = True
if self.args.ddp_bucket_cap_mb is not None:
kwargs["bucket_cap_mb"] = self.args.ddp_bucket_cap_mb
if is_torch_neuroncore_available():
return model
model = nn.parallel.DistributedDataParallel(
model,
device_ids=[self.args.local_rank] if self.args._n_gpu != 0 else None,
output_device=self.args.local_rank if self.args._n_gpu != 0 else None,
**kwargs,
)
return model
def train(
self,
resume_from_checkpoint: Optional[Union[str, bool]] = None,
trial: Union["optuna.Trial", Dict[str, Any]] = None,
ignore_keys_for_eval: Optional[List[str]] = None,
**kwargs,
):
"""
Main training entry point.
Args:
resume_from_checkpoint (`str` or `bool`, *optional*):
If a `str`, local path to a saved checkpoint as saved by a previous instance of [`Trainer`]. If a
`bool` and equals `True`, load the last checkpoint in *args.output_dir* as saved by a previous instance
of [`Trainer`]. If present, training will resume from the model/optimizer/scheduler states loaded here.
trial (`optuna.Trial` or `Dict[str, Any]`, *optional*):
The trial run or the hyperparameter dictionary for hyperparameter search.
ignore_keys_for_eval (`List[str]`, *optional*):
A list of keys in the output of your model (if it is a dictionary) that should be ignored when
gathering predictions for evaluation during the training.
kwargs:
Additional keyword arguments used to hide deprecated arguments
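Example (a minimal sketch; assumes `trainer` is an already-constructed [`Trainer`] and the
checkpoint path is a placeholder):

```python
# Start (or resume) training and inspect the aggregated metrics.
train_output = trainer.train(resume_from_checkpoint="output/checkpoint-500")
print(train_output.metrics["train_loss"])
```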
"""
if resume_from_checkpoint is False:
resume_from_checkpoint = None
# memory metrics - must set up as early as possible
self._memory_tracker.start()
args = self.args
self.is_in_train = True
# do_train is not a reliable argument, as it might not be set while .train() is still called, so
# the following is a workaround:
if (args.fp16_full_eval or args.bf16_full_eval) and not args.do_train:
self._move_model_to_device(self.model, args.device)
if "model_path" in kwargs:
resume_from_checkpoint = kwargs.pop("model_path")
warnings.warn(
"`model_path` is deprecated and will be removed in a future version. Use `resume_from_checkpoint` "
"instead.",
FutureWarning,
)
if len(kwargs) > 0:
raise TypeError(f"train() received got unexpected keyword arguments: {', '.join(list(kwargs.keys()))}.")
# This might change the seed so needs to run first.
self._hp_search_setup(trial)
self._train_batch_size = self.args.train_batch_size
# Model re-init
model_reloaded = False
if self.model_init is not None:
# Seed must be set before instantiating the model when using model_init.
enable_full_determinism(self.args.seed) if self.args.full_determinism else set_seed(self.args.seed)
self.model = self.call_model_init(trial)
model_reloaded = True
# Reinitializes optimizer and scheduler
self.optimizer, self.lr_scheduler = None, None
# Load potential model checkpoint
if isinstance(resume_from_checkpoint, bool) and resume_from_checkpoint:
resume_from_checkpoint = get_last_checkpoint(args.output_dir)
if resume_from_checkpoint is None:
raise ValueError(f"No valid checkpoint found in output directory ({args.output_dir})")
if resume_from_checkpoint is not None and not is_sagemaker_mp_enabled() and args.deepspeed is None:
self._load_from_checkpoint(resume_from_checkpoint)
# If model was re-initialized, put it on the right device and update self.model_wrapped
if model_reloaded:
if self.place_model_on_device:
self._move_model_to_device(self.model, args.device)
self.model_wrapped = self.model
inner_training_loop = find_executable_batch_size(
self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
)
return inner_training_loop(
args=args,
resume_from_checkpoint=resume_from_checkpoint,
trial=trial,
ignore_keys_for_eval=ignore_keys_for_eval,
)
def _inner_training_loop(
self, batch_size=None, args=None, resume_from_checkpoint=None, trial=None, ignore_keys_for_eval=None
):
self._train_batch_size = batch_size
# Data loader and number of training steps
train_dataloader = self.get_train_dataloader()
# Setting up training control variables:
# number of training epochs: num_train_epochs
# number of training steps per epoch: num_update_steps_per_epoch
# total number of training steps to execute: max_steps
total_train_batch_size = args.train_batch_size * args.gradient_accumulation_steps * args.world_size
len_dataloader = None
if has_length(train_dataloader):
len_dataloader = len(train_dataloader)
num_update_steps_per_epoch = len_dataloader // args.gradient_accumulation_steps
num_update_steps_per_epoch = max(num_update_steps_per_epoch, 1)
num_examples = self.num_examples(train_dataloader)
if args.max_steps > 0:
max_steps = args.max_steps
num_train_epochs = args.max_steps // num_update_steps_per_epoch + int(
args.max_steps % num_update_steps_per_epoch > 0
)
# May be slightly incorrect if the last batch in the training dataloader has a smaller size but it's
# the best we can do.
num_train_samples = args.max_steps * total_train_batch_size
else:
max_steps = math.ceil(args.num_train_epochs * num_update_steps_per_epoch)
num_train_epochs = math.ceil(args.num_train_epochs)
num_train_samples = self.num_examples(train_dataloader) * args.num_train_epochs
elif args.max_steps > 0: # Rely on max_steps when dataloader does not have a working size
max_steps = args.max_steps
# Setting a very large number of epochs so we go as many times as necessary over the iterator.
num_train_epochs = sys.maxsize
num_update_steps_per_epoch = max_steps
num_examples = total_train_batch_size * args.max_steps
num_train_samples = args.max_steps * total_train_batch_size
else:
raise ValueError(
"args.max_steps must be set to a positive value if dataloader does not have a length, was"
f" {args.max_steps}"
)
if DebugOption.UNDERFLOW_OVERFLOW in self.args.debug:
if self.args.n_gpu > 1:
# nn.DataParallel(model) replicates the model, creating new variables and module
# references registered here no longer work on other gpus, breaking the module
raise ValueError(
"Currently --debug underflow_overflow is not supported under DP. Please use DDP"
" (torch.distributed.launch)."
)
else:
debug_overflow = DebugUnderflowOverflow(self.model) # noqa
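# Optimizer creation is postponed when the model still has to be wrapped (sharded DDP other than
# "simple", SageMaker MP, or FSDP), so that the optimizer is built over the wrapped parameters.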
delay_optimizer_creation = (
self.sharded_ddp is not None
and self.sharded_ddp != ShardedDDPOption.SIMPLE
or is_sagemaker_mp_enabled()
or self.fsdp is not None
)
if args.deepspeed:
deepspeed_engine, optimizer, lr_scheduler = deepspeed_init(
self, num_training_steps=max_steps, resume_from_checkpoint=resume_from_checkpoint
)
self.model = deepspeed_engine.module
self.model_wrapped = deepspeed_engine
self.deepspeed = deepspeed_engine
self.optimizer = optimizer
self.lr_scheduler = lr_scheduler
elif not delay_optimizer_creation:
self.create_optimizer_and_scheduler(num_training_steps=max_steps)
self.state = TrainerState()
self.state.is_hyper_param_search = trial is not None
# Activate gradient checkpointing if needed
if args.gradient_checkpointing:
self.model.gradient_checkpointing_enable()
model = self._wrap_model(self.model_wrapped)
if is_sagemaker_mp_enabled() and resume_from_checkpoint is not None:
self._load_from_checkpoint(resume_from_checkpoint, model)
# for the rest of this function `model` is the outside model, whether it was wrapped or not
if model is not self.model:
self.model_wrapped = model
if delay_optimizer_creation:
self.create_optimizer_and_scheduler(num_training_steps=max_steps)
# Check if saved optimizer or scheduler states exist
self._load_optimizer_and_scheduler(resume_from_checkpoint)
# important: at this point:
# self.model is the Transformers Model
# self.model_wrapped is DDP(Transformers Model), Deepspeed(Transformers Model), etc.
# Train!
logger.info("***** Running training *****")
logger.info(f" Num examples = {num_examples}")
logger.info(f" Num Epochs = {num_train_epochs}")
logger.info(f" Instantaneous batch size per device = {args.per_device_train_batch_size}")
logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_train_batch_size}")
logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
logger.info(f" Total optimization steps = {max_steps}")
logger.info(
f" Number of trainable parameters = {sum(p.numel() for p in model.parameters() if p.requires_grad)}"
)
self.state.epoch = 0
start_time = time.time()
epochs_trained = 0
steps_trained_in_current_epoch = 0
steps_trained_progress_bar = None
# Check if continuing training from a checkpoint
if resume_from_checkpoint is not None and os.path.isfile(
os.path.join(resume_from_checkpoint, TRAINER_STATE_NAME)
):
self.state = TrainerState.load_from_json(os.path.join(resume_from_checkpoint, TRAINER_STATE_NAME))
epochs_trained = self.state.global_step // num_update_steps_per_epoch
if not args.ignore_data_skip:
steps_trained_in_current_epoch = self.state.global_step % (num_update_steps_per_epoch)
steps_trained_in_current_epoch *= args.gradient_accumulation_steps
else:
steps_trained_in_current_epoch = 0
logger.info(" Continuing training from checkpoint, will skip to saved global_step")
logger.info(f" Continuing training from epoch {epochs_trained}")
logger.info(f" Continuing training from global step {self.state.global_step}")
if not args.ignore_data_skip:
if skip_first_batches is None:
logger.info(
f" Will skip the first {epochs_trained} epochs then the first"
f" {steps_trained_in_current_epoch} batches in the first epoch. If this takes a lot of time,"
" you can install the latest version of Accelerate with `pip install -U accelerate`.You can"
" also add the `--ignore_data_skip` flag to your launch command, but you will resume the"
" training on data already seen by your model."
)
else:
logger.info(
f" Will skip the first {epochs_trained} epochs then the first"
f" {steps_trained_in_current_epoch} batches in the first epoch."
)
if self.is_local_process_zero() and not args.disable_tqdm and skip_first_batches is None:
steps_trained_progress_bar = tqdm(total=steps_trained_in_current_epoch)
steps_trained_progress_bar.set_description("Skipping the first batches")
# Update the references
self.callback_handler.model = self.model
self.callback_handler.optimizer = self.optimizer
self.callback_handler.lr_scheduler = self.lr_scheduler
self.callback_handler.train_dataloader = train_dataloader
if self.hp_name is not None and self._trial is not None:
# use self._trial because the SigOpt/Optuna HPO only calls `_hp_search_setup(trial)` instead of passing the trial
# parameter to `train` when using DDP.
self.state.trial_name = self.hp_name(self._trial)
if trial is not None:
assignments = trial.assignments if self.hp_search_backend == HPSearchBackend.SIGOPT else trial
self.state.trial_params = hp_params(assignments)
else:
self.state.trial_params = None
# This should be the same if the state has been saved but in case the training arguments changed, it's safer
# to set this after the load.
self.state.max_steps = max_steps
self.state.num_train_epochs = num_train_epochs
self.state.is_local_process_zero = self.is_local_process_zero()
self.state.is_world_process_zero = self.is_world_process_zero()
# tr_loss is a tensor to avoid synchronization of TPUs through .item()
tr_loss = torch.tensor(0.0).to(args.device)
# _total_loss_scalar is updated every time .item() has to be called on tr_loss and stores the sum of all losses
self._total_loss_scalar = 0.0
self._globalstep_last_logged = self.state.global_step
model.zero_grad()
self.control = self.callback_handler.on_train_begin(args, self.state, self.control)
# Skip the first epochs_trained epochs to get the random state of the dataloader at the right point.
if not args.ignore_data_skip:
for epoch in range(epochs_trained):
is_random_sampler = hasattr(train_dataloader, "sampler") and isinstance(
train_dataloader.sampler, RandomSampler
)
if is_torch_less_than_1_11 or not is_random_sampler:
# We just need to begin an iteration to create the randomization of the sampler.
# That was before PyTorch 1.11 however...
for _ in train_dataloader:
break
else:
# Otherwise we need to iterate through the whole sampler, because a random operation is added
# at the very end of its iteration.
_ = list(train_dataloader.sampler)
total_batched_samples = 0
for epoch in range(epochs_trained, num_train_epochs):
if isinstance(train_dataloader, DataLoader) and isinstance(train_dataloader.sampler, DistributedSampler):
train_dataloader.sampler.set_epoch(epoch)
elif hasattr(train_dataloader, "dataset") and isinstance(train_dataloader.dataset, IterableDatasetShard):
train_dataloader.dataset.set_epoch(epoch)
if is_torch_tpu_available():
parallel_loader = pl.ParallelLoader(train_dataloader, [args.device]).per_device_loader(args.device)
epoch_iterator = parallel_loader
else:
epoch_iterator = train_dataloader
# Reset the past mems state at the beginning of each epoch if necessary.
if args.past_index >= 0:
self._past = None
steps_in_epoch = (
len(epoch_iterator)
if len_dataloader is not None
else args.max_steps * args.gradient_accumulation_steps
)
self.control = self.callback_handler.on_epoch_begin(args, self.state, self.control)
if epoch == epochs_trained and resume_from_checkpoint is not None and steps_trained_in_current_epoch == 0:
self._load_rng_state(resume_from_checkpoint)
rng_to_sync = False
steps_skipped = 0
if skip_first_batches is not None and steps_trained_in_current_epoch > 0:
epoch_iterator = skip_first_batches(epoch_iterator, steps_trained_in_current_epoch)
steps_skipped = steps_trained_in_current_epoch
steps_trained_in_current_epoch = 0
rng_to_sync = True
step = -1
for step, inputs in enumerate(epoch_iterator):
total_batched_samples += 1
if rng_to_sync:
self._load_rng_state(resume_from_checkpoint)
rng_to_sync = False
# Skip past any already trained steps if resuming training
if steps_trained_in_current_epoch > 0:
steps_trained_in_current_epoch -= 1
if steps_trained_progress_bar is not None:
steps_trained_progress_bar.update(1)
if steps_trained_in_current_epoch == 0:
self._load_rng_state(resume_from_checkpoint)
continue
elif steps_trained_progress_bar is not None:
steps_trained_progress_bar.close()
steps_trained_progress_bar = None
if step % args.gradient_accumulation_steps == 0:
self.control = self.callback_handler.on_step_begin(args, self.state, self.control)
if (
(total_batched_samples % args.gradient_accumulation_steps != 0)
and args.local_rank != -1
and args._no_sync_in_gradient_accumulation
):
# Avoid unnecessary DDP synchronization since there will be no backward pass on this example.
with model.no_sync():
tr_loss_step = self.training_step(model, inputs)
else:
tr_loss_step = self.training_step(model, inputs)
if (
args.logging_nan_inf_filter
and not is_torch_tpu_available()
and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step))
):
# if loss is nan or inf simply add the average of previous logged losses
tr_loss += tr_loss / (1 + self.state.global_step - self._globalstep_last_logged)
else:
tr_loss += tr_loss_step
self.current_flos += float(self.floating_point_ops(inputs))
# Optimizer step for deepspeed must be called on every step regardless of the value of gradient_accumulation_steps
if self.deepspeed:
self.deepspeed.step()
if total_batched_samples % args.gradient_accumulation_steps == 0 or (
# last step in epoch but step is always smaller than gradient_accumulation_steps
steps_in_epoch <= args.gradient_accumulation_steps
and (step + 1) == steps_in_epoch
):
# Gradient clipping
if args.max_grad_norm is not None and args.max_grad_norm > 0 and not self.deepspeed:
# deepspeed does its own clipping
if self.do_grad_scaling:
# Reduce gradients first for XLA
if is_torch_tpu_available():
gradients = xm._fetch_gradients(self.optimizer)
xm.all_reduce("sum", gradients, scale=1.0 / xm.xrt_world_size())
# AMP: gradients need unscaling
self.scaler.unscale_(self.optimizer)
if is_sagemaker_mp_enabled() and args.fp16:
self.optimizer.clip_master_grads(args.max_grad_norm)
elif hasattr(self.optimizer, "clip_grad_norm"):
# Some optimizers (like the sharded optimizer) have a specific way to do gradient clipping
self.optimizer.clip_grad_norm(args.max_grad_norm)
elif hasattr(model, "clip_grad_norm_"):
# Some models (like FullyShardedDDP) have a specific way to do gradient clipping
model.clip_grad_norm_(args.max_grad_norm)
else:
# Revert to normal clipping otherwise, handling Apex or full precision
nn.utils.clip_grad_norm_(
amp.master_params(self.optimizer) if self.use_apex else model.parameters(),
args.max_grad_norm,
)
# Optimizer step
optimizer_was_run = True
if self.deepspeed:
pass # called outside the loop
elif is_torch_tpu_available():
if self.do_grad_scaling:
self.scaler.step(self.optimizer)
self.scaler.update()
else:
xm.optimizer_step(self.optimizer)
elif self.do_grad_scaling:
scale_before = self.scaler.get_scale()
self.scaler.step(self.optimizer)
self.scaler.update()
scale_after = self.scaler.get_scale()
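# torch.cuda.amp.GradScaler shrinks its scale when it skips a step because of inf/NaN gradients,
# so an unchanged (or grown) scale means the optimizer step actually ran.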
optimizer_was_run = scale_before <= scale_after
else:
self.optimizer.step()
if optimizer_was_run and not self.deepspeed:
self.lr_scheduler.step()
model.zero_grad()
self.state.global_step += 1
self.state.epoch = epoch + (step + 1 + steps_skipped) / steps_in_epoch
self.control = self.callback_handler.on_step_end(args, self.state, self.control)
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
else:
self.control = self.callback_handler.on_substep_end(args, self.state, self.control)
if self.control.should_epoch_stop or self.control.should_training_stop:
break
if step < 0:
logger.warning(
"There seems to be not a single sample in your epoch_iterator, stopping training at step"
f" {self.state.global_step}! This is expected if you're using an IterableDataset and set"
f" num_steps ({max_steps}) higher than the number of available samples."
)
self.control.should_training_stop = True
self.control = self.callback_handler.on_epoch_end(args, self.state, self.control)
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
if DebugOption.TPU_METRICS_DEBUG in self.args.debug:
if is_torch_tpu_available():
# tpu-comment: Logging debug metrics for PyTorch/XLA (compile, execute times, ops, etc.)
xm.master_print(met.metrics_report())
else:
logger.warning(
"You enabled PyTorch/XLA debug metrics but you don't have a TPU "
"configured. Check your training configuration if this is unexpected."
)
if self.control.should_training_stop:
break
if args.past_index and hasattr(self, "_past"):
# Clean the state at the end of training
delattr(self, "_past")
logger.info("\n\nTraining completed. Do not forget to share your model on huggingface.co/models =)\n\n")
if args.load_best_model_at_end and self.state.best_model_checkpoint is not None:
# Wait for everyone to get here so we are sure the model has been saved by process 0.
if is_torch_tpu_available():
xm.rendezvous("load_best_model_at_end")
elif args.local_rank != -1:
dist.barrier()
elif is_sagemaker_mp_enabled():
smp.barrier()
self._load_best_model()
# add remaining tr_loss
self._total_loss_scalar += tr_loss.item()
train_loss = self._total_loss_scalar / self.state.global_step
metrics = speed_metrics("train", start_time, num_samples=num_train_samples, num_steps=self.state.max_steps)
self.store_flos()
metrics["total_flos"] = self.state.total_flos
metrics["train_loss"] = train_loss
self.is_in_train = False
self._memory_tracker.stop_and_update_metrics(metrics)
self.log(metrics)
run_dir = self._get_output_dir(trial)
checkpoints_sorted = self._sorted_checkpoints(use_mtime=False, output_dir=run_dir)
# Delete the last checkpoint when save_total_limit=1 if it's different from the best checkpoint and process allowed to save.
if self.args.should_save and self.state.best_model_checkpoint is not None and self.args.save_total_limit == 1:
for checkpoint in checkpoints_sorted:
if checkpoint != self.state.best_model_checkpoint:
logger.info(f"Deleting older checkpoint [{checkpoint}] due to args.save_total_limit")
shutil.rmtree(checkpoint)
self.control = self.callback_handler.on_train_end(args, self.state, self.control)
return TrainOutput(self.state.global_step, train_loss, metrics)
def _get_output_dir(self, trial):
if self.hp_search_backend is not None and trial is not None:
if self.hp_search_backend == HPSearchBackend.OPTUNA:
run_id = trial.number
elif self.hp_search_backend == HPSearchBackend.RAY:
from ray import tune
run_id = tune.get_trial_id()
elif self.hp_search_backend == HPSearchBackend.SIGOPT:
run_id = trial.id
elif self.hp_search_backend == HPSearchBackend.WANDB:
import wandb
run_id = wandb.run.id
run_name = self.hp_name(trial) if self.hp_name is not None else f"run-{run_id}"
run_dir = os.path.join(self.args.output_dir, run_name)
else:
run_dir = self.args.output_dir
return run_dir
def _load_from_checkpoint(self, resume_from_checkpoint, model=None):
if model is None:
model = self.model
if not os.path.isfile(os.path.join(resume_from_checkpoint, WEIGHTS_NAME)) and not os.path.isfile(
os.path.join(resume_from_checkpoint, WEIGHTS_INDEX_NAME)
):
raise ValueError(f"Can't find a valid checkpoint at {resume_from_checkpoint}")
logger.info(f"Loading model from {resume_from_checkpoint}.")
if os.path.isfile(os.path.join(resume_from_checkpoint, CONFIG_NAME)):
config = PretrainedConfig.from_json_file(os.path.join(resume_from_checkpoint, CONFIG_NAME))
checkpoint_version = config.transformers_version
if checkpoint_version is not None and checkpoint_version != __version__:
logger.warning(
f"You are resuming training from a checkpoint trained with {checkpoint_version} of "
f"Transformers but your current version is {__version__}. This is not recommended and could "
"yield to errors or unwanted behaviors."
)
if os.path.isfile(os.path.join(resume_from_checkpoint, WEIGHTS_NAME)):
# If the model is on the GPU, it still works!
if is_sagemaker_mp_enabled():
if os.path.isfile(os.path.join(resume_from_checkpoint, "user_content.pt")):
# If the 'user_content.pt' file exists, load with the new smp api.
# Checkpoint must have been saved with the new smp api.
smp.resume_from_checkpoint(
path=resume_from_checkpoint, tag=WEIGHTS_NAME, partial=False, load_optimizer=False
)
else:
# If the 'user_content.pt' file does NOT exist, load with the old smp api.
# Checkpoint must have been saved with the old smp api.
if hasattr(self.args, "fp16") and self.args.fp16 is True:
logger.warning(
"Enabling FP16 and loading from smp < 1.10 checkpoint together is not suppported."
)
state_dict = torch.load(os.path.join(resume_from_checkpoint, WEIGHTS_NAME), map_location="cpu")
# Required for smp to not auto-translate state_dict from hf to smp (is already smp).
state_dict["_smp_is_partial"] = False
load_result = model.load_state_dict(state_dict, strict=True)
# release memory
del state_dict
else:
# We load the model state dict on the CPU to avoid an OOM error.
state_dict = torch.load(os.path.join(resume_from_checkpoint, WEIGHTS_NAME), map_location="cpu")
# workaround for FSDP bug https://github.com/pytorch/pytorch/issues/82963
# which takes *args instead of **kwargs
load_result = model.load_state_dict(state_dict, False)
# release memory
del state_dict
self._issue_warnings_after_load(load_result)
else:
# We load the sharded checkpoint
load_result = load_sharded_checkpoint(model, resume_from_checkpoint, strict=is_sagemaker_mp_enabled())
if not is_sagemaker_mp_enabled():
self._issue_warnings_after_load(load_result)
def _load_best_model(self):
logger.info(f"Loading best model from {self.state.best_model_checkpoint} (score: {self.state.best_metric}).")
best_model_path = os.path.join(self.state.best_model_checkpoint, WEIGHTS_NAME)
model = self.model_wrapped if is_sagemaker_mp_enabled() else self.model
if os.path.exists(best_model_path):
if self.deepspeed:
if self.model_wrapped is not None:
# this removes the pre-hooks from the previous engine
self.model_wrapped.destroy()
self.model_wrapped = None
# temp hack until Deepspeed fixes the problem with resume from an existing engine that did some stepping
deepspeed_engine, optimizer, lr_scheduler = deepspeed_init(
self,
num_training_steps=self.args.max_steps,
resume_from_checkpoint=self.state.best_model_checkpoint,
)
self.model = deepspeed_engine.module
self.model_wrapped = deepspeed_engine
self.deepspeed = deepspeed_engine
self.optimizer = optimizer
self.lr_scheduler = lr_scheduler
else:
if is_sagemaker_mp_enabled():
if os.path.isfile(os.path.join(self.state.best_model_checkpoint, "user_content.pt")):
# If the 'user_content.pt' file exists, load with the new smp api.
# Checkpoint must have been saved with the new smp api.
smp.resume_from_checkpoint(
path=self.state.best_model_checkpoint,
tag=WEIGHTS_NAME,
partial=False,
load_optimizer=False,
)
else:
# If the 'user_content.pt' file does NOT exist, load with the old smp api.
# Checkpoint must have been saved with the old smp api.
state_dict = torch.load(best_model_path, map_location="cpu")
state_dict["_smp_is_partial"] = False
load_result = model.load_state_dict(state_dict, strict=True)
else:
# We load the model state dict on the CPU to avoid an OOM error.
state_dict = torch.load(best_model_path, map_location="cpu")
# If the model is on the GPU, it still works!
# workaround for FSDP bug https://github.com/pytorch/pytorch/issues/82963
# which takes *args instead of **kwargs
load_result = model.load_state_dict(state_dict, False)
if not is_sagemaker_mp_enabled():
self._issue_warnings_after_load(load_result)
elif os.path.exists(os.path.join(self.state.best_model_checkpoint, WEIGHTS_INDEX_NAME)):
load_result = load_sharded_checkpoint(
model, self.state.best_model_checkpoint, strict=is_sagemaker_mp_enabled()
)
if not is_sagemaker_mp_enabled():
self._issue_warnings_after_load(load_result)
else:
logger.warning(
f"Could not locate the best model at {best_model_path}, if you are running a distributed training "
"on multiple nodes, you should activate `--save_on_each_node`."
)
def _issue_warnings_after_load(self, load_result):
if len(load_result.missing_keys) != 0:
if self.model._keys_to_ignore_on_save is not None and set(load_result.missing_keys) == set(
self.model._keys_to_ignore_on_save
):
self.model.tie_weights()
else:
logger.warning(f"There were missing keys in the checkpoint model loaded: {load_result.missing_keys}.")
if len(load_result.unexpected_keys) != 0:
logger.warning(
f"There were unexpected keys in the checkpoint model loaded: {load_result.unexpected_keys}."
)
def _maybe_log_save_evaluate(self, tr_loss, model, trial, epoch, ignore_keys_for_eval):
if self.control.should_log:
if is_torch_tpu_available():
xm.mark_step()
logs: Dict[str, float] = {}
# all_gather + mean() to get average loss over all processes
tr_loss_scalar = self._nested_gather(tr_loss).mean().item()
# reset tr_loss to zero
tr_loss -= tr_loss
logs["loss"] = round(tr_loss_scalar / (self.state.global_step - self._globalstep_last_logged), 4)
logs["learning_rate"] = self._get_learning_rate()
self._total_loss_scalar += tr_loss_scalar
self._globalstep_last_logged = self.state.global_step
self.store_flos()
self.log(logs)
metrics = None
if self.control.should_evaluate:
if isinstance(self.eval_dataset, dict):
for eval_dataset_name, eval_dataset in self.eval_dataset.items():
metrics = self.evaluate(
eval_dataset=eval_dataset,
ignore_keys=ignore_keys_for_eval,
metric_key_prefix=f"eval_{eval_dataset_name}",
)
else:
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
self._report_to_hp_search(trial, self.state.global_step, metrics)
if self.control.should_save:
self._save_checkpoint(model, trial, metrics=metrics)
self.control = self.callback_handler.on_save(self.args, self.state, self.control)
def _load_rng_state(self, checkpoint):
# Load RNG states from `checkpoint`
if checkpoint is None:
return
if self.args.world_size > 1:
process_index = self.args.process_index
rng_file = os.path.join(checkpoint, f"rng_state_{process_index}.pth")
if not os.path.isfile(rng_file):
logger.info(
f"Didn't find an RNG file for process {process_index}, if you are resuming a training that "
"wasn't launched in a distributed fashion, reproducibility is not guaranteed."
)
return
else:
rng_file = os.path.join(checkpoint, "rng_state.pth")
if not os.path.isfile(rng_file):
logger.info(
"Didn't find an RNG file, if you are resuming a training that was launched in a distributed "
"fashion, reproducibility is not guaranteed."
)
return
checkpoint_rng_state = torch.load(rng_file)
random.setstate(checkpoint_rng_state["python"])
np.random.set_state(checkpoint_rng_state["numpy"])
torch.random.set_rng_state(checkpoint_rng_state["cpu"])
if torch.cuda.is_available():
if self.args.local_rank != -1:
torch.cuda.random.set_rng_state(checkpoint_rng_state["cuda"])
else:
try:
torch.cuda.random.set_rng_state_all(checkpoint_rng_state["cuda"])
except Exception as e:
logger.info(
f"Didn't manage to set back the RNG states of the GPU because of the following error:\n {e}"
"\nThis won't yield the same results as if the training had not been interrupted."
)
if is_torch_tpu_available():
xm.set_rng_state(checkpoint_rng_state["xla"])
def _save_checkpoint(self, model, trial, metrics=None):
# In all cases, including ddp/dp/deepspeed, self.model is always a reference to the model we
# want to save except FullyShardedDDP.
# assert unwrap_model(model) is self.model, "internal model should be a reference to self.model"
# Save model checkpoint
checkpoint_folder = f"{PREFIX_CHECKPOINT_DIR}-{self.state.global_step}"
if self.hp_search_backend is None and trial is None:
self.store_flos()
run_dir = self._get_output_dir(trial=trial)
output_dir = os.path.join(run_dir, checkpoint_folder)
self.save_model(output_dir, _internal_call=True)
if self.deepspeed:
# under zero3 the model file itself doesn't get saved since it's bogus, unless the deepspeed
# config `stage3_gather_16bit_weights_on_model_save` is True
self.deepspeed.save_checkpoint(output_dir)
# Save optimizer and scheduler
if self.sharded_ddp == ShardedDDPOption.SIMPLE:
self.optimizer.consolidate_state_dict()
if is_torch_tpu_available():
xm.rendezvous("saving_optimizer_states")
xm.save(self.optimizer.state_dict(), os.path.join(output_dir, OPTIMIZER_NAME))
with warnings.catch_warnings(record=True) as caught_warnings:
xm.save(self.lr_scheduler.state_dict(), os.path.join(output_dir, SCHEDULER_NAME))
reissue_pt_warnings(caught_warnings)
elif is_sagemaker_mp_enabled():
opt_state_dict = self.optimizer.local_state_dict(gather_if_shard=False)
smp.barrier()
if smp.rdp_rank() == 0 or smp.state.cfg.shard_optimizer_state:
smp.save(
opt_state_dict,
os.path.join(output_dir, OPTIMIZER_NAME),
partial=True,
v3=smp.state.cfg.shard_optimizer_state,
)
if self.args.should_save:
with warnings.catch_warnings(record=True) as caught_warnings:
torch.save(self.lr_scheduler.state_dict(), os.path.join(output_dir, SCHEDULER_NAME))
reissue_pt_warnings(caught_warnings)
if self.do_grad_scaling:
torch.save(self.scaler.state_dict(), os.path.join(output_dir, SCALER_NAME))
elif self.args.should_save and not self.deepspeed:
# deepspeed.save_checkpoint above saves model/optim/sched
torch.save(self.optimizer.state_dict(), os.path.join(output_dir, OPTIMIZER_NAME))
with warnings.catch_warnings(record=True) as caught_warnings:
torch.save(self.lr_scheduler.state_dict(), os.path.join(output_dir, SCHEDULER_NAME))
reissue_pt_warnings(caught_warnings)
if self.do_grad_scaling:
torch.save(self.scaler.state_dict(), os.path.join(output_dir, SCALER_NAME))
# Determine the new best metric / best model checkpoint
if metrics is not None and self.args.metric_for_best_model is not None:
metric_to_check = self.args.metric_for_best_model
if not metric_to_check.startswith("eval_"):
metric_to_check = f"eval_{metric_to_check}"
metric_value = metrics[metric_to_check]
operator = np.greater if self.args.greater_is_better else np.less
if (
self.state.best_metric is None
or self.state.best_model_checkpoint is None
or operator(metric_value, self.state.best_metric)
):
self.state.best_metric = metric_value
self.state.best_model_checkpoint = output_dir
# Save the Trainer state
if self.args.should_save:
self.state.save_to_json(os.path.join(output_dir, TRAINER_STATE_NAME))
# Save RNG state in non-distributed training
rng_states = {
"python": random.getstate(),
"numpy": np.random.get_state(),
"cpu": torch.random.get_rng_state(),
}
if torch.cuda.is_available():
if self.args.local_rank == -1:
# In non-distributed training, we save the global CUDA RNG state (will take care of DataParallel)
rng_states["cuda"] = torch.cuda.random.get_rng_state_all()
else:
rng_states["cuda"] = torch.cuda.random.get_rng_state()
if is_torch_tpu_available():
rng_states["xla"] = xm.get_rng_state()
# A process can arrive here before the process 0 has a chance to save the model, in which case output_dir may
# not yet exist.
os.makedirs(output_dir, exist_ok=True)
if self.args.world_size <= 1:
torch.save(rng_states, os.path.join(output_dir, "rng_state.pth"))
else:
torch.save(rng_states, os.path.join(output_dir, f"rng_state_{self.args.process_index}.pth"))
if self.args.push_to_hub:
self._push_from_checkpoint(output_dir)
# Maybe delete some older checkpoints.
if self.args.should_save:
self._rotate_checkpoints(use_mtime=True, output_dir=run_dir)
def _load_optimizer_and_scheduler(self, checkpoint):
"""If optimizer and scheduler states exist, load them."""
if checkpoint is None:
return
if self.deepspeed:
# deepspeed loads optimizer/lr_scheduler together with the model in deepspeed_init
return
checkpoint_file_exists = (
glob.glob(os.path.join(checkpoint, OPTIMIZER_NAME) + "_*")
if is_sagemaker_mp_enabled()
else os.path.isfile(os.path.join(checkpoint, OPTIMIZER_NAME))
)
if checkpoint_file_exists and os.path.isfile(os.path.join(checkpoint, SCHEDULER_NAME)):
# Load in optimizer and scheduler states
if is_torch_tpu_available():
# On TPU we have to take some extra precautions to properly load the states on the right device.
optimizer_state = torch.load(os.path.join(checkpoint, OPTIMIZER_NAME), map_location="cpu")
with warnings.catch_warnings(record=True) as caught_warnings:
lr_scheduler_state = torch.load(os.path.join(checkpoint, SCHEDULER_NAME), map_location="cpu")
reissue_pt_warnings(caught_warnings)
xm.send_cpu_data_to_device(optimizer_state, self.args.device)
xm.send_cpu_data_to_device(lr_scheduler_state, self.args.device)
self.optimizer.load_state_dict(optimizer_state)
self.lr_scheduler.load_state_dict(lr_scheduler_state)
else:
map_location = "cpu" if is_sagemaker_mp_enabled() else self.args.device
if is_sagemaker_mp_enabled():
if os.path.isfile(os.path.join(checkpoint, "user_content.pt")):
# Optimizer checkpoint was saved with smp >= 1.10
def opt_load_hook(mod, opt):
opt.load_state_dict(smp.load(os.path.join(checkpoint, OPTIMIZER_NAME), partial=True))
else:
# Optimizer checkpoint was saved with smp < 1.10
def opt_load_hook(mod, opt):
if IS_SAGEMAKER_MP_POST_1_10:
opt.load_state_dict(
smp.load(os.path.join(checkpoint, OPTIMIZER_NAME), partial=True, back_compat=True)
)
else:
opt.load_state_dict(smp.load(os.path.join(checkpoint, OPTIMIZER_NAME), partial=True))
self.model_wrapped.register_post_step_hook(opt_load_hook)
else:
self.optimizer.load_state_dict(
torch.load(os.path.join(checkpoint, OPTIMIZER_NAME), map_location=map_location)
)
with warnings.catch_warnings(record=True) as caught_warnings:
self.lr_scheduler.load_state_dict(torch.load(os.path.join(checkpoint, SCHEDULER_NAME)))
reissue_pt_warnings(caught_warnings)
if self.do_grad_scaling and os.path.isfile(os.path.join(checkpoint, SCALER_NAME)):
self.scaler.load_state_dict(torch.load(os.path.join(checkpoint, SCALER_NAME)))
def hyperparameter_search(
self,
hp_space: Optional[Callable[["optuna.Trial"], Dict[str, float]]] = None,
compute_objective: Optional[Callable[[Dict[str, float]], float]] = None,
n_trials: int = 20,
direction: str = "minimize",
backend: Optional[Union["str", HPSearchBackend]] = None,
hp_name: Optional[Callable[["optuna.Trial"], str]] = None,
**kwargs,
) -> BestRun:
"""
Launch a hyperparameter search using `optuna`, `Ray Tune`, `SigOpt` or `Weights & Biases`. The optimized
quantity is determined by `compute_objective`, which defaults to a function returning the evaluation loss
when no metric is provided and the sum of all metrics otherwise.
<Tip warning={true}>
To use this method, you need to have provided a `model_init` when initializing your [`Trainer`]: we need to
reinitialize the model at each new run. This is incompatible with the `optimizers` argument, so you need to
subclass [`Trainer`] and override the method [`~Trainer.create_optimizer_and_scheduler`] for custom
optimizer/scheduler.
</Tip>
Args:
hp_space (`Callable[["optuna.Trial"], Dict[str, float]]`, *optional*):
A function that defines the hyperparameter search space. Will default to
[`~trainer_utils.default_hp_space_optuna`] or [`~trainer_utils.default_hp_space_ray`] or
[`~trainer_utils.default_hp_space_sigopt`] depending on your backend.
compute_objective (`Callable[[Dict[str, float]], float]`, *optional*):
A function computing the objective to minimize or maximize from the metrics returned by the `evaluate`
method. Will default to [`~trainer_utils.default_compute_objective`].
n_trials (`int`, *optional*, defaults to 20):
The number of trial runs to test.
direction (`str`, *optional*, defaults to `"minimize"`):
Whether to optimize for greater or lower objective values. Can be `"minimize"` or `"maximize"`; you should pick
`"minimize"` when optimizing the validation loss, `"maximize"` when optimizing one or several metrics.
backend (`str` or [`~training_utils.HPSearchBackend`], *optional*):
The backend to use for hyperparameter search. Will default to optuna or Ray Tune or SigOpt, depending
on which one is installed. If all are installed, will default to optuna.
hp_name (`Callable[["optuna.Trial"], str]`, *optional*):
A function that defines the trial/run name. Will default to None.
kwargs (`Dict[str, Any]`, *optional*):
Additional keyword arguments passed along to `optuna.create_study` or `ray.tune.run`. For more
information see:
- the documentation of
[optuna.create_study](https://optuna.readthedocs.io/en/stable/reference/generated/optuna.study.create_study.html)
- the documentation of [tune.run](https://docs.ray.io/en/latest/tune/api_docs/execution.html#tune-run)
- the documentation of [sigopt](https://app.sigopt.com/docs/endpoints/experiments/create)
Returns:
[`trainer_utils.BestRun`]: All the information about the best run. Experiment summary can be found in
`run_summary` attribute for Ray backend.
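Example:
A minimal sketch (assuming `optuna` is installed; `my_model_init`, `my_hp_space`, `training_args` and
`train_dataset` below are placeholders, not part of this API):
```python
def my_model_init(trial):
    return AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

def my_hp_space(trial):
    # only used by the optuna backend; other backends expect their own search-space format
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True),
        "num_train_epochs": trial.suggest_int("num_train_epochs", 1, 5),
    }

trainer = Trainer(model_init=my_model_init, args=training_args, train_dataset=train_dataset)
best_run = trainer.hyperparameter_search(
    hp_space=my_hp_space, direction="minimize", backend="optuna", n_trials=10
)
```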
"""
if backend is None:
backend = default_hp_search_backend()
if backend is None:
raise RuntimeError(
"At least one of optuna or ray should be installed. "
"To install optuna run `pip install optuna`. "
"To install ray run `pip install ray[tune]`. "
"To install sigopt run `pip install sigopt`."
)
backend = HPSearchBackend(backend)
if backend == HPSearchBackend.OPTUNA and not is_optuna_available():
raise RuntimeError("You picked the optuna backend, but it is not installed. Use `pip install optuna`.")
if backend == HPSearchBackend.RAY and not is_ray_tune_available():
raise RuntimeError(
"You picked the Ray Tune backend, but it is not installed. Use `pip install 'ray[tune]'`."
)
if backend == HPSearchBackend.SIGOPT and not is_sigopt_available():
raise RuntimeError("You picked the sigopt backend, but it is not installed. Use `pip install sigopt`.")
if backend == HPSearchBackend.WANDB and not is_wandb_available():
raise RuntimeError("You picked the wandb backend, but it is not installed. Use `pip install wandb`.")
self.hp_search_backend = backend
if self.model_init is None:
raise RuntimeError(
"To use hyperparameter search, you need to pass your model through a model_init function."
)
self.hp_space = default_hp_space[backend] if hp_space is None else hp_space
self.hp_name = hp_name
self.compute_objective = default_compute_objective if compute_objective is None else compute_objective
backend_dict = {
HPSearchBackend.OPTUNA: run_hp_search_optuna,
HPSearchBackend.RAY: run_hp_search_ray,
HPSearchBackend.SIGOPT: run_hp_search_sigopt,
HPSearchBackend.WANDB: run_hp_search_wandb,
}
best_run = backend_dict[backend](self, n_trials, direction, **kwargs)
self.hp_search_backend = None
return best_run
def log(self, logs: Dict[str, float]) -> None:
"""
Log `logs` on the various objects watching training.
Subclass and override this method to inject custom behavior.
Args:
logs (`Dict[str, float]`):
The values to log.
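Example:
A minimal sketch of the "subclass and override" pattern (the extra key below is purely illustrative):
```python
class MyTrainer(Trainer):
    def log(self, logs: Dict[str, float]) -> None:
        # attach a custom value to every log entry before delegating to the default behavior
        logs["my_custom_value"] = 0.0
        super().log(logs)
```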
"""
if self.state.epoch is not None:
logs["epoch"] = round(self.state.epoch, 2)
output = {**logs, **{"step": self.state.global_step}}
self.state.log_history.append(output)
self.control = self.callback_handler.on_log(self.args, self.state, self.control, logs)
def _prepare_input(self, data: Union[torch.Tensor, Any]) -> Union[torch.Tensor, Any]:
"""
Prepares one `data` before feeding it to the model, be it a tensor or a nested list/dictionary of tensors.
"""
if isinstance(data, Mapping):
return type(data)({k: self._prepare_input(v) for k, v in data.items()})
elif isinstance(data, (tuple, list)):
return type(data)(self._prepare_input(v) for v in data)
elif isinstance(data, torch.Tensor):
kwargs = {"device": self.args.device}
if self.deepspeed and (torch.is_floating_point(data) or torch.is_complex(data)):
# NLP models inputs are int/uint and those get adjusted to the right dtype of the
# embedding. Other models such as wav2vec2's inputs are already float and thus
# may need special handling to match the dtypes of the model
kwargs.update({"dtype": self.args.hf_deepspeed_config.dtype()})
return data.to(**kwargs)
return data
def _prepare_inputs(self, inputs: Dict[str, Union[torch.Tensor, Any]]) -> Dict[str, Union[torch.Tensor, Any]]:
"""
Prepare `inputs` before feeding them to the model, converting them to tensors if they are not already and
handling potential state.
"""
inputs = self._prepare_input(inputs)
if len(inputs) == 0:
raise ValueError(
"The batch received was empty, your model won't be able to train on it. Double-check that your "
f"training dataset contains keys expected by the model: {','.join(self._signature_columns)}."
)
if self.args.past_index >= 0 and self._past is not None:
inputs["mems"] = self._past
return inputs
def compute_loss_context_manager(self):
"""
A helper wrapper to group together context managers.
"""
return self.autocast_smart_context_manager()
def autocast_smart_context_manager(self, cache_enabled: Optional[bool] = True):
"""
A helper wrapper that creates an appropriate context manager for `autocast` while feeding it the desired
arguments, depending on the situation.
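Example:
A minimal sketch of how the returned context manager is meant to be used inside a custom training or
evaluation step (`model` and `inputs` are placeholders):
```python
with self.autocast_smart_context_manager():
    outputs = model(**inputs)
```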
"""
if self.use_cuda_amp or self.use_cpu_amp:
if is_torch_greater_or_equal_than_1_10:
ctx_manager = (
torch.cpu.amp.autocast(cache_enabled=cache_enabled, dtype=self.amp_dtype)
if self.use_cpu_amp
else torch.cuda.amp.autocast(cache_enabled=cache_enabled, dtype=self.amp_dtype)
)
else:
ctx_manager = torch.cuda.amp.autocast()
else:
ctx_manager = contextlib.nullcontext() if sys.version_info >= (3, 7) else contextlib.suppress()
return ctx_manager
def training_step(self, model: nn.Module, inputs: Dict[str, Union[torch.Tensor, Any]]) -> torch.Tensor:
"""
Perform a training step on a batch of inputs.
Subclass and override to inject custom behavior.
Args:
model (`nn.Module`):
The model to train.
inputs (`Dict[str, Union[torch.Tensor, Any]]`):
The inputs and targets of the model.
The dictionary will be unpacked before being fed to the model. Most models expect the targets under the
argument `labels`. Check your model's documentation for all accepted arguments.
Return:
`torch.Tensor`: The tensor with training loss on this batch.
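Example:
A minimal sketch of the "subclass and override" pattern (the attribute stored below is illustrative):
```python
class MyTrainer(Trainer):
    def training_step(self, model, inputs):
        loss = super().training_step(model, inputs)
        # keep track of the last per-step loss for custom monitoring
        self.last_step_loss = loss.item()
        return loss
```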
"""
model.train()
inputs = self._prepare_inputs(inputs)
if is_sagemaker_mp_enabled():
loss_mb = smp_forward_backward(model, inputs, self.args.gradient_accumulation_steps)
return loss_mb.reduce_mean().detach().to(self.args.device)
with self.compute_loss_context_manager():
loss = self.compute_loss(model, inputs)
if self.args.n_gpu > 1:
loss = loss.mean() # mean() to average on multi-gpu parallel training
if self.args.gradient_accumulation_steps > 1 and not self.deepspeed:
# deepspeed handles loss scaling by gradient_accumulation_steps in its `backward`
loss = loss / self.args.gradient_accumulation_steps
if self.do_grad_scaling:
self.scaler.scale(loss).backward()
elif self.use_apex:
with amp.scale_loss(loss, self.optimizer) as scaled_loss:
scaled_loss.backward()
elif self.deepspeed:
# loss gets scaled under gradient_accumulation_steps in deepspeed
loss = self.deepspeed.backward(loss)
else:
loss.backward()
return loss.detach()
def compute_loss(self, model, inputs, return_outputs=False):
"""
How the loss is computed by Trainer. By default, all models return the loss in the first element.
Subclass and override for custom behavior.
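Example:
An illustrative sketch of a custom weighted loss for a two-label classification model (the class weights
and the number of labels are placeholders):
```python
class WeightedLossTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        loss_fct = torch.nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0], device=logits.device))
        loss = loss_fct(logits.view(-1, 2), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```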
"""
if self.label_smoother is not None and "labels" in inputs:
labels = inputs.pop("labels")
else:
labels = None
outputs = model(**inputs)
# Save past state if it exists
# TODO: this needs to be fixed and made cleaner later.
if self.args.past_index >= 0:
self._past = outputs[self.args.past_index]
if labels is not None:
if unwrap_model(model)._get_name() in MODEL_FOR_CAUSAL_LM_MAPPING_NAMES.values():
loss = self.label_smoother(outputs, labels, shift_labels=True)
else:
loss = self.label_smoother(outputs, labels)
else:
if isinstance(outputs, dict) and "loss" not in outputs:
raise ValueError(
"The model did not return a loss from the inputs, only the following keys: "
f"{','.join(outputs.keys())}. For reference, the inputs it received are {','.join(inputs.keys())}."
)
# We don't use .loss here since the model may return tuples instead of ModelOutput.
loss = outputs["loss"] if isinstance(outputs, dict) else outputs[0]
return (loss, outputs) if return_outputs else loss
def is_local_process_zero(self) -> bool:
"""
Whether or not this process is the local main process (e.g., the main process on this machine, when training
in a distributed fashion on several machines).
"""
return self.args.local_process_index == 0
def is_world_process_zero(self) -> bool:
"""
Whether or not this process is the global main process (when training in a distributed fashion on several
machines, this is only going to be `True` for one process).
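Example:
A minimal sketch, useful to guard side effects that should only happen once across all nodes:
```python
if trainer.is_world_process_zero():
    print("Only printed by the global main process.")
```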
"""
# Special case for SageMaker ModelParallel since there process_index is dp_process_index, not the global
# process index.
if is_sagemaker_mp_enabled():
return smp.rank() == 0
else:
return self.args.process_index == 0
def save_model(self, output_dir: Optional[str] = None, _internal_call: bool = False):
"""
Will save the model, so you can reload it using `from_pretrained()`.
Will only save from the main process.
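Example:
An illustrative sketch (the output path is a placeholder and the reload assumes the underlying model is a
`PreTrainedModel`):
```python
trainer.save_model("./my_finetuned_model")
# later, the saved weights can be reloaded with
model = AutoModel.from_pretrained("./my_finetuned_model")
```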
"""
if output_dir is None:
output_dir = self.args.output_dir
if is_torch_tpu_available():
self._save_tpu(output_dir)
elif is_sagemaker_mp_enabled():
# Calling the state_dict needs to be done on the wrapped model and on all processes.
os.makedirs(output_dir, exist_ok=True)
state_dict = self.model_wrapped.state_dict()
if self.args.should_save:
self._save(output_dir, state_dict=state_dict)
if IS_SAGEMAKER_MP_POST_1_10:
# 'user_content.pt' indicates model state_dict saved with smp >= 1.10
Path(os.path.join(output_dir, "user_content.pt")).touch()
elif (
ShardedDDPOption.ZERO_DP_2 in self.args.sharded_ddp
or ShardedDDPOption.ZERO_DP_3 in self.args.sharded_ddp
or self.fsdp is not None
):
state_dict = self.model.state_dict()
if self.args.should_save:
self._save(output_dir, state_dict=state_dict)
elif self.deepspeed:
# this takes care of everything as long as we aren't under zero3
if self.args.should_save:
self._save(output_dir)
if is_deepspeed_zero3_enabled():
# It's too complicated to try to override different places where the weights dump gets
# saved, so since under zero3 the file is bogus, simply delete it. The user should
# either use the deepspeed checkpoint to resume, or recover the full weights with the
# zero_to_fp32.py script stored in the checkpoint.
if self.args.should_save:
file = os.path.join(output_dir, WEIGHTS_NAME)
if os.path.isfile(file):
# logger.info(f"deepspeed zero3: removing {file}, see zero_to_fp32.py to recover weights")
os.remove(file)
# now save the real model if stage3_gather_16bit_weights_on_model_save=True
# if false it will not be saved.
# This must be called on all ranks
if not self.deepspeed.save_16bit_model(output_dir, WEIGHTS_NAME):
logger.warning(
"deepspeed.save_16bit_model didn't save the model, since"
" stage3_gather_16bit_weights_on_model_save=false. Saving the full checkpoint instead, use"
" zero_to_fp32.py to recover weights"
)
self.deepspeed.save_checkpoint(output_dir)
elif self.args.should_save:
self._save(output_dir)
# Push to the Hub when `save_model` is called by the user.
if self.args.push_to_hub and not _internal_call:
self.push_to_hub(commit_message="Model save")
def _save_tpu(self, output_dir: Optional[str] = None):
output_dir = output_dir if output_dir is not None else self.args.output_dir
logger.info(f"Saving model checkpoint to {output_dir}")
if xm.is_master_ordinal():
os.makedirs(output_dir, exist_ok=True)
torch.save(self.args, os.path.join(output_dir, TRAINING_ARGS_NAME))
# Save a trained model and configuration using `save_pretrained()`.
# They can then be reloaded using `from_pretrained()`
xm.rendezvous("saving_checkpoint")
if not isinstance(self.model, PreTrainedModel):
if isinstance(unwrap_model(self.model), PreTrainedModel):
unwrap_model(self.model).save_pretrained(
output_dir,
is_main_process=self.args.should_save,
state_dict=self.model.state_dict(),
save_function=xm.save,
)
else:
logger.info("Trainer.model is not a `PreTrainedModel`, only saving its state dict.")
state_dict = self.model.state_dict()
xm.save(state_dict, os.path.join(output_dir, WEIGHTS_NAME))
else:
self.model.save_pretrained(output_dir, is_main_process=self.args.should_save, save_function=xm.save)
if self.tokenizer is not None and self.args.should_save:
self.tokenizer.save_pretrained(output_dir)
def _save(self, output_dir: Optional[str] = None, state_dict=None):
# If we are executing this function, we are the process zero, so we don't check for that.
output_dir = output_dir if output_dir is not None else self.args.output_dir
os.makedirs(output_dir, exist_ok=True)
logger.info(f"Saving model checkpoint to {output_dir}")
# Save a trained model and configuration using `save_pretrained()`.
# They can then be reloaded using `from_pretrained()`
if not isinstance(self.model, PreTrainedModel):
if isinstance(unwrap_model(self.model), PreTrainedModel):
if state_dict is None:
state_dict = self.model.state_dict()
unwrap_model(self.model).save_pretrained(output_dir, state_dict=state_dict)
else:
logger.info("Trainer.model is not a `PreTrainedModel`, only saving its state dict.")
if state_dict is None:
state_dict = self.model.state_dict()
torch.save(state_dict, os.path.join(output_dir, WEIGHTS_NAME))
else:
if self.save_prefixencoder:
# Only save the trainable parameters (the prefix encoder), not the full model weights.
logger.info("Saving PrefixEncoder")
state_dict = self.model.state_dict()
filtered_state_dict = {}
for k, v in self.model.named_parameters():
if v.requires_grad:
filtered_state_dict[k] = state_dict[k]
self.model.save_pretrained(output_dir, state_dict=filtered_state_dict)
else:
logger.info("Saving the whole model")
self.model.save_pretrained(output_dir, state_dict=state_dict)
if self.tokenizer is not None:
self.tokenizer.save_pretrained(output_dir)
# Good practice: save your training arguments together with the trained model
torch.save(self.args, os.path.join(output_dir, TRAINING_ARGS_NAME))
def store_flos(self):
# Storing the number of floating-point operations that went into the model
if self.args.local_rank != -1:
self.state.total_flos += (
distributed_broadcast_scalars([self.current_flos], device=self.args.device).sum().item()
)
self.current_flos = 0
else:
self.state.total_flos += self.current_flos
self.current_flos = 0
def _sorted_checkpoints(
self, output_dir=None, checkpoint_prefix=PREFIX_CHECKPOINT_DIR, use_mtime=False
) -> List[str]:
ordering_and_checkpoint_path = []
glob_checkpoints = [str(x) for x in Path(output_dir).glob(f"{checkpoint_prefix}-*") if os.path.isdir(x)]
for path in glob_checkpoints:
if use_mtime:
ordering_and_checkpoint_path.append((os.path.getmtime(path), path))
else:
regex_match = re.match(f".*{checkpoint_prefix}-([0-9]+)", path)
if regex_match is not None and regex_match.groups() is not None:
ordering_and_checkpoint_path.append((int(regex_match.groups()[0]), path))
checkpoints_sorted = sorted(ordering_and_checkpoint_path)
checkpoints_sorted = [checkpoint[1] for checkpoint in checkpoints_sorted]
# Make sure we don't delete the best model.
if self.state.best_model_checkpoint is not None:
best_model_index = checkpoints_sorted.index(str(Path(self.state.best_model_checkpoint)))
for i in range(best_model_index, len(checkpoints_sorted) - 2):
checkpoints_sorted[i], checkpoints_sorted[i + 1] = checkpoints_sorted[i + 1], checkpoints_sorted[i]
return checkpoints_sorted
def _rotate_checkpoints(self, use_mtime=False, output_dir=None) -> None:
if self.args.save_total_limit is None or self.args.save_total_limit <= 0:
return
# Check if we should delete older checkpoint(s)
checkpoints_sorted = self._sorted_checkpoints(use_mtime=use_mtime, output_dir=output_dir)
if len(checkpoints_sorted) <= self.args.save_total_limit:
return
# If save_total_limit=1 with load_best_model_at_end=True, we could end up deleting the last checkpoint, which
# we don't do to allow resuming.
save_total_limit = self.args.save_total_limit
if (
self.state.best_model_checkpoint is not None
and self.args.save_total_limit == 1
and checkpoints_sorted[-1] != self.state.best_model_checkpoint
):
save_total_limit = 2
number_of_checkpoints_to_delete = max(0, len(checkpoints_sorted) - save_total_limit)
checkpoints_to_be_deleted = checkpoints_sorted[:number_of_checkpoints_to_delete]
for checkpoint in checkpoints_to_be_deleted:
logger.info(f"Deleting older checkpoint [{checkpoint}] due to args.save_total_limit")
shutil.rmtree(checkpoint, ignore_errors=True)
def evaluate(
self,
eval_dataset: Optional[Dataset] = None,
ignore_keys: Optional[List[str]] = None,
metric_key_prefix: str = "eval",
) -> Dict[str, float]:
"""
Run evaluation and return the metrics.
The calling script will be responsible for providing a method to compute metrics, as they are task-dependent
(pass it to the init `compute_metrics` argument).
You can also subclass and override this method to inject custom behavior.
Args:
eval_dataset (`Dataset`, *optional*):
Pass a dataset if you wish to override `self.eval_dataset`. If it is a [`~datasets.Dataset`], columns
not accepted by the `model.forward()` method are automatically removed. It must implement the `__len__`
method.
ignore_keys (`List[str]`, *optional*):
A list of keys in the output of your model (if it is a dictionary) that should be ignored when
gathering predictions.
metric_key_prefix (`str`, *optional*, defaults to `"eval"`):
An optional prefix to be used as the metrics key prefix. For example the metrics "bleu" will be named
"eval_bleu" if the prefix is "eval" (default)
Returns:
A dictionary containing the evaluation loss and the potential metrics computed from the predictions. The
dictionary also contains the epoch number which comes from the training state.
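Example:
An illustrative sketch (`small_eval_dataset` is a placeholder; the returned dictionary typically contains a
`"{metric_key_prefix}_loss"` entry):
```python
metrics = trainer.evaluate(eval_dataset=small_eval_dataset, metric_key_prefix="val")
print(metrics["val_loss"])
```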
"""
# memory metrics - must set up as early as possible
self._memory_tracker.start()
eval_dataloader = self.get_eval_dataloader(eval_dataset)
start_time = time.time()
eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop
output = eval_loop(
eval_dataloader,
description="Evaluation",
# No point gathering the predictions if there are no metrics, otherwise we defer to
# self.args.prediction_loss_only
prediction_loss_only=True if self.compute_metrics is None else None,
ignore_keys=ignore_keys,
metric_key_prefix=metric_key_prefix,
)
total_batch_size = self.args.eval_batch_size * self.args.world_size
if f"{metric_key_prefix}_jit_compilation_time" in output.metrics:
start_time += output.metrics[f"{metric_key_prefix}_jit_compilation_time"]
output.metrics.update(
speed_metrics(
metric_key_prefix,
start_time,
num_samples=output.num_samples,
num_steps=math.ceil(output.num_samples / total_batch_size),
)
)
self.log(output.metrics)
if DebugOption.TPU_METRICS_DEBUG in self.args.debug:
# tpu-comment: Logging debug metrics for PyTorch/XLA (compile, execute times, ops, etc.)
xm.master_print(met.metrics_report())
self.control = self.callback_handler.on_evaluate(self.args, self.state, self.control, output.metrics)
self._memory_tracker.stop_and_update_metrics(output.metrics)
return output.metrics
def predict(
self, test_dataset: Dataset, ignore_keys: Optional[List[str]] = None, metric_key_prefix: str = "test"
) -> PredictionOutput:
"""
Run prediction and return the predictions and potential metrics.
Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method
will also return metrics, like in `evaluate()`.
Args:
test_dataset (`Dataset`):
Dataset to run the predictions on. If it is a `datasets.Dataset`, columns not accepted by the
`model.forward()` method are automatically removed. Has to implement the method `__len__`.
ignore_keys (`List[str]`, *optional*):
A list of keys in the output of your model (if it is a dictionary) that should be ignored when
gathering predictions.
metric_key_prefix (`str`, *optional*, defaults to `"test"`):
An optional prefix to be used as the metrics key prefix. For example the metrics "bleu" will be named
"test_bleu" if the prefix is "test" (default)
<Tip>
If your predictions or labels have different sequence lengths (for instance because you're doing dynamic padding
in a token classification task) the predictions will be padded (on the right) to allow for concatenation into
one array. The padding index is -100.
</Tip>
Returns: *NamedTuple* A namedtuple with the following keys:
- predictions (`np.ndarray`): The predictions on `test_dataset`.
- label_ids (`np.ndarray`, *optional*): The labels (if the dataset contained some).
- metrics (`Dict[str, float]`, *optional*): The potential dictionary of metrics (if the dataset contained
labels).
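Example:
An illustrative sketch (`test_dataset` is a placeholder; the `argmax` assumes a classification model that
returns logits):
```python
import numpy as np

output = trainer.predict(test_dataset)
preds = np.argmax(output.predictions, axis=-1)
print(output.metrics)  # e.g. contains "test_loss" when the dataset has labels
```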
"""
# memory metrics - must set up as early as possible
self._memory_tracker.start()
test_dataloader = self.get_test_dataloader(test_dataset)
start_time = time.time()
eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop
output = eval_loop(
test_dataloader, description="Prediction", ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix
)
total_batch_size = self.args.eval_batch_size * self.args.world_size
if f"{metric_key_prefix}_jit_compilation_time" in output.metrics:
start_time += output.metrics[f"{metric_key_prefix}_jit_compilation_time"]
output.metrics.update(
speed_metrics(
metric_key_prefix,
start_time,
num_samples=output.num_samples,
num_steps=math.ceil(output.num_samples / total_batch_size),
)
)
self.control = self.callback_handler.on_predict(self.args, self.state, self.control, output.metrics)
self._memory_tracker.stop_and_update_metrics(output.metrics)
return PredictionOutput(predictions=output.predictions, label_ids=output.label_ids, metrics=output.metrics)
def evaluation_loop(
self,
dataloader: DataLoader,
description: str,
prediction_loss_only: Optional[bool] = None,
ignore_keys: Optional[List[str]] = None,
metric_key_prefix: str = "eval",
) -> EvalLoopOutput:
"""
Prediction/evaluation loop, shared by `Trainer.evaluate()` and `Trainer.predict()`.
Works both with or without labels.
"""
args = self.args
prediction_loss_only = prediction_loss_only if prediction_loss_only is not None else args.prediction_loss_only
# if eval is called w/o train init deepspeed here
if args.deepspeed and not self.deepspeed:
# XXX: eval doesn't have `resume_from_checkpoint` arg but we should be able to do eval
# from the checkpoint eventually
deepspeed_engine, _, _ = deepspeed_init(
self, num_training_steps=0, resume_from_checkpoint=None, inference=True
)
self.model = deepspeed_engine.module
self.model_wrapped = deepspeed_engine
self.deepspeed = deepspeed_engine
model = self._wrap_model(self.model, training=False, dataloader=dataloader)
# if full fp16 or bf16 eval is wanted and this ``evaluation`` or ``predict`` isn't called
# while ``train`` is running, cast it to the right dtype first and then put on device
if not self.is_in_train:
if args.fp16_full_eval:
model = model.to(dtype=torch.float16, device=args.device)
elif args.bf16_full_eval:
model = model.to(dtype=torch.bfloat16, device=args.device)
batch_size = self.args.eval_batch_size
logger.info(f"***** Running {description} *****")
if has_length(dataloader):
logger.info(f" Num examples = {self.num_examples(dataloader)}")
else:
logger.info(" Num examples: Unknown")
logger.info(f" Batch size = {batch_size}")
model.eval()
self.callback_handler.eval_dataloader = dataloader
# Do this before wrapping.
eval_dataset = getattr(dataloader, "dataset", None)
if is_torch_tpu_available():
dataloader = pl.ParallelLoader(dataloader, [args.device]).per_device_loader(args.device)
if args.past_index >= 0:
self._past = None
# Initialize containers
# losses/preds/labels on GPU/TPU (accumulated for eval_accumulation_steps)
losses_host = None
preds_host = None
labels_host = None
inputs_host = None
# losses/preds/labels on CPU (final containers)
all_losses = None
all_preds = None
all_labels = None
all_inputs = None
# Will be useful when we have an iterable dataset whose length we don't know in advance.
observed_num_examples = 0
# Main evaluation loop
for step, inputs in enumerate(dataloader):
# Update the observed num examples
observed_batch_size = find_batch_size(inputs)
if observed_batch_size is not None:
observed_num_examples += observed_batch_size
# For batch samplers, batch_size is not known by the dataloader in advance.
if batch_size is None:
batch_size = observed_batch_size
# Prediction step
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
inputs_decode = self._prepare_input(inputs["input_ids"]) if args.include_inputs_for_metrics else None
if is_torch_tpu_available():
xm.mark_step()
# Update containers on host
if loss is not None:
losses = self._nested_gather(loss.repeat(batch_size))
losses_host = losses if losses_host is None else torch.cat((losses_host, losses), dim=0)
if labels is not None:
labels = self._pad_across_processes(labels)
labels = self._nested_gather(labels)
labels_host = labels if labels_host is None else nested_concat(labels_host, labels, padding_index=-100)
if inputs_decode is not None:
inputs_decode = self._pad_across_processes(inputs_decode)
inputs_decode = self._nested_gather(inputs_decode)
inputs_host = (
inputs_decode
if inputs_host is None
else nested_concat(inputs_host, inputs_decode, padding_index=-100)
)
if logits is not None:
logits = self._pad_across_processes(logits)
logits = self._nested_gather(logits)
if self.preprocess_logits_for_metrics is not None:
logits = self.preprocess_logits_for_metrics(logits, labels)
preds_host = logits if preds_host is None else nested_concat(preds_host, logits, padding_index=-100)
self.control = self.callback_handler.on_prediction_step(args, self.state, self.control)
# Gather all tensors and put them back on the CPU if we have done enough accumulation steps.
if args.eval_accumulation_steps is not None and (step + 1) % args.eval_accumulation_steps == 0:
if losses_host is not None:
losses = nested_numpify(losses_host)
all_losses = losses if all_losses is None else np.concatenate((all_losses, losses), axis=0)
if preds_host is not None:
logits = nested_numpify(preds_host)
all_preds = logits if all_preds is None else nested_concat(all_preds, logits, padding_index=-100)
if inputs_host is not None:
inputs_decode = nested_numpify(inputs_host)
all_inputs = (
inputs_decode
if all_inputs is None
else nested_concat(all_inputs, inputs_decode, padding_index=-100)
)
if labels_host is not None:
labels = nested_numpify(labels_host)
all_labels = (
labels if all_labels is None else nested_concat(all_labels, labels, padding_index=-100)
)
# Set back to None to begin a new accumulation
losses_host, preds_host, inputs_host, labels_host = None, None, None, None
if args.past_index and hasattr(self, "_past"):
# Clean the state at the end of the evaluation loop
delattr(self, "_past")
# Gather all remaining tensors and put them back on the CPU
if losses_host is not None:
losses = nested_numpify(losses_host)
all_losses = losses if all_losses is None else np.concatenate((all_losses, losses), axis=0)
if preds_host is not None:
logits = nested_numpify(preds_host)
all_preds = logits if all_preds is None else nested_concat(all_preds, logits, padding_index=-100)
if inputs_host is not None:
inputs_decode = nested_numpify(inputs_host)
all_inputs = (
inputs_decode if all_inputs is None else nested_concat(all_inputs, inputs_decode, padding_index=-100)
)
if labels_host is not None:
labels = nested_numpify(labels_host)
all_labels = labels if all_labels is None else nested_concat(all_labels, labels, padding_index=-100)
# Number of samples
if has_length(eval_dataset):
num_samples = len(eval_dataset)
# The instance check is weird and does not actually check for the type, but whether the dataset has the right
# methods. Therefore we need to make sure it also has the attribute.
elif isinstance(eval_dataset, IterableDatasetShard) and getattr(eval_dataset, "num_examples", 0) > 0:
num_samples = eval_dataset.num_examples
else:
if has_length(dataloader):
num_samples = self.num_examples(dataloader)
else: # both len(dataloader.dataset) and len(dataloader) fail
num_samples = observed_num_examples
if num_samples == 0 and observed_num_examples > 0:
num_samples = observed_num_examples
# Number of losses has been rounded to a multiple of batch_size and in a distributed training, the number of
# samples has been rounded to a multiple of batch_size, so we truncate.
if all_losses is not None:
all_losses = all_losses[:num_samples]
if all_preds is not None:
all_preds = nested_truncate(all_preds, num_samples)
if all_labels is not None:
all_labels = nested_truncate(all_labels, num_samples)
if all_inputs is not None:
all_inputs = nested_truncate(all_inputs, num_samples)
# Metrics!
if self.compute_metrics is not None and all_preds is not None and all_labels is not None:
if args.include_inputs_for_metrics:
metrics = self.compute_metrics(
EvalPrediction(predictions=all_preds, label_ids=all_labels, inputs=all_inputs)
)
else:
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
else:
metrics = {}
# To be JSON-serializable, we need to remove numpy types or zero-d tensors
metrics = denumpify_detensorize(metrics)
if all_losses is not None:
metrics[f"{metric_key_prefix}_loss"] = all_losses.mean().item()
if hasattr(self, "jit_compilation_time"):
metrics[f"{metric_key_prefix}_jit_compilation_time"] = self.jit_compilation_time
# Prefix all keys with metric_key_prefix + '_'
for key in list(metrics.keys()):
if not key.startswith(f"{metric_key_prefix}_"):
metrics[f"{metric_key_prefix}_{key}"] = metrics.pop(key)
return EvalLoopOutput(predictions=all_preds, label_ids=all_labels, metrics=metrics, num_samples=num_samples)
def _nested_gather(self, tensors, name=None):
"""
Gather the value of `tensors` (a tensor or a list/tuple of nested tensors) from all processes so they can be
concatenated afterwards.
"""
if tensors is None:
return
if is_torch_tpu_available():
if name is None:
name = "nested_gather"
tensors = nested_xla_mesh_reduce(tensors, name)
elif is_sagemaker_mp_enabled():
tensors = smp_gather(tensors)
elif self.args.local_rank != -1:
tensors = distributed_concat(tensors)
return tensors
# Copied from Accelerate.
def _pad_across_processes(self, tensor, pad_index=-100):
"""
Recursively pad the tensors in a nested list/tuple/dictionary of tensors from all devices to the same size so
they can safely be gathered.
"""
if isinstance(tensor, (list, tuple)):
return type(tensor)(self._pad_across_processes(t, pad_index=pad_index) for t in tensor)
elif isinstance(tensor, dict):
return type(tensor)({k: self._pad_across_processes(v, pad_index=pad_index) for k, v in tensor.items()})
elif not isinstance(tensor, torch.Tensor):
raise TypeError(
f"Can't pad the values of type {type(tensor)}, only of nested list/tuple/dicts of tensors."
)
if len(tensor.shape) < 2:
return tensor
# Gather all sizes
size = torch.tensor(tensor.shape, device=tensor.device)[None]
sizes = self._nested_gather(size).cpu()
max_size = max(s[1] for s in sizes)
# When extracting XLA graphs for compilation, max_size is 0,
# so use inequality to avoid errors.
if tensor.shape[1] >= max_size:
return tensor
# Then pad to the maximum size
old_size = tensor.shape
new_size = list(old_size)
new_size[1] = max_size
new_tensor = tensor.new_zeros(tuple(new_size)) + pad_index
new_tensor[:, : old_size[1]] = tensor
return new_tensor
def prediction_step(
self,
model: nn.Module,
inputs: Dict[str, Union[torch.Tensor, Any]],
prediction_loss_only: bool,
ignore_keys: Optional[List[str]] = None,
) -> Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]]:
"""
Perform an evaluation step on `model` using `inputs`.
Subclass and override to inject custom behavior.
Args:
model (`nn.Module`):
The model to evaluate.
inputs (`Dict[str, Union[torch.Tensor, Any]]`):
The inputs and targets of the model.
The dictionary will be unpacked before being fed to the model. Most models expect the targets under the
argument `labels`. Check your model's documentation for all accepted arguments.
prediction_loss_only (`bool`):
Whether or not to return the loss only.
ignore_keys (`List[str]`, *optional*):
A list of keys in the output of your model (if it is a dictionary) that should be ignored when
gathering predictions.
Return:
Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]]: A tuple with the loss,
logits and labels (each being optional).
"""
has_labels = False if len(self.label_names) == 0 else all(inputs.get(k) is not None for k in self.label_names)
# For CLIP-like models capable of returning loss values.
# If `return_loss` is not specified or being `None` in `inputs`, we check if the default value of `return_loss`
# is `True` in `model.forward`.
return_loss = inputs.get("return_loss", None)
if return_loss is None:
return_loss = self.can_return_loss
loss_without_labels = True if len(self.label_names) == 0 and return_loss else False
inputs = self._prepare_inputs(inputs)
if ignore_keys is None:
if hasattr(self.model, "config"):
ignore_keys = getattr(self.model.config, "keys_to_ignore_at_inference", [])
else:
ignore_keys = []
# labels may be popped when computing the loss (label smoothing for instance) so we grab them first.
if has_labels or loss_without_labels:
labels = nested_detach(tuple(inputs.get(name) for name in self.label_names))
if len(labels) == 1:
labels = labels[0]
else:
labels = None
with torch.no_grad():
if is_sagemaker_mp_enabled():
raw_outputs = smp_forward_only(model, inputs)
if has_labels or loss_without_labels:
if isinstance(raw_outputs, dict):
loss_mb = raw_outputs["loss"]
logits_mb = tuple(v for k, v in raw_outputs.items() if k not in ignore_keys + ["loss"])
else:
loss_mb = raw_outputs[0]
logits_mb = raw_outputs[1:]
loss = loss_mb.reduce_mean().detach().cpu()
logits = smp_nested_concat(logits_mb)
else:
loss = None
if isinstance(raw_outputs, dict):
logits_mb = tuple(v for k, v in raw_outputs.items() if k not in ignore_keys)
else:
logits_mb = raw_outputs
logits = smp_nested_concat(logits_mb)
else:
if has_labels or loss_without_labels:
with self.compute_loss_context_manager():
loss, outputs = self.compute_loss(model, inputs, return_outputs=True)
loss = loss.mean().detach()
if isinstance(outputs, dict):
logits = tuple(v for k, v in outputs.items() if k not in ignore_keys + ["loss"])
else:
logits = outputs[1:]
else:
loss = None
with self.compute_loss_context_manager():
outputs = model(**inputs)
if isinstance(outputs, dict):
logits = tuple(v for k, v in outputs.items() if k not in ignore_keys)
else:
logits = outputs
# TODO: this needs to be fixed and made cleaner later.
if self.args.past_index >= 0:
self._past = outputs[self.args.past_index - 1]
if prediction_loss_only:
return (loss, None, None)
logits = nested_detach(logits)
if len(logits) == 1:
logits = logits[0]
return (loss, logits, labels)
def floating_point_ops(self, inputs: Dict[str, Union[torch.Tensor, Any]]):
"""
For models that inherit from [`PreTrainedModel`], uses the model's own `floating_point_ops` method to compute
the number of floating-point operations for every backward + forward pass. If using another model, either
implement such a method in the model or subclass and override this method.
Args:
inputs (`Dict[str, Union[torch.Tensor, Any]]`):
The inputs and targets of the model.
Returns:
`int`: The number of floating-point operations.
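Example:
An illustrative sketch of overriding this method for a model that does not implement `floating_point_ops`
(the `6 * parameters * tokens` estimate is a rough rule of thumb, not an exact count):
```python
class MyTrainer(Trainer):
    def floating_point_ops(self, inputs):
        n_params = sum(p.numel() for p in self.model.parameters())
        n_tokens = inputs["input_ids"].numel()
        return 6 * n_params * n_tokens
```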
"""
if hasattr(self.model, "floating_point_ops"):
return self.model.floating_point_ops(inputs)
else:
return 0
def init_git_repo(self, at_init: bool = False):
"""
Initializes a git repo in `self.args.hub_model_id`.
Args:
at_init (`bool`, *optional*, defaults to `False`):
Whether this function is called before any training or not. If `self.args.overwrite_output_dir` is
`True` and `at_init` is `True`, the path to the repo (which is `self.args.output_dir`) might be wiped
out.
"""
if not self.is_world_process_zero():
return
if self.args.hub_model_id is None:
repo_name = Path(self.args.output_dir).absolute().name
else:
repo_name = self.args.hub_model_id
if "/" not in repo_name:
repo_name = get_full_repo_name(repo_name, token=self.args.hub_token)
# Make sure the repo exists.
create_repo(repo_name, token=self.args.hub_token, private=self.args.hub_private_repo, exist_ok=True)
try:
self.repo = Repository(self.args.output_dir, clone_from=repo_name, token=self.args.hub_token)
except EnvironmentError:
if self.args.overwrite_output_dir and at_init:
# Try again after wiping output_dir
shutil.rmtree(self.args.output_dir)
self.repo = Repository(self.args.output_dir, clone_from=repo_name, token=self.args.hub_token)
else:
raise
self.repo.git_pull()
# By default, ignore the checkpoint folders
if (
not os.path.exists(os.path.join(self.args.output_dir, ".gitignore"))
and self.args.hub_strategy != HubStrategy.ALL_CHECKPOINTS
):
with open(os.path.join(self.args.output_dir, ".gitignore"), "w", encoding="utf-8") as writer:
writer.writelines(["checkpoint-*/"])
# Add "*.sagemaker" to .gitignore if using SageMaker
if os.environ.get("SM_TRAINING_ENV"):
self._add_sm_patterns_to_gitignore()
self.push_in_progress = None
def create_model_card(
self,
language: Optional[str] = None,
license: Optional[str] = None,
tags: Union[str, List[str], None] = None,
model_name: Optional[str] = None,
finetuned_from: Optional[str] = None,
tasks: Union[str, List[str], None] = None,
dataset_tags: Union[str, List[str], None] = None,
dataset: Union[str, List[str], None] = None,
dataset_args: Union[str, List[str], None] = None,
):
"""
Creates a draft of a model card using the information available to the `Trainer`.
Args:
language (`str`, *optional*):
The language of the model (if applicable)
license (`str`, *optional*):
The license of the model. Will default to the license of the pretrained model used, if the original
model given to the `Trainer` comes from a repo on the Hub.
tags (`str` or `List[str]`, *optional*):
Some tags to be included in the metadata of the model card.
model_name (`str`, *optional*):
The name of the model.
finetuned_from (`str`, *optional*):
The name of the model used to fine-tune this one (if applicable). Will default to the name of the repo
of the original model given to the `Trainer` (if it comes from the Hub).
tasks (`str` or `List[str]`, *optional*):
One or several task identifiers, to be included in the metadata of the model card.
dataset_tags (`str` or `List[str]`, *optional*):
One or several dataset tags, to be included in the metadata of the model card.
dataset (`str` or `List[str]`, *optional*):
One or several dataset identifiers, to be included in the metadata of the model card.
dataset_args (`str` or `List[str]`, *optional*):
One or several dataset arguments, to be included in the metadata of the model card.
"""
if not self.is_world_process_zero():
return
training_summary = TrainingSummary.from_trainer(
self,
language=language,
license=license,
tags=tags,
model_name=model_name,
finetuned_from=finetuned_from,
tasks=tasks,
dataset_tags=dataset_tags,
dataset=dataset,
dataset_args=dataset_args,
)
model_card = training_summary.to_model_card()
with open(os.path.join(self.args.output_dir, "README.md"), "w") as f:
f.write(model_card)
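# Usage sketch (illustrative, not part of the original source): every argument is optional
# metadata and the values below are hypothetical:
#
#     trainer.create_model_card(
#         language="en",
#         license="apache-2.0",
#         tags=["text-classification"],
#         finetuned_from="bert-base-uncased",
#         dataset="imdb",
#     )
#     # writes a draft README.md into `trainer.args.output_dir`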
def _push_from_checkpoint(self, checkpoint_folder):
# Only push from one node.
if not self.is_world_process_zero() or self.args.hub_strategy == HubStrategy.END:
return
# If we haven't finished the last push, we don't do this one.
if self.push_in_progress is not None and not self.push_in_progress.is_done:
return
output_dir = self.args.output_dir
# To avoid a new synchronization of all model weights, we just copy the file from the checkpoint folder
modeling_files = [CONFIG_NAME, WEIGHTS_NAME]
for modeling_file in modeling_files:
if os.path.isfile(os.path.join(checkpoint_folder, modeling_file)):
shutil.copy(os.path.join(checkpoint_folder, modeling_file), os.path.join(output_dir, modeling_file))
# Saving the tokenizer is fast and we don't know how many files it may have spawned, so we resave it to be sure.
if self.tokenizer is not None:
self.tokenizer.save_pretrained(output_dir)
# Same for the training arguments
torch.save(self.args, os.path.join(output_dir, TRAINING_ARGS_NAME))
try:
if self.args.hub_strategy == HubStrategy.CHECKPOINT:
# Temporarily move the checkpoint just saved for the push
tmp_checkpoint = os.path.join(output_dir, "last-checkpoint")
# We have to remove the "last-checkpoint" dir if it exists, otherwise the checkpoint is moved as a
# subfolder.
if os.path.isdir(tmp_checkpoint):
shutil.rmtree(tmp_checkpoint)
shutil.move(checkpoint_folder, tmp_checkpoint)
if self.args.save_strategy == IntervalStrategy.STEPS:
commit_message = f"Training in progress, step {self.state.global_step}"
else:
commit_message = f"Training in progress, epoch {int(self.state.epoch)}"
_, self.push_in_progress = self.repo.push_to_hub(
commit_message=commit_message, blocking=False, auto_lfs_prune=True
)
finally:
if self.args.hub_strategy == HubStrategy.CHECKPOINT:
# Move back the checkpoint to its place
shutil.move(tmp_checkpoint, checkpoint_folder)
def push_to_hub(self, commit_message: Optional[str] = "End of training", blocking: bool = True, **kwargs) -> str:
"""
Upload *self.model* and *self.tokenizer* to the 🤗 model hub on the repo *self.args.hub_model_id*.
Parameters:
commit_message (`str`, *optional*, defaults to `"End of training"`):
Message to commit while pushing.
blocking (`bool`, *optional*, defaults to `True`):
Whether the function should return only when the `git push` has finished.
kwargs:
Additional keyword arguments passed along to [`~Trainer.create_model_card`].
Returns:
The URL of the repository commit if `blocking=True`, or a tuple of that URL and an object tracking
the progress of the commit if `blocking=False`.
"""
# If a user calls manually `push_to_hub` with `self.args.push_to_hub = False`, we try to create the repo but
# it might fail.
if not hasattr(self, "repo"):
self.init_git_repo()
model_name = kwargs.pop("model_name", None)
if model_name is None and self.args.should_save:
if self.args.hub_model_id is None:
model_name = Path(self.args.output_dir).name
else:
model_name = self.args.hub_model_id.split("/")[-1]
# Needs to be executed on all processes for TPU training, but will only save on the process determined by
# self.args.should_save.
self.save_model(_internal_call=True)
# Only push from one node.
if not self.is_world_process_zero():
return
# Cancel any async push in progress if blocking=True. The commits will all be pushed together.
if blocking and self.push_in_progress is not None and not self.push_in_progress.is_done:
self.push_in_progress._process.kill()
self.push_in_progress = None
git_head_commit_url = self.repo.push_to_hub(
commit_message=commit_message, blocking=blocking, auto_lfs_prune=True
)
# Push the model card separately so it stays independent from the rest of the model
if self.args.should_save:
self.create_model_card(model_name=model_name, **kwargs)
try:
self.repo.push_to_hub(
commit_message="update model card README.md", blocking=blocking, auto_lfs_prune=True
)
except EnvironmentError as exc:
logger.error(f"Error pushing update to the model card. Please read logs and retry.\n{exc}")
return git_head_commit_url
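# Usage sketch (illustrative, not part of the original source): after (or instead of) training,
# model, tokenizer and model card can be uploaded in a single call; the commit message is arbitrary:
#
#     url = trainer.push_to_hub(commit_message="Initial fine-tuned version")
#     # with blocking=True (the default), `url` is the commit URL of the pushed model on the Hub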
#
# Deprecated code
#
def prediction_loop(
self,
dataloader: DataLoader,
description: str,
prediction_loss_only: Optional[bool] = None,
ignore_keys: Optional[List[str]] = None,
metric_key_prefix: str = "eval",
) -> EvalLoopOutput:
"""
Prediction/evaluation loop, shared by `Trainer.evaluate()` and `Trainer.predict()`.
Works both with or without labels.
"""
args = self.args
if not has_length(dataloader):
raise ValueError("dataloader must implement a working __len__")
prediction_loss_only = prediction_loss_only if prediction_loss_only is not None else args.prediction_loss_only
# If evaluate() is called without a preceding train(), initialize DeepSpeed here
if args.deepspeed and not self.deepspeed:
# XXX: eval doesn't have `resume_from_checkpoint` arg but we should be able to do eval
# from the checkpoint eventually
deepspeed_engine, _, _ = deepspeed_init(self, num_training_steps=0, resume_from_checkpoint=None)
self.model = deepspeed_engine.module
self.model_wrapped = deepspeed_engine
self.deepspeed = deepspeed_engine
# XXX: we don't need optim/sched for inference, but this needs to be sorted out, since
# for example the Z3-optimizer is a must for zero3 to work even for inference - what we
# don't need is the deepspeed basic optimizer which is self.optimizer.optimizer
deepspeed_engine.optimizer.optimizer = None
deepspeed_engine.lr_scheduler = None
model = self._wrap_model(self.model, training=False, dataloader=dataloader)
# if full fp16 or bf16 eval is wanted and this ``evaluation`` or ``predict`` isn't called
# while ``train`` is running, cast it to the right dtype first and then put on device
if not self.is_in_train:
if args.fp16_full_eval:
model = model.to(dtype=torch.float16, device=args.device)
elif args.bf16_full_eval:
model = model.to(dtype=torch.bfloat16, device=args.device)
batch_size = dataloader.batch_size
num_examples = self.num_examples(dataloader)
logger.info(f"***** Running {description} *****")
logger.info(f" Num examples = {num_examples}")
logger.info(f" Batch size = {batch_size}")
losses_host: torch.Tensor = None
preds_host: Union[torch.Tensor, List[torch.Tensor]] = None
labels_host: Union[torch.Tensor, List[torch.Tensor]] = None
inputs_host: Union[torch.Tensor, List[torch.Tensor]] = None
world_size = max(1, args.world_size)
eval_losses_gatherer = DistributedTensorGatherer(world_size, num_examples, make_multiple_of=batch_size)
if not prediction_loss_only:
# The actual number of eval samples can be greater than num_examples in distributed settings (when we pass
# a batch size to the sampler)
make_multiple_of = None
if hasattr(dataloader, "sampler") and isinstance(dataloader.sampler, SequentialDistributedSampler):
make_multiple_of = dataloader.sampler.batch_size
preds_gatherer = DistributedTensorGatherer(world_size, num_examples, make_multiple_of=make_multiple_of)
labels_gatherer = DistributedTensorGatherer(world_size, num_examples, make_multiple_of=make_multiple_of)
inputs_gatherer = DistributedTensorGatherer(world_size, num_examples, make_multiple_of=make_multiple_of)
model.eval()
if is_torch_tpu_available():
dataloader = pl.ParallelLoader(dataloader, [args.device]).per_device_loader(args.device)
if args.past_index >= 0:
self._past = None
self.callback_handler.eval_dataloader = dataloader
for step, inputs in enumerate(dataloader):
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
inputs_decode = self._prepare_input(inputs["input_ids"]) if args.include_inputs_for_metrics else None
if loss is not None:
losses = loss.repeat(batch_size)
losses_host = losses if losses_host is None else torch.cat((losses_host, losses), dim=0)
if logits is not None:
preds_host = logits if preds_host is None else nested_concat(preds_host, logits, padding_index=-100)
if labels is not None:
labels_host = labels if labels_host is None else nested_concat(labels_host, labels, padding_index=-100)
if inputs_decode is not None:
inputs_host = (
inputs_decode
if inputs_host is None
else nested_concat(inputs_host, inputs_decode, padding_index=-100)
)
self.control = self.callback_handler.on_prediction_step(args, self.state, self.control)
# Gather all tensors and put them back on the CPU if we have done enough accumulation steps.
if args.eval_accumulation_steps is not None and (step + 1) % args.eval_accumulation_steps == 0:
eval_losses_gatherer.add_arrays(self._gather_and_numpify(losses_host, "eval_losses"))
if not prediction_loss_only:
preds_gatherer.add_arrays(self._gather_and_numpify(preds_host, "eval_preds"))
labels_gatherer.add_arrays(self._gather_and_numpify(labels_host, "eval_label_ids"))
inputs_gatherer.add_arrays(self._gather_and_numpify(inputs_host, "eval_inputs_ids"))
# Set back to None to begin a new accumulation
losses_host, preds_host, labels_host, inputs_host = None, None, None, None
if args.past_index and hasattr(self, "_past"):
# Clean the state at the end of the evaluation loop
delattr(self, "_past")
# Gather all remaining tensors and put them back on the CPU
eval_losses_gatherer.add_arrays(self._gather_and_numpify(losses_host, "eval_losses"))
if not prediction_loss_only:
preds_gatherer.add_arrays(self._gather_and_numpify(preds_host, "eval_preds"))
labels_gatherer.add_arrays(self._gather_and_numpify(labels_host, "eval_label_ids"))
inputs_gatherer.add_arrays(self._gather_and_numpify(inputs_host, "eval_inputs_ids"))
eval_loss = eval_losses_gatherer.finalize()
preds = preds_gatherer.finalize() if not prediction_loss_only else None
label_ids = labels_gatherer.finalize() if not prediction_loss_only else None
inputs_ids = inputs_gatherer.finalize() if not prediction_loss_only else None
if self.compute_metrics is not None and preds is not None and label_ids is not None:
if args.include_inputs_for_metrics:
metrics = self.compute_metrics(
EvalPrediction(predictions=preds, label_ids=label_ids, inputs=inputs_ids)
)
else:
metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))
else:
metrics = {}
# To be JSON-serializable, we need to remove numpy types or zero-d tensors
metrics = denumpify_detensorize(metrics)
if eval_loss is not None:
metrics[f"{metric_key_prefix}_loss"] = eval_loss.mean().item()
# Prefix all keys with metric_key_prefix + '_'
for key in list(metrics.keys()):
if not key.startswith(f"{metric_key_prefix}_"):
metrics[f"{metric_key_prefix}_{key}"] = metrics.pop(key)
return EvalLoopOutput(predictions=preds, label_ids=label_ids, metrics=metrics, num_samples=num_examples)
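# Usage sketch (illustrative, not part of the original source): although deprecated in favour of
# `evaluation_loop`, this loop can still be driven directly with any sized dataloader; `trainer`
# is assumed to be an existing instance:
#
#     output = trainer.prediction_loop(trainer.get_eval_dataloader(), description="Evaluation")
#     print(output.metrics)   # e.g. {"eval_loss": ...}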
def _gather_and_numpify(self, tensors, name):
"""
Gather the values of `tensors` (a tensor or a list/tuple of nested tensors) and convert them to numpy
arrays before they are concatenated by the gatherers.
"""
if tensors is None:
return
if is_torch_tpu_available():
tensors = nested_xla_mesh_reduce(tensors, name)
elif is_sagemaker_mp_enabled():
tensors = smp_gather(tensors)
elif self.args.local_rank != -1:
tensors = distributed_concat(tensors)
return nested_numpify(tensors)
def _add_sm_patterns_to_gitignore(self) -> None:
"""Add SageMaker Checkpointing patterns to .gitignore file."""
# Make sure we only do this on the main process
if not self.is_world_process_zero():
return
patterns = ["*.sagemaker-uploading", "*.sagemaker-uploaded"]
# Get current .gitignore content
if os.path.exists(os.path.join(self.repo.local_dir, ".gitignore")):
with open(os.path.join(self.repo.local_dir, ".gitignore"), "r") as f:
current_content = f.read()
else:
current_content = ""
# Add the patterns to .gitignore
content = current_content
for pattern in patterns:
if pattern not in content:
if content.endswith("\n"):
content += pattern
else:
content += f"\n{pattern}"
# Write the .gitignore file if it has changed
if content != current_content:
with open(os.path.join(self.repo.local_dir, ".gitignore"), "w") as f:
logger.debug(f"Writing .gitignore file. Content: {content}")
f.write(content)
self.repo.git_add(".gitignore")
# avoid race condition with git status
time.sleep(0.5)
if not self.repo.is_repo_clean():
self.repo.git_commit("Add *.sagemaker patterns to .gitignore.")
self.repo.git_push()
/DjangoDjangoAppCenter-0.0.11-py3-none-any.whl/AppCenter/simpleui/static/admin/simpleui-x/elementui/divider.js

module.exports =
/******/ (function (modules) { // webpackBootstrap
/******/ // The module cache
/******/
var installedModules = {};
/******/
/******/ // The require function
/******/
function __webpack_require__(moduleId) {
/******/
/******/ // Check if module is in cache
/******/
if (installedModules[moduleId]) {
/******/
return installedModules[moduleId].exports;
/******/
}
/******/ // Create a new module (and put it into the cache)
/******/
var module = installedModules[moduleId] = {
/******/ i: moduleId,
/******/ l: false,
/******/ exports: {}
/******/
};
/******/
/******/ // Execute the module function
/******/
modules[moduleId].call(module.exports, module, module.exports, __webpack_require__);
/******/
/******/ // Flag the module as loaded
/******/
module.l = true;
/******/
/******/ // Return the exports of the module
/******/
return module.exports;
/******/
}
/******/
/******/
/******/ // expose the modules object (__webpack_modules__)
/******/
__webpack_require__.m = modules;
/******/
/******/ // expose the module cache
/******/
__webpack_require__.c = installedModules;
/******/
/******/ // define getter function for harmony exports
/******/
__webpack_require__.d = function (exports, name, getter) {
/******/
if (!__webpack_require__.o(exports, name)) {
/******/
Object.defineProperty(exports, name, {enumerable: true, get: getter});
/******/
}
/******/
};
/******/
/******/ // define __esModule on exports
/******/
__webpack_require__.r = function (exports) {
/******/
if (typeof Symbol !== 'undefined' && Symbol.toStringTag) {
/******/
Object.defineProperty(exports, Symbol.toStringTag, {value: 'Module'});
/******/
}
/******/
Object.defineProperty(exports, '__esModule', {value: true});
/******/
};
/******/
/******/ // create a fake namespace object
/******/ // mode & 1: value is a module id, require it
/******/ // mode & 2: merge all properties of value into the ns
/******/ // mode & 4: return value when already ns object
/******/ // mode & 8|1: behave like require
/******/
__webpack_require__.t = function (value, mode) {
/******/
if (mode & 1) value = __webpack_require__(value);
/******/
if (mode & 8) return value;
/******/
if ((mode & 4) && typeof value === 'object' && value && value.__esModule) return value;
/******/
var ns = Object.create(null);
/******/
__webpack_require__.r(ns);
/******/
Object.defineProperty(ns, 'default', {enumerable: true, value: value});
/******/
if (mode & 2 && typeof value != 'string') for (var key in value) __webpack_require__.d(ns, key, function (key) {
return value[key];
}.bind(null, key));
/******/
return ns;
/******/
};
/******/
/******/ // getDefaultExport function for compatibility with non-harmony modules
/******/
__webpack_require__.n = function (module) {
/******/
var getter = module && module.__esModule ?
/******/ function getDefault() {
return module['default'];
} :
/******/ function getModuleExports() {
return module;
};
/******/
__webpack_require__.d(getter, 'a', getter);
/******/
return getter;
/******/
};
/******/
/******/ // Object.prototype.hasOwnProperty.call
/******/
__webpack_require__.o = function (object, property) {
return Object.prototype.hasOwnProperty.call(object, property);
};
/******/
/******/ // __webpack_public_path__
/******/
__webpack_require__.p = "/dist/";
/******/
/******/
/******/ // Load entry module and return exports
/******/
return __webpack_require__(__webpack_require__.s = 131);
/******/
})
/************************************************************************/
/******/({
/***/ 131:
/***/ (function (module, __webpack_exports__, __webpack_require__) {
"use strict";
__webpack_require__.r(__webpack_exports__);
// CONCATENATED MODULE: ./packages/divider/src/main.js
/* harmony default export */
var main = ({
functional: true,
name: 'ElDivider',
props: {
direction: {
type: String,
default: 'horizontal',
validator: function validator(val) {
return ['horizontal', 'vertical'].indexOf(val) !== -1;
}
},
contentPosition: {
type: String,
default: 'center',
validator: function validator(val) {
return ['left', 'center', 'right'].indexOf(val) !== -1;
}
}
},
render: function render(h, context) {
var $slots = context.slots();
var _context$props = context.props,
direction = _context$props.direction,
contentPosition = _context$props.contentPosition;
return h(
'div',
{'class': ['el-divider', 'el-divider--' + direction]},
[$slots.default && direction !== 'vertical' ? h(
'div',
{'class': ['el-divider__text', 'is-' + contentPosition]},
[$slots.default]
) : null]
);
}
});
// CONCATENATED MODULE: ./packages/divider/index.js
/* istanbul ignore next */
main.install = function (Vue) {
Vue.component(main.name, main);
};
/* harmony default export */
var divider = __webpack_exports__["default"] = (main);
/***/
})
/******/
});
/Mazes-0.2.1.tar.gz/Mazes-0.2.1/mazes/cli.py

import click
import colorama
import termcolor
import pathlib
import pickle
from mazes import Maze
from mazes.store import MazeIO
class DimensionsParamType(click.ParamType):
name = "dimensions"
def convert(self, value, param, ctx):
try:
w, h = value.split('x')
return int(w), int(h)
except ValueError:
self.fail(f"{value!r} does not follow dimension scheme 'WxH'", param, ctx)
class MazeFileParamType(click.ParamType):
name = "maze_file_path"
def convert(self, value, param, ctx):
if pathlib.Path(value).is_file():
value = pathlib.Path(value)
elif pathlib.Path(MAZE_STORE.MAZES_DIRECTORY, value).is_file():
value = pathlib.Path(MAZE_STORE.MAZES_DIRECTORY, value)
else:
self.fail(f'The maze file "{value}" does not exist, sorry :(', param, ctx)
return
with open(value, 'rb') as f:
maze = pickle.load(f)
maze.location = value
return maze
class ColorParamType(click.ParamType):
name = 'maze_file_path'
def convert(self, value, param, ctx):
colors = [
"grey",
"red",
"green",
"yellow",
"blue",
"magenta",
"cyan",
"white"
]
if value in colors:
return value
else:
self.fail(
f"'{value}' is not a valid color. Valid colors are {', '.join(colors)}",
param,
ctx
)
MAZE_FILE = MazeFileParamType()
DIMENSIONS = DimensionsParamType()
MAZE_STORE = MazeIO()
COLOR = ColorParamType()
@click.group()
def cli():
"""
Makes and solves some mazes!!
Mazes generated are saved in ~/.mazes
"""
@cli.command(name='generate')
@click.argument('dimensions', type=DIMENSIONS)
@click.option('-n', '--name')
@click.option('-s', '--show-creation', is_flag=True)
@click.option('--update-wait', type=float, default=1)
def cli_generate(
dimensions: tuple,
update_wait: float,
name=None,
show_creation=False
):
"""Generates mazes using a recursive backtracker algorithm."""
maze = Maze(*dimensions)
maze.generate(show_updates=show_creation, update_wait=update_wait)
print(colorama.ansi.clear_screen())
name = MAZE_STORE.save_maze(maze, name)
print('{}x{} maze saved as "{}"'.format(*dimensions, name))
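# Illustrative sketch (not part of the original source): the same flow this command wraps can be
# driven from Python; the maze name is a hypothetical example:
#
#     maze = Maze(20, 10)
#     maze.generate(show_updates=False, update_wait=0.1)
#     saved_name = MAZE_STORE.save_maze(maze, 'demo-maze')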
@cli.command(name='delete')
@click.argument('maze', type=MAZE_FILE)
def cli_delete(maze):
"""Deletes maze files from your filesystem"""
MAZE_STORE.delete_maze(maze.location)
print(f'Successfully removed {maze.location}')
@cli.command(name='solve')
def cli_solve():
"""Solves mazes that are generated."""
@cli.command('list')
def cli_list():
"""Lists the maze files that are saved in ~/.mazes"""
print(MAZE_STORE.list_mazes())
@cli.command(name='show')
@click.argument('maze', type=MAZE_FILE)
@click.option('--bold', is_flag=True)
@click.option('-c', '--color', type=COLOR, default=None)
@click.option('-b', '--background', type=COLOR, default=None)
def cli_show(maze, bold=False, color=None, background=None):
"""Displays the maze content of maze files"""
if background:
background = f'on_{background}'
print(maze.show(bold=bold, color=color, background=background))
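# Illustrative sketch (not part of the original source): `maze.show()` can also be called directly
# on a loaded Maze object, using the colour names accepted by ColorParamType above:
#
#     print(maze.show(bold=True, color='cyan', background='on_grey'))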
if __name__ == '__main__':
cli()
/FiPy-3.4.4.tar.gz/FiPy-3.4.4/fipy/viewers/matplotlibViewer/matplotlib2DGridContourViewer.py

from __future__ import division
from __future__ import unicode_literals
__docformat__ = 'restructuredtext'
from future.builtins import super
from fipy.tools import numerix
from fipy.viewers.matplotlibViewer.matplotlib2DViewer import AbstractMatplotlib2DViewer
__all__ = ["Matplotlib2DGridContourViewer"]
from future.utils import text_to_native_str
__all__ = [text_to_native_str(n) for n in __all__]
class Matplotlib2DGridContourViewer(AbstractMatplotlib2DViewer):
"""Displays a contour plot of a 2D `CellVariable` object.
The `Matplotlib2DGridContourViewer` plots a 2D `CellVariable` using Matplotlib_.
.. _Matplotlib: http://matplotlib.sourceforge.net/
"""
def __init__(self, vars, title=None, limits={}, cmap=None, colorbar='vertical', axes=None, levels=None, figaspect='auto', **kwlimits):
"""Creates a `Matplotlib2DGridContourViewer`.
Parameters
----------
vars : ~fipy.variables.cellVariable.CellVariable
the `Variable` to display.
title : str, optional
displayed at the top of the `Viewer` window
limits : dict
a (deprecated) alternative to limit keyword arguments
xmin, xmax, ymin, ymax, datamin, datamax : float, optional
displayed range of data. Any limit set to
a (default) value of `None` will autoscale.
cmap : ~matplotlib.colors.Colormap, optional
the :class:`~matplotlib.colors.Colormap`.
Defaults to `matplotlib.cm.jet`
colorbar : bool, optional
plot a color bar if not `None`
axes : ~matplotlib.axes.Axes, optional
if not `None`, `vars` will be plotted into this Matplotlib `Axes` object
levels : int or array-like, optional
Determines the number and positions of the contour lines /
regions. If an int `n`, tries to automatically choose no more
than `n+1` "nice" contour levels over the range of `vars`. If
array-like, draw contour lines at the specified levels. The
values must be in increasing order. E.g. to draw just the zero
contour pass ``levels=[0]``.
figaspect : float, optional
desired aspect ratio of figure. If a number, use that aspect
ratio. If `auto`, the aspect ratio will be determined from
the *vars*'s mesh.
"""
kwlimits.update(limits)
AbstractMatplotlib2DViewer.__init__(self, vars=vars, title=title,
cmap=cmap, colorbar=colorbar, axes=axes, figaspect=figaspect,
**kwlimits)
self.levels = levels
self._plot()
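# Usage sketch (illustrative, not part of the original source): a minimal contour view of a 2D
# CellVariable; the mesh size, data limits and number of levels are arbitrary choices:
#
#     from fipy import CellVariable, Grid2D
#     mesh = Grid2D(nx=50, ny=50, dx=0.1, dy=0.1)
#     var = CellVariable(mesh=mesh, name="phi")
#     viewer = Matplotlib2DGridContourViewer(vars=var, datamin=0., datamax=1., levels=10)
#     viewer.plot()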
@property
def levels(self):
"""The number of automatically-chosen contours or their values."""
return self._levels
@levels.setter
def levels(self, value):
self._levels = value
def _getSuitableVars(self, vars):
from fipy.meshes.nonUniformGrid2D import NonUniformGrid2D
from fipy.meshes.uniformGrid2D import UniformGrid2D
from fipy.variables.cellVariable import CellVariable
vars = [var for var in AbstractMatplotlib2DViewer._getSuitableVars(self, vars) \
if ((isinstance(var.mesh, NonUniformGrid2D)
or isinstance(var.mesh, UniformGrid2D))
and isinstance(var, CellVariable))]
if len(vars) == 0:
from fipy.viewers import MeshDimensionError
raise MeshDimensionError("The mesh must be a Grid2D instance")
# this viewer can only display one variable
return [vars[0]]
def _plot(self):
super(Matplotlib2DGridContourViewer, self)._plot()
## plt.clf()
## ## Added garbage collection since matplotlib objects seem to hang
## ## around and accumulate.
## import gc
## gc.collect()
mesh = self.vars[0].mesh
shape = mesh.shape
X, Y = mesh.cellCenters
Z = self.vars[0].value
X, Y, Z = [v.reshape(shape, order='F') for v in (X, Y, Z)]
zmin = self._norm.vmin
zmax = self._norm.vmax
self.axes.contourf(X, Y, Z, levels=self.levels,
vmin=zmin, vmax=zmax, cmap=self.cmap)
self.axes.set_xlim(xmin=self._getLimit('xmin'),
xmax=self._getLimit('xmax'))
self.axes.set_ylim(ymin=self._getLimit('ymin'),
ymax=self._getLimit('ymax'))
@classmethod
def _doctest_body(cls):
return cls._test2D()
@classmethod
def _doctest_extra(cls):
return ("""
>>> viewer.levels = 2
""" + super()._doctest_extra())
def _test():
from fipy.viewers.viewer import _test2D
_test2D(Matplotlib2DGridContourViewer)
if __name__ == "__main__":
import fipy.tests.doctestPlus
fipy.tests.doctestPlus.execButNoTest()
/Flask_AdminLTE3-1.0.9-py3-none-any.whl/flask_adminlte3/static/plugins/datatables-rowreorder/js/dataTables.rowReorder.min.js

(function(e){"function"===typeof define&&define.amd?define(["jquery","datatables.net"],function(f){return e(f,window,document)}):"object"===typeof exports?module.exports=function(f,g){f||(f=window);if(!g||!g.fn.dataTable)g=require("datatables.net")(f,g).$;return e(g,f,f.document)}:e(jQuery,window,document)})(function(e,f,g,o){var i=e.fn.dataTable,j=function(d,c){if(!i.versionCheck||!i.versionCheck("1.10.8"))throw"DataTables RowReorder requires DataTables 1.10.8 or newer";this.c=e.extend(!0,{},i.defaults.rowReorder,
j.defaults,c);this.s={bodyTop:null,dt:new i.Api(d),getDataFn:i.ext.oApi._fnGetObjectDataFn(this.c.dataSrc),middles:null,scroll:{},scrollInterval:null,setDataFn:i.ext.oApi._fnSetObjectDataFn(this.c.dataSrc),start:{top:0,left:0,offsetTop:0,offsetLeft:0,nodes:[]},windowHeight:0,documentOuterHeight:0,domCloneOuterHeight:0};this.dom={clone:null,dtScroll:e("div.dataTables_scrollBody",this.s.dt.table().container())};var b=this.s.dt.settings()[0],a=b.rowreorder;if(a)return a;this.dom.dtScroll.length||(this.dom.dtScroll=
e(this.s.dt.table().container(),"tbody"));b.rowreorder=this;this._constructor()};e.extend(j.prototype,{_constructor:function(){var d=this,c=this.s.dt,b=e(c.table().node());"static"===b.css("position")&&b.css("position","relative");e(c.table().container()).on("mousedown.rowReorder touchstart.rowReorder",this.c.selector,function(a){if(d.c.enable){if(e(a.target).is(d.c.excludedChildren))return!0;var b=e(this).closest("tr"),h=c.row(b);if(h.any())return d._emitEvent("pre-row-reorder",{node:h.node(),index:h.index()}),
d._mouseDown(a,b),!1}});c.on("destroy.rowReorder",function(){e(c.table().container()).off(".rowReorder");c.off(".rowReorder")})},_cachePositions:function(){var d=this.s.dt,c=e(d.table().node()).find("thead").outerHeight(),b=e.unique(d.rows({page:"current"}).nodes().toArray()),b=e.map(b,function(b){var d=e(b).position().top-c;return(d+d+e(b).outerHeight())/2});this.s.middles=b;this.s.bodyTop=e(d.table().body()).offset().top;this.s.windowHeight=e(f).height();this.s.documentOuterHeight=e(g).outerHeight()},
_clone:function(d){var c=e(this.s.dt.table().node().cloneNode(!1)).addClass("dt-rowReorder-float").append("<tbody/>").append(d.clone(!1)),b=d.outerWidth(),a=d.outerHeight(),g=d.children().map(function(){return e(this).width()});c.width(b).height(a).find("tr").children().each(function(b){this.style.width=g[b]+"px"});c.appendTo("body");this.dom.clone=c;this.s.domCloneOuterHeight=c.outerHeight()},_clonePosition:function(d){var c=this.s.start,b=this._eventToPage(d,"Y")-c.top,d=this._eventToPage(d,"X")-
c.left,a=this.c.snapX,b=b+c.offsetTop,c=!0===a?c.offsetLeft:"number"===typeof a?c.offsetLeft+a:d+c.offsetLeft;0>b?b=0:b+this.s.domCloneOuterHeight>this.s.documentOuterHeight&&(b=this.s.documentOuterHeight-this.s.domCloneOuterHeight);this.dom.clone.css({top:b,left:c})},_emitEvent:function(d,c){this.s.dt.iterator("table",function(b){e(b.nTable).triggerHandler(d+".dt",c)})},_eventToPage:function(d,c){return-1!==d.type.indexOf("touch")?d.originalEvent.touches[0]["page"+c]:d["page"+c]},_mouseDown:function(d,
c){var b=this,a=this.s.dt,m=this.s.start,h=c.offset();m.top=this._eventToPage(d,"Y");m.left=this._eventToPage(d,"X");m.offsetTop=h.top;m.offsetLeft=h.left;m.nodes=e.unique(a.rows({page:"current"}).nodes().toArray());this._cachePositions();this._clone(c);this._clonePosition(d);this.dom.target=c;c.addClass("dt-rowReorder-moving");e(g).on("mouseup.rowReorder touchend.rowReorder",function(a){b._mouseUp(a)}).on("mousemove.rowReorder touchmove.rowReorder",function(a){b._mouseMove(a)});e(f).width()===e(g).width()&&
e(g.body).addClass("dt-rowReorder-noOverflow");a=this.dom.dtScroll;this.s.scroll={windowHeight:e(f).height(),windowWidth:e(f).width(),dtTop:a.length?a.offset().top:null,dtLeft:a.length?a.offset().left:null,dtHeight:a.length?a.outerHeight():null,dtWidth:a.length?a.outerWidth():null}},_mouseMove:function(d){this._clonePosition(d);for(var c=this._eventToPage(d,"Y")-this.s.bodyTop,b=this.s.middles,a=null,g=this.s.dt,h=0,f=b.length;h<f;h++)if(c<b[h]){a=h;break}null===a&&(a=b.length);if(null===this.s.lastInsert||
this.s.lastInsert!==a)c=e.unique(g.rows({page:"current"}).nodes().toArray()),a>this.s.lastInsert?this.dom.target.insertAfter(c[a-1]):this.dom.target.insertBefore(c[a]),this._cachePositions(),this.s.lastInsert=a;this._shiftScroll(d)},_mouseUp:function(d){var c=this,b=this.s.dt,a,f,h=this.c.dataSrc;this.dom.clone.remove();this.dom.clone=null;this.dom.target.removeClass("dt-rowReorder-moving");e(g).off(".rowReorder");e(g.body).removeClass("dt-rowReorder-noOverflow");clearInterval(this.s.scrollInterval);
this.s.scrollInterval=null;var n=this.s.start.nodes,l=e.unique(b.rows({page:"current"}).nodes().toArray()),j={},i=[],k=[],p=this.s.getDataFn,o=this.s.setDataFn;a=0;for(f=n.length;a<f;a++)if(n[a]!==l[a]){var q=b.row(l[a]).id(),u=b.row(l[a]).data(),r=b.row(n[a]).data();q&&(j[q]=p(r));i.push({node:l[a],oldData:p(u),newData:p(r),newPosition:a,oldPosition:e.inArray(l[a],n)});k.push(l[a])}var s=[i,{dataSrc:h,nodes:k,values:j,triggerRow:b.row(this.dom.target),originalEvent:d}];this._emitEvent("row-reorder",
s);var t=function(){if(c.c.update){a=0;for(f=i.length;a<f;a++){var d=b.row(i[a].node).data();o(d,i[a].newData);b.columns().every(function(){this.dataSrc()===h&&b.cell(i[a].node,this.index()).invalidate("data")})}c._emitEvent("row-reordered",s);b.draw(!1)}};this.c.editor?(this.c.enable=!1,this.c.editor.edit(k,!1,e.extend({submit:"changed"},this.c.formOptions)).multiSet(h,j).one("preSubmitCancelled.rowReorder",function(){c.c.enable=!0;c.c.editor.off(".rowReorder");b.draw(!1)}).one("submitUnsuccessful.rowReorder",
function(){b.draw(!1)}).one("submitSuccess.rowReorder",function(){t()}).one("submitComplete",function(){c.c.enable=!0;c.c.editor.off(".rowReorder")}).submit()):t()},_shiftScroll:function(d){var c=this,b=this.s.scroll,a=!1,i=d.pageY-g.body.scrollTop,h,j;i<e(f).scrollTop()+65?h=-5:i>b.windowHeight+e(f).scrollTop()-65&&(h=5);null!==b.dtTop&&d.pageY<b.dtTop+65?j=-5:null!==b.dtTop&&d.pageY>b.dtTop+b.dtHeight-65&&(j=5);h||j?(b.windowVert=h,b.dtVert=j,a=!0):this.s.scrollInterval&&(clearInterval(this.s.scrollInterval),
this.s.scrollInterval=null);!this.s.scrollInterval&&a&&(this.s.scrollInterval=setInterval(function(){if(b.windowVert){var a=e(g).scrollTop();e(g).scrollTop(a+b.windowVert);if(a!==e(g).scrollTop()){a=parseFloat(c.dom.clone.css("top"));c.dom.clone.css("top",a+b.windowVert)}}if(b.dtVert){a=c.dom.dtScroll[0];if(b.dtVert)a.scrollTop=a.scrollTop+b.dtVert}},20))}});j.defaults={dataSrc:0,editor:null,enable:!0,formOptions:{},selector:"td:first-child",snapX:!1,update:!0,excludedChildren:"a"};var k=e.fn.dataTable.Api;
k.register("rowReorder()",function(){return this});k.register("rowReorder.enable()",function(d){d===o&&(d=!0);return this.iterator("table",function(c){c.rowreorder&&(c.rowreorder.c.enable=d)})});k.register("rowReorder.disable()",function(){return this.iterator("table",function(d){d.rowreorder&&(d.rowreorder.c.enable=!1)})});j.version="1.2.8";e.fn.dataTable.RowReorder=j;e.fn.DataTable.RowReorder=j;e(g).on("init.dt.dtr",function(d,c){if("dt"===d.namespace){var b=c.oInit.rowReorder,a=i.defaults.rowReorder;
if(b||a)a=e.extend({},b,a),!1!==b&&new j(c,a)}});return j});
/DataQualityFramework-0.0.7.tar.gz/DataQualityFramework-0.0.7/data_quality_framework/quality/timeliness.py
# GNU AFFERO GENERAL PUBLIC LICENSE
# Version 3, 19 November 2007
# Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
# Everyone is permitted to copy and distribute verbatim copies
# of this license document, but changing it is not allowed.
# Preamble
# The GNU Affero General Public License is a free, copyleft license for
# software and other kinds of works, specifically designed to ensure
# cooperation with the community in the case of network server software.
# The licenses for most software and other practical works are designed
# to take away your freedom to share and change the works. By contrast,
# our General Public Licenses are intended to guarantee your freedom to
# share and change all versions of a program--to make sure it remains free
# software for all its users.
# When we speak of free software, we are referring to freedom, not
# price. Our General Public Licenses are designed to make sure that you
# have the freedom to distribute copies of free software (and charge for
# them if you wish), that you receive source code or can get it if you
# want it, that you can change the software or use pieces of it in new
# free programs, and that you know you can do these things.
# Developers that use our General Public Licenses protect your rights
# with two steps: (1) assert copyright on the software, and (2) offer
# you this License which gives you legal permission to copy, distribute
# and/or modify the software.
# A secondary benefit of defending all users' freedom is that
# improvements made in alternate versions of the program, if they
# receive widespread use, become available for other developers to
# incorporate. Many developers of free software are heartened and
# encouraged by the resulting cooperation. However, in the case of
# software used on network servers, this result may fail to come about.
# The GNU General Public License permits making a modified version and
# letting the public access it on a server without ever releasing its
# source code to the public.
# The GNU Affero General Public License is designed specifically to
# ensure that, in such cases, the modified source code becomes available
# to the community. It requires the operator of a network server to
# provide the source code of the modified version running there to the
# users of that server. Therefore, public use of a modified version, on
# a publicly accessible server, gives the public access to the source
# code of the modified version.
# An older license, called the Affero General Public License and
# published by Affero, was designed to accomplish similar goals. This is
# a different license, not a version of the Affero GPL, but Affero has
# released a new version of the Affero GPL which permits relicensing under
# this license.
# The precise terms and conditions for copying, distribution and
# modification follow.
# TERMS AND CONDITIONS
# 0. Definitions.
# "This License" refers to version 3 of the GNU Affero General Public
# License.
# "Copyright" also means copyright-like laws that apply to other kinds of
# works, such as semiconductor masks.
# "The Program" refers to any copyrightable work licensed under this
# License. Each licensee is addressed as "you". "Licensees" and
# "recipients" may be individuals or organizations.
# To "modify" a work means to copy from or adapt all or part of the work
# in a fashion requiring copyright permission, other than the making of an
# exact copy. The resulting work is called a "modified version" of the
# earlier work or a work "based on" the earlier work.
# A "covered work" means either the unmodified Program or a work based
# on the Program.
# To "propagate" a work means to do anything with it that, without
# permission, would make you directly or secondarily liable for
# infringement under applicable copyright law, except executing it on a
# computer or modifying a private copy. Propagation includes copying,
# distribution (with or without modification), making available to the
# public, and in some countries other activities as well.
# To "convey" a work means any kind of propagation that enables other
# parties to make or receive copies. Mere interaction with a user through
# a computer network, with no transfer of a copy, is not conveying.
# An interactive user interface displays "Appropriate Legal Notices"
# to the extent that it includes a convenient and prominently visible
# feature that (1) displays an appropriate copyright notice, and (2)
# tells the user that there is no warranty for the work (except to the
# extent that warranties are provided), that licensees may convey the
# work under this License, and how to view a copy of this License. If
# the interface presents a list of user commands or options, such as a
# menu, a prominent item in the list meets this criterion.
# 1. Source Code.
# The "source code" for a work means the preferred form of the work
# for making modifications to it. "Object code" means any non-source
# form of a work.
# A "Standard Interface" means an interface that either is an official
# standard defined by a recognized standards body, or, in the case of
# interfaces specified for a particular programming language, one that
# is widely used among developers working in that language.
# The "System Libraries" of an executable work include anything, other
# than the work as a whole, that (a) is included in the normal form of
# packaging a Major Component, but which is not part of that Major
# Component, and (b) serves only to enable use of the work with that
# Major Component, or to implement a Standard Interface for which an
# implementation is available to the public in source code form. A
# "Major Component", in this context, means a major essential component
# (kernel, window system, and so on) of the specific operating system
# (if any) on which the executable work runs, or a compiler used to
# produce the work, or an object code interpreter used to run it.
# The "Corresponding Source" for a work in object code form means all
# the source code needed to generate, install, and (for an executable
# work) run the object code and to modify the work, including scripts to
# control those activities. However, it does not include the work's
# System Libraries, or general-purpose tools or generally available free
# programs which are used unmodified in performing those activities but
# which are not part of the work. For example, Corresponding Source
# includes interface definition files associated with source files for
# the work, and the source code for shared libraries and dynamically
# linked subprograms that the work is specifically designed to require,
# such as by intimate data communication or control flow between those
# subprograms and other parts of the work.
# The Corresponding Source need not include anything that users
# can regenerate automatically from other parts of the Corresponding
# Source.
# The Corresponding Source for a work in source code form is that
# same work.
# 2. Basic Permissions.
# All rights granted under this License are granted for the term of
# copyright on the Program, and are irrevocable provided the stated
# conditions are met. This License explicitly affirms your unlimited
# permission to run the unmodified Program. The output from running a
# covered work is covered by this License only if the output, given its
# content, constitutes a covered work. This License acknowledges your
# rights of fair use or other equivalent, as provided by copyright law.
# You may make, run and propagate covered works that you do not
# convey, without conditions so long as your license otherwise remains
# in force. You may convey covered works to others for the sole purpose
# of having them make modifications exclusively for you, or provide you
# with facilities for running those works, provided that you comply with
# the terms of this License in conveying all material for which you do
# not control copyright. Those thus making or running the covered works
# for you must do so exclusively on your behalf, under your direction
# and control, on terms that prohibit them from making any copies of
# your copyrighted material outside their relationship with you.
# Conveying under any other circumstances is permitted solely under
# the conditions stated below. Sublicensing is not allowed; section 10
# makes it unnecessary.
# 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
# No covered work shall be deemed part of an effective technological
# measure under any applicable law fulfilling obligations under article
# 11 of the WIPO copyright treaty adopted on 20 December 1996, or
# similar laws prohibiting or restricting circumvention of such
# measures.
# When you convey a covered work, you waive any legal power to forbid
# circumvention of technological measures to the extent such circumvention
# is effected by exercising rights under this License with respect to
# the covered work, and you disclaim any intention to limit operation or
# modification of the work as a means of enforcing, against the work's
# users, your or third parties' legal rights to forbid circumvention of
# technological measures.
# 4. Conveying Verbatim Copies.
# You may convey verbatim copies of the Program's source code as you
# receive it, in any medium, provided that you conspicuously and
# appropriately publish on each copy an appropriate copyright notice;
# keep intact all notices stating that this License and any
# non-permissive terms added in accord with section 7 apply to the code;
# keep intact all notices of the absence of any warranty; and give all
# recipients a copy of this License along with the Program.
# You may charge any price or no price for each copy that you convey,
# and you may offer support or warranty protection for a fee.
# 5. Conveying Modified Source Versions.
# You may convey a work based on the Program, or the modifications to
# produce it from the Program, in the form of source code under the
# terms of section 4, provided that you also meet all of these conditions:
# a) The work must carry prominent notices stating that you modified
# it, and giving a relevant date.
# b) The work must carry prominent notices stating that it is
# released under this License and any conditions added under section
# 7. This requirement modifies the requirement in section 4 to
# "keep intact all notices".
# c) You must license the entire work, as a whole, under this
# License to anyone who comes into possession of a copy. This
# License will therefore apply, along with any applicable section 7
# additional terms, to the whole of the work, and all its parts,
# regardless of how they are packaged. This License gives no
# permission to license the work in any other way, but it does not
# invalidate such permission if you have separately received it.
# d) If the work has interactive user interfaces, each must display
# Appropriate Legal Notices; however, if the Program has interactive
# interfaces that do not display Appropriate Legal Notices, your
# work need not make them do so.
# A compilation of a covered work with other separate and independent
# works, which are not by their nature extensions of the covered work,
# and which are not combined with it such as to form a larger program,
# in or on a volume of a storage or distribution medium, is called an
# "aggregate" if the compilation and its resulting copyright are not
# used to limit the access or legal rights of the compilation's users
# beyond what the individual works permit. Inclusion of a covered work
# in an aggregate does not cause this License to apply to the other
# parts of the aggregate.
# 6. Conveying Non-Source Forms.
# You may convey a covered work in object code form under the terms
# of sections 4 and 5, provided that you also convey the
# machine-readable Corresponding Source under the terms of this License,
# in one of these ways:
# a) Convey the object code in, or embodied in, a physical product
# (including a physical distribution medium), accompanied by the
# Corresponding Source fixed on a durable physical medium
# customarily used for software interchange.
# b) Convey the object code in, or embodied in, a physical product
# (including a physical distribution medium), accompanied by a
# written offer, valid for at least three years and valid for as
# long as you offer spare parts or customer support for that product
# model, to give anyone who possesses the object code either (1) a
# copy of the Corresponding Source for all the software in the
# product that is covered by this License, on a durable physical
# medium customarily used for software interchange, for a price no
# more than your reasonable cost of physically performing this
# conveying of source, or (2) access to copy the
# Corresponding Source from a network server at no charge.
# c) Convey individual copies of the object code with a copy of the
# written offer to provide the Corresponding Source. This
# alternative is allowed only occasionally and noncommercially, and
# only if you received the object code with such an offer, in accord
# with subsection 6b.
# d) Convey the object code by offering access from a designated
# place (gratis or for a charge), and offer equivalent access to the
# Corresponding Source in the same way through the same place at no
# further charge. You need not require recipients to copy the
# Corresponding Source along with the object code. If the place to
# copy the object code is a network server, the Corresponding Source
# may be on a different server (operated by you or a third party)
# that supports equivalent copying facilities, provided you maintain
# clear directions next to the object code saying where to find the
# Corresponding Source. Regardless of what server hosts the
# Corresponding Source, you remain obligated to ensure that it is
# available for as long as needed to satisfy these requirements.
# e) Convey the object code using peer-to-peer transmission, provided
# you inform other peers where the object code and Corresponding
# Source of the work are being offered to the general public at no
# charge under subsection 6d.
# A separable portion of the object code, whose source code is excluded
# from the Corresponding Source as a System Library, need not be
# included in conveying the object code work.
# A "User Product" is either (1) a "consumer product", which means any
# tangible personal property which is normally used for personal, family,
# or household purposes, or (2) anything designed or sold for incorporation
# into a dwelling. In determining whether a product is a consumer product,
# doubtful cases shall be resolved in favor of coverage. For a particular
# product received by a particular user, "normally used" refers to a
# typical or common use of that class of product, regardless of the status
# of the particular user or of the way in which the particular user
# actually uses, or expects or is expected to use, the product. A product
# is a consumer product regardless of whether the product has substantial
# commercial, industrial or non-consumer uses, unless such uses represent
# the only significant mode of use of the product.
# "Installation Information" for a User Product means any methods,
# procedures, authorization keys, or other information required to install
# and execute modified versions of a covered work in that User Product from
# a modified version of its Corresponding Source. The information must
# suffice to ensure that the continued functioning of the modified object
# code is in no case prevented or interfered with solely because
# modification has been made.
# If you convey an object code work under this section in, or with, or
# specifically for use in, a User Product, and the conveying occurs as
# part of a transaction in which the right of possession and use of the
# User Product is transferred to the recipient in perpetuity or for a
# fixed term (regardless of how the transaction is characterized), the
# Corresponding Source conveyed under this section must be accompanied
# by the Installation Information. But this requirement does not apply
# if neither you nor any third party retains the ability to install
# modified object code on the User Product (for example, the work has
# been installed in ROM).
# The requirement to provide Installation Information does not include a
# requirement to continue to provide support service, warranty, or updates
# for a work that has been modified or installed by the recipient, or for
# the User Product in which it has been modified or installed. Access to a
# network may be denied when the modification itself materially and
# adversely affects the operation of the network or violates the rules and
# protocols for communication across the network.
# Corresponding Source conveyed, and Installation Information provided,
# in accord with this section must be in a format that is publicly
# documented (and with an implementation available to the public in
# source code form), and must require no special password or key for
# unpacking, reading or copying.
# 7. Additional Terms.
# "Additional permissions" are terms that supplement the terms of this
# License by making exceptions from one or more of its conditions.
# Additional permissions that are applicable to the entire Program shall
# be treated as though they were included in this License, to the extent
# that they are valid under applicable law. If additional permissions
# apply only to part of the Program, that part may be used separately
# under those permissions, but the entire Program remains governed by
# this License without regard to the additional permissions.
# When you convey a copy of a covered work, you may at your option
# remove any additional permissions from that copy, or from any part of
# it. (Additional permissions may be written to require their own
# removal in certain cases when you modify the work.) You may place
# additional permissions on material, added by you to a covered work,
# for which you have or can give appropriate copyright permission.
# Notwithstanding any other provision of this License, for material you
# add to a covered work, you may (if authorized by the copyright holders of
# that material) supplement the terms of this License with terms:
# a) Disclaiming warranty or limiting liability differently from the
# terms of sections 15 and 16 of this License; or
# b) Requiring preservation of specified reasonable legal notices or
# author attributions in that material or in the Appropriate Legal
# Notices displayed by works containing it; or
# c) Prohibiting misrepresentation of the origin of that material, or
# requiring that modified versions of such material be marked in
# reasonable ways as different from the original version; or
# d) Limiting the use for publicity purposes of names of licensors or
# authors of the material; or
# e) Declining to grant rights under trademark law for use of some
# trade names, trademarks, or service marks; or
# f) Requiring indemnification of licensors and authors of that
# material by anyone who conveys the material (or modified versions of
# it) with contractual assumptions of liability to the recipient, for
# any liability that these contractual assumptions directly impose on
# those licensors and authors.
# All other non-permissive additional terms are considered "further
# restrictions" within the meaning of section 10. If the Program as you
# received it, or any part of it, contains a notice stating that it is
# governed by this License along with a term that is a further
# restriction, you may remove that term. If a license document contains
# a further restriction but permits relicensing or conveying under this
# License, you may add to a covered work material governed by the terms
# of that license document, provided that the further restriction does
# not survive such relicensing or conveying.
# If you add terms to a covered work in accord with this section, you
# must place, in the relevant source files, a statement of the
# additional terms that apply to those files, or a notice indicating
# where to find the applicable terms.
# Additional terms, permissive or non-permissive, may be stated in the
# form of a separately written license, or stated as exceptions;
# the above requirements apply either way.
# 8. Termination.
# You may not propagate or modify a covered work except as expressly
# provided under this License. Any attempt otherwise to propagate or
# modify it is void, and will automatically terminate your rights under
# this License (including any patent licenses granted under the third
# paragraph of section 11).
# However, if you cease all violation of this License, then your
# license from a particular copyright holder is reinstated (a)
# provisionally, unless and until the copyright holder explicitly and
# finally terminates your license, and (b) permanently, if the copyright
# holder fails to notify you of the violation by some reasonable means
# prior to 60 days after the cessation.
# Moreover, your license from a particular copyright holder is
# reinstated permanently if the copyright holder notifies you of the
# violation by some reasonable means, this is the first time you have
# received notice of violation of this License (for any work) from that
# copyright holder, and you cure the violation prior to 30 days after
# your receipt of the notice.
# Termination of your rights under this section does not terminate the
# licenses of parties who have received copies or rights from you under
# this License. If your rights have been terminated and not permanently
# reinstated, you do not qualify to receive new licenses for the same
# material under section 10.
# 9. Acceptance Not Required for Having Copies.
# You are not required to accept this License in order to receive or
# run a copy of the Program. Ancillary propagation of a covered work
# occurring solely as a consequence of using peer-to-peer transmission
# to receive a copy likewise does not require acceptance. However,
# nothing other than this License grants you permission to propagate or
# modify any covered work. These actions infringe copyright if you do
# not accept this License. Therefore, by modifying or propagating a
# covered work, you indicate your acceptance of this License to do so.
# 10. Automatic Licensing of Downstream Recipients.
# Each time you convey a covered work, the recipient automatically
# receives a license from the original licensors, to run, modify and
# propagate that work, subject to this License. You are not responsible
# for enforcing compliance by third parties with this License.
# An "entity transaction" is a transaction transferring control of an
# organization, or substantially all assets of one, or subdividing an
# organization, or merging organizations. If propagation of a covered
# work results from an entity transaction, each party to that
# transaction who receives a copy of the work also receives whatever
# licenses to the work the party's predecessor in interest had or could
# give under the previous paragraph, plus a right to possession of the
# Corresponding Source of the work from the predecessor in interest, if
# the predecessor has it or can get it with reasonable efforts.
# You may not impose any further restrictions on the exercise of the
# rights granted or affirmed under this License. For example, you may
# not impose a license fee, royalty, or other charge for exercise of
# rights granted under this License, and you may not initiate litigation
# (including a cross-claim or counterclaim in a lawsuit) alleging that
# any patent claim is infringed by making, using, selling, offering for
# sale, or importing the Program or any portion of it.
# 11. Patents.
# A "contributor" is a copyright holder who authorizes use under this
# License of the Program or a work on which the Program is based. The
# work thus licensed is called the contributor's "contributor version".
# A contributor's "essential patent claims" are all patent claims
# owned or controlled by the contributor, whether already acquired or
# hereafter acquired, that would be infringed by some manner, permitted
# by this License, of making, using, or selling its contributor version,
# but do not include claims that would be infringed only as a
# consequence of further modification of the contributor version. For
# purposes of this definition, "control" includes the right to grant
# patent sublicenses in a manner consistent with the requirements of
# this License.
# Each contributor grants you a non-exclusive, worldwide, royalty-free
# patent license under the contributor's essential patent claims, to
# make, use, sell, offer for sale, import and otherwise run, modify and
# propagate the contents of its contributor version.
# In the following three paragraphs, a "patent license" is any express
# agreement or commitment, however denominated, not to enforce a patent
# (such as an express permission to practice a patent or covenant not to
# sue for patent infringement). To "grant" such a patent license to a
# party means to make such an agreement or commitment not to enforce a
# patent against the party.
# If you convey a covered work, knowingly relying on a patent license,
# and the Corresponding Source of the work is not available for anyone
# to copy, free of charge and under the terms of this License, through a
# publicly available network server or other readily accessible means,
# then you must either (1) cause the Corresponding Source to be so
# available, or (2) arrange to deprive yourself of the benefit of the
# patent license for this particular work, or (3) arrange, in a manner
# consistent with the requirements of this License, to extend the patent
# license to downstream recipients. "Knowingly relying" means you have
# actual knowledge that, but for the patent license, your conveying the
# covered work in a country, or your recipient's use of the covered work
# in a country, would infringe one or more identifiable patents in that
# country that you have reason to believe are valid.
# If, pursuant to or in connection with a single transaction or
# arrangement, you convey, or propagate by procuring conveyance of, a
# covered work, and grant a patent license to some of the parties
# receiving the covered work authorizing them to use, propagate, modify
# or convey a specific copy of the covered work, then the patent license
# you grant is automatically extended to all recipients of the covered
# work and works based on it.
# A patent license is "discriminatory" if it does not include within
# the scope of its coverage, prohibits the exercise of, or is
# conditioned on the non-exercise of one or more of the rights that are
# specifically granted under this License. You may not convey a covered
# work if you are a party to an arrangement with a third party that is
# in the business of distributing software, under which you make payment
# to the third party based on the extent of your activity of conveying
# the work, and under which the third party grants, to any of the
# parties who would receive the covered work from you, a discriminatory
# patent license (a) in connection with copies of the covered work
# conveyed by you (or copies made from those copies), or (b) primarily
# for and in connection with specific products or compilations that
# contain the covered work, unless you entered into that arrangement,
# or that patent license was granted, prior to 28 March 2007.
# Nothing in this License shall be construed as excluding or limiting
# any implied license or other defenses to infringement that may
# otherwise be available to you under applicable patent law.
# 12. No Surrender of Others' Freedom.
# If conditions are imposed on you (whether by court order, agreement or
# otherwise) that contradict the conditions of this License, they do not
# excuse you from the conditions of this License. If you cannot convey a
# covered work so as to satisfy simultaneously your obligations under this
# License and any other pertinent obligations, then as a consequence you may
# not convey it at all. For example, if you agree to terms that obligate you
# to collect a royalty for further conveying from those to whom you convey
# the Program, the only way you could satisfy both those terms and this
# License would be to refrain entirely from conveying the Program.
# 13. Remote Network Interaction; Use with the GNU General Public License.
# Notwithstanding any other provision of this License, if you modify the
# Program, your modified version must prominently offer all users
# interacting with it remotely through a computer network (if your version
# supports such interaction) an opportunity to receive the Corresponding
# Source of your version by providing access to the Corresponding Source
# from a network server at no charge, through some standard or customary
# means of facilitating copying of software. This Corresponding Source
# shall include the Corresponding Source for any work covered by version 3
# of the GNU General Public License that is incorporated pursuant to the
# following paragraph.
# Notwithstanding any other provision of this License, you have
# permission to link or combine any covered work with a work licensed
# under version 3 of the GNU General Public License into a single
# combined work, and to convey the resulting work. The terms of this
# License will continue to apply to the part which is the covered work,
# but the work with which it is combined will remain governed by version
# 3 of the GNU General Public License.
# 14. Revised Versions of this License.
# The Free Software Foundation may publish revised and/or new versions of
# the GNU Affero General Public License from time to time. Such new versions
# will be similar in spirit to the present version, but may differ in detail to
# address new problems or concerns.
# Each version is given a distinguishing version number. If the
# Program specifies that a certain numbered version of the GNU Affero General
# Public License "or any later version" applies to it, you have the
# option of following the terms and conditions either of that numbered
# version or of any later version published by the Free Software
# Foundation. If the Program does not specify a version number of the
# GNU Affero General Public License, you may choose any version ever published
# by the Free Software Foundation.
# If the Program specifies that a proxy can decide which future
# versions of the GNU Affero General Public License can be used, that proxy's
# public statement of acceptance of a version permanently authorizes you
# to choose that version for the Program.
# Later license versions may give you additional or different
# permissions. However, no additional obligations are imposed on any
# author or copyright holder as a result of your choosing to follow a
# later version.
# 15. Disclaimer of Warranty.
# THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
# APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
# HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
# OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
# IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
# ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
# 16. Limitation of Liability.
# IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
# WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
# THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
# GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
# USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
# DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
# PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
# EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
# SUCH DAMAGES.
# 17. Interpretation of Sections 15 and 16.
# If the disclaimer of warranty and limitation of liability provided
# above cannot be given local legal effect according to their terms,
# reviewing courts shall apply local law that most closely approximates
# an absolute waiver of all civil liability in connection with the
# Program, unless a warranty or assumption of liability accompanies a
# copy of the Program in return for a fee.
# END OF TERMS AND CONDITIONS
# How to Apply These Terms to Your New Programs
# If you develop a new program, and you want it to be of the greatest
# possible use to the public, the best way to achieve this is to make it
# free software which everyone can redistribute and change under these terms.
# To do so, attach the following notices to the program. It is safest
# to attach them to the start of each source file to most effectively
# state the exclusion of warranty; and each file should have at least
# the "copyright" line and a pointer to where the full notice is found.
# <one line to give the program's name and a brief idea of what it does.>
# Copyright (C) <year> <name of author>
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
# Also add information on how to contact you by electronic and paper mail.
# If your software can interact with users remotely through a computer
# network, you should also make sure that it provides a way for users to
# get its source. For example, if your program is a web application, its
# interface could display a "Source" link that leads users to an archive
# of the code. There are many ways you could offer source, and different
# solutions will be better for different programs; see section 13 for the
# specific requirements.
# You should also get your employer (if you work as a programmer) or school,
# if any, to sign a "copyright disclaimer" for the program, if necessary.
# For more information on this, and how to apply and follow the GNU AGPL, see
# <https://www.gnu.org/licenses/>. | PypiClean |
/Camelot-13.04.13-gpl-pyqt.tar.gz/Camelot-13.04.13-gpl-pyqt/doc/sphinx/source/doc/index.rst | .. _doc-index:
########################
Camelot Documentation
########################
This is the reference documentation for developing projects using the Camelot
library. The first-time Camelot developer is encouraged to read
:ref:`doc-models` and :ref:`doc-admin`.
The section :ref:`doc-threads` is for developers wishing to maintain a
responsive UI when faced with significant delays in their application code.
All other sections can be read on an as-needed basis.
.. toctree::
install.rst
models.rst
admin.rst
application_admin.rst
forms.rst
actions.rst
reports.rst
delegates.rst
charts.rst
documents.rst
under_the_hood.rst
data_model.rst
fixtures.rst
manage.rst
threads.rst
faq.rst
| PypiClean |
/BuildChecker-0.1.3.zip/BuildChecker-0.1.3/README.rst | ===============================
BuildChecker
===============================
.. image:: https://img.shields.io/pypi/v/BuildChecker.svg
:target: https://pypi.python.org/pypi/BuildChecker
.. image:: https://img.shields.io/travis/ashwin1992/BuildChecker.svg
:target: https://travis-ci.org/ashwin1992/BuildChecker
.. image:: https://readthedocs.org/projects/BuildChecker/badge/?version=latest
:target: https://BuildChecker.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
.. image:: https://pyup.io/repos/github/ashwin1992/BuildChecker/shield.svg
:target: https://pyup.io/repos/github/ashwin1992/BuildChecker/
:alt: Updates
Helps you get notifications when building your application. Used mainly while training deep neural networks.
Training a deep neural network can take at least 48 hours, and it is tedious to log in to the server every now and then to check the status. Build Checker can be added to your program, and it will send a notification to your email address and phone number if there are any errors in the build.
* Free software: Apache Software License 2.0
* Documentation: https://BuildChecker.readthedocs.io.
Features
--------
* TODO
Credits
---------
This package was created with Cookiecutter_ and the `audreyr/cookiecutter-pypackage`_ project template.
.. _Cookiecutter: https://github.com/audreyr/cookiecutter
.. _`audreyr/cookiecutter-pypackage`: https://github.com/audreyr/cookiecutter-pypackage
| PypiClean |
/Ciw-3.0.0.tar.gz/Ciw-3.0.0/docs/Guides/set_distributions.rst | .. _set-dists:
==========================================
How to Set Arrival & Service Distributions
==========================================
Ciw offers a variety of inter-arrival and service time distributions.
A full list can be found :ref:`here <refs-dists>`.
They are objects that are defined in the :code:`Network` with the :code:`'arrival_distributions'` and :code:`'service_distributions'` keywords.
+ :code:`'arrival_distributions'`: This is the distribution that inter-arrival times are drawn from. That is the time between two consecutive arrivals. It is particular to specific nodes and customer classes.
+ :code:`'service_distributions'`: This is the distribution that service times are drawn from. That is the amount of time a customer spends with a server (independent of how many servers there are). It is particular for to specific node and customer classes.
The following example, with two nodes and two customer classes, uses eight different arrival and service rate distributions::
>>> import ciw
>>> N = ciw.create_network(
... arrival_distributions={'Class 0': [ciw.dists.Deterministic(value=0.4),
... ciw.dists.Empirical(observations=[0.1, 0.1, 0.1, 0.2])],
... 'Class 1': [ciw.dists.Deterministic(value=0.2),
... ciw.dists.Pmf(values=[0.2, 0.4], probs=[0.5, 0.5])]},
... service_distributions={'Class 0': [ciw.dists.Exponential(rate=6.0),
... ciw.dists.Lognormal(mean=-1, sd=0.5)],
... 'Class 1': [ciw.dists.Uniform(lower=0.1, upper=0.7),
... ciw.dists.Triangular(lower=0.2, mode=0.3, upper=0.7)]},
... routing={'Class 0': [[0.0, 0.0], [0.0, 0.0]],
... 'Class 1': [[0.0, 0.0], [0.0, 0.0]]},
... number_of_servers=[1, 1]
... )
We'll run this (in :ref:`exact <exact-arithmetic>` mode) for 50 time units::
>>> ciw.seed(10)
>>> Q = ciw.Simulation(N, exact=10)
>>> Q.simulate_until_max_time(50)
>>> recs = Q.get_all_records()
The system uses the following eight distribution objects:
+ :code:`ciw.dists.Deterministic(value=0.4)`:
+ Always sample 0.4.
+ :code:`ciw.dists.Deterministic(value=0.2)`:
+ Always sample 0.2.
+ :code:`ciw.dists.Empirical(observations=[0.1, 0.1, 0.1, 0.2])`:
+ Randomly sample from the numbers 0.1, 0.1, 0.1 and 0.2.
+ :code:`ciw.dists.Pmf(values=[0.2, 0.4], probs=[0.5, 0.5])`:
+ Sample 0.2 half the time, and 0.4 half the time.
+ :code:`ciw.dists.Exponential(rate=6.0)`:
+ Sample from the `exponential <https://en.wikipedia.org/wiki/Exponential_distribution>`_ distribution with parameter :math:`\lambda = 6.0`. Expected mean of 0.1666...
+ :code:`ciw.dists.Uniform(lower=0.1, upper=0.7)`:
+ Sample any number between 0.1 and 0.7 with equal probability. Expected mean of 0.4.
+ :code:`ciw.dists.Lognormal(mean=-1, sd=0.5)`:
+ Sample from the `lognormal <https://en.wikipedia.org/wiki/Log-normal_distribution>`_ distribution with parameters :math:`\mu = -1` and :math:`\sigma = 0.5`. Expected mean of 0.4724...
+ :code:`ciw.dists.Triangular(lower=0.2, mode=0.3, upper=0.7)`:
+ Sample from the `triangular <https://en.wikipedia.org/wiki/Triangular_distribution>`_ distribution, with mode 0.3, lower limit 0.2 and upper limit 0.7. Expected mean of 0.4.
From the records, collect the service times and arrival dates for each node and each customer class::
>>> servicetimes_n1c0 = [r.service_time for r in recs if r.node==1 and r.customer_class=='Class 0']
>>> servicetimes_n2c0 = [r.service_time for r in recs if r.node==2 and r.customer_class=='Class 0']
>>> servicetimes_n1c1 = [r.service_time for r in recs if r.node==1 and r.customer_class=='Class 1']
>>> servicetimes_n2c1 = [r.service_time for r in recs if r.node==2 and r.customer_class=='Class 1']
>>> arrivals_n1c0 = sorted([r.arrival_date for r in recs if r.node==1 and r.customer_class=='Class 0'])
>>> arrivals_n2c0 = sorted([r.arrival_date for r in recs if r.node==2 and r.customer_class=='Class 0'])
>>> arrivals_n1c1 = sorted([r.arrival_date for r in recs if r.node==1 and r.customer_class=='Class 1'])
>>> arrivals_n2c1 = sorted([r.arrival_date for r in recs if r.node==2 and r.customer_class=='Class 1'])
Now let's see if the mean service times and inter-arrival times of the simulation match the distributions::
>>> from decimal import Decimal
>>> sum(servicetimes_n1c0) / len(servicetimes_n1c0) # Expected 0.1666...
Decimal('0.1600313200')
>>> sum(servicetimes_n2c0) / len(servicetimes_n2c0) # Expected 0.4724...
Decimal('0.4250531396')
>>> sum(servicetimes_n1c1) / len(servicetimes_n1c1) # Expected 0.4
Decimal('0.4108660556')
>>> sum(servicetimes_n2c1) / len(servicetimes_n2c1) # Expected 0.4
Decimal('0.3942034906')
>>> set([r2-r1 for r1, r2 in zip(arrivals_n1c0, arrivals_n1c0[1:])]) # Should only sample 0.4
{Decimal('0.4')}
>>> set([r2-r1 for r1, r2 in zip(arrivals_n1c1, arrivals_n1c1[1:])]) # Should only sample 0.2
{Decimal('0.2')}
>>> expected_samples = {Decimal('0.2'), Decimal('0.1')} # Should only sample 0.1 and 0.2
>>> set([r2-r1 for r1, r2 in zip(arrivals_n2c0, arrivals_n2c0[1:])]) == expected_samples
True
>>> expected_samples = {Decimal('0.2'), Decimal('0.4')}# Should only sample 0.2 and 0.4
>>> set([r2-r1 for r1, r2 in zip(arrivals_n2c1, arrivals_n2c1[1:])]) == expected_samples
True
Custom Distributions
--------------------
A distribution is defined by inheriting from the generic `ciw.dists.Distribution` class.
This allows users to define their own distributions.
Consider a distribution that samples the value `3.0` 50% of the time, and samples a uniform random number between 0 and 1 otherwise. That is written by inheriting from the generic class, and defining a new :code:`sample` method::
>>> import random
>>> class CustomDistribution(ciw.dists.Distribution):
... def sample(self, t=None, ind=None):
... if random.random() < 0.5:
... return 3.0
... return random.random()
This can then be implemented into a :code:`Network` object in the usual way.
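For instance, a minimal sketch of a single-node network that uses this custom class for its service times (the arrival rate and the number of servers here are purely illustrative) would be::
>>> N_custom = ciw.create_network(
... arrival_distributions=[ciw.dists.Exponential(rate=1.0)],
... service_distributions=[CustomDistribution()],
... number_of_servers=[1]
... )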
Combined Distributions
----------------------
As distribution objects inherit from the generic `ciw.dists.Distribution` class, they can be *combined* using the operations :code:`+`, :code:`-`, :code:`*`, and :code:`/`.
For example, let's combine an Exponential distribution with a Deterministic distribution in all four ways::
>>> Exp_add_Det = ciw.dists.Exponential(rate=0.05) + ciw.dists.Deterministic(value=3.0)
>>> Exp_sub_Det = ciw.dists.Exponential(rate=0.05) - ciw.dists.Deterministic(value=3.0)
>>> Exp_mul_Det = ciw.dists.Exponential(rate=0.05) * ciw.dists.Deterministic(value=3.0)
>>> Exp_div_Det = ciw.dists.Exponential(rate=0.05) / ciw.dists.Deterministic(value=3.0)
These combined distributions return the combined sampled values:
>>> ciw.seed(10)
>>> Ex = ciw.dists.Exponential(rate=0.05)
>>> Dt = ciw.dists.Deterministic(value=3.0)
>>> [round(Ex.sample(), 2) for _ in range(5)]
[16.94, 11.2, 17.26, 4.62, 33.57]
>>> [round(Dt.sample(), 2) for _ in range(5)]
[3.0, 3.0, 3.0, 3.0, 3.0]
>>> # Addition
>>> ciw.seed(10)
>>> [round(Exp_add_Det.sample(), 2) for _ in range(5)]
[19.94, 14.2, 20.26, 7.62, 36.57]
>>> # Subtraction
>>> ciw.seed(10)
>>> [round(Exp_sub_Det.sample(), 2) for _ in range(5)]
[13.94, 8.2, 14.26, 1.62, 30.57]
>>> # Multiplication
>>> ciw.seed(10)
>>> [round(Exp_mul_Det.sample(), 2) for _ in range(5)]
[50.83, 33.61, 51.78, 13.85, 100.7]
>>> # Division
>>> ciw.seed(10)
>>> [round(Exp_div_Det.sample(), 2) for _ in range(5)]
[5.65, 3.73, 5.75, 1.54, 11.19]
| PypiClean |
/AstroScheduller-1.0.3.2210-py3-none-any.whl/astroscheduller/interfaces/get_schedule.py | import tkinter
from tkinter import ttk, messagebox
from tkinter.filedialog import askopenfilename, asksaveasfilename
class get_schedule():
def __init__(self, upper):
self.upper = upper
self.setup_window()
self.setup_widgets()
self.root.mainloop
def setup_window(self):
self.root = tkinter.Toplevel(self.upper.root)
self.root.title("Generate a plan")
self.root.geometry("500x100")
self.root.resizable(False, True)
self.root.protocol("WM_DELETE_WINDOW", self.close)
def setup_widgets(self):
self.mainInterface = tkinter.Frame(self.root)
self.mainInterface.pack(pady=10, padx=10, fill="both", expand=True)
self.actionsInterface = tkinter.Frame(self.root)
self.actionsInterface.pack(pady=10, padx=10, fill="both", expand=True)
tkinter.Label(self.mainInterface, text="Generate a plan from: ").grid(row=0, column=0, sticky="w")
self.planFrom = tkinter.StringVar()
self.planFrom.set("all")
self.planFromAll = tkinter.Radiobutton(self.mainInterface, text="All Objects", variable=self.planFrom, value="all")
self.planFromAll.grid(row=0, column=1, sticky="w")
self.planFromList = tkinter.Radiobutton(self.mainInterface, text="Listed Objects", variable=self.planFrom, value="list")
self.planFromList.grid(row=0, column=2, sticky="w")
self.cancelButton = tkinter.Button(self.actionsInterface, text="What is this?", command=self.help)
self.cancelButton.pack(side="left", fill="x")
self.comfirmButton = tkinter.Button(self.actionsInterface, text="Confirm", command=self.confirm)
self.comfirmButton.pack(side="right", fill="x")
self.cancelButton = tkinter.Button(self.actionsInterface, text="Cancel", command=self.cancel)
self.cancelButton.pack(side="right", fill="x")
def close(self, event=None):
self.root.destroy()
self.upper.root.focus_force()
def confirm(self):
if(self.planFrom.get() == "all"):
self.upper.actions.get_schedule_from_all()
self.close()
elif(self.planFrom.get() == "list"):
self.upper.actions.get_schedule_from_listed()
self.close()
else:
messagebox.showerror("Error", "Please select a planning option. For more information, click the \"What is this?\" button.")
return
def cancel(self):
self.close()
def help(self):
text = "The AstroScheduller Algorithm is a simple algorithm that tries to find the best possible schedule for your observing schedule.\n\n"
text += "If \"All Objects\" is selected, the algorithm will try to find the best possible schedule for all objects imported into the AstroScheduller [scheduller.objects_all()].\n\n"
text += "If \"Listed Objects\" is selected, the algorithm will only try to find the best possible schedule for the objects imported into this editor that are shown as the list in the homepage of this editor [scheduller.objects_scheduled()].\n\n"
messagebox.showinfo("Help", text) | PypiClean |
/EpyNN-1.2.11.tar.gz/EpyNN-1.2.11/epynnlive/dummy_image/train.py | import random
# Related third party imports
import numpy as np
# Local application/library specific imports
import epynn.initialize
from epynn.commons.maths import relu, softmax
from epynn.commons.library import (
configure_directory,
read_model,
)
from epynn.network.models import EpyNN
from epynn.embedding.models import Embedding
from epynn.convolution.models import Convolution
from epynn.pooling.models import Pooling
from epynn.flatten.models import Flatten
from epynn.dropout.models import Dropout
from epynn.dense.models import Dense
from prepare_dataset import prepare_dataset
from settings import se_hPars
########################## CONFIGURE ##########################
random.seed(1)
np.random.seed(1)
np.set_printoptions(threshold=10)
np.seterr(all='warn')
configure_directory()
############################ DATASET ##########################
X_features, Y_label = prepare_dataset(N_SAMPLES=750)
####################### BUILD AND TRAIN MODEL #################
embedding = Embedding(X_data=X_features,
Y_data=Y_label,
X_scale=True,
Y_encode=True,
batch_size=32,
relative_size=(2, 1, 0))
### Feed-Forward
# Model
name = 'Flatten_Dropout-02_Dense-64-relu_Dropout-05_Dense-2-softmax'
se_hPars['learning_rate'] = 0.01
flatten = Flatten()
dropout1 = Dropout(drop_prob=0.2)
hidden_dense = Dense(64, relu)
dropout2 = Dropout(drop_prob=0.5)
dense = Dense(2, softmax)
layers = [embedding, flatten, dropout1, hidden_dense, dropout2, dense]
model = EpyNN(layers=layers, name=name)
model.initialize(loss='MSE', seed=1, se_hPars=se_hPars.copy())
model.train(epochs=100, init_logs=False)
### Convolutional Neural Network
# Model
name = 'Convolution-6-4_Pooling-2-Max_Flatten_Dense-2-softmax'
se_hPars['learning_rate'] = 0.001
convolution = Convolution(unit_filters=6, filter_size=(4, 4), activate=relu)
pooling = Pooling(pool_size=(2, 2))
flatten = Flatten()
dense = Dense(2, softmax)
layers = [embedding, convolution, pooling, flatten, dense]
model = EpyNN(layers=layers, name=name)
model.initialize(loss='MSE', seed=1, se_hPars=se_hPars.copy())
model.train(epochs=100, init_logs=False)
### Write/read model
model.write()
# model.write(path=/your/custom/path)
model = read_model()
# model = read_model(path=/your/custom/path)
### Predict
X_features, _ = prepare_dataset(N_SAMPLES=10)
dset = model.predict(X_features)
for n, pred, probs in zip(dset.ids, dset.P, dset.A):
print(n, pred, probs) | PypiClean |
/OctoBot-Trading-2.4.23.tar.gz/OctoBot-Trading-2.4.23/octobot_trading/signals/channel/signal_producer.py | import async_channel.enums as channel_enums
import octobot_commons.logging as logging
import octobot_commons.enums as commons_enums
import octobot_commons.authentication as authentication
import octobot_commons.signals as signals
import octobot_trading.signals.channel.remote_trading_signal as signals_channel
class RemoteTradingSignalProducer(signals_channel.RemoteTradingSignalChannelProducer):
def __init__(self, channel, bot_id):
super().__init__(channel)
# the trading mode instance logger
self.logger = logging.get_logger(self.__class__.__name__)
# Define trading modes default consumer priority level
self.priority_level: int = channel_enums.ChannelConsumerPriorityLevels.MEDIUM.value
self.bot_id = bot_id
async def stop(self) -> None:
"""
Stops non-triggered tasks management
"""
self.logger.debug("Stopping producer: this should normally not be happening unless OctoBot is stopping")
await super().stop()
async def subscribe_to_product_feed(self, feed_id):
await authentication.Authenticator.instance().register_feed_callback(commons_enums.CommunityChannelTypes.SIGNAL,
self.on_new_signal,
identifier=feed_id)
async def on_new_signal(self, parsed_message) -> None:
try:
signal_bundle = signals.create_signal_bundle(parsed_message)
if not signal_bundle.signals:
self.logger.info(f"No signal in received signal bundle, message: {parsed_message}")
for signal in signal_bundle.signals:
await self.send(signal, self.bot_id, signal_bundle.identifier, signal_bundle.version)
except Exception as e:
self.logger.exception(e, True, f"Error when processing signal: {e}")
def flush(self) -> None:
"""
Flush all instance objects reference
"""
self.exchange_manager = None | PypiClean |
/Montreal-Forced-Aligner-3.0.0a3.tar.gz/Montreal-Forced-Aligner-3.0.0a3/montreal_forced_aligner/command_line/configure.py | import rich_click as click
from montreal_forced_aligner import config
__all__ = ["configure_cli"]
@click.command(
"configure",
help="The configure command is used to set global defaults for MFA so "
"you don't have to set them every time you call an MFA command.",
)
@click.option(
"-p",
"--profile",
"profile",
help='Configuration profile to use, defaults to "global"',
type=str,
default=None,
)
@click.option(
"--temporary_directory",
"-t",
help=f"Set the default temporary directory."
f"Currently defaults to {config.TEMPORARY_DIRECTORY}",
type=str,
default=None,
)
@click.option(
"--num_jobs",
"-j",
help=f"Set the number of processes to use by default. "
f"Currently defaults to {config.NUM_JOBS}",
type=int,
default=None,
)
@click.option(
"--always_clean/--never_clean",
"clean",
help="Turn on/off clean mode where MFA will clean temporary files before each run. "
f"Currently defaults to {config.CLEAN}.",
default=None,
)
@click.option(
"--always_verbose/--never_verbose",
"verbose",
help="Turn on/off verbose mode where MFA will print more output. "
f"Currently defaults to {config.VERBOSE}.",
default=None,
)
@click.option(
"--always_quiet/--never_quiet",
"quiet",
help="Turn on/off quiet mode where MFA will not print any output. "
f"Currently defaults to {config.QUIET}.",
default=None,
)
@click.option(
"--always_debug/--never_debug",
"debug",
help="Turn on/off extra debugging functionality. " f"Currently defaults to {config.DEBUG}.",
default=None,
)
@click.option(
"--always_overwrite/--never_overwrite",
"overwrite",
help="Turn on/off overwriting export files. " f"Currently defaults to {config.OVERWRITE}.",
default=None,
)
@click.option(
"--enable_mp/--disable_mp",
"use_mp",
help="Turn on/off multiprocessing. "
"Multiprocessing is recommended will allow for faster executions. "
f"Currently defaults to {config.USE_MP}.",
default=None,
)
@click.option(
"--enable_textgrid_cleanup/--disable_textgrid_cleanup",
"cleanup_textgrids",
help="Turn on/off post-processing of TextGrids that cleans up "
"silences and recombines compound words and clitics. "
f"Currently defaults to {config.CLEANUP_TEXTGRIDS}.",
default=None,
)
@click.option(
"--enable_auto_server/--disable_auto_server",
"auto_server",
help="If auto_server is enabled, MFA will start a server at the beginning of a command and close it at the end. "
"If turned off, use the `mfa server` commands to initialize, start, and stop a profile's server. "
f"Currently defaults to {config.AUTO_SERVER}.",
default=None,
)
@click.option(
"--enable_use_postgres/--disable_use_postgres",
"use_postgres",
help="If use_postgres is enabled, MFA will use PostgreSQL as the database backend instead of sqlite. "
f"Currently defaults to {config.USE_POSTGRES}.",
default=None,
)
@click.option(
"--blas_num_threads",
help="Number of threads to use for BLAS libraries, 1 is recommended "
"due to how much MFA relies on multiprocessing. "
f"Currently defaults to {config.BLAS_NUM_THREADS}.",
type=int,
default=None,
)
@click.option(
"--github_token",
default=None,
help="Github token to use for model downloading.",
type=str,
)
@click.option(
"--bytes_limit",
default=None,
help="Bytes limit for Joblib Memory caching on disk.",
type=int,
)
@click.option(
"--seed",
default=None,
help="Random seed to set for various pseudorandom processes.",
type=int,
)
@click.help_option("-h", "--help")
def configure_cli(**kwargs) -> None:
"""
Configure Montreal Forced Aligner command lines to new defaults
"""
if kwargs.get("profile", None) is not None:
config.GLOBAL_CONFIG.current_profile_name = kwargs.pop("profile")
config.GLOBAL_CONFIG.current_profile.update(kwargs)
config.GLOBAL_CONFIG.save() | PypiClean |
/Camelot-13.04.13-gpl-pyqt.tar.gz/Camelot-13.04.13-gpl-pyqt/camelot/view/controls/standalone_wizard_page.py |
from PyQt4 import QtGui
from PyQt4.QtCore import Qt
from PyQt4.QtGui import QDialog, QFrame, QGridLayout, QLabel, QVBoxLayout, \
QWidget
from camelot.view.model_thread import object_thread
from camelot.core.utils import ugettext_lazy as _
class HSeparator(QFrame):
def __init__(self, parent=None):
super(HSeparator, self).__init__(parent)
self.setFrameStyle(QFrame.HLine | QFrame.Sunken)
class StandaloneWizardPage(QDialog):
"""A Standalone Wizard Page Dialog for quick configuration windows"""
def __init__(self, window_title=None, parent=None, flags=Qt.Dialog):
super(StandaloneWizardPage, self).__init__(parent, flags)
self.setWindowTitle( unicode(window_title or ' ') )
self.set_layouts()
def set_layouts(self):
assert object_thread( self )
self._vlayout = QVBoxLayout()
self._vlayout.setSpacing(0)
self._vlayout.setContentsMargins(0,0,0,0)
# needed in case we have a widget that changes the size
# of the widget and can be hidden
# this prevents the ChangeObjects dialog from being scalable,
# therefore commented out
#self._vlayout.setSizeConstraint(QLayout.SetFixedSize)
banner_layout = QGridLayout()
banner_layout.setColumnStretch(0, 1)
banner_layout.addWidget(QLabel(), 0, 1, Qt.AlignRight)
banner_layout.addLayout(QVBoxLayout(), 0, 0)
# TODO: allow banner widget to be supplied
banner_widget = QWidget()
banner_widget.setLayout(banner_layout)
self._vlayout.addWidget(banner_widget)
self._vlayout.addWidget(HSeparator())
self._vlayout.addWidget(QFrame(), 1)
self._vlayout.addWidget(HSeparator())
self._vlayout.addWidget(QWidget())
self.setLayout(self._vlayout)
def banner_widget(self):
return self._vlayout.itemAt(0).widget()
def main_widget(self):
return self._vlayout.itemAt(2).widget()
def buttons_widget(self):
return self._vlayout.itemAt(4).widget()
def banner_layout(self):
return self.banner_widget().layout()
def banner_logo_holder(self):
return self.banner_layout().itemAtPosition(0, 1).widget()
def banner_text_layout(self):
return self.banner_layout().itemAtPosition(0, 0).layout()
def set_banner_logo_pixmap(self, pixmap):
self.banner_logo_holder().setPixmap(pixmap)
def set_banner_title(self, title):
title_widget = QLabel('<dt><b>%s</b></dt>' % title)
self.banner_text_layout().insertWidget(0, title_widget)
def set_banner_subtitle(self, subtitle):
subtitle_widget = QLabel('<dd>%s</dd>' % subtitle)
self.banner_text_layout().insertWidget(1, subtitle_widget)
def set_default_buttons( self,
accept = _('OK'),
reject = _('Cancel'),
done = None ):
"""add an :guilabel:`ok` and a :guilabel:`cancel` button.
"""
layout = QtGui.QHBoxLayout()
layout.setDirection( QtGui.QBoxLayout.RightToLeft )
if accept != None:
ok_button = QtGui.QPushButton( unicode( accept ), self )
ok_button.setObjectName( 'accept' )
ok_button.pressed.connect( self.accept )
layout.addWidget( ok_button )
if reject != None:
cancel_button = QtGui.QPushButton( unicode( reject ), self )
cancel_button.setObjectName( 'reject' )
cancel_button.pressed.connect( self.reject )
layout.addWidget( cancel_button )
layout.addStretch()
self.buttons_widget().setLayout( layout ) | PypiClean |
/DuckMapper-1.0.1.tar.gz/DuckMapper-1.0.1/README.md | # DuckMapper
A tiny library to help you convert classes to DTOs and vice versa.
# Usage
### Using the default function
```python
class testClass():
def __init__(self):
self.id = None
self.gender = None
self.github = None
self.name = None
self.password = None
self.phone_number = None
```
```python
class testDTO:
def __init__(self):
self.name = None
self.github = None
self.telephone = None
```
```python
x = testClass()
x.name = "Lucas"
x.phone_number = 99999999
x.github = "https://github.com/lucascz37"
result = convertTo(source=x, target=testDTO, default=10)
```
Note: fields that are not mapped will receive the value passed in the `default` parameter.
### using decorator
Using the same two classes created before, the decorator lets you specify fields that don't have the same name. A suggestion: create a class whose decorated functions convert all of your project's DTOs to classes and the other way around.
```python
@mapBy(fields={"telephone": "phone_number"})
def convertTestClasstoTestDTO():
pass
teste = convertTestClasstoTestDTO(source=x, target=testDTO, default=100)
```
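The same decorator can also be used for the reverse direction (DTO back to class). The sketch below is hypothetical: the function name is arbitrary, and the `fields` dict again maps target fields to source fields, as described in the note below.
```python
# Hypothetical reverse mapping: convert a testDTO instance back to a testClass.
@mapBy(fields={"phone_number": "telephone"})
def convertTestDTOtoTestClass():
    pass

y = testDTO()
y.name = "Lucas"
y.telephone = 99999999
y.github = "https://github.com/lucascz37"

# Unmapped fields of testClass (id, gender, password) receive the default value.
back = convertTestDTOtoTestClass(source=y, target=testClass, default=None)
```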
Note: the `fields` dict should look like this: `{"target_field1": "source_field1", "target_field2": "source_field2", ...}` | PypiClean
/MuPhyN-0.1.1.post4-py3-none-any.whl/muphyn/docs/2. Create a custom box.md | # Create a custom box
To create a functional process box for the MuPhyN application,
build two files:
- a model description file;
- a process file.
The two files must have the same base filename:
- model description file: `myFile`.yaml
- process file: `myFile`.py
- [Create a custom box](#create-a-custom-box)
- [1. Model description file (.yaml file)](#1-model-description-file-yaml-file)
- [1.1. Metadata - General Box Informations](#11-metadata---general-box-informations)
- [*Box name*](#box-name)
- [*Box library*](#box-library)
- [*Box version*](#box-version)
- [*Box creator*](#box-creator)
- [*Box icon*](#box-icon)
- [*Box creation date*](#box-creation-date)
- [*Box type*](#box-type)
- [*Box init function*](#box-init-function)
- [*Box simulation function*](#box-simulation-function)
- [*Box end function*](#box-end-function)
- [*Box "wait for event" parameter*](#box-wait-for-event-parameter)
- [*Box "wait for all signal events" parameter*](#box-wait-for-all-signal-events-parameter)
- [1.2. Box parameters](#12-box-parameters)
- [*Inputs groups*](#inputs-groups)
- [*Outputs groups*](#outputs-groups)
- [*Parameters*](#parameters)
- [2. Process file (.py file)](#2-process-file-py-file)
- [2.1. box\_init\_method](#21-box_init_method)
- [2.2. box\_function](#22-box_function)
- [2.3. box\_end\_method](#23-box_end_method)
## 1. Model description file (.yaml file)
The `.yaml` file must contain all of the following parameters. For now there are no default values for these parameters.
### 1.1. Metadata - General Box Informations
#### *Box name*
```yaml
name: Name of the box
```
#### *Box library*
This parameter is the library of the box. It must be two words separated by a "." and the first word must be "Boxes":
```yaml
library: Boxes.libraryName
```
#### *Box version*
```yaml
version: 1.0
```
#### *Box creator*
Name of the box creator
```yaml
creator: FirstName LastName
```
#### *Box icon*
Path to the box icon image file. The image file can be a raster image file like `.jpg` or `.png`, or a vector image file like `.svg`.
```yaml
icon: Path/To/Image/file.jpg
```
#### *Box creation date*
Box creation date
```yaml
date_creation: YYYY-MM-DD
```
#### *Box type*
Box type must be `code`
```yaml
type: code
```
#### *Box init function*
Name of the simulation initialization function in the process file. If there is no init function in the process file, this parameter must be set to `None`.
```yaml
box_init_method: _init_simulation
```
#### *Box simulation function*
Name of the function called at each step during the simulation. If there is no simulation function in the process file, this parameter must be set to `None`.
```yaml
box_function: _step_simulation
```
#### *Box end function*
Name of the simulation ending function in the process file. If there is no ending function in the process file, this parameter must be set to `None`.
```yaml
box_end_method: _finish_simulation
```
#### *Box "wait for event" parameter*
This parameter can be set to `False` or `True`. If the box can generate its own value and return it to the output, this parameter must be set to `False`, as in the case of a source box. If a box must get data from its inputs to generate data for its outputs, this parameter must be set to `True`.
```yaml
wait_for_events: True
```
#### *Box "wait for all signal events" parameter*
This parameter must be set to `True`.
```yaml
wait_for_all_signal_events: True
```
### 1.2. Box parameters
#### *Inputs groups*
The `inputs` parameter lets us declare multiple input groups. Each input group must have a `name`, `type` & `isInfinite` value. If the `isInfinite` value is set to `False`, the `count` value must be set.
```yaml
inputs:
- name: Inputs Group 1
type: float
isInfinite: True # default count value = 2
minimumCount: 0
maximumCount: 5
count: 1
- name: Inputs Group 2
type: float
isInfinite: False # default count value = 2
count: 2
```
#### *Outputs groups*
The `outputs` parameter let us declare multiple inputs groups. Each inputs group must have a `name`, `type` & `isInfinite` value. If the `isInfinite` value is set to `False` the `count` value must be set.
```yaml
outputs:
- name: Outputs Group 1
type: float
isInfinite: True # default count value = 2
minimumCount: 0
maximumCount: 5
count: 1
- name: Outputs Group 2
type: float
isInfinite: False # default count value = 2
count: 2
```
#### *Parameters*
List of all parameters. For now, the following types are supported:
- `string`: text. We can add the `maxLength` parameter, which limits the number of characters in the string.
- `int`: integer value. The `min` & `max` parameters allow defining a range to respect for the integer value.
- `float`: floating value. The `min` & `max` parameters allow defining a range to respect for the float value.
- `boolean`: boolean value.
- `anyFile`: any file path value. This parameter allows selecting a file that may or may not exist yet (e.g. it can be used to select the path where to save a file).
- `directory`: directory path value. An existing folder can be selected.
- `existingFile`: existing file path value. Only one existing file can be selected.
- `existingFiles`: multiple existing file path values. Multiple existing files can be selected.
Each parameter can activate a function when the parameter value has been changed. This function takes one parameter:
- box_model: `BoxModel`
```yaml
params:
String property:
type: string
value: ""
maxLength: 10
Integer property:
type: int
value: 1234567890
min: 0
max: 100
Float property:
type: float
value: 12345.67890
min: 0.0
max: 100.0
Boolean property:
type: boolean
value: False
Any File property:
type: anyFile
value: ""
Directory property:
type: directory
value: ""
Existing File property:
type: existingFile
value: ""
Existing Files property:
type: existingFiles
value: ""
```
### 1.3 Full YAML file example
```yaml
box:
name: BoxName
library: Library.Path
version: 1.0
creator: Firstname Lastname
date_creation: YYYY-MM-DD
description: Description
icon: "Path/To/Image/File"
type: code
box_init_method: None
box_function: name_of_function
box_end_method: None
wait_for_events: True
wait_for_all_signal_events: True
inputs:
- name: Term
type: float
isInfinite: True # default count value = 2
outputs:
- name: Sum
type: float
value: 0.0
count: 1
params: None
```
## 2. Process file (.py file)
The process file is a regular Python script in which we can declare three functions (a minimal skeleton is sketched at the end of this section):
- `box_init_method`
- `box_function`
- `box_end_method`
### 2.1. box_init_method
Function that is activated before starting the simulation. This function has two parameters:
- box: `Box`
- simulation_params: `SchedulerParams`
### 2.2. box_function
Function that is activated at each step of the simulation. This function has two parameters:
- box: `Box`
- simulation_params: `SchedulerParams`
### 2.3. box_end_method
Function that is activated after the simulation. This function has one parameter:
- box: `Box`
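Putting these together, here is a minimal sketch of what a process file could look like. The function names are only illustrative and must match the `box_init_method`, `box_function` and `box_end_method` values declared in the `.yaml` model description file; the signatures follow the parameter lists given above.
```python
# Minimal process file sketch; the names must match those declared in the .yaml file.
def _init_simulation(box, simulation_params):
    # Called once before the simulation starts: initialize the box state here.
    pass

def _step_simulation(box, simulation_params):
    # Called at each simulation step: read inputs, compute, write outputs.
    pass

def _finish_simulation(box):
    # Called once after the simulation ends: clean up or export results here.
    pass
```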
| PypiClean |
/Assessor-0.2.13.tar.bz2/Assessor-0.2.13/tastypie/api.py | import warnings
from django.conf.urls.defaults import *
from django.core.exceptions import ImproperlyConfigured
from django.core.urlresolvers import reverse
from django.http import HttpResponse
from tastypie.exceptions import NotRegistered, BadRequest
from tastypie.serializers import Serializer
from tastypie.utils import trailing_slash, is_valid_jsonp_callback_value
from tastypie.utils.mime import determine_format, build_content_type
class Api(object):
"""
Implements a registry to tie together the various resources that make up
an API.
Especially useful for navigation, HATEOAS and for providing multiple
versions of your API.
Optionally supplying ``api_name`` allows you to name the API. Generally,
this is done with version numbers (i.e. ``v1``, ``v2``, etc.) but can
be named any string.
"""
def __init__(self, api_name="v1"):
self.api_name = api_name
self._registry = {}
self._canonicals = {}
def register(self, resource, canonical=True):
"""
Registers an instance of a ``Resource`` subclass with the API.
Optionally accept a ``canonical`` argument, which indicates that the
resource being registered is the canonical variant. Defaults to
``True``.
"""
resource_name = getattr(resource._meta, 'resource_name', None)
if resource_name is None:
raise ImproperlyConfigured("Resource %r must define a 'resource_name'." % resource)
self._registry[resource_name] = resource
if canonical is True:
if resource_name in self._canonicals:
warnings.warn("A new resource '%r' is replacing the existing canonical URL for '%s'." % (resource, resource_name), Warning, stacklevel=2)
self._canonicals[resource_name] = resource
# TODO: This is messy, but makes URI resolution on FK/M2M fields
# work consistently.
resource._meta.api_name = self.api_name
resource.__class__.Meta.api_name = self.api_name
def unregister(self, resource_name):
"""
If present, unregisters a resource from the API.
"""
if resource_name in self._registry:
del(self._registry[resource_name])
if resource_name in self._canonicals:
del(self._canonicals[resource_name])
def canonical_resource_for(self, resource_name):
"""
Returns the canonical resource for a given ``resource_name``.
"""
if resource_name in self._canonicals:
return self._canonicals[resource_name]
raise NotRegistered("No resource was registered as canonical for '%s'." % resource_name)
def wrap_view(self, view):
def wrapper(request, *args, **kwargs):
return getattr(self, view)(request, *args, **kwargs)
return wrapper
def override_urls(self):
"""
A hook for adding your own URLs or overriding the default URLs.
"""
return []
@property
def urls(self):
"""
Provides URLconf details for the ``Api`` and all registered
``Resources`` beneath it.
"""
pattern_list = [
url(r"^(?P<api_name>%s)%s$" % (self.api_name, trailing_slash()), self.wrap_view('top_level'), name="api_%s_top_level" % self.api_name),
]
for name in sorted(self._registry.keys()):
self._registry[name].api_name = self.api_name
pattern_list.append((r"^(?P<api_name>%s)/" % self.api_name, include(self._registry[name].urls)))
urlpatterns = self.override_urls() + patterns('',
*pattern_list
)
return urlpatterns
def top_level(self, request, api_name=None):
"""
A view that returns a serialized list of all resources registers
to the ``Api``. Useful for discovery.
"""
serializer = Serializer()
available_resources = {}
if api_name is None:
api_name = self.api_name
for name in sorted(self._registry.keys()):
available_resources[name] = {
'list_endpoint': self._build_reverse_url("api_dispatch_list", kwargs={
'api_name': api_name,
'resource_name': name,
}),
'schema': self._build_reverse_url("api_get_schema", kwargs={
'api_name': api_name,
'resource_name': name,
}),
}
desired_format = determine_format(request, serializer)
options = {}
if 'text/javascript' in desired_format:
callback = request.GET.get('callback', 'callback')
if not is_valid_jsonp_callback_value(callback):
raise BadRequest('JSONP callback name is invalid.')
options['callback'] = callback
serialized = serializer.serialize(available_resources, desired_format, options)
return HttpResponse(content=serialized, content_type=build_content_type(desired_format))
def _build_reverse_url(self, name, args=None, kwargs=None):
"""
A convenience hook for overriding how URLs are built.
See ``NamespacedApi._build_reverse_url`` for an example.
"""
return reverse(name, args=args, kwargs=kwargs)
class NamespacedApi(Api):
"""
An API subclass that respects Django namespaces.
"""
def __init__(self, api_name="v1", urlconf_namespace=None):
super(NamespacedApi, self).__init__(api_name=api_name)
self.urlconf_namespace = urlconf_namespace
def register(self, resource, canonical=True):
super(NamespacedApi, self).register(resource, canonical=canonical)
if canonical is True:
# Plop in the namespace here as well.
resource._meta.urlconf_namespace = self.urlconf_namespace
def _build_reverse_url(self, name, args=None, kwargs=None):
namespaced = "%s:%s" % (self.urlconf_namespace, name)
return reverse(namespaced, args=args, kwargs=kwargs) | PypiClean |
/Couchapp-1.0.2.tar.gz/Couchapp-1.0.2/couchapp/localdoc.py |
from __future__ import with_statement
import base64
import logging
import mimetypes
import os
import os.path
import re
import urlparse
import webbrowser
try:
import desktopcouch
try:
from desktopcouch.application import local_files
except ImportError:
from desktopcouch import local_files
except ImportError:
desktopcouch = None
from couchapp.errors import ResourceNotFound, AppError
from couchapp.macros import package_shows, package_views
from couchapp import util
if os.name == 'nt':
def _replace_backslash(name):
return name.replace("\\", "/")
else:
def _replace_backslash(name):
return name
re_comment = re.compile("((?:\/\*(?:[^*]|(?:\*+[^*\/]))*\*+\/)|(?:\/\/.*))")
DEFAULT_IGNORE = """[
// filenames matching these regexps will not be pushed to the database
// uncomment to activate; separate entries with ","
// ".*~$"
// ".*\\\\.swp$"
// ".*\\\\.bak$"
]"""
logger = logging.getLogger(__name__)
class LocalDoc(object):
def __init__(self, path, create=False, docid=None, is_ddoc=True):
self.docdir = path
self.ignores = []
self.is_ddoc = is_ddoc
ignorefile = os.path.join(path, '.couchappignore')
if os.path.exists(ignorefile):
# A .couchappignore file is a json file containing a
# list of regexps for things to skip
with open(ignorefile, 'r') as f:
self.ignores = util.json.loads(
util.remove_comments(f.read())
)
if not docid:
docid = self.get_id()
self.docid = docid
self._doc = {'_id': self.docid}
if create:
self.create()
def get_id(self):
"""
if there is an _id file, docid is extracted from it,
else we take the current folder name.
"""
idfile = os.path.join(self.docdir, '_id')
if os.path.exists(idfile):
docid = util.read(idfile).split("\n")[0].strip()
if docid:
return docid
if self.is_ddoc:
return "_design/%s" % os.path.split(self.docdir)[1]
else:
return os.path.split(self.docdir)[1]
def __repr__(self):
return "<%s (%s/%s)>" % (self.__class__.__name__, self.docdir,
self.docid)
def __str__(self):
return util.json.dumps(self.doc())
def create(self):
if not os.path.isdir(self.docdir):
logger.error("%s directory doesn't exist." % self.docdir)
rcfile = os.path.join(self.docdir, '.couchapprc')
ignfile = os.path.join(self.docdir, '.couchappignore')
if not os.path.isfile(rcfile):
util.write_json(rcfile, {})
util.write(ignfile, DEFAULT_IGNORE)
else:
logger.info("CouchApp already initialized in %s." % self.docdir)
def push(self, dbs, noatomic=False, browser=False, force=False,
noindex=False):
"""Push a doc to a list of database `dburls`. If noatomic is true
each attachments will be sent one by one."""
for db in dbs:
if noatomic:
doc = self.doc(db, with_attachments=False, force=force)
db.save_doc(doc, force_update=True)
attachments = doc.get('_attachments') or {}
for name, filepath in self.attachments():
if name not in attachments:
logger.debug("attach %s " % name)
db.put_attachment(doc, open(filepath, "r"),
name=name)
else:
doc = self.doc(db, force=force)
db.save_doc(doc, force_update=True)
indexurl = self.index(db.raw_uri, doc['couchapp'].get('index'))
if indexurl and not noindex:
if "@" in indexurl:
u = urlparse.urlparse(indexurl)
indexurl = urlparse.urlunparse((u.scheme,
u.netloc.split("@")[-1],
u.path, u.params, u.query,
u.fragment))
logger.info("Visit your CouchApp here:\n%s" % indexurl)
if browser:
self.browse_url(indexurl)
def browse(self, dbs):
for db in dbs:
doc = self.doc()
indexurl = self.index(db.raw_uri, doc['couchapp'].get('index'))
if indexurl:
self.browse_url(indexurl)
def browse_url(self, url):
if url.startswith("desktopcouch://"):
if not desktopcouch:
raise AppError("Desktopcouch isn't available on this" +
"machine. You can't access to %s" % url)
ctx = local_files.DEFAULT_CONTEXT
bookmark_file = os.path.join(ctx.db_dir, "couchdb.html")
try:
username, password = \
re.findall("<!-- !!([^!]+)!!([^!]+)!! -->",
open(bookmark_file).read())[-1]
except ValueError:
raise IOError("Bookmark file is corrupt." +
"Username/password are missing.")
url = "http://%s:%s@localhost:%s/%s" % (username, password,
desktopcouch.find_port(),
url[15:])
webbrowser.open_new_tab(url)
def attachment_stub(self, name, filepath):
att = {}
with open(filepath, "rb") as f:
re_sp = re.compile('\s')
att = {"data": re_sp.sub('', base64.b64encode(f.read())),
"content_type":
';'.join(filter(None, mimetypes.guess_type(name)))}
return att
def doc(self, db=None, with_attachments=True, force=False):
""" Function to reetrieve document object from
document directory. If `with_attachments` is True
attachments will be included and encoded"""
manifest = []
objects = {}
signatures = {}
attachments = {}
self._doc = {'_id': self.docid}
# get designdoc
self._doc.update(self.dir_to_fields(self.docdir, manifest=manifest))
if not 'couchapp' in self._doc:
self._doc['couchapp'] = {}
self.olddoc = {}
if db is not None:
try:
self.olddoc = db.open_doc(self._doc['_id'])
attachments = self.olddoc.get('_attachments') or {}
self._doc.update({'_rev': self.olddoc['_rev']})
except ResourceNotFound:
self.olddoc = {}
if 'couchapp' in self.olddoc:
old_signatures = self.olddoc['couchapp'].get('signatures', {})
else:
old_signatures = {}
for name, filepath in self.attachments():
signatures[name] = util.sign(filepath)
if with_attachments and not old_signatures:
logger.debug("attach %s " % name)
attachments[name] = self.attachment_stub(name, filepath)
if old_signatures:
for name, signature in old_signatures.items():
cursign = signatures.get(name)
if not cursign:
logger.debug("detach %s " % name)
del attachments[name]
elif cursign != signature:
logger.debug("detach %s " % name)
del attachments[name]
else:
continue
if with_attachments:
for name, filepath in self.attachments():
if old_signatures.get(name) != \
signatures.get(name) or force:
logger.debug("attach %s " % name)
attachments[name] = self.attachment_stub(name,
filepath)
self._doc['_attachments'] = attachments
self._doc['couchapp'].update({
'manifest': manifest,
'objects': objects,
'signatures': signatures
})
if self.docid.startswith('_design/'): # process macros
for funs in ['shows', 'lists', 'updates', 'filters', 'spatial']:
if funs in self._doc:
package_shows(self._doc, self._doc[funs], self.docdir,
objects)
if 'validate_doc_update' in self._doc:
tmp_dict = {'validate_doc_update':
self._doc["validate_doc_update"]}
package_shows(self._doc, tmp_dict, self.docdir, objects)
self._doc.update(tmp_dict)
if 'views' in self._doc:
# clean views
# we remove empty views and malformed from the list
# of pushed views. We also clean manifest
views = {}
dmanifest = {}
for i, fname in enumerate(manifest):
if fname.startswith("views/") and fname != "views/":
name, ext = os.path.splitext(fname)
if name.endswith('/'):
name = name[:-1]
dmanifest[name] = i
for vname, value in self._doc['views'].iteritems():
if value and isinstance(value, dict):
views[vname] = value
else:
del manifest[dmanifest["views/%s" % vname]]
self._doc['views'] = views
package_views(self._doc, self._doc["views"], self.docdir,
objects)
if "fulltext" in self._doc:
package_views(self._doc, self._doc["fulltext"], self.docdir,
objects)
return self._doc
def check_ignore(self, item):
for i in self.ignores:
match = re.match(i, item)
if match:
logger.debug("ignoring %s" % item)
return True
return False
def dir_to_fields(self, current_dir='', depth=0, manifest=[]):
""" process a directory and get all members """
fields = {}
if not current_dir:
current_dir = self.docdir
for name in os.listdir(current_dir):
current_path = os.path.join(current_dir, name)
rel_path = _replace_backslash(util.relpath(current_path,
self.docdir))
if name.startswith("."):
continue
elif self.check_ignore(name):
continue
elif depth == 0 and name.startswith('_'):
# files starting with "_" are always "special"
continue
elif name == '_attachments':
continue
elif depth == 0 and (name == 'couchapp' or
name == 'couchapp.json'):
# we are in app_meta
if name == "couchapp":
manifest.append('%s/' % rel_path)
content = self.dir_to_fields(current_path,
depth=depth+1,
manifest=manifest)
else:
manifest.append(rel_path)
content = util.read_json(current_path)
if not isinstance(content, dict):
content = {"meta": content}
if 'signatures' in content:
del content['signatures']
if 'manifest' in content:
del content['manifest']
if 'objects' in content:
del content['objects']
if 'length' in content:
del content['length']
if 'couchapp' in fields:
fields['couchapp'].update(content)
else:
fields['couchapp'] = content
elif os.path.isdir(current_path):
manifest.append('%s/' % rel_path)
fields[name] = self.dir_to_fields(current_path, depth=depth+1,
manifest=manifest)
else:
logger.debug("push %s" % rel_path)
content = ''
if name.endswith('.json'):
try:
content = util.read_json(current_path)
except ValueError:
logger.error("Json invalid in %s" % current_path)
else:
try:
content = util.read(current_path).strip()
except UnicodeDecodeError:
logger.warning("%s isn't encoded in utf8" %
current_path)
content = util.read(current_path, utf8=False)
try:
content.encode('utf-8')
except UnicodeError:
logger.warning("plan B didn't work, %s is a binary"
% current_path)
logger.warning("use plan C: encode to base64")
content = "base64-encoded;%s" % \
base64.b64encode(content)
# remove extension
name, ext = os.path.splitext(name)
if name in fields:
logger.warning("%(name)s is already in properties. " +
"Can't add (%(fqn)s)" % {"name": name,
"fqn": rel_path})
else:
manifest.append(rel_path)
fields[name] = content
return fields
def _process_attachments(self, path, vendor=None):
""" the function processing directory to yeld
attachments. """
if os.path.isdir(path):
for root, dirs, files in os.walk(path):
# iterate over a copy so entries can be pruned from `dirs` in place,
# which also makes os.walk skip the ignored directories
for dirname in list(dirs):
if self.check_ignore(dirname):
dirs.remove(dirname)
if files:
for filename in files:
if self.check_ignore(filename):
continue
else:
filepath = os.path.join(root, filename)
name = util.relpath(filepath, path)
if vendor is not None:
name = os.path.join('vendor', vendor, name)
name = _replace_backslash(name)
yield (name, filepath)
def attachments(self):
""" This function yield a tuple (name, filepath) corresponding
to each attachment (vendor included) in the couchapp. `name`
is the name of attachment in `_attachments` member and `filepath`
the path to the attachment on the disk.
attachments are processed later to allow us to send attachments inline
or one by one.
"""
# process main attachments
attachdir = os.path.join(self.docdir, "_attachments")
for attachment in self._process_attachments(attachdir):
yield attachment
vendordir = os.path.join(self.docdir, 'vendor')
if not os.path.isdir(vendordir):
logger.debug("%s don't exist" % vendordir)
return
for name in os.listdir(vendordir):
current_path = os.path.join(vendordir, name)
if os.path.isdir(current_path):
attachdir = os.path.join(current_path, '_attachments')
if os.path.isdir(attachdir):
for attachment in self._process_attachments(attachdir,
vendor=name):
yield attachment
def index(self, dburl, index):
if index is not None:
return "%s/%s/%s" % (dburl, self.docid, index)
elif os.path.isfile(os.path.join(self.docdir, "_attachments",
'index.html')):
return "%s/%s/index.html" % (dburl, self.docid)
return False
def to_json(self):
return self.__str__()
def document(path, create=False, docid=None, is_ddoc=True):
return LocalDoc(path, create=create, docid=docid, is_ddoc=is_ddoc)
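# Illustrative usage sketch (not part of the original module). It relies only
# on the LocalDoc API shown above: document() returning a LocalDoc, doc()
# (assumed to be callable with its default arguments) assembling the design
# document, and attachments() yielding (name, filepath) pairs.
# "/path/to/couchapp" is a placeholder path.
if __name__ == "__main__":
    local_doc = document("/path/to/couchapp")
    design_doc = local_doc.doc()
    print("prepared design doc: %s" % design_doc.get("_id"))
    for name, filepath in local_doc.attachments():
        print("attachment %s -> %s" % (name, filepath))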
# File: /Complex-API-0.1.0.tar.gz/Complex-API-0.1.0/Complex_API/complex_api.py
import requests
API_URL = "https://API.jagthefriend.repl.co"
GITHUB_REPO = "https://github.com/JagTheFriend/Complex-API"
__version__ = "0.1.0"
__all__ = [
"compile", "reddit", "lyrics",
"ascii", "temp", "length",
"inspire", "calculator", "hex_to_denary"
]
def main() -> str:
return requests.get(f"{API_URL}").text
def compile(*, lang: str, code: str) -> dict:
"""
Gets the result of compiling code from the `Compiler API`
:param lang: The language which the compiler would use to compile code
:param code: The code to be compiled
:return: Dictionary
"""
return requests.get(f"{API_URL}/compile={lang}_{code}").json()
def reddit(*, limit: float, subreddit: str) -> dict:
"""
Gets a limited amount of posts from a specific subreddit
:param subreddit: Name of the subreddit
:param limit: Number of posts to be returned
:return: Dictionary
"""
return requests.get(f"{API_URL}/reddit={subreddit}+{limit}").json()
def lyrics(*, song: str) -> dict:
"""
Gets the lyrics of a song from the `Lyrics API`
:param song: Name of the song
:return: Dictionary
"""
return requests.get(f"{API_URL}/lyrics+{song}").json()
def ascii(*, text: str) -> dict:
"""
Gets Pixel art from the ASCII API
:param text: The text which should be converted to Pixel art
:return: Dictionary
"""
return requests.get(f"{API_URL}/ascii_{text}").json()
def temp(*, place: str, unit: str = "metric") -> dict:
"""
Gets the weather of a place
:param place: The name of the place whose weather would be found
:param unit: The unit used for measuring amounts,
(it can be either 'metric' or 'imperial')
:return: Dictionary
"""
return requests.get(f"{API_URL}/temp={place}+{unit}").json()
def length(*, playlist: str) -> dict:
"""
Gets the length of playlist
:param playlist: This is a unique id given to each playlist
:return: Dictionary
"""
return requests.get(f"{API_URL}/length+{playlist}").json()
def inspire() -> dict:
"""
Gets a random inspirational text
:return: Dictionary
"""
return requests.get(f"{API_URL}/inspire").json()
def calculator(*, formula: str) -> dict:
"""
Gets the result of a calculation
:param formula: Stuff on which calculation will be carried
:return: Dictionary
"""
new_formula = formula.replace('/', '\\')
return requests.get(f"{API_URL}/cal_{new_formula}").json()
def hex_to_denary(*, hex_code: str) -> dict:
"""
Converts hexadecimal code to decimal (or denary)
:param hex_code: The hexadecimal value to be converted
:return: Dictionary
"""
return requests.get(f"{API_URL}/hex+{hex_code}").json()
def binary_to_denary(*, binary) -> dict:
"""
Converts binary code to decimal (or denary)
:param binary: The binary value to be converted
:return: Dictionary
"""
return requests.get(f"{API_URL}/binary={binary}").json()
def ai(*, text="Hello Gamer") -> dict:
"""
Gets a reply from the `AI` endpoint for the given text
:param text: Text the `AI` uses to generate a reply
:return: Dictionary
"""
return requests.get(f"{API_URL}/ai_{text}").json() | PypiClean |
# File: /Flask-AppBuilder-red-2.1.13.tar.gz/Flask-AppBuilder-red-2.1.13/flask_appbuilder/security/forms.py
from flask_babel import lazy_gettext
from flask_wtf.recaptcha import RecaptchaField
from wtforms import BooleanField, PasswordField, StringField
from wtforms.validators import DataRequired, Email, EqualTo
from ..fieldwidgets import BS3PasswordFieldWidget, BS3TextFieldWidget
from ..forms import DynamicForm
class LoginForm_oid(DynamicForm):
openid = StringField(lazy_gettext("OpenID"), validators=[DataRequired()])
username = StringField(lazy_gettext("User Name"))
remember_me = BooleanField(lazy_gettext("Remember me"), default=False)
class LoginForm_db(DynamicForm):
username = StringField(lazy_gettext("User Name"), validators=[DataRequired()])
password = PasswordField(lazy_gettext("Password"), validators=[DataRequired()])
class UserInfoEdit(DynamicForm):
first_name = StringField(
lazy_gettext("First Name"),
validators=[DataRequired()],
widget=BS3TextFieldWidget(),
description=lazy_gettext("Write the user first name or names"),
)
last_name = StringField(
lazy_gettext("Last Name"),
validators=[DataRequired()],
widget=BS3TextFieldWidget(),
description=lazy_gettext("Write the user last name"),
)
class ResetPasswordForm(DynamicForm):
password = PasswordField(
lazy_gettext("Password"),
description=lazy_gettext(
"Please use a good password policy,"
" this application does not check this for you"
),
validators=[DataRequired()],
widget=BS3PasswordFieldWidget(),
)
conf_password = PasswordField(
lazy_gettext("Confirm Password"),
description=lazy_gettext("Please rewrite the password to confirm"),
validators=[EqualTo("password", message=lazy_gettext("Passwords must match"))],
widget=BS3PasswordFieldWidget(),
)
class RegisterUserDBForm(DynamicForm):
username = StringField(
lazy_gettext("User Name"),
validators=[DataRequired()],
widget=BS3TextFieldWidget(),
)
first_name = StringField(
lazy_gettext("First Name"),
validators=[DataRequired()],
widget=BS3TextFieldWidget(),
)
last_name = StringField(
lazy_gettext("Last Name"),
validators=[DataRequired()],
widget=BS3TextFieldWidget(),
)
email = StringField(
lazy_gettext("Email"),
validators=[DataRequired(), Email()],
widget=BS3TextFieldWidget(),
)
password = PasswordField(
lazy_gettext("Password"),
description=lazy_gettext(
"Please use a good password policy,"
" this application does not check this for you"
),
validators=[DataRequired()],
widget=BS3PasswordFieldWidget(),
)
conf_password = PasswordField(
lazy_gettext("Confirm Password"),
description=lazy_gettext("Please rewrite the password to confirm"),
validators=[EqualTo("password", message=lazy_gettext("Passwords must match"))],
widget=BS3PasswordFieldWidget(),
)
recaptcha = RecaptchaField()
class RegisterUserOIDForm(DynamicForm):
username = StringField(
lazy_gettext("User Name"),
validators=[DataRequired()],
widget=BS3TextFieldWidget(),
)
first_name = StringField(
lazy_gettext("First Name"),
validators=[DataRequired()],
widget=BS3TextFieldWidget(),
)
last_name = StringField(
lazy_gettext("Last Name"),
validators=[DataRequired()],
widget=BS3TextFieldWidget(),
)
email = StringField(
lazy_gettext("Email"),
validators=[DataRequired(), Email()],
widget=BS3TextFieldWidget(),
)
recaptcha = RecaptchaField()
# File: /DUELink-1.0.1-py3-none-any.whl/DUE/DUEController.py
from DUE.Analog import AnalogController
from DUE.Button import ButtonController
from DUE.Digital import DigitalController
from DUE.Display import DisplayController
from DUE.DistanceSensor import DistanceSensorController
from DUE.Frequency import FrequencyController
from DUE.I2C import I2cController
from DUE.Infrared import InfraredController
from DUE.Neo import NeoController
from DUE.System import SystemController
from DUE.SerialInterface import SerialInterface
from DUE.ServoMoto import ServoMotoController
from DUE.Sound import SoundController
from DUE.Spi import SpiController
from DUE.Touch import TouchController
from DUE.Uart import UartController
from DUE.Led import LedController
from DUE.Script import ScriptController
from DUE.DeviceConfiguration import DeviceConfiguration
from enum import Enum
import platform
class DUEController:
def __init__(self, comPort: str):
if comPort is None:
raise ValueError(f"Invalid comport: {comPort}")
try:
self.__Connect(comPort)
except:
raise Exception(f"Could not connect to the comport: {comPort}")
if self.serialPort is None:
raise Exception(f"serialPort is null")
self.Analog = AnalogController(self.serialPort)
self.Digital = DigitalController(self.serialPort)
self.I2c = I2cController(self.serialPort)
self.ServoMoto = ServoMotoController(self.serialPort)
self.Frequency = FrequencyController(self.serialPort)
self.Spi = SpiController(self.serialPort)
self.Infrared = InfraredController(self.serialPort)
self.Neo = NeoController(self.serialPort)
self.System = SystemController(self.serialPort)
self.Uart = UartController(self.serialPort)
self.Button = ButtonController(self.serialPort)
self.Distance = DistanceSensorController(self.serialPort)
self.Sound = SoundController(self.serialPort)
self.Display = DisplayController(self.serialPort)
self.Touch = TouchController(self.serialPort)
self.Led = LedController(self.serialPort)
self.Script = ScriptController(self.serialPort)
def __Connect(self, comPort: str):
self.serialPort = SerialInterface(comPort)
self.serialPort.Connect()
self.Version = self.serialPort.GetVersion().split("\n")[0]
if self.Version == "" or len(self.Version) != 7:
raise Exception("The device is not supported.")
self.DeviceConfig = DeviceConfiguration()
if self.Version[-1] == 'P':
self.DeviceConfig.IsPulse = True
self.DeviceConfig.MaxPinIO = 23
self.DeviceConfig.MaxPinAnalog = 29
elif self.Version[-1] == 'I':
self.DeviceConfig.IsPico = True
self.DeviceConfig.MaxPinIO = 29
self.DeviceConfig.MaxPinAnalog = 29
elif self.Version[-1] == 'F':
self.DeviceConfig.IsFlea = True
self.DeviceConfig.MaxPinIO = 11
self.DeviceConfig.MaxPinAnalog = 29
elif self.Version[-1] == 'E':
self.DeviceConfig.IsFlea = True
self.DeviceConfig.MaxPinIO = 22
self.DeviceConfig.MaxPinAnalog = 11
self.serialPort.DeviceConfig = self.DeviceConfig
def Disconnect(self):
self.serialPort.Disconnect()
@staticmethod
def GetConnectionPort():
try:
from serial.tools.list_ports import comports
except ImportError:
return ""
if comports:
com_ports_list = list(comports())
ebb_ports_list = []
for port in com_ports_list:
if port.vid ==0x1B9F and port.pid==0xF300:
if (platform.system() == 'Windows'):
return port.name
else:
return port.device
return ""
class Pin:
ButtonA = 97
ButtonB = 98
Led = 108
class Input:
PullNone = 0
PullUp = 1
PullDown = 2
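# Illustrative usage sketch (not part of the original module). It assumes a
# DUELink board is attached over USB; GetConnectionPort() returns "" when no
# matching device (or pyserial) is available.
if __name__ == "__main__":
    port = DUEController.GetConnectionPort()
    if port:
        due = DUEController(port)
        print("Connected, firmware version:", due.Version)
        due.Disconnect()
    else:
        print("No DUELink device found.")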
# File: /DeerLab-1.1.1.tar.gz/DeerLab-1.1.1/deerlab/fit.py
import numpy as np
from deerlab.solvers import snlls
from deerlab.fitresult import FitResult
from deerlab.utils import formatted_table, parse_multidatasets
from deerlab.bootstrap_analysis import bootstrap_analysis
from deerlab.classes import UQResult
from sys import stdout
from deerlab.model import Model
from copy import copy
#--------------------------------------------------------------------------
def _outerOptimization(fitfcn,penalty_objects,sigma):
"""
(Private function)
A method to optimize the fit of a model with penalties.
This method returns a function that can be used to evaluate the fit of a model with penalties.
Parameters
----------
fitfcn : callable
The function to be optimized, which takes in a set of parameters and returns a scalar value representing the fit of the model.
penalty_objects : list of Penalty objects
A list of penalty objects that define the penalty functions to be applied to the fit function. The list can have up to three penalty objects.
sigma : numpy.ndarray
The vector of observation uncertainties to be used in the penalty functions.
Returns
-------
fitfcn_ : callable
A function that can be used to evaluate the fit of a model with penalties. This function takes in a set of parameters and returns a scalar value representing the fit of the model.
"""
# If there are no penalties
if len(penalty_objects)==0:
fitfcn_ = lambda y: fitfcn(y,[None])
# Otherwise, prepare to solve multiobjective problem
elif len(penalty_objects)==3:
thirdfcn = lambda y,*param: penalty_objects[2].optimize(lambda weight: fitfcn(y,[*param,weight]),y,sigma)[1]
secondfcn = lambda y,*param: penalty_objects[1].optimize(lambda weight: fitfcn(y,[*param,weight,thirdfcn(y,*param,weight)]),y,sigma)[1]
fitfcn_ = lambda y: penalty_objects[0].optimize(lambda weight: fitfcn(y,[weight,secondfcn(y,weight),thirdfcn(y,weight,secondfcn(y,weight))]),y,sigma)[0]
elif len(penalty_objects)==2:
secondfcn = lambda y,*param: penalty_objects[1].optimize(lambda weight: fitfcn(y,[*param,weight]),y,sigma)[1]
fitfcn_ = lambda y: penalty_objects[0].optimize(lambda weight: fitfcn(y,[weight,secondfcn(y,weight)]),y,sigma)[0]
elif len(penalty_objects)==1:
fitfcn_ = lambda y: penalty_objects[0].optimize(lambda weight: fitfcn(y,[weight]),y,sigma)[0]
else:
raise RuntimeError('The fit() function can only handle up to three penalties.')
return fitfcn_
#--------------------------------------------------------------------------
def _print_fitresults(fitresult,model):
"""
(Private function)
Construct summary table of fit results to print
This helper method takes the output of a model fit and constructs a
summary table of the goodness-of-fit statistics, along with the
hyperparameters of the model (if any). The method is intended to be used
to print the results of the model fit.
Parameters
----------
fitresult : ``FitResult`` object
Result of a model fit.
model : ``Model`` object
Model object instance used for the fit.
Returns
-------
table_string : str
A string containing the summary table of the fit results.
"""
#-----------------------------------------------------
def colortxt(str, color, spaces, is_tty=stdout.isatty()):
"""
(Private function)
Helper method that applies ANSI codes to add colors to the text
in a terminal.
Parameters
----------
str : str
The string to be colored.
color : str
The color to be applied to the text. It can be 'red', 'yellow', or 'white'.
spaces : int
The number of spaces to add to the beginning and end of the string.
is_tty : bool
A boolean value indicating whether the output is a terminal or not.
If it is not a terminal, the method simply returns the string without
applying any coloring.
Returns
-------
colored_str : str
Colored string.
"""
if color=='red': color = '\033[91m'
if color=='yellow': color = '\033[93m'
if color=='white': color = ''
if is_tty:
return str
else:
return f"{color}" +" "*spaces + f"{str}"+" "*spaces + "\033[00m"
#-----------------------------------------------------
# Start printout string
string = ''
# Get number of models in the fit
modelfits = fitresult.model
if not isinstance(modelfits,list):
modelfits = [modelfits]
Ndatasets = len(modelfits)
# Construct table of goodness-of-fit statistics
table = []
table.append([f'Dataset','Noise level','Reduced 𝛘2','Residual autocorr.','RMSD']) # Header
alignment = ['^','^','^','^','^'] # Tab alignment
stats = np.atleast_1d(fitresult.stats)
noiselevels = np.atleast_1d(fitresult.noiselvl)
for n in range(Ndatasets):
noiselvl = noiselevels[n]
chi2red = stats[n]['chi2red']
rmsd = stats[n]['rmsd']
autocorr = stats[n]['autocorr']
# Use colored text to warn of very poor fits
autocorrcolor = lambda str: colortxt(str,'white',7)
if autocorr>0.5 and autocorr<1:
# Relatively acceptable autocorrelations (yellow)
autocorrcolor = lambda str: colortxt(str,'yellow',7)
elif autocorr>1:
# Worrisome autocorrelations (red)
autocorrcolor = lambda str: colortxt(str,'red',7)
chicolor = lambda str: colortxt(str,'white',3)
# Standard deviation of reduced 𝛘2 statistic's uncertainty (Gaussian limit)
chi2red_sigma = np.sqrt(2/len(modelfits[n]))*3
if abs(1-chi2red)>3*chi2red_sigma and abs(1-chi2red)<6*chi2red_sigma:
# Poor case (yellow), 𝛘2 exceeds thrice the expected uncertainty
chicolor = lambda str: colortxt(str,'yellow',3)
elif abs(1-chi2red)>6*chi2red_sigma:
# Horrible case (red), 𝛘2 exceeds six times the expected uncertainty
chicolor = lambda str: colortxt(str,'red',3)
# Convert numbers to well-formatted strings
noiselvl,chi2red,autocorr,rmsd = [f'{var:.3f}' if var<1e3 or var>1e-3 else f'{var:.2e}' for var in [noiselvl,chi2red,autocorr,rmsd]]
table.append([f'#{1+n}',noiselvl,chicolor(chi2red),autocorrcolor(autocorr),rmsd])
# Add auto-formatted table string
string += 'Goodness-of-fit: \n'
string += formatted_table(table,alignment) + '\n'
# Construct table of model hyperparameters
hasregularization = fitresult.regparam!=0
haspenalties = fitresult.penweights
if hasregularization or haspenalties:
string += 'Model hyperparameters: \n'
tags,values,alignment = [],[],[]
# If regularization was used, add regularization parameter
if hasregularization:
alignment.append('^')
tags.append('Regularization parameter')
regparam = fitresult.regparam
if regparam is None: regparam = 0
values.append(regparam)
# If additional penalties were used, add their weights
if haspenalties:
for n,penweight in enumerate(fitresult.penweights):
alignment.append('^')
tags.append(f'Penalty weight #{n+1}')
values.append(penweight)
# Format the values
values = [f'{var:.3f}' if var<1e3 and var>1e-3 else f'{var:.2e}' for var in values]
table = [tags,values]
# Add to the table
string += formatted_table(table,alignment) + '\n'
# Construct table of model parameters fits
table = []
table.append([f'Parameter','Value','95%-Confidence interval','Unit','Description']) # Header
alignment = ['<','<','<','^','<'] # Alignment
for param in model._parameter_list('vector'):
if len(np.atleast_1d(getattr(model,param).idx))==1:
if np.any(getattr(model,param).frozen):
# If parameter is frozen, print just the value
value = getattr(model,param).value
try:
if isinstance(value, (list, tuple, np.ndarray)): value = value[0]
except: pass
value = f'{value:.3f}' if abs(value)<1e3 and abs(value)>1e-3 else f'{value:.2e}'
ci = '(frozen)'
else:
# If parameter is scalar, report values and CIs
value = getattr(fitresult,param)
if getattr(fitresult,param+'Uncert').type == 'void':
ci = ''
else:
ci_lower,ci_upper = getattr(fitresult,param+'Uncert').ci(95)
value,ci_lower,ci_upper = [f'{var:.3f}' if abs(var)<1e3 and abs(var)>1e-3 else f'{var:.2e}' for var in [value,ci_lower,ci_upper]]
ci = f'({ci_lower},{ci_upper})'
else:
# If parameter is vectorial, print just dots
value = '...'
ci = '(...,...)'
unit = str(getattr(model,param).unit)
description = str(getattr(model,param).description)
table.append([f'{param}',value,ci,unit,description])
# Add auto-formatted table string
string += 'Model parameters: \n'
string += formatted_table(table,alignment)
string += '\n'
return string
#--------------------------------------------------------------------------
def _insert_snlls_optionals_docstrings():
"""
(Private decorator)
A decorator that takes a function ``func` as input and replaces the
string ``'snlls_keyargs_docstrings'`` in the function's docstring with
the optional keyword arguments documentation for the ``snlls.py``
function. This is done by splitting the ``snlls.py`` docstring into
paragraphs, filtering out paragraphs that are already included in the
outer function's docstring, and concatenating the remaining paragraphs.
The resulting string is then inserted into ``func``'s docstring and
the modified function is returned. This allows for the optional keyword
arguments documentation to be easily updated in the docstring of any
function that uses the ``snlls.py`` function.
"""
# Get the documentation for the optional keyword arguments in snlls.py also used by fit()
text = snlls.__doc__
text = text.split('\n\n')
# Exclude arguments already set by the outer function
exclude = ['lb','ub','lbl','ubl','subsets','lin_frozen','nonlin_frozen','regparam','reg','regparamrange', 'extrapenalty']
paragraphs = [s for s in text if not any(e in s for e in exclude)]
# Concatenate the arguments
snlls_keyargs_docs = ''
for paragraph in paragraphs:
# Only keep optional keyword arguments
if 'optional' in paragraph:
snlls_keyargs_docs += paragraph + '\n'
def decorator(func):
func.__doc__ = func.__doc__.replace('snlls_keyargs_docstrings',snlls_keyargs_docs)
return func
return decorator
#==============================================================================================
@_insert_snlls_optionals_docstrings()
def fit(model_, y, *constants, par0=None, penalties=None, bootstrap=0, noiselvl=None, mask=None, weights=None,
regparam='aic',reg='auto',regparamrange=None, bootcores=1,**kwargs):
r"""
Fit the model(s) to the dataset(s)
Fit the input model to the data ``y`` via one of the three following approaches:
- Non-linear least-squares
- Regularized linear-least-squares
- Separable non-linear least-squares
The most appropriate solver is chosen automatically based on the model structure.
Parameters
----------
model : :ref:`Model`
Model object.
y : array_like
Data to be fitted.
par0 : array_like, optional
Value at which to initialize the parameter at the start of a fit routine.
Must be specified if not defined in the model. Otherwise, it overrides the definition in the model.
penalties: callable or list thereof, optional
Custom penalty function(s) to impose upon the solution. A single penalty must be specified as a callable function.
Multiple penalties can be specified as a list of callable functons. Each function must take two inputs, a vector of non-linear parameters
and a vector of linear parameters, and return a vector to be added to the residual vector (``pen = fcn(pnonlin,plin)``).
The square of the penalty is computed internally.
bootstrap : scalar, optional
Bootstrap samples for uncertainty quantification. If ``bootstrap>0``, the uncertainty quantification will be
performed via the bootstrapping method using the number of samples specified as the argument.
bootcores : scalar, optional
Number of CPU cores/processes for parallelization of the bootstrap uncertainty quantification. If ``cores=1`` no parallel
computing is used. If ``cores=-1`` all available CPUs are used. The default is one core (no parallelization).
reg : boolean or string, optional
Determines the use of regularization on the solution of the linear problem.
* ``'auto'`` - Automatic decision based on the condition number of the non-linear model ``Amodel``.
* ``True`` - Forces regularization regardless of the condition number
* ``False`` - Disables regularization regardless of the condition number
The default is ``'auto'``.
regparam : string or float scalar, optional
Method for the automatic selection of the optimal regularization parameter:
* ``'lr'`` - L-curve minimum-radius method (LR)
* ``'lc'`` - L-curve maximum-curvature method (LC)
* ``'cv'`` - Cross validation (CV)
* ``'gcv'`` - Generalized Cross Validation (GCV)
* ``'rgcv'`` - Robust Generalized Cross Validation (rGCV)
* ``'srgcv'`` - Strong Robust Generalized Cross Validation (srGCV)
* ``'aic'`` - Akaike information criterion (AIC)
* ``'bic'`` - Bayesian information criterion (BIC)
* ``'aicc'`` - Corrected Akaike information criterion (AICC)
* ``'rm'`` - Residual method (RM)
* ``'ee'`` - Extrapolated Error (EE)
* ``'ncp'`` - Normalized Cumulative Periodogram (NCP)
* ``'gml'`` - Generalized Maximum Likelihood (GML)
* ``'mcl'`` - Mallows' C_L (MCL)
The regularization parameter can be manually specified by passing a scalar value
instead of a string. The default ``'aic'``.
regparamrange : array_like, optional
Search range for the optimization of the regularization parameter. Must be specified as a list ``[regparam_lb, regparam_ub]``
with the lower/upper boundaries of the regularization parameter. The default range is ``[1e-8, 1e3]``.
regop : 2D array_like, optional
Regularization operator matrix, the default is the second-order differential operator.
alphareopt : float scalar, optional
Relative parameter change threshold for reoptimizing the regularization parameter
when using a selection method, the default is ``1e-3``.
nnlsSolver : string, optional
Solver used to solve a non-negative least-squares problem (if applicable):
* ``'qp'`` - Optimization of the NNLS problem using the ``quadprog`` package. Only Python <= 3.10.
* ``'cvx'`` - Optimization of the NNLS problem using the ``cvxopt`` package.
* ``'fnnls'`` - Optimization using the fast NNLS algorithm.
The default is ``'cvx'``.
snlls_keyargs_docstrings
Returns
-------
:ref:`FitResult` with the following fields defined:
<parameter_name> : :ref:`Parameter`
Fitted value of the <parameter_name> model parameter.
<parameter_name>Uncert : :ref:`UQResult`
Uncertainty quantification of the <parameter_name> model parameter.
param : ndarray
Fitted parameter vector ordered according to the model parameter indices.
paramUncert : :ref:`UQResult`
Uncertainty quantification of the parameter vector ordered according to the model parameter indices.
paramlist : list
List of the fitted parameter names ordered according to the model parameter indices.
model : ndarray
Fitted model response.
regparam : scalar
Regularization parameter value used for the regularization of the linear parameters.
penweights : scalar or list thereof
Penalty weight value(s) used for the penalties specified through ``penalties``.
stats : dict
Goodness of fit statistical estimators
* ``stats['chi2red']`` - Reduced \chi^2 test
* ``stats['r2']`` - R^2 test
* ``stats['rmsd']`` - Root-mean squared deviation (RMSD)
* ``stats['aic']`` - Akaike information criterion
* ``stats['aicc']`` - Corrected Akaike information criterion
* ``stats['bic']`` - Bayesian information criterion
cost : float
Value of the cost function at the solution.
noiselvl : ndarray
Estimated or user-given noise standard deviations of the individual datasets.
"""
if not isinstance(model_,Model):
raise TypeError('The input model must be a valid deerlab.Model object.')
else:
model = copy(model_)
required = len(model._constantsInfo)
if len(constants)!=required:
raise SyntaxError(f'The input model requires {required} constant(s) to be specified. Specify them via fit(model,y,*constants).')
elif len(constants)>0:
constants = np.atleast_1d(constants)
if model.Nlin==0:
model.addlinear('scale',lb=-np.inf,ub=np.inf,description='Scaling factor')
normalization = False
normfactor_keys = []
for key in model._parameter_list():
param = getattr(model,key)
if np.all(param.linear):
if param.normalization is not None:
normfactor_key = f'{key}_scale'
normfactor_keys.append(normfactor_key)
model.addnonlinear(normfactor_key,lb=-np.inf,ub=np.inf,par0=1,description=f'Normalization factor of {key}')
getattr(model,normfactor_key).freeze(1)
normalization = True
# Get boundaries and conditions for the linear and nonlinear parameters
ubl,ub = model._split_linear(model._vecsort(model._getvector('ub')))
lbl,lb = model._split_linear(model._vecsort(model._getvector('lb')))
frozenl,frozen = model._split_linear(model._vecsort(model._getvector('frozen')))
valuesl,values = model._split_linear(model._vecsort(model._getvector('value')))
# Check the initial conditions and whether they are defined
if par0 is None:
_,par0 = model._split_linear(model._vecsort(model._getvector('par0')))
if np.any(par0==None):
raise RuntimeError(f"It appears some start values (par0) have not been specified. Either specify them in the model definition or using the keyword.")
linfrozen = np.full(model.Nlin,None)
linfrozen[frozenl] = valuesl[frozenl]
nonlinfrozen = np.full(model.Nnonlin,None)
nonlinfrozen[frozen] = values[frozen]
if len(linfrozen)==0: linfrozen = [1]
if type(y) is not list: y = [y]
ysplit = y.copy()
y, _, weights, mask, ysubsets, noiselvl = parse_multidatasets(y, None, weights, noiselvl, precondition=False, masks=mask)
sigmas = np.concatenate([np.full_like(yset,sigma) for sigma,yset in zip(noiselvl,ysplit)])
if model.Nlin==0 and model.Nnonlin==0:
raise AssertionError(f'The model has no parameters to fit.')
# Get parameter indices in the order returned by the solver
param_idx = [[] for _ in model._parameter_list('vector')]
idxprev = 0
for islinear in [False,True]:
for n,param in enumerate(model._parameter_list('vector')):
if np.all(getattr(model,param).linear == islinear):
N = len(np.atleast_1d(getattr(model,param).idx))
param_idx[n] = np.arange(idxprev,idxprev + N)
idxprev += N
# If there are penalties in the model
if penalties is not None:
if not hasattr(penalties, '__iter__'):
penalties = [penalties]
# Get the parameter names of the model
modelparam = model._parameter_list('vector')
penaltyfcns = []
for penalty in penalties:
# Determine the indices of the subset of parameters the model depends on
subsets = [getattr(model,modelparam[np.where(np.asarray(modelparam)==param)[0][0]]).idx for param in penalty.signature]
# Adapt the signature of penaltyfcn for snlls
penaltyfcns.append(lambda pnonlin,plin,weight: penalty.penaltyfcn(weight,*[np.concatenate([pnonlin,plin])[subset] for subset in subsets]))
# Prepare the penalties to input to snlls
extrapenalties = lambda weights: [lambda nonlin,lin: penaltyfcn(nonlin,lin,weight) for penaltyfcn,weight in zip(penaltyfcns,weights)]
else:
# If there are no penalties in the model
penalties = []
extrapenalties = lambda *_: None
# Prepare the separable non-linear least-squares solver
Amodel_fcn = lambda param: model.nonlinmodel(*constants,*param)
fitfcn = lambda y,penweights: snlls(y, Amodel_fcn, par0, lb=lb, ub=ub, lbl=lbl, ubl=ubl, mask=mask, weights=weights,
subsets=ysubsets, lin_frozen=linfrozen, nonlin_frozen=nonlinfrozen,
regparam=regparam, reg=reg, regparamrange=regparamrange, noiselvl=noiselvl,
extrapenalty=extrapenalties(penweights), **kwargs)
# Prepare outer optimization of the penalty weights, if necessary
fitfcn = _outerOptimization(fitfcn,penalties,sigmas)
# Run the fitting algorithm
fitresults = fitfcn(y)
penweights = [penalty._weight_value for penalty in penalties]
# If requested, perform a bootstrap analysis
if bootstrap>0:
def bootstrap_fcn(ysim):
fit = fitfcn(np.concatenate(ysim))
if not isinstance(fit.model,list): fit.model = [fit.model]
return (fit.param,*fit.model)
# Bootstrapped uncertainty quantification
if 'verbose' not in kwargs: # Only show the bootstrap progress bar when verbose is not already set in kwargs.
bootstrap_verbose = True
else:
bootstrap_verbose = False
param_uq = bootstrap_analysis(bootstrap_fcn,ysplit,fitresults.model,samples=bootstrap,noiselvl=noiselvl,cores=bootcores, verbose=bootstrap_verbose)
# Include information on the boundaries for better uncertainty estimates
paramlb = model._vecsort(model._getvector('lb'))[np.concatenate(param_idx)]
paramub = model._vecsort(model._getvector('ub'))[np.concatenate(param_idx)]
fitresults.paramUncert = UQResult('bootstrap',data=param_uq[0].samples,lb=paramlb,ub=paramub)
fitresults.param = fitresults.paramUncert.median
# Get the uncertainty estimates for the model response
fitresults.model = [param_uq[n].median for n in range(1,len(param_uq))]
if len(fitresults.model)==1:
fitresults.model = fitresults.model[0]
# Get some basic information on the parameter vector
keys = model._parameter_list(order='vector')
# Dictionary of parameter names and fitted values
FitResult_param = {key : fitvalue if len(fitvalue)>1 else fitvalue[0] for key,fitvalue in zip(keys,[fitresults.param[idx] for idx in param_idx])}
# Dictionary of parameter names and fit uncertainties
FitResult_paramuq = {f'{key}Uncert': model._getparamuq(fitresults.paramUncert,idx) for key,idx in zip(keys,param_idx)}
# Dictionary of other fit quantities of interest
FitResult_dict = {key: getattr(fitresults,key) for key in ['y','mask','param','paramUncert','model','cost','plot','residuals','stats','regparam','regparam_stats','__plot_inputs']}
_paramlist = model._parameter_list('vector')
param_idx = [[] for _ in _paramlist]
idxprev = 0
for islinear in [False,True]:
for n,param in enumerate(_paramlist):
if np.all(getattr(model,param).linear == islinear):
N = len(np.atleast_1d(getattr(model,param).idx))
param_idx[n] = np.arange(idxprev,idxprev + N)
idxprev += N
# Enforce normalization of the linear parameters (if needed) for the final output
FitResult_param_,FitResult_paramuq_ = FitResult_param.copy(),FitResult_paramuq.copy()
if normalization:
for key in keys:
param = getattr(model,key)
if key in normfactor_keys:
param.unfreeze()
if np.all(param.linear):
if param.normalization is not None:
def _scale(x):
x = x + np.finfo(float).eps
return np.mean(x/param.normalization(x))
FitResult_param_[f'{key}_scale'] = _scale(FitResult_param_[key]) # Normalization factor
FitResult_param_[key] = param.normalization(FitResult_param_[key]) # Normalized value
FitResult_paramuq_[f'{key}_scaleUncert'] = FitResult_paramuq_[f'{key}Uncert'].propagate(_scale)
FitResult_paramuq_[f'{key}Uncert'] = FitResult_paramuq_[f'{key}Uncert'].propagate(lambda x: x/FitResult_param_[f'{key}_scale'], lb=param.lb, ub=param.ub) # Normalization of the uncertainty
if len(noiselvl)==1:
noiselvl = noiselvl[0]
# Generate FitResult object from all the dictionaries
fitresult = FitResult({**FitResult_param_,**FitResult_paramuq_, **FitResult_dict,'penweights':penweights,'noiselvl':noiselvl,'paramlist':_paramlist, '_param_idx':param_idx})
fitresult._summary = _print_fitresults(fitresult,model)
return fitresult
#==============================================================================================
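# Illustrative usage sketch (not part of the original module). It builds a toy
# one-Gaussian model with the Model class imported above and fits synthetic
# data with fit(). The parameter interface (.set(lb=..., ub=..., par0=...)) is
# assumed to follow the DeerLab modelling documentation; the result attributes
# accessed below are the ones documented in the fit() docstring.
if __name__ == "__main__":
    x = np.linspace(0, 10, 200)
    def gauss(center, std):
        return np.exp(-(x - center)**2 / (2 * std**2))
    mymodel = Model(gauss)
    mymodel.center.set(lb=0, ub=10, par0=5)
    mymodel.std.set(lb=0.1, ub=5, par0=1)
    y = gauss(4, 0.8) + np.random.normal(0, 0.01, x.size)
    result = fit(mymodel, y)
    print(result.param)      # fitted parameters (plus the automatic scale factor)
    print(result.noiselvl)   # estimated noise standard deviation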
# File: /KPyGithub-1.32a1.tar.gz/KPyGithub-1.32a1/github/Installation.py
# ########################## Copyrights and license ############################
# #
# Copyright 2017 Jannis Gebauer <[email protected]> #
# #
# This file is part of PyGithub. #
# http://pygithub.github.io/PyGithub/v1/index.html #
# #
# PyGithub is free software: you can redistribute it and/or modify it under #
# the terms of the GNU Lesser General Public License as published by the Free #
# Software Foundation, either version 3 of the License, or (at your option) #
# any later version. #
# #
# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #
# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #
# details. #
# #
# You should have received a copy of the GNU Lesser General Public License #
# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #
# #
# ##############################################################################
import github.GithubObject
import github.PaginatedList
import github.Gist
import github.Repository
import github.NamedUser
import github.Plan
import github.Organization
import github.UserKey
import github.Issue
import github.Event
import github.Authorization
import github.Notification
INTEGRATION_PREVIEW_HEADERS = {"Accept": "application/vnd.github.machine-man-preview+json"}
class Installation(github.GithubObject.NonCompletableGithubObject):
"""
This class represents Installations as in https://developer.github.com/v3/integrations/installations
"""
def __repr__(self):
return self.get__repr__({"id": self._id.value})
@property
def id(self):
return self._id
def get_repos(self):
"""
:calls: `GET /installation/repositories <https://developer.github.com/v3/integrations/installations/#list-repositories>`_
:rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.Repository.Repository`
"""
url_parameters = dict()
return github.PaginatedList.PaginatedList(
contentClass=github.Repository.Repository,
requester=self._requester,
firstUrl="/installation/repositories",
firstParams=url_parameters,
headers=INTEGRATION_PREVIEW_HEADERS,
list_item='repositories'
)
def _initAttributes(self):
self._id = github.GithubObject.NotSet
def _useAttributes(self, attributes):
if "id" in attributes: # pragma no branch
self._id = self._makeIntAttribute(attributes["id"])
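# Illustrative usage sketch (not part of the original module). Installation
# objects are created by the integration/installation authentication flow
# elsewhere in PyGithub, so `installation` below is assumed to be such an
# object already bound to an authenticated Requester:
#
#   for repo in installation.get_repos():
#       print(repo.full_name)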
# File: /EnergySystemModels-0.1.17.post63-py3-none-any.whl/PyqtSimulator/nodes/cooling_coil.py
from PyQt5.QtCore import *
from PyqtSimulator.calc_conf import *
from PyqtSimulator.calc_node_base import *
from NodeEditor.nodeeditor.utils import dumpException
############# models of a refrigeration (chiller) unit #################
from AHU.Coil import CoolingCoil
from AHU.Connect import Air_connect
from AHU.AirPort.AirPort import AirPort
class CalcCoolingCoilContent(QDMNodeContentWidget):
def initUI(self):
self.P_drop_lbl=QLabel("Perte de pression (bar)", self)
self.P_drop_edit=QLineEdit("0.001", self)
self.HA_target_lbl=QLabel("poid d'eau (g/kgas)", self)
self.HA_target_edit=QLineEdit("5", self)
self.Tsat_lbl=QLabel("Temp Eau glacée ou évap (°C)", self)
self.Tsat_edit=QLineEdit("8.0", self)
self.Qth_lbl_title = QLabel("Qhex(kW)", self)
self.Qth_lbl = QLabel("", self)
self.Eff_lbl = QLabel("", self)
self.FB_lbl = QLabel("", self)
# self.Qlosses_lbl_title = QLabel("Energie dissipée (kW):", self)
# self.Qlosses_lbl = QLabel("", self)
# self.Tis_lbl_title = QLabel("Temp. isentrop. (°C)", self)
# self.Tis_lbl = QLabel("", self)
self.layout=QVBoxLayout()
self.layout.addWidget(self.P_drop_lbl)
self.layout.addWidget(self.P_drop_edit)
self.layout.addWidget(self.Tsat_lbl)
self.layout.addWidget(self.Tsat_edit)
self.layout.addWidget(self.HA_target_lbl)
self.layout.addWidget(self.HA_target_edit)
self.layout.addWidget(self.Qth_lbl_title)
self.layout.addWidget(self.Qth_lbl)
self.layout.addWidget(self.Eff_lbl)
self.layout.addWidget(self.FB_lbl)
# self.layout.addWidget(self.Qlosses_lbl_title)
# self.layout.addWidget(self.Qlosses_lbl)
# self.layout.addWidget(self.Tis_lbl_title)
# self.layout.addWidget(self.Tis_lbl)
self.setLayout(self.layout)
self.layout.setAlignment(Qt.AlignRight)
self.layout.setObjectName(self.node.content_label_objname)
def serialize(self):
res = super().serialize()
res['P_drop'] = self.P_drop_edit.text()
res2 = super().serialize()
res2['HA_target'] = self.HA_target_edit.text()
res3 = super().serialize()
res3['Tsat'] = self.Tsat_edit.text()
# res4 = super().serialize()
# res4['Tsat'] = self.F_kgs_edit.text()
return res,res2,res3
def deserialize(self, data, hashmap={}):
res = super().deserialize(data, hashmap)
res2 = super().deserialize(data, hashmap)
# res3 = super().deserialize(data, hashmap)
# res4 = super().deserialize(data, hashmap)
# print("res=",res,res2,res3,res4)
# print("dataaaaaaaaaa=",data)
try:
P_drop = data[0]["P_drop"]
HA_target = data[1]['HA_target']
# value3 = data[2]['value3']
Tsat = data[2]['Tsat']
# print("values=",value,HA_target,value3,Tsat)
self.P_drop_edit.setText(P_drop)
self.HA_target_edit.setText(HA_target)
self.Tsat_edit.setText(Tsat)
return True & res & res2
except Exception as e:
dumpException(e)
return res, res2
@register_node(OP_NODE_COOLING_COIL)
class CalcNode_cooling_coil(CalcNode):
icon = "icons/cooling_coil.png"
op_code = OP_NODE_COOLING_COIL
op_title = "Cooling Coil"
content_label = "/"
content_label_objname = "calc_node_cooling_coil"
def __init__(self, scene):
super().__init__(scene, inputs=[2], outputs=[1])
self.eval()
def initInnerClasses(self):
self.content = CalcCoolingCoilContent(self)
self.grNode = CalcGraphicsNode(self)
self.grNode.height=300
self.grNode.width=250
self.content.P_drop_edit.textChanged.connect(self.onInputChanged)
self.content.Tsat_edit.textChanged.connect(self.onInputChanged)
self.content.HA_target_edit.textChanged.connect(self.onInputChanged)
def evalOperation(self, input1, input2):
self.value=[]
a=AirPort()
a.HA=input1[0]
a.F_kgs=input1[1]
a.P=input1[2]*1e5
a.h=input1[3]
COOLING_COIL=CoolingCoil.Object()
Air_connect(COOLING_COIL.Inlet,a)
################""""
u_P_drop = self.content.P_drop_edit.text()
s_P_drop = float(u_P_drop)
####################
COOLING_COIL.P_drop=1e5*s_P_drop
COOLING_COIL.HA_target=float(self.content.HA_target_edit.text())
COOLING_COIL.T_sat=float(self.content.Tsat_edit.text())
COOLING_COIL.calculate()
self.value.append(COOLING_COIL.Outlet.HA) # outlet humidity ratio (HA)
self.value.append(COOLING_COIL.Outlet.F_kgs) # mass flow rate
self.value.append(COOLING_COIL.Outlet.P/1e5) # minimum pressure (bar)
self.value.append(COOLING_COIL.Outlet.h) # enthalpy
self.content.Qth_lbl.setText("%f" % (COOLING_COIL.Qth)) #"%d" %
self.content.Eff_lbl.setText("Efficacité="+"%f" % (COOLING_COIL.Eff)) #"%d" %
self.content.FB_lbl.setText("Facteur bypass="+"%f" % (COOLING_COIL.FB)) #"%d" %
return self.value
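# Illustrative standalone sketch (not part of the original module): drives the
# AHU CoolingCoil model directly, outside the node editor, with the same call
# sequence used in evalOperation() above. The inlet values are placeholders.
def demo_cooling_coil():
    inlet = AirPort()
    inlet.HA = 10.0       # inlet humidity ratio (g/kg dry air), placeholder
    inlet.F_kgs = 1.0     # inlet mass flow rate (kg/s), placeholder
    inlet.P = 1.01325e5   # inlet pressure (Pa)
    inlet.h = 50.0        # inlet enthalpy, placeholder
    coil = CoolingCoil.Object()
    Air_connect(coil.Inlet, inlet)
    coil.P_drop = 1e5 * 0.001   # pressure drop (Pa)
    coil.HA_target = 5.0        # target humidity ratio (g/kg dry air)
    coil.T_sat = 8.0            # chilled-water / evaporator temperature (degC)
    coil.calculate()
    return coil.Qth, coil.Outlet.HA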
# File: /HOPP-0.0.5-py3-none-any.whl/tools/optimization/optimizer/CMA_ES_optimizer.py
import math
from typing import (
List,
Optional,
Tuple,
)
# matplotlib.use('tkagg')
import numpy as np
import scipy
from ..data_logging.data_recorder import DataRecorder
from .ask_tell_optimizer import AskTellOptimizer
from .dimension.gaussian_dimension import DimensionInfo, Gaussian
# sys.path.append('../examples/flatirons')
class CMAESOptimizer(AskTellOptimizer):
"""
An implementation of the covariance matrix adaptation evolution strategy
http://www.cmap.polytechnique.fr/~nikolaus.hansen/copenhagen-cma-es.pdf
https://arxiv.org/pdf/1604.00772.pdf
"""
recorder: Optional[DataRecorder]
_best_candidate: Optional[Tuple[float, float, any]]
verbose: bool
_lambda: int
mu: int
c_m: float
sigma: float
n: int
chi_n: float
C: np.ndarray
p_c: np.ndarray
p_sigma: np.ndarray
c_c: float
c_sigma: float
c_1: float
w: np.ndarray
mu_eff: float
sum_w: float
c_mu: float
d_sigma: float
m: np.ndarray
D: np.ndarray
B: np.ndarray
eigenvalue: float
count_eval: int
def __init__(self,
generation_size: int = 100,
selection_proportion: float = .5,
c_m: float = 1.0,
sigma: float = .5,
dimensions: Optional[List[DimensionInfo]] = None,
verbose: bool = False,
) -> None:
self.recorder = None
self._best_candidate = None
self.verbose = verbose
self._lambda = generation_size
self.mu = math.floor(selection_proportion * generation_size)
self.c_m = c_m
self.sigma = sigma
self.alpha_cov = 2
if dimensions is not None:
self.setup(dimensions)
def setup(self, dimensions: [Gaussian], recorder: DataRecorder) -> None:
"""
Setup parameters given initial conditions of the candidate
:param dimensions: list of search dimensions
:param recorder: data recorder
"""
n = len(dimensions)
self.n = n
self.print('_n {}', self.n)
self.chi_n = math.sqrt(n) * (1.0 - 1.0 / (4.0 * n) + 1.0 / (21.0 * (n ** 2)))
self.print('_chi_n {}', self.chi_n)
self.m = np.fromiter((d.mean() for d in dimensions), float).reshape((n, 1))
self.print('_m {}', self.m)
# Strategy parameter setting: Selection
# muXone recombination weights
w = np.zeros((self._lambda, 1))
for i in range(self.mu):
w[i] = math.log(self.mu + .5) - math.log(i + 1)
w /= np.sum(w) # normalize recombination weights array
self.w = w
self.print('_w {}', self.w)
self.print('sum w_i, i = 1 ... mu {}', sum([w[i] for i in range(self.mu)]))
# variance-effective size of mu
self.mu_eff = np.sum(w) ** 2 / np.sum(w ** 2)
self.print('_mu_eff {}', self.mu_eff)
# Strategy parameter setting: Adaptation
# time constant for accumulation for C
self.c_c = (4 + self.mu_eff / n) / (n + 4 + 2 * self.mu_eff / n)
self.print('_c_c {}', self.c_c)
# t-const for cumulation for sigma control
self.c_sigma = (self.mu_eff + 2) / (n + self.mu_eff + 5)
self.print('_c_sigma {}', self.c_sigma)
# learning rate for rank-one update of C
# self.c_1 = self.alpha_cov / ((n + 1.3) ** 2 + self.mu_eff)
self.c_1 = 2 / ((n + 1.3) ** 2 + self.mu_eff)
self.print('_c_1 {}', self.c_1)
# learning rate for rank-mu update
# self.c_mu = min(1 - self.c_1,
# self.alpha_cov * (self.mu_eff - 2 + 1.0 / self.mu_eff) /
# ((n + 2) ** 2 + self.alpha_cov * self.mu_eff / 2))
self.c_mu = 2 * (self.mu_eff - 2 + 1 / self.mu_eff) / ((n + 2) ** 2 + 2 * self.mu_eff / 2)
self.print('_c_mu {}', self.c_mu)
# damping for sigma
self.d_sigma = 1 + 2 * max(0.0, math.sqrt((self.mu_eff - 1) / (n + 1)) - 1) + self.c_sigma
self.print('_d_sigma {}', self.d_sigma)
# Initialize dynamic (internal) strategy parameters and constants
self.p_c = np.zeros((n, 1), dtype=np.float64)
self.p_sigma = np.zeros((n, 1), dtype=np.float64)
self.B = np.eye(n)
self.D = np.eye(n)
for i, d in enumerate(dimensions):
self.D[i, i] = math.sqrt(d.variance())
self.print('D\n{}', self.D)
BD = np.matmul(self.B, self.D)
self.C = np.matmul(BD, BD.transpose())
self.print('C\n{}', self.C)
self.eigenvalue = 0.0
self.count_eval = 0
self.recorder = recorder
self.recorder.add_columns('generation', 'mean', 'variance', 'covariance', '_sigma', '_p_c', '_p_sigma')
def stop(self) -> bool:
"""
:return: True when the optimizer thinks it has reached a stopping point
"""
return False
def ask(self, num: Optional[int] = None) -> [any]:
"""
:param num: the number of search points to return. If undefined, the optimizer will choose how many to return.
:return: a list of search points generated by the optimizer
"""
n = self.n
zero = np.zeros(n)
eye = np.eye(n)
BD = np.matmul(self.B, self.D)
# self.print('B\n{}', self.B)
# self.print('D\n{}', self.D)
# self.print('BD\n{}', BD)
candidates = []
for _ in range(self._lambda):
z = np.random.multivariate_normal(zero, eye).reshape((n, 1))
# self.print('z\n{}', z)
x = self.m + self.sigma * np.matmul(BD, z)
# self.print('x\n{}', x)
candidates.append(x.reshape(n))
self.count_eval += len(candidates)
return candidates
def tell(self, evaluations: [Tuple[float, float, any]]) -> None:
"""
Updates the optimizer with the objective evaluations of a list of search points
:param evaluations: a list of tuples whose first two elements are the objective values used for ranking and whose third element is the search point
"""
def best_key(e):
return e[1], e[0]
best = max(evaluations, key=best_key)
self._best_candidate = best if self._best_candidate is None else max((self._best_candidate, best), key=best_key)
evaluations.sort(key=lambda e: (e[0], e[1]), reverse=True)
# selection and recombination
x = [np.array(e[2]).reshape((self.n, 1)) for e in evaluations]
# self.print('x\n{}', x)
y = [(x_i - self.m) / self.sigma for x_i in x]
BD = np.matmul(self.B, self.D)
# self.print('B\n{}', self.B)
# self.print('D\n{}', self.D)
# self.print('BD\n{}', BD)
BDinv = np.linalg.inv(BD)
# self.print('BDinv\n{}', BDinv)
# C = np.matmul(BD, BD.transpose())
# Cinv = np.matmul(np.linalg.inv(self.D), self.B.transpose())
z = [np.matmul(BDinv, y_i) for y_i in y]
self.m = sum((self.w[i] * x[i] for i in range(self._lambda)))
z_mean = sum((self.w[i] * z[i] for i in range(self._lambda)))
# Accumulation: Update evolution paths
self.print('m\n{}', self.m)
self.print('z_mean\n{}', z_mean)
self.print('B\n{}', self.B)
self.p_sigma = (1.0 - self.c_sigma) * self.p_sigma + \
math.sqrt(self.c_sigma * (2.0 - self.c_sigma) * self.mu_eff) * \
np.matmul(self.B, z_mean)
self.print('p_sigma {}', self.p_sigma)
h_sigma = 0.0
h_sigma_threshold = np.linalg.norm(self.p_sigma) / np.sqrt(
1 - (1 - self.c_sigma) ** (2 * self.count_eval / self._lambda)) / self.chi_n
self.print('h_sigma_threshold {}', h_sigma_threshold)
if h_sigma_threshold < (1.4 + 2 / (self.n + 1)):
h_sigma = 1
self.print('h_sigma {}', h_sigma)
# self.print('pc1\n{}', (1.0 - self.c_c) * self.p_c)
# self.print('pc2\n{}', math.sqrt(self.c_c * (2.0 - self.c_c) * self.mu_eff))
# self.print('pc3\n{}', h_sigma * math.sqrt(self.c_c * (2.0 - self.c_c) * self.mu_eff))
# self.print('pc4\n{}', z_mean)
# self.print('pc5\n{}', BD)
# self.print('pc6\n{}', np.matmul(BD, z_mean))
self.p_c = (1.0 - self.c_c) * self.p_c + \
h_sigma * math.sqrt(self.c_c * (2.0 - self.c_c) * self.mu_eff) * np.matmul(BD, z_mean)
self.print('p_c\n{}', self.p_c)
# Adapt covariance matrix C
# self.print('C1\n{}', self.C)
self.print('C2 {}', (1 - self.c_1 - self.c_mu))
# self.print('C2\n{}', (1 - self.c_1 - self.c_mu) * self.C)
# self.print('C3\n{}', self.p_c)
# self.print('C4\n{}', np.matmul(self.p_c, self.p_c.transpose()))
# self.print('C6 {}', (1 - h_sigma) * self.c_c * (2 - self.c_c))
self.print('c_1 {}', self.c_1)
self.print('c_mu {}', self.c_mu)
# self.print('C7\n{}', (1 - h_sigma) * self.c_c * (2 - self.c_c) * self.C)
# print('C5\n', self.c_1 * (
# np.matmul(self.p_c, self.p_c.transpose()) + (1 - h_sigma) * self.c_c * (2 - self.c_c) * self.C))
# self.print('C8\n{}', sum((self.w[i] * np.matmul(y[i], y[i].transpose()) for i in range(self._lambda))))
# self.print('C9\n{}',
# self.c_mu * sum((self.w[i] * np.matmul(y[i], y[i].transpose()) for i in range(self._lambda))))
self.C = (1 - self.c_1 - self.c_mu) * self.C + \
self.c_1 * (np.matmul(self.p_c, self.p_c.transpose()) +
(1 - h_sigma) * self.c_c * (2 - self.c_c) * self.C) + \
self.c_mu * sum((self.w[i] * np.matmul(y[i], y[i].transpose()) for i in range(self._lambda)))
self.print('C\n{}', self.C)
# Adapt step-size sigma
p_sigma_norm = scipy.linalg.norm(self.p_sigma)
self.print('p_sigma_norm {}', p_sigma_norm)
self.sigma = self.sigma * math.exp(
(self.c_sigma / self.d_sigma) *
(p_sigma_norm / self.chi_n - 1))
self.print('sigma {}', self.sigma)
# Update B and D from C
if self.count_eval - self.eigenvalue > self._lambda / (self.c_1 + self.c_mu) / self.n / 10:
self.eigenvalue = self.count_eval
self.print('eigenvalue {}', self.eigenvalue)
self.C = np.real(np.triu(self.C) + np.triu(self.C, 1).transpose()) # force symmetry
# self.print('C symmetric\n{}', self.C)
w, v = np.linalg.eig(self.C) # eigen decomposition
self.D = np.real(np.diag(w))
self.B = np.real(v)
# self.B, self.D
# B are the normalized eigenvectors
# self.print('D initial\n{}', self.D)
self.D = np.real(np.diag(np.sqrt(np.diag(self.D))))
# D are the standard deviations
# self.print('B\n{}', self.B)
# self.print('D\n{}', self.D)
# Escape flat fitness
if evaluations[0][0] == evaluations[math.ceil(.7 * self._lambda)][0]:
self.sigma *= math.exp(.2 + self.c_sigma / self.d_sigma)
print('warning: flat fitness, consider reformulating the objective')
self.recorder.accumulate(evaluations, self.mean(), self.variance(), self.C, self.sigma, self.p_c,
self.p_sigma)
def best_solution(self) -> Optional[Tuple[float, float, any]]:
"""
:return: the current best solution
"""
return self._best_candidate
def central_solution(self) -> (Optional[float], Optional[float], any):
return None, None, self.mean()
def get_num_candidates(self) -> int:
return self._lambda
def get_num_dimensions(self) -> int:
return self.m.size
def mean(self) -> any:
return self.m.reshape(self.get_num_dimensions())
def variance(self) -> any:
return self.C.diagonal()
def print(self, message: str, *args, **kwargs) -> None:
if self.verbose:
print(message.format(*args, **kwargs))
# def setup(self, dimensions: [Gaussian], recorder: DataRecorder) -> None:
# """
# Setup parameters given initial conditions of the candidate
# :param dimensions: list of search dimensions
# :param recorder: data recorder
# """
# n = len(dimensions)
# self.n = n
# self.print('_n {}', self.n)
#
# self.chi_n = math.sqrt(n) * (1.0 - 1.0 / (4.0 * n) + 1.0 / (21.0 * (n ** 2)))
# self.print('_chi_n {}', self.chi_n)
#
# self.p_sigma = np.zeros(n, dtype=np.float64)
# self.p_c = np.zeros((n, n), dtype=np.float64)
#
# self.m = np.fromiter((d.mean() for d in dimensions), float)
# self.print('_m {}', self.m)
# self.C = np.zeros((n, n))
# for i, d in enumerate(dimensions):
# self.C[i, i] = d.variance()
# self.print('_C {}', self.C)
#
# # self.w = np.zeros(self._lambda)
#
# #._lambda, w, c_sigma, d_sigma, c_c, c_1, c_mu
# w_prime = np.zeros(self._lambda)
# weight = math.log((self._lambda + 1) / 2)
# for i in range(self._lambda):
# w_prime[i] = weight - math.log(i + 1)
# self.print('w_prime {}', w_prime)
#
# self.mu_eff = np.sum(w_prime) ** 2 / np.sum(w_prime ** 2)
# self.print('_mu_eff {}', self.mu_eff)
#
# mu_mask = np.zeros(self._lambda)
# for i in range(self.mu):
# mu_mask[i] = 1.0
#
# w_prime_mu = w_prime * mu_mask
# mu_eff_neg = np.sum(w_prime_mu) ** 2 / np.sum(w_prime_mu ** 2)
# self.print('mu_eff_bar {}', mu_eff_neg)
#
# self.c_1 = self.alpha_cov / ((n + 1.3) ** 2 + self.mu_eff)
# self.print('_c_1 {}', self.c_1)
#
# self.c_c = (4 + self.mu_eff / n) / (n + 4 + 2 * self.mu_eff / n)
# self.print('_c_c {}', self.c_c)
#
# self.c_mu = min(1 - self.c_1,
# self.alpha_cov * (self.mu_eff - 2 + 1.0 / self.mu_eff) /
# ((n + 2) ** 2 + self.alpha_cov * self.mu_eff / 2))
# self.print('_c_mu {}', self.c_mu)
#
# self.c_sigma = (self.mu_eff + 2) / (n + self.mu_eff + 5)
# self.print('_c_sigma {}', self.c_sigma)
#
# self.d_sigma = 1 + 2 * max(0.0, math.sqrt((self.mu_eff - 1) / (n + 1)) - 1) + self.c_sigma
# self.print('_d_sigma {}', self.d_sigma)
#
# alpha_mu_neg = 1 + self.c_1 / self.c_mu
# self.print('alpha_mu_neg {}', alpha_mu_neg)
#
# alpha_mu_eff_neg = 1 + (2 * mu_eff_neg) / (self.mu_eff + 2)
# self.print('alpha_mu_eff_neg {}', alpha_mu_eff_neg)
#
# alpha_pos_def_neg = (1 - self.c_1 - self.c_mu) / (n * self.c_mu)
# self.print('alpha_pos_def_neg {}', alpha_pos_def_neg)
#
# min_alpha = min(alpha_mu_neg, alpha_mu_eff_neg, alpha_pos_def_neg)
# self.print('min_alpha {}', min_alpha)
#
# sum_positive_w_prime = 0.0
# sum_negative_w_prime = 0.0
# for w_prime_i in w_prime:
# if w_prime_i > 0:
# sum_positive_w_prime += w_prime_i
# else:
# sum_negative_w_prime -= w_prime_i
# self.print('sum_positive_w_prime {}', sum_positive_w_prime)
# self.print('sum_negative_w_prime {}', sum_negative_w_prime)
#
# w = np.zeros(self._lambda)
# for i in range(self._lambda):
# w_prime_i = w_prime[i]
# w_i = 0.0
# if w_prime_i >= 0:
# w_i = w_prime_i / sum_positive_w_prime
# else:
# w_i = w_prime_i * min_alpha / sum_negative_w_prime
# w[i] = w_i
# self.w = w
# self.print('_w {}', self.w)
# self.print('sum w_i, i = 1 ... mu {}', sum([w[i] for i in range(self.mu)]))
#
# self.sum_w = np.sum(self.w)
# self.print('_sum_w {}', self.sum_w)
#
# # make sure that u_w ~= .3._lambda
# # norm_weights = math.sqrt(.3 * self._lambda / np.sum(self.w ** 2))
# # self.w *= norm_weights
#
# self.c_m = 1.0
# self.print('_c_m {}', self.c_m)
#
# # self.c_mu = min(1.0, self.mu_eff / n ** 2)
# # c_norm = max(1.0, self.c_1 + self.c_mu)
# # self.c_1 /= c_norm
# # self.c_mu /= c_norm
# # self.d_sigma = 1 + math.sqrt(self.mu_eff / n)
#
# self.recorder = recorder
# self.recorder.add_columns('generation', 'mean', 'variance', 'covariance', '_sigma', '_p_c', '_p_sigma')
# def tell(self, evaluations: [Tuple[float, any]]) -> None:
# """
# Updates the optimizer with the objective evaluations of a list of search points
# :param evaluations: a list of tuples of (evaluation, search point)
# """
# evaluations.sort(key._lambda evaluation: evaluation[0], reverse=True)
# if self.best_candidate is None or evaluations[0][0] > self.best_candidate[0]:
# self.best_candidate = evaluations[0]
#
# # selection_size = math.ceil(self.selection_proportion * len(evaluations))
# # del evaluations[selection_size:]
#
# # selection and recombination
# x = [e[1] for e in evaluations]
# y = [(x_i - self.m) / self.sigma for x_i in x]
# y_weighted = [self.w[i] * y_i for i, y_i in enumerate(y)]
# y_weighted_sum = sum(y_weighted)
# self.print('y_weighted_sum {}', y_weighted_sum)
#
# print('_m', self.m, self.c_m * self.sigma * y_weighted_sum,
# self.m + self.c_m * self.sigma * y_weighted_sum)
# self.m = self.m + self.c_m * self.sigma * y_weighted_sum
#
# inv_sqrt_C = scipy.linalg.fractional_matrix_power(self.C, -.5)
# self.print('inv_sqrt_C {}', y_weighted_sum)
#
# # step-size control
# # ps = (1 - cs) * ps...
# # + sqrt(cs * (2 - cs) * mueff) * invsqrtC * (xmean - xold) / sigma;
# self.p_sigma = (1.0 - self.c_sigma) * self.p_sigma + \
# math.sqrt(self.c_sigma * (2.0 - self.c_sigma) * self.mu_eff) * \
# np.matmul(inv_sqrt_C, y_weighted_sum)
# self.print('_p_sigma {}', self.p_sigma)
#
# # hsig = sum(ps. ^ 2) / (1 - (1 - cs) ^ (2 * counteval ._lambda)) / N < 2 + 4 / (N + 1);
# p_sigma_norm = scipy.linalg.norm(self.p_sigma)
# self.print('p_sigma_norm {}', p_sigma_norm)
#
# self.sigma = self.sigma * math.exp(
# (self.c_sigma / self.d_sigma) *
# (p_sigma_norm / self.chi_n - 1))
# self.print('_sigma {}', self.sigma)
#
# # covariance matrix adaptation
# # hsig = sum(ps. ^ 2) / (1 - (1 - cs) ^ (2 * counteval ._lambda)) / N < 2 + 4 / (N + 1);
# h_sigma = 0.0
# if p_sigma_norm / np.sqrt(1.0 - (1.0 - self.c_sigma) ** 2) < (1.4 + 2 / (self.n + 1)) * self.chi_n:
# h_sigma = 1
# self.print('h_sigma {}', h_sigma)
#
# self.p_c = (1.0 - self.c_c) * self.p_c + \
# h_sigma * math.sqrt(self.c_c * (2.0 - self.c_c) * self.mu_eff) * y_weighted_sum
# self.print('_p_c {}', self.p_c)
#
# # w_i_dot = [w_i if w_i >= 0 else self.n / (np.linalg.norm(np.) ** 2) for i, w_i in enumerate(self.w)]
#
# del_h_sigma = 1 if (1 - h_sigma) * self.c_c * (2 - self.c_c) <= 1 else 0
# self.print('del_h_sigma {}', del_h_sigma)
#
# w_dot = np.zeros(self.w.shape)
# for i, w_i in enumerate(self.w):
# w_i_dot = w_i
# if w_i < 0:
# w_i_dot *= self.n / (np.linalg.norm(np.matmul(inv_sqrt_C, y[i])) ** 2)
# w_dot[i] = w_i_dot
#
# self.print('w_dot {}', w_dot)
#
# self.C = (1 + self.c_1 * del_h_sigma - self.c_1 - self.c_mu * self.sum_w) * self.C + \
# self.c_1 * np.matmul(self.p_c, self.p_c.transpose()) + \
# self.c_mu * sum([w_dot[i] * np.matmul(y[i], y[i].transpose()) for i, y_i in enumerate(y)])
# self.print('_C {}', self.C)
#
# # x = np.empty((self.mean.size, len(evaluations)))
# # for i, e in enumerate(evaluations):
# # x[:, i] = e[1]
# # y[:, i]
#
# # samples = np.empty((self.mean.size, len(evaluations)))
# #
# # for i, e in enumerate(evaluations):
# # samples[:, i] = e[1]
# #
# # self.mean = np.mean(samples, 1)
# # self.covariance = np.cov(samples, ddof=1) * self.variance_scales
#
# self.p_sigma = np.real(self.p_sigma)
# self.sigma = np.real(self.sigma)
# self.p_c = np.real(self.p_c)
# self.m = np.real(self.m)
# self.C = np.real(self.C)
#
# self.recorder.accumulate(evaluations, self.mean(), self.variance(), self.C, self.sigma, self.p_c,
# self.p_sigma) | PypiClean |
/LiftoffTools-0.4.4.tar.gz/LiftoffTools-0.4.4/liftofftools/cli_arguments.py | import argparse
def parse_args(arglist):
parser = argparse.ArgumentParser(description='Compare gene annotations across genome assemblies')
parser.add_argument('subcommand', choices=['clusters', 'variants', 'synteny', 'all'])
parser.add_argument('-r', help='reference fasta', required=True)
parser.add_argument('-t', help='target fasta', required=True)
parser.add_argument('-rg', metavar='GFF/GTF or DB', help='reference annotation file to lift over in GFF or GTF '
'format or gffutils database created in previous liftoff '
'or liftofftools run', required=True)
parser.add_argument('-tg', metavar='GFF/GTF or DB', help='target annotation file to lift over in GFF or GTF '
'format or gffutils database created in previous '
'liftoff or liftofftools run', required=True)
parser.add_argument('-c', action='store_true', default=False, help='analyze protein coding gene clusters only',
required=False)
parser.add_argument('-f', required=False, help='text file with additional feature types besides genes to analyze')
parser.add_argument('-infer-genes', required=False, action='store_true', default=False)
parser.add_argument('-dir', default="liftofftools_output", required=False, help="output directory")
parser.add_argument('-force', required=False, default=False, action='store_true', help="force overwrite of "
"output/intermediate files in -dir")
clusters_group = parser.add_argument_group("clusters arguments")
clusters_group.add_argument('-mmseqs_path', help="mmseqs path if not in working directory or PATH", required=False)
clusters_group.add_argument('-mmseqs_params', default="--min-seq-id 0.9 -c 0.9", metavar='=STR',
required=False, help='space delimited list of additional mmseqs parameters. Default="--min-seq-id 0.9 -c 0.9"')
synteny_group = parser.add_argument_group("synteny arguments")
synteny_group.add_argument('-edit-distance', help="calculate edit distance between reference gene order and "
"target gene order", required=False, action='store_true')
synteny_group.add_argument('-r-sort', help="txt file with the order of the reference chromosomes to be plotted on the x-axis", required=False, default=None)
synteny_group.add_argument('-t-sort', help="txt file with the order of the target chromosomes to be plotted on the y-axis", required=False, default=None)
parser.add_argument('-V', '--version', help='show program version', action='version', version='v0.4.3')
parser._positionals.title = 'Subcommands'
args = parser.parse_args(arglist)
if bool(args.r_sort) ^ bool(args.t_sort):
parser.error('-r-sort and -t-sort must be given together')
return args | PypiClean |
/CodeIntel-2.0.0b19-cp34-cp34m-macosx_10_12_x86_64.whl/codeintel/test2/bits/rails/habitual-readers/README | == Welcome to Rails
Rails is a web-application and persistence framework that includes everything
needed to create database-backed web-applications according to the
Model-View-Controller (MVC) pattern of separation. This pattern splits the view (also
called the presentation) into "dumb" templates that are primarily responsible
for inserting pre-built data in between HTML tags. The model contains the
"smart" domain objects (such as Account, Product, Person, Post) that holds all
the business logic and knows how to persist themselves to a database. The
controller handles the incoming requests (such as Save New Account, Update
Product, Show Post) by manipulating the model and directing data to the view.
In Rails, the model is handled by what's called an object-relational mapping
layer entitled Active Record. This layer allows you to present the data from
database rows as objects and embellish these data objects with business logic
methods. You can read more about Active Record in
link:files/vendor/rails/activerecord/README.html.
The controller and view are handled by the Action Pack, which handles both
layers by its two parts: Action View and Action Controller. These two layers
are bundled in a single package due to their heavy interdependence. This is
unlike the relationship between the Active Record and Action Pack that is much
more separate. Each of these packages can be used independently outside of
Rails. You can read more about Action Pack in
link:files/vendor/rails/actionpack/README.html.
== Getting started
1. Start the web server: <tt>ruby script/server</tt> (run with --help for options)
2. Go to http://localhost:3000/ and get "Welcome aboard: You’re riding the Rails!"
3. Follow the guidelines to start developing your application
== Web servers
Rails uses the built-in web server in Ruby called WEBrick by default, so you don't
have to install or configure anything to play around.
If you have lighttpd installed, though, it'll be used instead when running script/server.
It's considerably faster than WEBrick and suited for production use, but requires additional
installation and currently only works well on OS X/Unix (Windows users are encouraged
to start with WEBrick). We recommend version 1.4.11 and higher. You can download it from
http://www.lighttpd.net.
If you want something that's halfway between WEBrick and lighttpd, we heartily recommend
Mongrel. It's a Ruby-based web server with a C-component (so it requires compilation) that
also works very well with Windows. See more at http://mongrel.rubyforge.org/.
But of course it's also possible to run Rails with the premier open source web server Apache.
To get decent performance, though, you'll need to install FastCGI. For Apache 1.3, you want
to use mod_fastcgi. For Apache 2.0+, you want to use mod_fcgid.
See http://wiki.rubyonrails.com/rails/pages/FastCGI for more information on FastCGI.
== Example for Apache conf
<VirtualHost *:80>
ServerName rails
DocumentRoot /path/application/public/
ErrorLog /path/application/log/server.log
<Directory /path/application/public/>
Options ExecCGI FollowSymLinks
AllowOverride all
Allow from all
Order allow,deny
</Directory>
</VirtualHost>
NOTE: Be sure that CGIs can be executed in that directory as well. So ExecCGI
should be on and ".cgi" should respond. All requests from 127.0.0.1 go
through CGI, so no Apache restart is necessary for changes. All other requests
go through FCGI (or mod_ruby), which requires a restart to show changes.
== Debugging Rails
Have "tail -f" commands running on both the server.log, production.log, and
test.log files. Rails will automatically display debugging and runtime
information to these files. Debugging info will also be shown in the browser
on requests from 127.0.0.1.
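For example, from the application root you might keep a window open running:
tail -f log/server.log log/production.log log/test.log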
== Breakpoints
Breakpoint support is available through the script/breakpointer client. This
means that you can break out of execution at any point in the code, investigate
and change the model, AND then resume execution! Example:
class WeblogController < ActionController::Base
def index
@posts = Post.find_all
breakpoint "Breaking out from the list"
end
end
So the controller will accept the action, run the first line, then present you
with a IRB prompt in the breakpointer window. Here you can do things like:
Executing breakpoint "Breaking out from the list" at .../webrick_server.rb:16 in 'breakpoint'
>> @posts.inspect
=> "[#<Post:0x14a6be8 @attributes={\"title\"=>nil, \"body\"=>nil, \"id\"=>\"1\"}>,
#<Post:0x14a6620 @attributes={\"title\"=>\"Rails you know!\", \"body\"=>\"Only ten..\", \"id\"=>\"2\"}>]"
>> @posts.first.title = "hello from a breakpoint"
=> "hello from a breakpoint"
...and even better is that you can examine how your runtime objects actually work:
>> f = @posts.first
=> #<Post:0x13630c4 @attributes={"title"=>nil, "body"=>nil, "id"=>"1"}>
>> f.
Display all 152 possibilities? (y or n)
Finally, when you're ready to resume execution, you press CTRL-D
== Console
You can interact with the domain model by starting the console through script/console.
Here you'll have all parts of the application configured, just like it is when the
application is running. You can inspect domain models, change values, and save to the
database. Starting the script without arguments will launch it in the development environment.
Passing an argument will specify a different environment, like <tt>script/console production</tt>.
To reload your controllers and models after launching the console run <tt>reload!</tt>
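A minimal session, assuming the Post model used in the examples above, might look like:
$ script/console
>> post = Post.find_all.first
>> post.title = "updated from the console"
>> post.save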
== Description of contents
app
Holds all the code that's specific to this particular application.
app/controllers
Holds controllers that should be named like weblog_controller.rb for
automated URL mapping. All controllers should descend from
ActionController::Base.
app/models
Holds models that should be named like post.rb.
Most models will descend from ActiveRecord::Base.
app/views
Holds the template files for the view that should be named like
weblog/index.rhtml for the WeblogController#index action. All views use eRuby
syntax. This directory can also be used to keep stylesheets, images, and so on
that can be symlinked to public.
app/helpers
Holds view helpers that should be named like weblog_helper.rb.
app/apis
Holds API classes for web services.
config
Configuration files for the Rails environment, the routing map, the database, and other dependencies.
components
Self-contained mini-applications that can bundle together controllers, models, and views.
db
Contains the database schema in schema.rb. db/migrate contains all
the sequence of Migrations for your schema.
lib
Application specific libraries. Basically, any kind of custom code that doesn't
belong under controllers, models, or helpers. This directory is in the load path.
public
The directory available for the web server. Contains subdirectories for images, stylesheets,
and javascripts. Also contains the dispatchers and the default HTML files.
script
Helper scripts for automation and generation.
test
Unit and functional tests along with fixtures.
vendor
External libraries that the application depends on. Also includes the plugins subdirectory.
This directory is in the load path. | PypiClean |
/Netzob-2.0.0.tar.gz/Netzob-2.0.0/src/netzob/Model/Vocabulary/Domain/Variables/Leafs/AbstractVariableLeaf.py |
# +---------------------------------------------------------------------------+
# | 01001110 01100101 01110100 01111010 01101111 01100010 |
# | |
# | Netzob : Inferring communication protocols |
# +---------------------------------------------------------------------------+
# | Copyright (C) 2011-2017 Georges Bossert and Frédéric Guihéry |
# | This program is free software: you can redistribute it and/or modify |
# | it under the terms of the GNU General Public License as published by |
# | the Free Software Foundation, either version 3 of the License, or |
# | (at your option) any later version. |
# | |
# | This program is distributed in the hope that it will be useful, |
# | but WITHOUT ANY WARRANTY; without even the implied warranty of |
# | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
# | GNU General Public License for more details. |
# | |
# | You should have received a copy of the GNU General Public License |
# | along with this program. If not, see <http://www.gnu.org/licenses/>. |
# +---------------------------------------------------------------------------+
# | @url : http://www.netzob.org |
# | @contact : [email protected] |
# | @sponsors : Amossys, http://www.amossys.fr |
# | Supélec, http://www.rennes.supelec.fr/ren/rd/cidre/ |
# +---------------------------------------------------------------------------+
# +---------------------------------------------------------------------------+
# | File contributors : |
# | - Georges Bossert <georges.bossert (a) supelec.fr> |
# | - Frédéric Guihéry <frederic.guihery (a) amossys.fr> |
# +---------------------------------------------------------------------------+
# +---------------------------------------------------------------------------+
# | Standard library imports |
# +---------------------------------------------------------------------------+
import abc
from bitarray import bitarray
# +---------------------------------------------------------------------------+
# | Related third party imports |
# +---------------------------------------------------------------------------+
# +---------------------------------------------------------------------------+
# | Local application imports |
# +---------------------------------------------------------------------------+
from netzob.Common.Utils.Decorators import typeCheck, NetzobLogger
from netzob.Model.Vocabulary.Domain.Variables.AbstractVariable import AbstractVariable
from netzob.Model.Vocabulary.Domain.Variables.Scope import Scope
from netzob.Model.Vocabulary.Domain.Parser.ParsingPath import ParsingException
from netzob.Model.Vocabulary.Types.AbstractType import AbstractType
@NetzobLogger
class AbstractVariableLeaf(AbstractVariable):
"""Represents a leaf in the variable definition of a field.
A leaf is a variable with no children. Most leaf variables
are :class:`Data <netzob.Model.Vocabulary.Domain.Variables.Leafs.Data.Data>` variables and
:class:`AbstractRelation <netzob.Model.Vocabulary.Domain.Variables.Leafs.Relations.AbstractRelation.AbstractRelation>`.
"""
def __init__(self, varType, name=None, dataType=None, scope=None):
super(AbstractVariableLeaf, self).__init__(
varType, name=name, scope=scope)
self.dataType = dataType
def isnode(self):
return False
def count(self, preset=None):
from netzob.Fuzzing.Mutators.DomainMutator import FuzzingMode
if preset is not None and preset.get(self) is not None and preset.get(self).mode in [FuzzingMode.GENERATE, FuzzingMode.FIXED]:
# Retrieve the mutator
mutator = preset.get(self)
return mutator.count()
else:
return self.dataType.count()
def parse(self, parsingPath, acceptCallBack=True, carnivorous=False, triggered=False):
"""@toto TO BE DOCUMENTED"""
if self.scope is None:
raise Exception(
"Cannot parse if the variable has no assigned Scope.")
try:
if self.isDefined(parsingPath):
if self.scope == Scope.CONSTANT or self.scope == Scope.SESSION:
return self.valueCMP(
parsingPath, acceptCallBack, carnivorous=carnivorous, triggered=triggered)
elif self.scope == Scope.MESSAGE:
return self.learn(
parsingPath, acceptCallBack, carnivorous=carnivorous, triggered=triggered)
elif self.scope == Scope.NONE:
return self.domainCMP(
parsingPath, acceptCallBack, carnivorous=carnivorous, triggered=triggered)
else:
if self.scope == Scope.CONSTANT:
self._logger.debug(
"Cannot parse '{0}' as scope is CONSTANT and no value is available.".
format(self))
return []
elif self.scope == Scope.MESSAGE or self.scope == Scope.SESSION:
return self.learn(
parsingPath, acceptCallBack, carnivorous=carnivorous, triggered=triggered)
elif self.scope == Scope.NONE:
return self.domainCMP(
parsingPath, acceptCallBack, carnivorous=carnivorous, triggered=triggered)
except ParsingException:
self._logger.info("Error in parsing of variable")
return []
raise Exception("Not yet implemented: {0}.".format(self.scope))
#
# methods that must be defined to support the abstraction process
#
@abc.abstractmethod
def isDefined(self, parsingPath):
raise NotImplementedError("method isDefined is not implemented")
@abc.abstractmethod
def domainCMP(self, parsingPath, acceptCallBack, carnivorous):
raise NotImplementedError("method domainCMP is not implemented")
@abc.abstractmethod
def valueCMP(self, parsingPath, acceptCallBack, carnivorous):
raise NotImplementedError("method valueCMP is not implemented")
@abc.abstractmethod
def learn(self, parsingPath, acceptCallBack, carnivorous):
raise NotImplementedError("method learn is not implemented")
def getVariables(self):
return [self]
def specialize(self, parsingPath, preset=None, acceptCallBack=True, triggered=False):
"""Specializes a Leaf"""
from netzob.Fuzzing.Mutator import MaxFuzzingException
from netzob.Fuzzing.Mutators.DomainMutator import FuzzingMode
# Fuzzing has priority over generating a legitimate value
if preset is not None and preset.get(self) is not None and preset.get(self).mode in [FuzzingMode.GENERATE, FuzzingMode.FIXED]:
# Retrieve the mutator
mutator = preset.get(self)
def fuzzing_generate():
if preset.get(self).mode == FuzzingMode.FIXED:
nb_iterations = AbstractType.MAXIMUM_POSSIBLE_VALUES
else:
nb_iterations = self.count(preset=preset)
for _ in range(nb_iterations):
try:
# Mutate a value according to the current field attributes
generated_value = mutator.generate()
except MaxFuzzingException:
self._logger.debug("Maximum mutation counter reached")
break
else:
if isinstance(generated_value, bitarray):
value = generated_value
else:
# Convert the return bytes into bitarray
value = bitarray()
value.frombytes(generated_value)
# Associate the generated value to the current variable
newParsingPath = parsingPath.copy()
newParsingPath.addResult(self, value)
yield newParsingPath
return fuzzing_generate()
if self.scope is None:
raise Exception(
"Cannot specialize if the variable has no assigned Scope.")
if self.isDefined(parsingPath):
if self.scope == Scope.CONSTANT or self.scope == Scope.SESSION:
newParsingPaths = self.use(parsingPath, acceptCallBack, preset=preset, triggered=triggered)
elif self.scope == Scope.MESSAGE:
newParsingPaths = self.regenerateAndMemorize(parsingPath, acceptCallBack, preset=preset, triggered=triggered)
elif self.scope == Scope.NONE:
newParsingPaths = self.regenerate(parsingPath, acceptCallBack, preset=preset, triggered=triggered)
else:
if self.scope == Scope.CONSTANT:
self._logger.debug(
"Cannot specialize '{0}' as scope is CONSTANT and no value is available.".
format(self))
newParsingPaths = iter(())
elif self.scope == Scope.MESSAGE or self.scope == Scope.SESSION:
newParsingPaths = self.regenerateAndMemorize(parsingPath, acceptCallBack, preset=preset, triggered=triggered)
elif self.scope == Scope.NONE:
newParsingPaths = self.regenerate(parsingPath, acceptCallBack, preset=preset, triggered=triggered)
if preset is not None and preset.get(self) is not None and preset.get(self).mode == FuzzingMode.MUTATE:
def fuzzing_mutate():
for path in newParsingPaths:
generatedData = path.getData(self)
# Retrieve the mutator
mutator = preset.get(self)
while True:
# Mutate a value according to the current field attributes
mutator.mutate(generatedData)
yield path
return fuzzing_mutate()
else:
return newParsingPaths
def str_structure(self, preset=None, deepness=0):
"""Returns a string which denotes
the current field definition using a tree display"""
tab = [" " for x in range(deepness - 1)]
tab.append("|-- ")
tab.append("{0}".format(self))
# Add information regarding preset configuration
if preset is not None and preset.get(self) is not None:
tmp_data = " ["
tmp_data += str(preset.get(self).mode)
try:
tmp_data += " ({})".format(preset[self])
except Exception as e:
pass
tmp_data += "]"
tab.append(tmp_data)
return ''.join(tab)
def getFixedBitSize(self):
self._logger.debug("Determine the deterministic size of the value of "
"the leaf variable")
if not hasattr(self, 'dataType'):
return super().getFixedBitSize()
return self.dataType.getFixedBitSize()
## Properties
@property
def dataType(self):
"""The datatype used to encode the result of the computed relation field.
:type: :class:`AbstractType <netzob.Model.Vocabulary.Types.AbstractType.AbstractType>`
"""
return self.__dataType
@dataType.setter # type: ignore
@typeCheck(AbstractType)
def dataType(self, dataType):
if dataType is None:
raise TypeError("Datatype cannot be None")
(minSize, maxSize) = dataType.size
if maxSize is None:
raise ValueError(
"The datatype of a relation field must declare its length")
self.__dataType = dataType | PypiClean |
/EModelRunner-1.1.16.tar.gz/EModelRunner-1.1.16/emodelrunner/load.py |
# Copyright 2020-2022 Blue Brain Project / EPFL
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
import json
from bluepyopt import ephys
from emodelrunner.synapses.mechanism import NrnMODPointProcessMechanismCustom
from emodelrunner.locations import multi_locations
from emodelrunner.configuration import get_validated_config
def load_config(config_path):
"""Returns the validated configuration file.
Args:
config_path (str or Path): path to the configuration file.
Returns:
configparser.ConfigParser: loaded config object
"""
return get_validated_config(config_path)
def load_emodel_params(emodel, params_path):
"""Get optimized parameters.
Args:
emodel (str): name of the emodel
params_path (str): path to the optimized parameters json file
Returns:
dict: optimized parameters for the given emodel
"""
with open(params_path, "r", encoding="utf-8") as params_file:
params = json.load(params_file)
param_dict = params[emodel]["params"]
return param_dict
def get_syn_setup_params(
syn_extra_params_path,
cpre_cpost_path,
fit_params_path,
gid,
invivo,
):
"""Load json files and return syn_setup_params dict.
Args:
syn_extra_params_path (str): path to the glusynapses related extra parameters file
cpre_cpost_path (str): path to the c_pre and c_post related file
c_pre (resp. c_post) is the calcium amplitude during isolated presynaptic
(resp. postsynaptic) activation
fit_params_path (str): path to the file containing the glusynapse fitted parameters
The fitted parameters are time constant of calcium integrator, depression rate,
potentiation rate, and factors used in plasticity threshold computation.
gid (int): ID of the postsynaptic cell
invivo (bool): whether to run the simulation in 'in vivo' conditions
Returns:
dict: glusynapse setup related parameters
"""
with open(syn_extra_params_path, "r", encoding="utf-8") as f:
syn_extra_params = json.load(f)
with open(cpre_cpost_path, "r", encoding="utf-8") as f:
cpre_cpost = json.load(f)
with open(fit_params_path, "r", encoding="utf-8") as f:
fit_params = json.load(f)
return {
"syn_extra_params": syn_extra_params,
"c_pre": cpre_cpost["c_pre"],
"c_post": cpre_cpost["c_post"],
"fit_params": fit_params,
"postgid": gid,
"invivo": invivo,
}
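# A hypothetical call sketching typical usage (file names and gid below are made up):
# syn_setup_params = get_syn_setup_params(
#     "synapses/syn_extra_params.json",
#     "synapses/cpre_cpost.json",
#     "synapses/fit_params.json",
#     gid=42,
#     invivo=False,
# )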
def get_release_params(config, precell=False):
"""Return the final parameters.
Args:
config (configparser.ConfigParser): configuration
precell (bool): True to load precell optimized parameters. False to get usual parameters.
Returns:
dict: optimized parameters
"""
if precell:
emodel = config.get("Cell", "precell_emodel")
else:
emodel = config.get("Cell", "emodel")
params_path = config.get("Paths", "params_path")
release_params = load_emodel_params(params_path=params_path, emodel=emodel)
return release_params
def load_mechanisms(mechs_path):
"""Define mechanisms.
Args:
mechs_path (str): path to the unoptimized parameters json file
Returns:
list of ephys.mechanisms.NrnMODMechanism from file
"""
with open(mechs_path, "r", encoding="utf-8") as mechs_file:
mechs = json.load(mechs_file)
mech_definitions = mechs["mechanisms"]
mechanisms_list = []
for sectionlist, channels in mech_definitions.items():
seclist_locs = multi_locations(sectionlist)
for channel in channels["mech"]:
mechanisms_list.append(
ephys.mechanisms.NrnMODMechanism(
name=f"{channel}.{sectionlist}",
mod_path=None,
suffix=channel,
locations=seclist_locs,
preloaded=True,
)
)
return mechanisms_list
def load_unoptimized_parameters(params_path, v_init, celsius):
"""Load unoptimized parameters as BluePyOpt parameters.
Args:
params_path (str): path to the json file containing
the non-optimised parameters
v_init (int): initial voltage (mV). Will override v_init value from parameter file
celsius (int): cell temperature in celsius.
Will override celsius value from parameter file
Returns:
list of parameters
"""
# pylint: disable=too-many-locals, too-many-branches, too-many-statements
parameters = []
with open(params_path, "r", encoding="utf-8") as params_file:
definitions = json.load(params_file)
# set distributions
distributions = collections.OrderedDict()
distributions["uniform"] = ephys.parameterscalers.NrnSegmentLinearScaler()
distributions_definitions = definitions["distributions"]
for distribution, definition in distributions_definitions.items():
if "parameters" in definition:
dist_param_names = definition["parameters"]
else:
dist_param_names = None
distributions[
distribution
] = ephys.parameterscalers.NrnSegmentSomaDistanceScaler(
name=distribution,
distribution=definition["fun"],
dist_param_names=dist_param_names,
)
params_definitions = definitions["parameters"]
if "__comment" in params_definitions:
del params_definitions["__comment"]
for sectionlist, params in params_definitions.items():
if sectionlist == "global":
seclist_locs = None
is_global = True
is_dist = False
elif "distribution_" in sectionlist:
is_dist = True
seclist_locs = None
is_global = False
dist_name = sectionlist.split("distribution_")[1]
dist = distributions[dist_name]
else:
seclist_locs = multi_locations(sectionlist)
is_global = False
is_dist = False
bounds = None
value = None
for param_config in params:
param_name = param_config["name"]
if isinstance(param_config["val"], (list, tuple)):
is_frozen = False
bounds = param_config["val"]
value = None
else:
is_frozen = True
value = param_config["val"]
bounds = None
if is_global:
# force v_init to the given value
if param_name == "v_init":
value = v_init
elif param_name == "celsius":
value = celsius
parameters.append(
ephys.parameters.NrnGlobalParameter(
name=param_name,
param_name=param_name,
frozen=is_frozen,
bounds=bounds,
value=value,
)
)
elif is_dist:
parameters.append(
ephys.parameters.MetaParameter(
name=f"{param_name}.{sectionlist}",
obj=dist,
attr_name=param_name,
frozen=is_frozen,
bounds=bounds,
value=value,
)
)
else:
if "dist" in param_config:
dist = distributions[param_config["dist"]]
use_range = True
else:
dist = distributions["uniform"]
use_range = False
if use_range:
parameters.append(
ephys.parameters.NrnRangeParameter(
name=f"{param_name}.{sectionlist}",
param_name=param_name,
value_scaler=dist,
value=value,
bounds=bounds,
frozen=is_frozen,
locations=seclist_locs,
)
)
else:
parameters.append(
ephys.parameters.NrnSectionParameter(
name=f"{param_name}.{sectionlist}",
param_name=param_name,
value_scaler=dist,
value=value,
bounds=bounds,
frozen=is_frozen,
locations=seclist_locs,
)
)
return parameters
def get_rin_exp_voltage_base(features_path):
"""Get experimental rin voltage base from feature file when having MainProtocol."""
with open(features_path, "r", encoding="utf-8") as features_file:
feature_definitions = json.load(features_file)
rin_exp_voltage_base = None
for feature in feature_definitions["Rin"]["soma.v"]:
if feature["feature"] == "voltage_base":
rin_exp_voltage_base = feature["val"][0]
if rin_exp_voltage_base is None:
raise KeyError(f"No voltage_base feature found for 'Rin' in {features_path}")
return rin_exp_voltage_base
def load_syn_mechs(
seed,
rng_settings_mode,
syn_data_path,
syn_conf_path,
pre_mtypes=None,
stim_params=None,
use_glu_synapse=False,
syn_setup_params=None,
):
"""Load synapse mechanisms.
Args:
seed (int): random number generator seed number
rng_settings_mode (str): mode of the random number generator
Can be "Random123" or "Compatibility"
syn_data_path (str): path to the (tsv) synapses data file
syn_conf_path (str): path to the synapse configuration data file
pre_mtypes (list of ints): activate only synapses whose pre_mtype
is in this list. if None, all synapses are activated
stim_params (dict or None): dict with pre_mtype as key,
and netstim params list as item.
netstim params list is [start, interval, number, noise]
use_glu_synapse (bool): if True, instantiate synapses to use GluSynapse
syn_setup_params (dict): contains extra parameters to setup synapses
when using GluSynapseCustom
Returns:
NrnMODPointProcessMechanismCustom: the synapses mechanisms
"""
# load synapse file data
synapses_data = load_synapses_tsv_data(syn_data_path)
# load synapse configuration
synconf_dict = load_synapse_configuration_data(syn_conf_path)
return NrnMODPointProcessMechanismCustom(
"synapse_mechs",
synapses_data,
synconf_dict,
seed,
rng_settings_mode,
pre_mtypes,
stim_params,
use_glu_synapse=use_glu_synapse,
syn_setup_params=syn_setup_params,
)
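# Hypothetical example of the stim_params layout described above: one netstim parameter
# list [start, interval, number, noise] per pre_mtype key (values below are made up):
# stim_params = {0: [50.0, 100.0, 5, 0.0], 1: [20.0, 50.0, 10, 0.0]}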
def load_synapses_tsv_data(tsv_path):
"""Load synapse data from tsv.
Args:
tsv_path (str): path to the tsv synapses data file
Returns:
list of dicts containing each data for one synapse
"""
synapses = []
with open(tsv_path, "r", encoding="utf-8") as f:
# first line is dimensions
for line in f.readlines()[1:]:
syn = {}
items = line.strip().split("\t")
syn["sid"] = int(items[0])
syn["pre_cell_id"] = int(items[1])
syn["sectionlist_id"] = int(items[2])
syn["sectionlist_index"] = int(items[3])
syn["seg_x"] = float(items[4])
syn["synapse_type"] = int(items[5])
syn["dep"] = float(items[6])
syn["fac"] = float(items[7])
syn["use"] = float(items[8])
syn["tau_d"] = float(items[9])
syn["delay"] = float(items[10])
syn["weight"] = float(items[11])
syn["Nrrp"] = float(items[12])
syn["pre_mtype"] = int(items[13])
synapses.append(syn)
return synapses
def load_synapse_configuration_data(synconf_path):
"""Load synapse configuration data into dict[command]=list(ids).
Args:
synconf_path (str): path to the synapse configuration data file
Returns:
dict: configuration data
each key contains a command to execute using hoc,
and each value contains a list of synapse ids on which to execute the command
"""
synconf_dict = {}
with open(synconf_path, "r", encoding="utf-8") as f:
synconfs = f.read().split("-1000000000000000.0")
for synconf in synconfs:
tmp = synconf.split("\n")
if "" in tmp:
tmp.remove("")
if len(tmp) == 2:
cmd, ids = tmp
ids = ids.replace(") ", ");")
ids = ids.split(";")
if "" in ids:
ids.remove("")
synconf_dict[cmd] = ids
return synconf_dict | PypiClean |
/Electrum-VTC-2.9.3.3.tar.gz/Electrum-VTC-2.9.3.3/packages/google/protobuf/internal/import_test_package/outer_pb2.py |
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
from google.protobuf import descriptor_pb2
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf.internal.import_test_package import inner_pb2 as google_dot_protobuf_dot_internal_dot_import__test__package_dot_inner__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='google/protobuf/internal/import_test_package/outer.proto',
package='google.protobuf.python.internal.import_test_package',
syntax='proto2',
serialized_pb=_b('\n8google/protobuf/internal/import_test_package/outer.proto\x12\x33google.protobuf.python.internal.import_test_package\x1a\x38google/protobuf/internal/import_test_package/inner.proto\"R\n\x05Outer\x12I\n\x05inner\x18\x01 \x01(\x0b\x32:.google.protobuf.python.internal.import_test_package.Inner')
,
dependencies=[google_dot_protobuf_dot_internal_dot_import__test__package_dot_inner__pb2.DESCRIPTOR,])
_OUTER = _descriptor.Descriptor(
name='Outer',
full_name='google.protobuf.python.internal.import_test_package.Outer',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='inner', full_name='google.protobuf.python.internal.import_test_package.Outer.inner', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto2',
extension_ranges=[],
oneofs=[
],
serialized_start=171,
serialized_end=253,
)
_OUTER.fields_by_name['inner'].message_type = google_dot_protobuf_dot_internal_dot_import__test__package_dot_inner__pb2._INNER
DESCRIPTOR.message_types_by_name['Outer'] = _OUTER
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
Outer = _reflection.GeneratedProtocolMessageType('Outer', (_message.Message,), dict(
DESCRIPTOR = _OUTER,
__module__ = 'google.protobuf.internal.import_test_package.outer_pb2'
# @@protoc_insertion_point(class_scope:google.protobuf.python.internal.import_test_package.Outer)
))
_sym_db.RegisterMessage(Outer)
# @@protoc_insertion_point(module_scope) | PypiClean |
/MetaCalls-0.0.5-cp310-cp310-manylinux2014_x86_64.whl/metacalls/node_modules/string_decoder/lib/string_decoder.js |
'use strict';
/*<replacement>*/
var Buffer = require('safe-buffer').Buffer;
/*</replacement>*/
var isEncoding = Buffer.isEncoding || function (encoding) {
encoding = '' + encoding;
switch (encoding && encoding.toLowerCase()) {
case 'hex':case 'utf8':case 'utf-8':case 'ascii':case 'binary':case 'base64':case 'ucs2':case 'ucs-2':case 'utf16le':case 'utf-16le':case 'raw':
return true;
default:
return false;
}
};
function _normalizeEncoding(enc) {
if (!enc) return 'utf8';
var retried;
while (true) {
switch (enc) {
case 'utf8':
case 'utf-8':
return 'utf8';
case 'ucs2':
case 'ucs-2':
case 'utf16le':
case 'utf-16le':
return 'utf16le';
case 'latin1':
case 'binary':
return 'latin1';
case 'base64':
case 'ascii':
case 'hex':
return enc;
default:
if (retried) return; // undefined
enc = ('' + enc).toLowerCase();
retried = true;
}
}
};
// Do not cache `Buffer.isEncoding` when checking encoding names as some
// modules monkey-patch it to support additional encodings
function normalizeEncoding(enc) {
var nenc = _normalizeEncoding(enc);
if (typeof nenc !== 'string' && (Buffer.isEncoding === isEncoding || !isEncoding(enc))) throw new Error('Unknown encoding: ' + enc);
return nenc || enc;
}
// StringDecoder provides an interface for efficiently splitting a series of
// buffers into a series of JS strings without breaking apart multi-byte
// characters.
exports.StringDecoder = StringDecoder;
function StringDecoder(encoding) {
this.encoding = normalizeEncoding(encoding);
var nb;
switch (this.encoding) {
case 'utf16le':
this.text = utf16Text;
this.end = utf16End;
nb = 4;
break;
case 'utf8':
this.fillLast = utf8FillLast;
nb = 4;
break;
case 'base64':
this.text = base64Text;
this.end = base64End;
nb = 3;
break;
default:
this.write = simpleWrite;
this.end = simpleEnd;
return;
}
this.lastNeed = 0;
this.lastTotal = 0;
this.lastChar = Buffer.allocUnsafe(nb);
}
StringDecoder.prototype.write = function (buf) {
if (buf.length === 0) return '';
var r;
var i;
if (this.lastNeed) {
r = this.fillLast(buf);
if (r === undefined) return '';
i = this.lastNeed;
this.lastNeed = 0;
} else {
i = 0;
}
if (i < buf.length) return r ? r + this.text(buf, i) : this.text(buf, i);
return r || '';
};
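// Example (a minimal usage sketch): a UTF-8 character split across two writes is
// buffered until it can be decoded in full.
//   const { StringDecoder } = require('string_decoder');
//   const decoder = new StringDecoder('utf8');
//   const euro = Buffer.from([0xe2, 0x82, 0xac]); // UTF-8 bytes for '€'
//   decoder.write(euro.slice(0, 1)); // '' (incomplete character buffered)
//   decoder.write(euro.slice(1));    // '€'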
StringDecoder.prototype.end = utf8End;
// Returns only complete characters in a Buffer
StringDecoder.prototype.text = utf8Text;
// Attempts to complete a partial non-UTF-8 character using bytes from a Buffer
StringDecoder.prototype.fillLast = function (buf) {
if (this.lastNeed <= buf.length) {
buf.copy(this.lastChar, this.lastTotal - this.lastNeed, 0, this.lastNeed);
return this.lastChar.toString(this.encoding, 0, this.lastTotal);
}
buf.copy(this.lastChar, this.lastTotal - this.lastNeed, 0, buf.length);
this.lastNeed -= buf.length;
};
// Checks the type of a UTF-8 byte, whether it's ASCII, a leading byte, or a
// continuation byte. If an invalid byte is detected, -2 is returned.
function utf8CheckByte(byte) {
if (byte <= 0x7F) return 0;else if (byte >> 5 === 0x06) return 2;else if (byte >> 4 === 0x0E) return 3;else if (byte >> 3 === 0x1E) return 4;
return byte >> 6 === 0x02 ? -1 : -2;
}
// Checks at most 3 bytes at the end of a Buffer in order to detect an
// incomplete multi-byte UTF-8 character. The total number of bytes (2, 3, or 4)
// needed to complete the UTF-8 character (if applicable) are returned.
function utf8CheckIncomplete(self, buf, i) {
var j = buf.length - 1;
if (j < i) return 0;
var nb = utf8CheckByte(buf[j]);
if (nb >= 0) {
if (nb > 0) self.lastNeed = nb - 1;
return nb;
}
if (--j < i || nb === -2) return 0;
nb = utf8CheckByte(buf[j]);
if (nb >= 0) {
if (nb > 0) self.lastNeed = nb - 2;
return nb;
}
if (--j < i || nb === -2) return 0;
nb = utf8CheckByte(buf[j]);
if (nb >= 0) {
if (nb > 0) {
if (nb === 2) nb = 0;else self.lastNeed = nb - 3;
}
return nb;
}
return 0;
}
// Validates as many continuation bytes for a multi-byte UTF-8 character as
// needed or are available. If we see a non-continuation byte where we expect
// one, we "replace" the validated continuation bytes we've seen so far with
// a single UTF-8 replacement character ('\ufffd'), to match v8's UTF-8 decoding
// behavior. The continuation byte check is included three times in the case
// where all of the continuation bytes for a character exist in the same buffer.
// It is also done this way as a slight performance increase instead of using a
// loop.
function utf8CheckExtraBytes(self, buf, p) {
if ((buf[0] & 0xC0) !== 0x80) {
self.lastNeed = 0;
return '\ufffd';
}
if (self.lastNeed > 1 && buf.length > 1) {
if ((buf[1] & 0xC0) !== 0x80) {
self.lastNeed = 1;
return '\ufffd';
}
if (self.lastNeed > 2 && buf.length > 2) {
if ((buf[2] & 0xC0) !== 0x80) {
self.lastNeed = 2;
return '\ufffd';
}
}
}
}
// Attempts to complete a multi-byte UTF-8 character using bytes from a Buffer.
function utf8FillLast(buf) {
var p = this.lastTotal - this.lastNeed;
var r = utf8CheckExtraBytes(this, buf, p);
if (r !== undefined) return r;
if (this.lastNeed <= buf.length) {
buf.copy(this.lastChar, p, 0, this.lastNeed);
return this.lastChar.toString(this.encoding, 0, this.lastTotal);
}
buf.copy(this.lastChar, p, 0, buf.length);
this.lastNeed -= buf.length;
}
// Returns all complete UTF-8 characters in a Buffer. If the Buffer ended on a
// partial character, the character's bytes are buffered until the required
// number of bytes are available.
function utf8Text(buf, i) {
var total = utf8CheckIncomplete(this, buf, i);
if (!this.lastNeed) return buf.toString('utf8', i);
this.lastTotal = total;
var end = buf.length - (total - this.lastNeed);
buf.copy(this.lastChar, 0, end);
return buf.toString('utf8', i, end);
}
// For UTF-8, a replacement character is added when ending on a partial
// character.
function utf8End(buf) {
var r = buf && buf.length ? this.write(buf) : '';
if (this.lastNeed) return r + '\ufffd';
return r;
}
// UTF-16LE typically needs two bytes per character, but even if we have an even
// number of bytes available, we need to check if we end on a leading/high
// surrogate. In that case, we need to wait for the next two bytes in order to
// decode the last character properly.
function utf16Text(buf, i) {
if ((buf.length - i) % 2 === 0) {
var r = buf.toString('utf16le', i);
if (r) {
var c = r.charCodeAt(r.length - 1);
if (c >= 0xD800 && c <= 0xDBFF) {
this.lastNeed = 2;
this.lastTotal = 4;
this.lastChar[0] = buf[buf.length - 2];
this.lastChar[1] = buf[buf.length - 1];
return r.slice(0, -1);
}
}
return r;
}
this.lastNeed = 1;
this.lastTotal = 2;
this.lastChar[0] = buf[buf.length - 1];
return buf.toString('utf16le', i, buf.length - 1);
}
// For UTF-16LE we do not explicitly append special replacement characters if we
// end on a partial character, we simply let v8 handle that.
function utf16End(buf) {
var r = buf && buf.length ? this.write(buf) : '';
if (this.lastNeed) {
var end = this.lastTotal - this.lastNeed;
return r + this.lastChar.toString('utf16le', 0, end);
}
return r;
}
function base64Text(buf, i) {
var n = (buf.length - i) % 3;
if (n === 0) return buf.toString('base64', i);
this.lastNeed = 3 - n;
this.lastTotal = 3;
if (n === 1) {
this.lastChar[0] = buf[buf.length - 1];
} else {
this.lastChar[0] = buf[buf.length - 2];
this.lastChar[1] = buf[buf.length - 1];
}
return buf.toString('base64', i, buf.length - n);
}
function base64End(buf) {
var r = buf && buf.length ? this.write(buf) : '';
if (this.lastNeed) return r + this.lastChar.toString('base64', 0, 3 - this.lastNeed);
return r;
}
// Pass bytes on through for single-byte encodings (e.g. ascii, latin1, hex)
function simpleWrite(buf) {
return buf.toString(this.encoding);
}
function simpleEnd(buf) {
return buf && buf.length ? this.write(buf) : '';
} | PypiClean |
/IdracRedfishSupportTest-0.0.7.tar.gz/IdracRedfishSupportTest-0.0.7/GetOSNetworkInformationREDFISH.py |
import argparse
import getpass
import json
import logging
import re
import requests
import sys
import time
import warnings
from datetime import datetime
from pprint import pprint
warnings.filterwarnings("ignore")
parser = argparse.ArgumentParser(description='Python script using Redfish API to get operating system network information. iDRAC Service Module (iSM) must be installed in the OS and services is running for iDRAC to get this data.')
parser.add_argument('-ip',help='iDRAC IP address', required=False)
parser.add_argument('-u', help='iDRAC username', required=False)
parser.add_argument('-p', help='iDRAC password. If you do not pass in argument -p, script will prompt to enter user password which will not be echoed to the screen.', required=False)
parser.add_argument('-x', help='Pass in X-Auth session token for executing Redfish calls. All Redfish calls will use X-Auth token instead of username/password', required=False)
parser.add_argument('--ssl', help='SSL cert verification for all Redfish calls, pass in value \"true\" or \"false\". By default, this argument is not required and script ignores validating SSL cert for all Redfish calls.', required=False)
parser.add_argument('--script-examples', action="store_true", help='Prints script examples')
parser.add_argument('--get-ism-status', help='Get iSM service status, confirm iSM is running in the OS', dest="get_ism_status", action="store_true", required=False)
parser.add_argument('--get-network-details', help='Get OS network details for each network device configured in the OS.', dest="get_network_details", action="store_true", required=False)
args = vars(parser.parse_args())
logging.basicConfig(format='%(message)s', stream=sys.stdout, level=logging.INFO)
def script_examples():
print("""\n- GetOSNetworkInformationREDFISH.py -ip 192.168.0.120 -u root -p calvin --get-ism-status, this example will return current iDRAC iSM service status.
\n- GetOSNetworkInformationREDFISH.py -ip 192.168.0.120 -u root -p calvin --get-network-details, this example will return detailed information for all OS network devices configured.""")
sys.exit(0)
def check_supported_idrac_version():
if args["x"]:
response = requests.get('https://%s/redfish/v1/Systems/System.Embedded.1/EthernetInterfaces' % idrac_ip, verify=verify_cert, headers={'X-Auth-Token': args["x"]})
else:
response = requests.get('https://%s/redfish/v1/Systems/System.Embedded.1/EthernetInterfaces' % idrac_ip, verify=verify_cert, auth=(idrac_username, idrac_password))
data = response.json()
if response.status_code == 401:
logging.warning("\n- WARNING, status code %s returned. Incorrect iDRAC username/password or invalid privilege detected." % response.status_code)
sys.exit(0)
elif response.status_code != 200:
logging.warning("\n- WARNING, iDRAC version installed does not support this feature using Redfish API")
sys.exit(0)
def get_iSM_status():
if args["x"]:
response = requests.get('https://%s/redfish/v1/Managers/iDRAC.Embedded.1/Attributes?$select=Attributes/ServiceModule.1.ServiceModuleState' % idrac_ip, verify=verify_cert, headers={'X-Auth-Token': args["x"]})
else:
response = requests.get('https://%s/redfish/v1/Managers/iDRAC.Embedded.1/Attributes?$select=Attributes/ServiceModule.1.ServiceModuleState' % idrac_ip, verify=verify_cert, auth=(idrac_username, idrac_password))
data = response.json()
if response.status_code != 200:
logging.error("- FAIL, GET command failed to get iDRAC iSM service status, status code %s returned" % response.status_code)
logging.error(data)
sys.exit(0)
logging.info("\n- INFO, current iDRAC iSM service status: %s" % data["Attributes"]["ServiceModule.1.ServiceModuleState"])
def get_OS_network_devices():
if args["x"]:
response = requests.get('https://%s/redfish/v1/Systems/System.Embedded.1/EthernetInterfaces' % idrac_ip, verify=verify_cert, headers={'X-Auth-Token': args["x"]})
else:
response = requests.get('https://%s/redfish/v1/Systems/System.Embedded.1/EthernetInterfaces' % idrac_ip, verify=verify_cert, auth=(idrac_username, idrac_password))
data = response.json()
if response.status_code != 200:
logging.error("- FAIL, GET command failed to get OS network device interfaces, status code %s returned" % response.status_code)
logging.error(data)
sys.exit(0)
supported_os_uris = []
for i in data["Members"]:
for ii in i.items():
if "OS" in ii[1]:
supported_os_uris.append(ii[1])
if supported_os_uris == []:
logging.warning("\n- WARNING, no OS network uris detected. Check to make sure iSM is running and network devices are configured in the OS.")
sys.exit(0)
for i in supported_os_uris:
logging.info("\n- Detailed information for %s -\n" % i)
if args["x"]:
response = requests.get('https://%s%s' % (idrac_ip, i), verify=verify_cert, headers={'X-Auth-Token': args["x"]})
else:
response = requests.get('https://%s%s' % (idrac_ip, i), verify=verify_cert, auth=(idrac_username, idrac_password))
pprint(response.json())
if __name__ == "__main__":
if args["script_examples"]:
script_examples()
if args["ip"] and args["ssl"] or args["u"] or args["p"] or args["x"]:
idrac_ip=args["ip"]
idrac_username=args["u"]
if args["p"]:
idrac_password=args["p"]
if not args["p"] and not args["x"] and args["u"]:
idrac_password = getpass.getpass("\n- Argument -p not detected, pass in iDRAC user %s password: " % args["u"])
if args["ssl"]:
if args["ssl"].lower() == "true":
verify_cert = True
elif args["ssl"].lower() == "false":
verify_cert = False
else:
verify_cert = False
else:
verify_cert = False
check_supported_idrac_version()
else:
logging.error("\n- FAIL, invalid argument values or not all required parameters passed in. See help text or argument --script-examples for more details.")
sys.exit(0)
if args["get_ism_status"]:
get_iSM_status()
elif args["get_network_details"]:
get_OS_network_devices()
else:
logging.error("\n- FAIL, invalid argument values or not all required parameters passed in. See help text or argument --script-examples for more details.") | PypiClean |
/Hikka_TL_New-2.0.4-py3-none-any.whl/hikkatl/events/newmessage.py | import re
from .common import EventBuilder, EventCommon, name_inner_event, _into_id_set
from .. import utils
from ..tl import types
@name_inner_event
class NewMessage(EventBuilder):
"""
Occurs whenever a new text message or a message with media arrives.
Args:
incoming (`bool`, optional):
If set to `True`, only **incoming** messages will be handled.
Mutually exclusive with ``outgoing`` (can only set one of either).
outgoing (`bool`, optional):
If set to `True`, only **outgoing** messages will be handled.
Mutually exclusive with ``incoming`` (can only set one of either).
from_users (`entity`, optional):
Unlike `chats`, this parameter filters the *senders* of the
message. That is, only messages *sent by these users* will be
handled. Use `chats` if you want private messages with this/these
users. `from_users` lets you filter by messages sent by *one or
more* users across the desired chats (doesn't need a list).
forwards (`bool`, optional):
Whether forwarded messages should be handled or not. By default,
both forwarded and normal messages are included. If it's `True`
*only* forwards will be handled. If it's `False` only messages
that are *not* forwards will be handled.
pattern (`str`, `callable`, `Pattern`, optional):
If set, only messages matching this pattern will be handled.
You can specify a regex-like string which will be matched
against the message, a callable function that returns `True`
if a message is acceptable, or a compiled regex pattern.
Example
.. code-block:: python
import asyncio
from telethon import events
@client.on(events.NewMessage(pattern='(?i)hello.+'))
async def handler(event):
# Respond whenever someone says "Hello" and something else
await event.reply('Hey!')
@client.on(events.NewMessage(outgoing=True, pattern='!ping'))
async def handler(event):
# Say "!pong" whenever you send "!ping", then delete both messages
m = await event.respond('!pong')
await asyncio.sleep(5)
await client.delete_messages(event.chat_id, [event.id, m.id])
"""
def __init__(self, chats=None, *, blacklist_chats=False, func=None,
incoming=None, outgoing=None,
from_users=None, forwards=None, pattern=None):
if incoming and outgoing:
incoming = outgoing = None # Same as no filter
elif incoming is not None and outgoing is None:
outgoing = not incoming
elif outgoing is not None and incoming is None:
incoming = not outgoing
elif all(x is not None and not x for x in (incoming, outgoing)):
raise ValueError("Don't create an event handler if you "
"don't want neither incoming nor outgoing!")
super().__init__(chats, blacklist_chats=blacklist_chats, func=func)
self.incoming = incoming
self.outgoing = outgoing
self.from_users = from_users
self.forwards = forwards
if isinstance(pattern, str):
self.pattern = re.compile(pattern).match
elif not pattern or callable(pattern):
self.pattern = pattern
elif hasattr(pattern, 'match') and callable(pattern.match):
self.pattern = pattern.match
else:
raise TypeError('Invalid pattern type given')
# Should we short-circuit? E.g. perform no check at all
self._no_check = all(x is None for x in (
self.chats, self.incoming, self.outgoing, self.pattern,
self.from_users, self.forwards, self.from_users, self.func
))
async def _resolve(self, client):
await super()._resolve(client)
self.from_users = await _into_id_set(client, self.from_users)
@classmethod
def build(cls, update, others=None, self_id=None):
if isinstance(update,
(types.UpdateNewMessage, types.UpdateNewChannelMessage)):
if not isinstance(update.message, types.Message):
return # We don't care about MessageService's here
event = cls.Event(update.message)
elif isinstance(update, types.UpdateShortMessage):
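# UpdateShortMessage carries the message fields flattened (there is no Message object),
# so reconstruct one here; for outgoing updates the sender is ourselves (self_id).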
event = cls.Event(types.Message(
out=update.out,
mentioned=update.mentioned,
media_unread=update.media_unread,
silent=update.silent,
id=update.id,
peer_id=types.PeerUser(update.user_id),
from_id=types.PeerUser(self_id if update.out else update.user_id),
message=update.message,
date=update.date,
fwd_from=update.fwd_from,
via_bot_id=update.via_bot_id,
reply_to=update.reply_to,
entities=update.entities,
ttl_period=update.ttl_period
))
elif isinstance(update, types.UpdateShortChatMessage):
event = cls.Event(types.Message(
out=update.out,
mentioned=update.mentioned,
media_unread=update.media_unread,
silent=update.silent,
id=update.id,
from_id=types.PeerUser(self_id if update.out else update.from_id),
peer_id=types.PeerChat(update.chat_id),
message=update.message,
date=update.date,
fwd_from=update.fwd_from,
via_bot_id=update.via_bot_id,
reply_to=update.reply_to,
entities=update.entities,
ttl_period=update.ttl_period
))
else:
return
return event
def filter(self, event):
if self._no_check:
return event
if self.incoming and event.message.out:
return
if self.outgoing and not event.message.out:
return
if self.forwards is not None:
if bool(self.forwards) != bool(event.message.fwd_from):
return
if self.from_users is not None:
if event.message.sender_id not in self.from_users:
return
if self.pattern:
match = self.pattern(event.message.message or '')
if not match:
return
event.pattern_match = match
return super().filter(event)
class Event(EventCommon):
"""
Represents the event of a new message. This event can be treated
to all effects as a `Message <telethon.tl.custom.message.Message>`,
so please **refer to its documentation** to know what you can do
with this event.
Members:
message (`Message <telethon.tl.custom.message.Message>`):
This is the only difference with the received
`Message <telethon.tl.custom.message.Message>`, and will
return the `telethon.tl.custom.message.Message` itself,
not the text.
See `Message <telethon.tl.custom.message.Message>` for
the rest of available members and methods.
pattern_match (`obj`):
The resulting object from calling the passed ``pattern`` function.
Here's an example using a string (defaults to regex match):
>>> from telethon import TelegramClient, events
>>> client = TelegramClient(...)
>>>
>>> @client.on(events.NewMessage(pattern=r'hi (\\w+)!'))
... async def handler(event):
... # In this case, the result is a ``Match`` object
... # since the `str` pattern was converted into
... # the ``re.compile(pattern).match`` function.
... print('Welcomed', event.pattern_match.group(1))
...
>>>
"""
def __init__(self, message):
self.__dict__['_init'] = False
super().__init__(chat_peer=message.peer_id,
msg_id=message.id, broadcast=bool(message.post))
self.pattern_match = None
self.message = message
def _set_client(self, client):
super()._set_client(client)
m = self.message
m._finish_init(client, self._entities, None)
self.__dict__['_init'] = True # No new attributes can be set
def __getattr__(self, item):
if item in self.__dict__:
return self.__dict__[item]
else:
return getattr(self.message, item)
def __setattr__(self, name, value):
if not self.__dict__['_init'] or name in self.__dict__:
self.__dict__[name] = value
else:
setattr(self.message, name, value) | PypiClean |
/GPy_ABCD-1.2.1-py3-none-any.whl/GPy_ABCD/KernelExpressions/change.py | import operator
from GPy_ABCD.KernelExpressions.base import KernelExpression
from GPy_ABCD.KernelExpressions.commutatives import SumKE, ProductKE
from GPy_ABCD.KernelExpansion.kernelOperations import *
from GPy_ABCD.KernelExpansion.kernelInterpretation import *
from GPy_ABCD.Util.genericUtil import update_dict_with
from GPy_ABCD.Util.kernelUtil import sortOutTypePair
class ChangeKE(KernelExpression):
def __init__(self, CP_or_CW, left, right, root: KernelExpression = None, parent: KernelExpression = None):
super().__init__(root, parent, base_k_param_names[CP_or_CW]['name'])
self.CP_or_CW = CP_or_CW
self.left = deepcopy(left)
self.right = deepcopy(right)
self.simplify() # Deactivate for some testing
if isinstance(self.left, KernelExpression): self.left.set_parent(self).set_root(self.root)
if isinstance(self.right, KernelExpression): self.right.set_parent(self).set_root(self.root)
def __str__(self):
return self.CP_or_CW + KernelExpression.bs(str(self.left) + ', ' + str(self.right))
def __repr__(self):
return f"{type(self).__name__}('{self.CP_or_CW}', {self.left.__repr__()}, {self.right.__repr__()})"
def __eq__(self, other): ## NOTE: this is intended to check equality of data fields only, i.e. it does not check root or parent
return type(self) == type(other) and self.CP_or_CW == other.CP_or_CW and self.left == other.left and self.right == other.right
def simplify(self):
if isinstance(self.left, KernelExpression): self.left = self.left.simplify().extract_if_singleton()
if isinstance(self.right, KernelExpression): self.right = self.right.simplify().extract_if_singleton()
return self
def extract_if_singleton(self):
return self
def traverse(self):
res = [self]
# NOTE: this version adds new elements for raw string leaves (making them SumKE singletons); replace by comments to remove that behaviour
res += self.left.traverse() if isinstance(self.left, KernelExpression) else [SumKE([self.left]).set_root(self.root).set_parent(self)]#[]
res += self.right.traverse() if isinstance(self.right, KernelExpression) else [SumKE([self.right]).set_root(self.root).set_parent(self)]#[]
return res
def reduce(self, func, acc):
# NOTE: this function DOES deal with raw string leaves; see further comments upstream; swap commented middle line for opposite behaviour
return reduce(lambda acc2, branch: branch.reduce(func, acc2),
[branch if isinstance(branch, KernelExpression) else SumKE([branch]).set_root(self.root).set_parent(self) for branch in (self.left, self.right)],
# [branch for branch in (self.left, self.right) if isinstance(branch, KernelExpression)],
func(self, acc))
def set_root(self, new_root = None):
if new_root is None: new_root = self
self.root = new_root
if isinstance(self.left, KernelExpression): self.left.set_root(new_root)
if isinstance(self.right, KernelExpression): self.right.set_root(new_root)
return self
def _set_all_parents(self):
if isinstance(self.left, KernelExpression):
self.left.parent = self
self.left._set_all_parents()
if isinstance(self.right, KernelExpression):
self.right.parent = self
self.right._set_all_parents()
return self
def _check_all_parents(self):
return all([ct.parent is self and ct._check_all_parents() for ct in [self.left, self.right] if isinstance(ct, KernelExpression)])
def reassign_child(self, old_child, new_child):
if self.left is old_child: self.left = new_child # NOT A deepcopy!
else: self.right = new_child # NOT A deepcopy! # I.e. elif self.right is old_child
return new_child # NOTE THIS RETURN VALUE (used by new_tree_with_self_replaced)
def contains_base(self, bts):
if not isinstance(bts, list): bts = [bts]
return any([branch in bts if isinstance(branch, str) else branch.contains_base(bts) for branch in (self.left, self.right)])
def is_stationary(self):
return False
def to_kernel(self):
left_ker = self.left.to_kernel() if isinstance(self.left, KernelExpression) else base_str_to_ker(self.left)
right_ker = self.right.to_kernel() if isinstance(self.right, KernelExpression) else base_str_to_ker(self.right)
return base_str_to_ker_func[self.CP_or_CW](left_ker, right_ker)
# Methods for after fit
def to_kernel_with_params(self): # To be used on the result of a sum_of_prods_form
assert False, 'to_kernel_with_params called on a ChangeKE; only SumKE and ProductKE terms should be left when calling it after sum_of_prods_form'
def match_up_fit_parameters(self, param_dict, prefix = ''):
if self.is_root(): prefix += self.GPy_name + '.'
elif prefix == '': raise ValueError('No prefix but not root node in match_up_fit_parameters')
self.parameters[self.CP_or_CW].append({p: param_dict[prefix + p] for p in base_k_param_names[self.CP_or_CW]['parameters']})
same_type_branches = type(self.left) == type(self.right) and\
((isinstance(self.left, KernelExpression) and self.left.GPy_name == self.right.GPy_name) or self.left == self.right)
for branch, kex in (('left', self.left), ('right', self.right)):
postfix = '_1.' if same_type_branches and branch == 'right' else '.'
if isinstance(kex, KernelExpression): kex.match_up_fit_parameters(param_dict, prefix + kex.GPy_name + postfix)
else: self.parameters[(branch, kex)].append({p: param_dict[prefix + base_k_param_names[kex]['name'] + postfix + p] for p in base_k_param_names[kex]['parameters']})
return self
@staticmethod # This would live in kernelExpressionOperations if it did not need to be used within ChangeKEs
def add_sum_of_prods_terms(k1, k2):
res = None
pair = sortOutTypePair(k1, k2)
if len(pair) == 1:
if isinstance(k1, ProductKE): res = SumKE([], [k1, k2])
else: res = SumKE(+k1.base_terms + k2.base_terms, k1.composite_terms + k2.composite_terms)._new_parameters(update_dict_with(deepcopy(k1.parameters), k2.parameters, operator.add))
else: # I.e. one SumKE and one ProductKE
if isinstance(k1, ProductKE): res = SumKE(+k2.base_terms, [k1] + k2.composite_terms)._new_parameters(k2.parameters)
else: res = SumKE(+k1.base_terms, k1.composite_terms + [k2])._new_parameters(k1.parameters)
return res._set_all_parents()
def sum_of_prods_form(self):
new_children = []
for branch, kex in (('left', self.left), ('right', self.right)):
sigmoid_parameters = (change_k_sigmoid_names[self.CP_or_CW][branch], self.parameters[self.CP_or_CW][0])
if isinstance(kex, str):
leaf_params = [self.parameters[k][0] for k in self.parameters.keys() if isinstance(k, tuple) and k[0] == branch][0]
new_children.append(ProductKE([]).new_bases_with_parameters([(kex, leaf_params), sigmoid_parameters]))
else:
new_child = kex.sum_of_prods_form()
if isinstance(new_child, ProductKE): new_child.new_bases_with_parameters(sigmoid_parameters)
else: # I.e. SumKE
for pt in new_child.composite_terms: pt.new_bases_with_parameters(sigmoid_parameters)
for bt in new_child.base_terms.elements():
match_ps = new_child.parameters[bt]
new_child.new_composite(ProductKE([]).new_bases_with_parameters([(bt, match_ps[0]), sigmoid_parameters]))
if len(match_ps) == 1: new_child.parameters.pop(bt)
else: del match_ps[0]
new_child.base_terms.clear()
new_children.append(new_child)
return self.add_sum_of_prods_terms(new_children[0], new_children[1]).set_parent(self.parent).set_root(self.root)
# TODO:
# - Redo all ChangeKE methods using 'for branch in (self.left, self.right):' instead of repeated code where appropriate
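# Minimal usage sketch (illustrative; assumes GPy_ABCD's dependencies are installed
# and that 'PER' and 'LIN' are valid base-kernel codes in this package -- adjust if
# the installed version uses different symbols).
if __name__ == '__main__':
    cp = ChangeKE('CP', 'PER', 'LIN')   # changepoint from a periodic to a linear kernel
    print(cp)                           # e.g. CP(PER, LIN)
    print(cp.contains_base('PER'))      # True
    k = cp.to_kernel()                  # build the corresponding GPy kernel object
    print(type(k))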
# /DI_engine-0.4.9-py3-none-any.whl/ding/policy/fqf.py
from typing import List, Dict, Any, Tuple, Union
import copy
import torch
from ding.torch_utils import Adam, RMSprop, to_device
from ding.rl_utils import fqf_nstep_td_data, fqf_nstep_td_error, fqf_calculate_fraction_loss, \
get_train_sample, get_nstep_return_data
from ding.model import model_wrap
from ding.utils import POLICY_REGISTRY
from ding.utils.data import default_collate, default_decollate
from .dqn import DQNPolicy
from .common_utils import default_preprocess_learn
@POLICY_REGISTRY.register('fqf')
class FQFPolicy(DQNPolicy):
r"""
Overview:
Policy class of FQF algorithm.
Config:
== ==================== ======== ============== ======================================== =======================
ID Symbol Type Default Value Description Other(Shape)
== ==================== ======== ============== ======================================== =======================
1 ``type`` str fqf | RL policy register name, refer to | this arg is optional,
| registry ``POLICY_REGISTRY`` | a placeholder
2 ``cuda`` bool False | Whether to use cuda for network | this arg can be diff-
| erent from modes
3 ``on_policy`` bool False | Whether the RL algorithm is on-policy
| or off-policy
4 ``priority`` bool True | Whether use priority(PER) | priority sample,
| update priority
6 | ``other.eps`` float 0.05 | Start value for epsilon decay. It's
| ``.start`` | small because rainbow use noisy net.
7 | ``other.eps`` float 0.05 | End value for epsilon decay.
| ``.end``
8 | ``discount_`` float 0.97, | Reward's future discount factor, aka. | may be 1 when sparse
| ``factor`` [0.95, 0.999] | gamma | reward env
9 ``nstep`` int 3, | N-step reward discount sum for target
[3, 5] | q_value estimation
10 | ``learn.update`` int 3 | How many updates(iterations) to train | this args can be vary
| ``per_collect`` | after collector's one collection. Only | from envs. Bigger val
| valid in serial training | means more off-policy
11 ``learn.kappa`` float / | Threshold of Huber loss
== ==================== ======== ============== ======================================== =======================
"""
config = dict(
# (str) RL policy register name (refer to function "POLICY_REGISTRY").
type='fqf',
# (bool) Whether to use cuda for network.
cuda=False,
# (bool) Whether the RL algorithm is on-policy or off-policy.
on_policy=False,
# (bool) Whether use priority(priority sample, IS weight, update priority)
priority=False,
# (float) Reward's future discount factor, aka. gamma.
discount_factor=0.97,
# (int) N-step reward for target q_value estimation
nstep=1,
learn=dict(
# How many updates(iterations) to train after collector's one collection.
# Bigger "update_per_collect" means bigger off-policy.
# collect data -> update policy-> collect data -> ...
update_per_collect=3,
batch_size=64,
learning_rate_fraction=2.5e-9,
learning_rate_quantile=0.00005,
# ==============================================================
# The following configs are algorithm-specific
# ==============================================================
# (int) Frequency of target network update.
target_update_freq=100,
# (float) Threshold of Huber loss. In the FQF paper, this is denoted by kappa. Default to 1.0.
kappa=1.0,
# (float) Coefficient of entropy_loss.
ent_coef=0,
# (bool) Whether ignore done(usually for max step termination env)
ignore_done=False,
),
# collect_mode config
collect=dict(
# (int) Only one of [n_sample, n_step, n_episode] should be set
# n_sample=8,
# (int) Cut trajectories into pieces with length "unroll_len".
unroll_len=1,
),
eval=dict(),
# other config
other=dict(
# Epsilon greedy with decay.
eps=dict(
# (str) Decay type. Support ['exp', 'linear'].
type='exp',
start=0.95,
end=0.1,
# (int) Decay length(env step)
decay=10000,
),
replay_buffer=dict(replay_buffer_size=10000, )
),
)
def default_model(self) -> Tuple[str, List[str]]:
return 'fqf', ['ding.model.template.q_learning']
def _init_learn(self) -> None:
r"""
Overview:
Learn mode init method. Called by ``self.__init__``.
Init the optimizer, algorithm config, main and target models.
"""
self._priority = self._cfg.priority
# Optimizer
self._fraction_loss_optimizer = RMSprop(
self._model.head.quantiles_proposal.parameters(),
lr=self._cfg.learn.learning_rate_fraction,
alpha=0.95,
eps=0.00001
)
self._quantile_loss_optimizer = Adam(
list(self._model.head.Q.parameters()) + list(self._model.head.fqf_fc.parameters()) +
list(self._model.encoder.parameters()),
lr=self._cfg.learn.learning_rate_quantile,
eps=1e-2 / self._cfg.learn.batch_size
)
self._gamma = self._cfg.discount_factor
self._nstep = self._cfg.nstep
self._kappa = self._cfg.learn.kappa
self._ent_coef = self._cfg.learn.ent_coef
# use model_wrapper for specialized demands of different modes
self._target_model = copy.deepcopy(self._model)
self._target_model = model_wrap(
self._target_model,
wrapper_name='target',
update_type='assign',
update_kwargs={'freq': self._cfg.learn.target_update_freq}
)
self._learn_model = model_wrap(self._model, wrapper_name='argmax_sample')
self._learn_model.reset()
self._target_model.reset()
def _forward_learn(self, data: dict) -> Dict[str, Any]:
r"""
Overview:
Forward and backward function of learn mode.
Arguments:
- data (:obj:`dict`): Dict type data, including at least ['obs', 'action', 'reward', 'next_obs']
Returns:
- info_dict (:obj:`Dict[str, Any]`): Including current lr and loss.
"""
data = default_preprocess_learn(
data, use_priority=self._priority, ignore_done=self._cfg.learn.ignore_done, use_nstep=True
)
if self._cuda:
data = to_device(data, self._device)
# ====================
# Q-learning forward
# ====================
self._learn_model.train()
self._target_model.train()
# Current q value (main model)
ret = self._learn_model.forward(data['obs'])
logit = ret['logit'] # [batch, action_dim(64)]
q_value = ret['q'] # [batch, num_quantiles, action_dim(64)]
quantiles = ret['quantiles'] # [batch, num_quantiles+1]
quantiles_hats = ret['quantiles_hats'] # [batch, num_quantiles], requires_grad = False
q_tau_i = ret['q_tau_i'] # [batch_size, num_quantiles-1, action_dim(64)]
entropies = ret['entropies'] # [batch, 1]
# Target q value
with torch.no_grad():
target_q_value = self._target_model.forward(data['next_obs'])['q']
# Max q value action (main model)
target_q_action = self._learn_model.forward(data['next_obs'])['action']
data_n = fqf_nstep_td_data(
q_value, target_q_value, data['action'], target_q_action, data['reward'], data['done'], quantiles_hats,
data['weight']
)
value_gamma = data.get('value_gamma')
entropy_loss = -self._ent_coef * entropies.mean()
fraction_loss = fqf_calculate_fraction_loss(q_tau_i.detach(), q_value, quantiles, data['action']) + entropy_loss
quantile_loss, td_error_per_sample = fqf_nstep_td_error(
data_n, self._gamma, nstep=self._nstep, kappa=self._kappa, value_gamma=value_gamma
)
# compute grad norm of a network's parameters
def compute_grad_norm(model):
return torch.norm(torch.stack([torch.norm(p.grad.detach(), 2.0) for p in model.parameters()]), 2.0)
# ====================
# fraction_proposal network update
# ====================
self._fraction_loss_optimizer.zero_grad()
fraction_loss.backward(retain_graph=True)
if self._cfg.multi_gpu:
self.sync_gradients(self._learn_model)
with torch.no_grad():
total_norm_quantiles_proposal = compute_grad_norm(self._model.head.quantiles_proposal)
self._fraction_loss_optimizer.step()
# ====================
# Q-learning update
# ====================
self._quantile_loss_optimizer.zero_grad()
quantile_loss.backward()
if self._cfg.multi_gpu:
self.sync_gradients(self._learn_model)
with torch.no_grad():
total_norm_Q = compute_grad_norm(self._model.head.Q)
total_norm_fqf_fc = compute_grad_norm(self._model.head.fqf_fc)
total_norm_encoder = compute_grad_norm(self._model.encoder)
self._quantile_loss_optimizer.step()
# =============
# after update
# =============
self._target_model.update(self._learn_model.state_dict())
return {
'cur_lr_fraction_loss': self._fraction_loss_optimizer.defaults['lr'],
'cur_lr_quantile_loss': self._quantile_loss_optimizer.defaults['lr'],
'logit': logit.mean().item(),
'fraction_loss': fraction_loss.item(),
'quantile_loss': quantile_loss.item(),
'total_norm_quantiles_proposal': total_norm_quantiles_proposal,
'total_norm_Q': total_norm_Q,
'total_norm_fqf_fc': total_norm_fqf_fc,
'total_norm_encoder': total_norm_encoder,
'priority': td_error_per_sample.abs().tolist(),
# Only discrete action satisfying len(data['action'])==1 can return this and draw histogram on tensorboard.
'[histogram]action_distribution': data['action'],
'[histogram]quantiles_hats': quantiles_hats[0], # quantiles_hats.requires_grad = False
}
def _monitor_vars_learn(self) -> List[str]:
return [
'cur_lr_fraction_loss', 'cur_lr_quantile_loss', 'logit', 'fraction_loss', 'quantile_loss',
'total_norm_quantiles_proposal', 'total_norm_Q', 'total_norm_fqf_fc', 'total_norm_encoder'
]
def _state_dict_learn(self) -> Dict[str, Any]:
return {
'model': self._learn_model.state_dict(),
'target_model': self._target_model.state_dict(),
'optimizer_fraction_loss': self._fraction_loss_optimizer.state_dict(),
'optimizer_quantile_loss': self._quantile_loss_optimizer.state_dict(),
}
def _load_state_dict_learn(self, state_dict: Dict[str, Any]) -> None:
self._learn_model.load_state_dict(state_dict['model'])
self._target_model.load_state_dict(state_dict['target_model'])
self._fraction_loss_optimizer.load_state_dict(state_dict['optimizer_fraction_loss'])
self._quantile_loss_optimizer.load_state_dict(state_dict['optimizer_quantile_loss'])
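# Illustrative sketch of the exponential epsilon-greedy schedule configured above
# (start=0.95, end=0.1, decay=10000 env steps). DI-engine has its own scheduler
# implementation; this standalone formula is only meant to show the intended shape.
if __name__ == '__main__':
    import math

    def eps_exp(step, start=0.95, end=0.1, decay=10000):
        # Decays from ``start`` towards ``end`` with time constant ``decay``.
        return end + (start - end) * math.exp(-step / decay)

    for step in (0, 1000, 10000, 50000):
        print(step, round(eps_exp(step), 3))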
# /Gooey-1.2.0a0.tar.gz/Gooey-1.2.0a0/gooey/examples/issue594.py
import wx
from gooey import Gooey, GooeyParser
# @Gooey
# def main():
# parser = GooeyParser(description="Gooey example")
# parser.add_argument("-a", "--myargument",
# # choices=['one', 'two', 'three'], gooey_options={
# # 'label_color': (255, 100, 100)
# # }
# )
# args = parser.parse_args()
# print(args)
# print(args.myargument)
# print(type(args.myargument))
#
#
# if __name__=="__main__":
# main()
import wx
class PromptingComboBox(wx.ComboBox) :
def __init__(self, parent, choices=[], style=0, **par):
wx.ComboBox.__init__(self, parent, wx.ID_ANY, style=style|wx.CB_DROPDOWN, choices=choices, **par)
self.choices = choices
self.Bind(wx.EVT_TEXT, self.OnText)
self.Bind(wx.EVT_KEY_DOWN, self.OnPress)
self.ignoreEvtText = False
self.deleteKey = False
self.preFound = False  # initialise so OnText can check it before any completion has happened
def OnPress(self, event):
if event.GetKeyCode() == 8:
self.deleteKey = True
event.Skip()
def OnText(self, event):
currentText = event.GetString()
if self.ignoreEvtText:
self.ignoreEvtText = False
return
if self.deleteKey:
self.deleteKey = False
if self.preFound:
currentText = currentText[:-1]
self.preFound = False
for choice in self.choices :
if choice.startswith(currentText):
self.ignoreEvtText = True
self.SetValue(choice)
self.SetInsertionPoint(len(currentText))
self.SetTextSelection(len(currentText), len(choice))
self.preFound = True
break
class TrialPanel(wx.Panel):
def __init__(self, parent):
wx.Panel.__init__(self, parent, wx.ID_ANY)
choices = ['grandmother', 'grandfather', 'cousin', 'aunt', 'uncle', 'grandson', 'granddaughter']
for relative in ['mother', 'father', 'sister', 'brother', 'daughter', 'son']:
choices.extend(self.derivedRelatives(relative))
self.choices = choices = sorted(choices)
mainSizer = wx.FlexGridSizer(2, 2, 5, 10)
self.SetSizer(mainSizer)
mainSizer.Add(wx.StaticText(
self, -1, "Worked in Mac - python 3 - wx phoenix"))
cb1 = PromptingComboBox(self, choices=choices)
mainSizer.Add(cb1)
mainSizer.Add(wx.StaticText(self, -1, "Work arround in Linux-gtk"))
sizer2 = wx.BoxSizer(wx.HORIZONTAL)
mainSizer.Add(sizer2)
filterCtrl = wx.TextCtrl(self, -1, size=(150, -1))
filterCtrl.Bind(wx.EVT_TEXT, self.OnFilter)
sizer2.Add(filterCtrl)
self.cb2 = wx.ComboBox(self, -1, size=(150, -1), choices=choices)
sizer2.Add(self.cb2)
def derivedRelatives(self, relative):
return [relative, 'step' + relative, relative + '-in-law']
def OnFilter(self, event):
currentText = event.GetString().upper()
tmpChoices = [c for c in self.choices if c.startswith(currentText)]
if tmpChoices != []:
self.cb2.SetItems(tmpChoices)
self.cb2.SetValue(tmpChoices[0])
else:
self.cb2.SetValue('')
self.cb2.SetItems([])
if __name__ == '__main__':
app = wx.App(False)
frame = wx.Frame (None, -1, 'Demo PromptingComboBox Control and Work around',
size=(700, 400))
TrialPanel(frame)
frame.Show()
app.MainLoop()
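# Note on the demo above: TrialPanel shows two ways to get type-ahead behaviour --
# PromptingComboBox completes in place inside the combo box (the approach that worked
# on macOS / wxPython Phoenix), while the wx.TextCtrl + wx.ComboBox pair driven by
# OnFilter is the work-around for Linux/GTK, where in-place completion misbehaves.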
# /Nuitka_fixed-1.1.2-cp310-cp310-win_amd64.whl/nuitka/tools/specialize/SpecializeC.py
import nuitka.Options
nuitka.Options.is_full_compat = False
# isort:start
import os
import nuitka.specs.BuiltinBytesOperationSpecs
import nuitka.specs.BuiltinDictOperationSpecs
import nuitka.specs.BuiltinStrOperationSpecs
import nuitka.specs.BuiltinUnicodeOperationSpecs
from nuitka.code_generation.BinaryOperationHelperDefinitions import (
getSpecializedBinaryOperations,
parseTypesFromHelper,
)
from nuitka.code_generation.CallCodes import (
getQuickCallCode,
getQuickMethodCallCode,
getQuickMethodDescriptorCallCode,
getQuickMixedCallCode,
getTemplateCodeDeclaredFunction,
max_quick_call,
)
from nuitka.code_generation.ComparisonHelperDefinitions import (
getSpecializedComparisonOperations,
)
from nuitka.code_generation.ImportCodes import getImportModuleHardCodeName
from nuitka.nodes.ImportNodes import (
hard_modules,
hard_modules_non_stdlib,
hard_modules_version,
)
from nuitka.utils.Jinja2 import getTemplateC
from .Common import (
formatArgs,
getMethodVariations,
python2_dict_methods,
python2_str_methods,
python2_unicode_methods,
python3_bytes_methods,
python3_dict_methods,
python3_str_methods,
withFileOpenedAndAutoFormatted,
writeLine,
)
from .CTypeDescriptions import (
bytes_desc,
c_bool_desc,
c_digit_desc,
c_float_desc,
c_long_desc,
dict_desc,
float_desc,
int_desc,
list_desc,
long_desc,
n_bool_desc,
object_desc,
set_desc,
str_desc,
tuple_desc,
unicode_desc,
)
def getDoExtensionUsingTemplateC(template_name):
return getTemplateC(
package_name="nuitka.code_generation",
template_subdir="templates_c",
template_name=template_name,
extensions=("jinja2.ext.do",),
)
class AlternativeTypeBase(object):
# TODO: Base class for alternative types
pass
class AlternativeIntOrClong(AlternativeTypeBase):
# TODO: Base class for alternative type int or clong.
pass
types = (
int_desc,
str_desc,
unicode_desc,
float_desc,
tuple_desc,
list_desc,
set_desc,
dict_desc,
bytes_desc,
long_desc,
c_long_desc,
c_digit_desc,
c_float_desc,
c_bool_desc,
n_bool_desc,
object_desc,
)
def findTypeFromCodeName(code_name):
for candidate in types:
if candidate.getHelperCodeName() == code_name:
return candidate
op_slot_codes = set()
# Reverse operation mapping.
reversed_args_compare_op_codes = {
"LE": "GE",
"LT": "GT",
"EQ": "EQ",
"NE": "NE",
"GT": "LT",
"GE": "LE",
}
def makeCompareSlotCode(operator, op_code, target, left, right, emit):
# Many variations to consider, pylint: disable=too-many-branches
key = operator, op_code, target, left, right
if key in op_slot_codes:
return
int_types_family = (int_desc, c_long_desc)
long_types_family = (int_desc, long_desc, c_long_desc, c_digit_desc)
float_types_family = (int_desc, long_desc, float_desc, c_long_desc, c_float_desc)
if left in int_types_family and right in int_types_family:
template = getDoExtensionUsingTemplateC("HelperOperationComparisonInt.c.j2")
elif left in long_types_family and right in long_types_family:
template = getDoExtensionUsingTemplateC("HelperOperationComparisonLong.c.j2")
elif left in float_types_family and right in float_types_family:
template = getDoExtensionUsingTemplateC("HelperOperationComparisonFloat.c.j2")
elif left == int_desc:
template = getDoExtensionUsingTemplateC("HelperOperationComparisonInt.c.j2")
elif left == long_desc:
template = getDoExtensionUsingTemplateC("HelperOperationComparisonLong.c.j2")
elif left == float_desc:
template = getDoExtensionUsingTemplateC("HelperOperationComparisonFloat.c.j2")
elif left == tuple_desc:
template = getDoExtensionUsingTemplateC("HelperOperationComparisonTuple.c.j2")
elif left == list_desc:
template = getDoExtensionUsingTemplateC("HelperOperationComparisonList.c.j2")
# elif left == set_desc:
# template = env.get_template("HelperOperationComparisonSet.c.j2")
elif left == bytes_desc:
template = getDoExtensionUsingTemplateC("HelperOperationComparisonBytes.c.j2")
elif left == str_desc:
template = getDoExtensionUsingTemplateC("HelperOperationComparisonStr.c.j2")
elif left == unicode_desc:
template = getDoExtensionUsingTemplateC("HelperOperationComparisonUnicode.c.j2")
else:
return
assert left is not int_desc or right is not int_desc or target is not n_bool_desc
code = template.render(
operand=operator, # TODO: rename
target=target,
left=left,
right=right,
op_code=op_code,
reversed_args_op_code=reversed_args_compare_op_codes[op_code],
name=template.name,
long_desc=long_desc,
c_long_desc=c_long_desc,
c_digit_desc=c_digit_desc,
)
emit(code)
op_slot_codes.add(key)
mul_repeats = set()
def makeMulRepeatCode(target, left, right, emit):
key = right, left
if key in mul_repeats:
return
template = getDoExtensionUsingTemplateC("HelperOperationMulRepeatSlot.c.j2")
code = template.render(target=target, left=left, right=right)
emit(code)
mul_repeats.add(key)
def _getNbSlotFromOperand(operand, op_code):
# pylint: disable=too-many-branches,too-many-return-statements
if operand == "+":
return "nb_add"
elif operand == "*":
return "nb_multiply"
elif operand == "-":
return "nb_subtract"
elif operand == "//":
return "nb_floor_divide"
elif operand == "/":
if op_code == "TRUEDIV":
return "nb_true_divide"
else:
return "nb_divide"
elif operand == "%":
return "nb_remainder"
elif operand == "**":
return "nb_power"
elif operand == "<<":
return "nb_lshift"
elif operand == ">>":
return "nb_rshift"
elif operand == "|":
return "nb_or"
elif operand == "&":
return "nb_and"
elif operand == "^":
return "nb_xor"
elif operand == "@":
return "nb_matrix_multiply"
elif operand == "divmod":
return "nb_divmod"
else:
assert False, operand
def _getNbInplaceSlotFromOperand(operand, op_code):
if operand == "divmod":
return None
nb_slot = _getNbSlotFromOperand(operand, op_code)
return nb_slot.replace("nb_", "nb_inplace_")
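# Illustrative examples (derived from the two helpers above, not extra functionality):
#
#   _getNbSlotFromOperand("+", "ADD")        -> "nb_add"
#   _getNbSlotFromOperand("/", "TRUEDIV")    -> "nb_true_divide"
#   _getNbSlotFromOperand("/", "OLDDIV")     -> "nb_divide"
#   _getNbInplaceSlotFromOperand("+", "ADD") -> "nb_inplace_add"
#
# The in-place variant simply rewrites the "nb_" prefix to "nb_inplace_", mirroring
# CPython's number-protocol slot naming.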
def _parseTypesFromHelper(helper_name):
(
target_code,
left_code,
right_code,
) = parseTypesFromHelper(helper_name)
if target_code is not None:
target = findTypeFromCodeName(target_code)
else:
target = None
left = findTypeFromCodeName(left_code)
right = findTypeFromCodeName(right_code)
return target_code, target, left, right
def _parseRequirements(op_code, target, left, right, emit):
python_requirement = set()
# There is an obsolete Python2 operation too, making sure it's guarded in code.
if op_code == "OLDDIV":
python_requirement.add(int_desc.python_requirement)
if op_code == "MATMULT":
python_requirement.add("PYTHON_VERSION >= 0x350")
if target is not None and target.python_requirement:
python_requirement.add(target.python_requirement)
if left.python_requirement:
python_requirement.add(left.python_requirement)
if right.python_requirement:
python_requirement.add(right.python_requirement)
if python_requirement:
assert len(python_requirement) == 1, (target, left, right)
python_requirement = python_requirement.pop()
emit("#if %s" % python_requirement)
return python_requirement
def makeHelperOperations(
template, inplace, helpers_set, operator, op_code, emit_h, emit_c, emit
):
# Complexity comes natural, pylint: disable=too-many-locals
emit(
'/* C helpers for type %s "%s" (%s) operations */'
% ("in-place" if inplace else "specialized", operator, op_code)
)
emit()
for helper_name in helpers_set:
target_code, target, left, right = _parseTypesFromHelper(helper_name)
assert target is None or not inplace, helper_name
if target is None and not inplace:
assert False, target_code
python_requirement = _parseRequirements(op_code, target, left, right, emit)
emit(
'/* Code referring to "%s" corresponds to %s and "%s" to %s. */'
% (
left.getHelperCodeName(),
left.type_desc,
right.getHelperCodeName(),
right.type_desc,
)
)
if operator == "+":
sq_slot = "sq_concat"
elif operator == "*":
sq_slot = "sq_repeat"
else:
sq_slot = None
if inplace and sq_slot is not None:
sq_inplace_slot = sq_slot.replace("sq_", "sq_inplace_")
else:
sq_inplace_slot = None
code = template.render(
target=target,
left=left,
right=right,
op_code=op_code,
operator=operator,
nb_slot=_getNbSlotFromOperand(operator, op_code),
nb_inplace_slot=_getNbInplaceSlotFromOperand(operator, op_code)
if inplace
else None,
sq_slot=sq_slot,
sq_inplace_slot=sq_inplace_slot,
object_desc=object_desc,
int_desc=int_desc,
long_desc=long_desc,
float_desc=float_desc,
list_desc=list_desc,
tuple_desc=tuple_desc,
set_desc=set_desc,
str_desc=str_desc,
unicode_desc=unicode_desc,
bytes_desc=bytes_desc,
c_long_desc=c_long_desc,
c_digit_desc=c_digit_desc,
)
emit_c(code)
emit_h(getTemplateCodeDeclaredFunction(code))
if python_requirement:
emit("#endif")
emit()
def makeHelperComparisons(
template, helpers_set, operator, op_code, emit_h, emit_c, emit
):
# Details to look for, pylint: disable=too-many-locals
emit(
'/* C helpers for type specialized "%s" (%s) comparisons */'
% (operator, op_code)
)
emit()
for target in (object_desc, c_bool_desc):
python_requirement = _parseRequirements(
op_code, target, int_desc, int_desc, emit_c
)
makeCompareSlotCode(operator, op_code, target, int_desc, int_desc, emit_c)
if python_requirement:
emit_c("#endif")
for helper_name in helpers_set:
assert helper_name.split("_")[:2] == ["RICH", "COMPARE"], (helper_name,)
# Filter for the operation.
if helper_name.split("_")[2] != op_code:
continue
_target_code, target, left, right = _parseTypesFromHelper(helper_name)
assert target is not None, helper_name
assert left is not None, helper_name
assert right is not None, helper_name
python_requirement = _parseRequirements(op_code, target, left, right, emit)
(
code,
helper_target,
type_desc1,
type_desc2,
_operand1,
_operand2,
) = left.getTypeComparisonSpecializationHelper(
other=right,
op_code=op_code,
target=target,
operand1="operand1",
operand2="operand2",
)
if code:
makeCompareSlotCode(
operator, op_code, helper_target, type_desc1, type_desc2, emit_c
)
emit(
'/* Code referring to "%s" corresponds to %s and "%s" to %s. */'
% (
left.getHelperCodeName(),
left.type_desc,
right.getHelperCodeName(),
right.type_desc,
)
)
if not python_requirement:
is_py3_only = False
is_py2_only = False
elif python_requirement == "PYTHON_VERSION < 0x300":
is_py3_only = False
is_py2_only = True
elif python_requirement == "PYTHON_VERSION >= 0x300":
is_py3_only = True
is_py2_only = False
else:
assert False, python_requirement
code = template.render(
target=target,
left=left,
right=right,
op_code=op_code,
reversed_args_op_code=reversed_args_compare_op_codes[op_code],
operator=operator,
is_py3_only=is_py3_only,
is_py2_only=is_py2_only,
object_desc=object_desc,
int_desc=int_desc,
)
emit_c(code)
emit_h(getTemplateCodeDeclaredFunction(code))
if python_requirement:
emit("#endif")
emit()
def emitGenerationWarning(emit, template_name):
emit(
"/* WARNING, this code is GENERATED. Modify the template %s instead! */"
% template_name
)
def emitIDE(emit):
emit(
"""
/* This file is included from another C file, help IDEs to still parse it on its own. */
#ifdef __IDE_ONLY__
#include "nuitka/prelude.h"
#endif
"""
)
def makeHelpersComparisonOperation(operand, op_code):
specialized_cmp_helpers_set = getSpecializedComparisonOperations()
template = getDoExtensionUsingTemplateC("HelperOperationComparison.c.j2")
filename_c = "nuitka/build/static_src/HelpersComparison%s.c" % op_code.capitalize()
filename_h = "nuitka/build/include/nuitka/helper/comparisons_%s.h" % op_code.lower()
with withFileOpenedAndAutoFormatted(filename_c) as output_c:
with withFileOpenedAndAutoFormatted(filename_h) as output_h:
def emit_h(*args):
writeLine(output_h, *args)
def emit_c(*args):
writeLine(output_c, *args)
def emit(*args):
emit_h(*args)
emit_c(*args)
emitGenerationWarning(emit, template.name)
emitIDE(emit)
filename_utils = filename_c[:-2] + "Utils.c"
if os.path.exists(filename_utils):
emit_c('#include "%s"' % os.path.basename(filename_utils))
makeHelperComparisons(
template,
specialized_cmp_helpers_set,
operand,
op_code,
emit_h,
emit_c,
emit,
)
def makeHelpersBinaryOperation(operand, op_code):
specialized_op_helpers_set = getSpecializedBinaryOperations(op_code)
template = getDoExtensionUsingTemplateC("HelperOperationBinary.c.j2")
filename_c = (
"nuitka/build/static_src/HelpersOperationBinary%s.c" % op_code.capitalize()
)
filename_h = (
"nuitka/build/include/nuitka/helper/operations_binary_%s.h" % op_code.lower()
)
with withFileOpenedAndAutoFormatted(filename_c) as output_c:
with withFileOpenedAndAutoFormatted(filename_h) as output_h:
def emit_h(*args):
writeLine(output_h, *args)
def emit_c(*args):
writeLine(output_c, *args)
def emit(*args):
emit_h(*args)
emit_c(*args)
emitGenerationWarning(emit, template.name)
emitIDE(emit)
filename_utils = filename_c[:-2] + "Utils.c"
if os.path.exists(filename_utils):
emit_c('#include "%s"' % os.path.basename(filename_utils))
makeHelperOperations(
template,
False,
specialized_op_helpers_set,
operand,
op_code,
emit_h,
emit_c,
emit,
)
def makeHelpersInplaceOperation(operand, op_code):
specialized_op_helpers_set = getSpecializedBinaryOperations("I" + op_code)
template = getDoExtensionUsingTemplateC("HelperOperationInplace.c.j2")
filename_c = (
"nuitka/build/static_src/HelpersOperationInplace%s.c" % op_code.capitalize()
)
filename_h = (
"nuitka/build/include/nuitka/helper/operations_inplace_%s.h" % op_code.lower()
)
with withFileOpenedAndAutoFormatted(filename_c) as output_c:
with withFileOpenedAndAutoFormatted(filename_h) as output_h:
def emit_h(*args):
writeLine(output_h, *args)
def emit_c(*args):
writeLine(output_c, *args)
def emit(*args):
emit_h(*args)
emit_c(*args)
emitGenerationWarning(emit, template.name)
emitIDE(emit)
filename_utils = filename_c[:-2] + "Utils.c"
if os.path.exists(filename_utils):
emit_c('#include "%s"' % os.path.basename(filename_utils))
makeHelperOperations(
template,
True,
specialized_op_helpers_set,
operand,
op_code,
emit_h,
emit_c,
emit,
)
def makeHelpersImportHard():
filename_c = "nuitka/build/static_src/HelpersImportHard.c"
filename_h = "nuitka/build/include/nuitka/helper/import_hard.h"
template = getDoExtensionUsingTemplateC("HelperImportHard.c.j2")
with withFileOpenedAndAutoFormatted(filename_c) as output_c:
with withFileOpenedAndAutoFormatted(filename_h) as output_h:
def emit_h(*args):
writeLine(output_h, *args)
def emit_c(*args):
writeLine(output_c, *args)
def emit(*args):
emit_h(*args)
emit_c(*args)
emitGenerationWarning(emit, template.name)
emitIDE(emit)
for module_name in sorted(hard_modules):
makeHelperImportModuleHard(
template,
module_name,
emit_h,
emit_c,
emit,
)
def makeHelperImportModuleHard(template, module_name, emit_h, emit_c, emit):
emit('/* C helper for hard import of module "%s" import. */' % module_name)
python_min_max_version = hard_modules_version.get(module_name)
if python_min_max_version is not None:
python_min_version, python_max_version = python_min_max_version
parts = []
if python_min_version is not None:
parts.append("PYTHON_VERSION >= %s" % hex(python_min_version))
if python_max_version is not None:
parts.append("PYTHON_VERSION < %s" % hex(python_max_version))
python_requirement = " && ".join(parts)
else:
python_requirement = None
if python_requirement:
emit("#if %s" % python_requirement)
code = template.render(
module_name=module_name,
module_code_name=getImportModuleHardCodeName(module_name),
name=template.name,
target=object_desc,
is_stdlib=module_name not in hard_modules_non_stdlib,
)
emit_c(code)
emit_h(getTemplateCodeDeclaredFunction(code))
if python_requirement:
emit("#endif")
emit()
def makeHelperCalls():
filename_c = "nuitka/build/static_src/HelpersCalling2.c"
filename_h = "nuitka/build/include/nuitka/helper/calling2.h"
with withFileOpenedAndAutoFormatted(filename_c) as output_c:
with withFileOpenedAndAutoFormatted(filename_h) as output_h:
def emit_h(*args):
assert args[0] != "extern "
writeLine(output_h, *args)
def emit_c(*args):
writeLine(output_c, *args)
def emit(*args):
emit_h(*args)
emit_c(*args)
template = getTemplateC(
"nuitka.code_generation", "CodeTemplateCallsPositional.c.j2"
)
emitGenerationWarning(emit, template.name)
emitIDE(emit)
for args_count in range(max_quick_call + 1):
code = getQuickCallCode(args_count=args_count, has_tuple_arg=False)
emit_c(code)
emit_h(getTemplateCodeDeclaredFunction(code))
if args_count >= 1:
code = getQuickCallCode(args_count=args_count, has_tuple_arg=True)
emit_c(code)
emit_h(getTemplateCodeDeclaredFunction(code))
template = getTemplateC(
"nuitka.code_generation", "CodeTemplateCallsMixed.c.j2"
)
# Only keywords, but not positional arguments, via split args.
code = getQuickMixedCallCode(
args_count=0,
has_tuple_arg=False,
has_dict_values=True,
)
emit_c(code)
emit_h(getTemplateCodeDeclaredFunction(code))
for args_count in range(1, max_quick_call + 1):
for has_tuple_arg in (False, True):
for has_dict_values in (False, True):
# We do not do that.
if not has_dict_values and has_tuple_arg:
continue
code = getQuickMixedCallCode(
args_count=args_count,
has_tuple_arg=has_tuple_arg,
has_dict_values=has_dict_values,
)
emit_c(code)
emit_h(getTemplateCodeDeclaredFunction(code))
for args_count in range(1, 5):
code = getQuickMethodDescriptorCallCode(args_count=args_count)
emit_c(code)
emit_h(getTemplateCodeDeclaredFunction(code))
for args_count in range(max_quick_call + 1):
code = getQuickMethodCallCode(args_count=args_count)
emit_c(code)
emit_h(getTemplateCodeDeclaredFunction(code))
def _makeHelperBuiltinTypeAttributes(
type_prefix, type_name, python2_methods, python3_methods, emit_c, emit_h
):
# many cases to deal with, pylint: disable=too-many-branches
def getVarName(method_name):
return "%s_builtin_%s" % (type_prefix, method_name)
for method_name in sorted(set(python2_methods + python3_methods)):
is_public = method_name in ("format",)
if method_name in python2_methods and method_name not in python3_methods:
emit_c("#if PYTHON_VERSION < 0x300")
if is_public:
emit_h("#if PYTHON_VERSION < 0x300")
needs_endif = True
elif method_name not in python2_methods and method_name in python3_methods:
emit_c("#if PYTHON_VERSION >= 0x300")
if is_public:
emit_h("#if PYTHON_VERSION >= 0x300")
needs_endif = True
else:
needs_endif = False
if not is_public:
emit_c("static")
emit_c("PyObject *%s = NULL;" % getVarName(method_name))
if is_public:
emit_h("extern PyObject *%s;" % getVarName(method_name))
if needs_endif:
emit_c("#endif")
if is_public:
emit_h("#endif")
if not python3_methods:
emit_c("#if PYTHON_VERSION < 0x300")
if not python2_methods:
emit_c("#if PYTHON_VERSION >= 0x300")
emit_c("static void _init%sBuiltinMethods(void) {" % type_prefix.capitalize())
for method_name in sorted(set(python2_methods + python3_methods)):
if (
method_name in python2_methods
and method_name not in python3_methods
and python3_methods
):
emit_c("#if PYTHON_VERSION < 0x300")
needs_endif = True
elif (
method_name not in python2_methods
and method_name in python3_methods
and python2_methods
):
emit_c("#if PYTHON_VERSION >= 0x300")
needs_endif = True
else:
needs_endif = False
emit_c(
'%s = PyObject_GetAttrString((PyObject *)&%s, "%s");'
% (getVarName(method_name), type_name, method_name)
)
if needs_endif:
emit_c("#endif")
emit_c("}")
if not python2_methods or not python3_methods:
emit_c("#endif")
generate_builtin_type_operations = [
# TODO: For these, we would need an implementation for adding/deleting dictionary values. That
# has turned out to be too hard so far and these are very good friends, not doing hashing
# multiple times when reading and writing, so can't do it unless we add something for the
# Nuitka-Python eventually.
(
"tshape_dict",
dict_desc,
nuitka.specs.BuiltinDictOperationSpecs,
("pop", "popitem", "setdefault"),
),
# TODO: These are very complex things using "string lib" code in CPython,
# that we do not have easy access to, but we might one day for Nuitka-Python
# expose it for the static linking of it and then we could in fact call
# these directly.
(
"tshape_str",
str_desc,
nuitka.specs.BuiltinStrOperationSpecs,
(
"strip",
"rstrip",
"lstrip",
"partition",
"rpartition",
"find",
"rfind",
"index",
"rindex",
"capitalize",
"upper",
"lower",
"swapcase",
"title",
"isalnum",
"isalpha",
"isdigit",
"islower",
"isupper",
"isspace",
"istitle",
"split",
"rsplit",
"startswith",
"endswith",
"replace",
"encode",
"decode",
"count",
"expandtabs",
"translate",
"ljust",
"rjust",
"center",
"zfill",
"splitlines",
),
),
# TODO: This is using Python2 spec module for Python3 strings, that will be a problem down the
# road, when version specifics come in.
(
"tshape_unicode",
unicode_desc,
nuitka.specs.BuiltinUnicodeOperationSpecs,
(
"strip",
"rstrip",
"lstrip",
"find",
"rfind",
"index",
"rindex",
"capitalize",
"upper",
"lower",
"swapcase",
"title",
"isalnum",
"isalpha",
"isdigit",
"islower",
"isupper",
"isspace",
"istitle",
"split",
"rsplit",
"startswith",
"endswith",
"replace",
"encode",
"count",
"expandtabs",
"translate",
"ljust",
"rjust",
"center",
"zfill",
"splitlines",
),
),
(
"tshape_bytes",
bytes_desc,
nuitka.specs.BuiltinBytesOperationSpecs,
("decode",),
),
]
def makeHelperBuiltinTypeMethods():
# Many details, pylint: disable=too-many-locals
filename_c = "nuitka/build/static_src/HelpersBuiltinTypeMethods.c"
filename_h = "nuitka/build/include/nuitka/helper/operations_builtin_types.h"
with withFileOpenedAndAutoFormatted(filename_c) as output_c:
with withFileOpenedAndAutoFormatted(filename_h) as output_h:
def emit_h(*args):
writeLine(output_h, *args)
def emit_c(*args):
writeLine(output_c, *args)
def emit(*args):
emit_h(*args)
emit_c(*args)
emitIDE(emit)
_makeHelperBuiltinTypeAttributes(
"str", "PyString_Type", python2_str_methods, (), emit_c, emit_h
)
_makeHelperBuiltinTypeAttributes(
"bytes", "PyBytes_Type", (), python3_bytes_methods, emit_c, emit_h
)
_makeHelperBuiltinTypeAttributes(
"unicode",
"PyUnicode_Type",
python2_unicode_methods,
python3_str_methods,
emit_c,
emit_h,
)
_makeHelperBuiltinTypeAttributes(
"dict",
"PyDict_Type",
python2_dict_methods,
python3_dict_methods,
emit_c,
emit_h,
)
template = getDoExtensionUsingTemplateC("HelperBuiltinMethodOperation.c.j2")
for (
shape_name,
type_desc,
spec_module,
method_names,
) in generate_builtin_type_operations:
if type_desc.python_requirement:
emit("#if %s" % type_desc.python_requirement)
for method_name in sorted(method_names):
(
present,
arg_names,
_arg_tests,
arg_name_mapping,
arg_counts,
) = getMethodVariations(
spec_module=spec_module,
shape_name=shape_name,
method_name=method_name,
must_exist=True,
)
assert present, method_name
def formatArgumentDeclaration(arg_types, arg_names, starting):
return formatArgs(
[
arg_type.getVariableDecl(arg_name)
for arg_type, arg_name in zip(arg_types, arg_names)
],
starting=starting,
)
# Function is used immediately in same loop, pylint: disable=cell-var-from-loop
def replaceArgNameForC(arg_name):
if arg_name in arg_name_mapping:
arg_name = arg_name_mapping[arg_name]
if arg_name in ("default", "new"):
return arg_name + "_value"
else:
return arg_name
for arg_count in arg_counts:
variant_args = [
replaceArgNameForC(arg_name)
for arg_name in arg_names[:arg_count]
]
code = template.render(
object_desc=object_desc,
builtin_type=type_desc,
builtin_arg_name=type_desc.type_name,
method_name=method_name,
api_suffix=str(arg_count + 1)
if len(arg_counts) > 1
else "",
arg_names=variant_args,
arg_types=[object_desc] * len(variant_args),
formatArgumentDeclaration=formatArgumentDeclaration,
zip=zip,
len=len,
name=template.name,
)
emit_c(code)
emit_h(getTemplateCodeDeclaredFunction(code))
if type_desc.python_requirement:
emit("#endif")
def main():
# Cover many things once first, then cover all for quicker turnaround during development.
makeHelperBuiltinTypeMethods()
makeHelpersComparisonOperation("==", "EQ")
makeHelpersBinaryOperation("+", "ADD")
makeHelpersInplaceOperation("+", "ADD")
makeHelpersImportHard()
makeHelperCalls()
makeHelpersBinaryOperation("-", "SUB")
makeHelpersBinaryOperation("*", "MULT")
makeHelpersBinaryOperation("%", "MOD")
makeHelpersBinaryOperation("|", "BITOR")
makeHelpersBinaryOperation("&", "BITAND")
makeHelpersBinaryOperation("^", "BITXOR")
makeHelpersBinaryOperation("<<", "LSHIFT")
makeHelpersBinaryOperation(">>", "RSHIFT")
makeHelpersBinaryOperation("//", "FLOORDIV")
makeHelpersBinaryOperation("/", "TRUEDIV")
makeHelpersBinaryOperation("/", "OLDDIV")
makeHelpersBinaryOperation("divmod", "DIVMOD")
makeHelpersBinaryOperation("**", "POW")
makeHelpersBinaryOperation("@", "MATMULT")
makeHelpersInplaceOperation("-", "SUB")
makeHelpersInplaceOperation("*", "MULT")
makeHelpersInplaceOperation("%", "MOD")
makeHelpersInplaceOperation("|", "BITOR")
makeHelpersInplaceOperation("&", "BITAND")
makeHelpersInplaceOperation("^", "BITXOR")
makeHelpersInplaceOperation("<<", "LSHIFT")
makeHelpersInplaceOperation(">>", "RSHIFT")
makeHelpersInplaceOperation("//", "FLOORDIV")
makeHelpersInplaceOperation("/", "TRUEDIV")
makeHelpersInplaceOperation("/", "OLDDIV")
makeHelpersInplaceOperation("**", "POW")
makeHelpersInplaceOperation("@", "MATMULT")
makeHelpersComparisonOperation("!=", "NE")
makeHelpersComparisonOperation("<=", "LE")
makeHelpersComparisonOperation(">=", "GE")
makeHelpersComparisonOperation(">", "GT")
makeHelpersComparisonOperation("<", "LT")
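# For orientation: running main() regenerates the helper C sources under
# nuitka/build/static_src/ (e.g. HelpersOperationBinaryAdd.c, HelpersComparisonEq.c)
# together with the matching declarations under nuitka/build/include/nuitka/helper/,
# following the filename patterns used by the make* functions above.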
# /EARL-pytorch-0.5.1.tar.gz/EARL-pytorch-0.5.1/rlgym_tools/sb3_utils/sb3_multiple_instance_env.py
import multiprocessing as mp
import os
import time
from typing import Optional, List, Union, Any, Callable, Sequence
import numpy as np
from stable_baselines3.common.vec_env import SubprocVecEnv, CloudpickleWrapper, VecEnv
from stable_baselines3.common.vec_env.base_vec_env import (
VecEnvObs,
VecEnvStepReturn,
VecEnvIndices,
)
from stable_baselines3.common.vec_env.subproc_vec_env import _worker
from rlgym.envs import Match
from rlgym.gym import Gym
from rlgym.gamelaunch import LaunchPreference
class SB3MultipleInstanceEnv(SubprocVecEnv):
"""
Class for launching several Rocket League instances into a single SubprocVecEnv for use with Stable Baselines.
"""
MEM_INSTANCE_LAUNCH = 3.5e9
MEM_INSTANCE_LIM = 4e6
@staticmethod
def estimate_supported_processes():
import psutil
vm = psutil.virtual_memory()
# Need 3.5GB to launch, reduces to 350MB after a while
est_proc_mem = round(
(vm.available - SB3MultipleInstanceEnv.MEM_INSTANCE_LAUNCH)
/ SB3MultipleInstanceEnv.MEM_INSTANCE_LIM
)
est_proc_cpu = os.cpu_count()
est_proc = min(est_proc_mem, est_proc_cpu)
return est_proc
def __init__(
self,
match_func_or_matches: Union[Callable[[], Match], Sequence[Match]],
num_instances: Optional[int] = None,
launch_preference: Optional[Union[LaunchPreference, str]] = LaunchPreference.EPIC,
wait_time: float = 30,
force_paging: bool = False,
):
"""
:param match_func_or_matches: either a function which produces a Match object, or a list of Match objects.
Needs to be a function so that each subprocess can call it and get their own objects.
:param num_instances: the number of Rocket League instances to start up,
or "auto" to estimate how many instances are supported (requires psutil).
:param wait_time: the time to wait between launching each instance, in seconds. Default 30 seconds.
:param force_paging: enable forced paging of each spawned rocket league instance to reduce memory utilization
immediately, instead of allowing the OS to slowly page untouched allocations.
WARNING: This will require you to potentially expand your Windows Page File, and it may
substantially increase disk activity, leading to decreased disk lifetime.
Use at your own peril.
https://www.tomshardware.com/news/how-to-manage-virtual-memory-pagefile-windows-10,36929.html
Default is off: OS dictates the behavior.
"""
if callable(match_func_or_matches):
assert num_instances is not None, (
"If using a function to generate Match objects, "
"num_instances must be specified"
)
if num_instances == "auto":
num_instances = SB3MultipleInstanceEnv.estimate_supported_processes()
match_func_or_matches = [
match_func_or_matches() for _ in range(num_instances)
]
def get_process_func(i):
def spawn_process():
match = match_func_or_matches[i]
env = Gym(
match,
pipe_id=os.getpid(),
launch_preference=launch_preference,
use_injector=True,
force_paging=force_paging,
)
return env
return spawn_process
# super().__init__([]) Super init intentionally left out since we need to launch processes with delay
env_fns = [get_process_func(i) for i in range(len(match_func_or_matches))]
# START - Code from SubprocVecEnv class
self.waiting = False
self.closed = False
n_envs = len(env_fns)
# Fork is not a thread safe method (see issue #217)
# but is more user friendly (does not require to wrap the code in
# a `if __name__ == "__main__":`)
forkserver_available = "forkserver" in mp.get_all_start_methods()
start_method = "forkserver" if forkserver_available else "spawn"
ctx = mp.get_context(start_method)
self.remotes, self.work_remotes = zip(*[ctx.Pipe() for _ in range(n_envs)])
self.processes = []
for work_remote, remote, env_fn in zip(
self.work_remotes, self.remotes, env_fns
):
args = (work_remote, remote, CloudpickleWrapper(env_fn))
# daemon=True: if the main process crashes, we should not cause things to hang
process = ctx.Process(
target=_worker, args=args, daemon=True
) # pytype:disable=attribute-error
process.start()
self.processes.append(process)
work_remote.close()
if len(self.processes) != len(env_fns):
time.sleep(wait_time) # ADDED - Waits between starting Rocket League instances
self.remotes[0].send(("get_spaces", None))
observation_space, action_space = self.remotes[0].recv()
# END - Code from SubprocVecEnv class
self.n_agents_per_env = [m.agents for m in match_func_or_matches]
self.num_envs = sum(self.n_agents_per_env)
VecEnv.__init__(self, self.num_envs, observation_space, action_space)
def reset(self) -> VecEnvObs:
for remote in self.remotes:
remote.send(("reset", None))
flat_obs = []
for remote, n_agents in zip(self.remotes, self.n_agents_per_env):
obs = remote.recv()
if n_agents <= 1:
flat_obs.append(obs)
else:
flat_obs += obs
return np.asarray(flat_obs)
def step_async(self, actions: np.ndarray) -> None:
i = 0
for remote, n_agents in zip(self.remotes, self.n_agents_per_env):
remote.send(("step", actions[i : i + n_agents, :]))
i += n_agents
self.waiting = True
def step_wait(self) -> VecEnvStepReturn:
flat_obs = []
flat_rews = []
flat_dones = []
flat_infos = []
for remote, n_agents in zip(self.remotes, self.n_agents_per_env):
obs, rew, done, info = remote.recv()
if n_agents <= 1:
flat_obs.append(obs)
flat_rews.append(rew)
flat_dones.append(done)
flat_infos.append(info)
else:
flat_obs += obs
flat_rews += rew
flat_dones += [done] * n_agents
flat_infos += [info] * n_agents
self.waiting = False
return (
np.asarray(flat_obs),
np.array(flat_rews),
np.array(flat_dones),
flat_infos,
)
def seed(self, seed: Optional[int] = None) -> List[Union[None, int]]:
res = super(SB3MultipleInstanceEnv, self).seed(seed)
return sum(([r] * a for r, a in zip(res, self.n_agents_per_env)), [])  # repeat each env's seed once per agent
def _get_target_remotes(self, indices: VecEnvIndices) -> List[Any]:
# Override to prevent out of bounds
indices = self._get_indices(indices)
remotes = []
for i in indices:
tot = 0
for remote, n_agents in zip(self.remotes, self.n_agents_per_env):
tot += n_agents
if i < tot:
remotes.append(remote)
break
return remotes
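# Usage sketch (illustrative only -- the Match arguments are placeholders, not the
# exact rlgym signature, and every instance launches a full Rocket League client,
# so this is not meant to run as-is):
#
#   def get_match():
#       return Match(...)  # reward function, obs builder, terminal conditions, etc.
#
#   env = SB3MultipleInstanceEnv(get_match, num_instances=2, wait_time=30)
#   model = PPO("MlpPolicy", env)  # stable-baselines3 PPO, assuming it is installed
#   model.learn(total_timesteps=1_000_000)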
# /Helmholtz-0.2.0.tar.gz/Helmholtz-0.2.0/helmholtz/storage/models.py
from django.utils.datastructures import SortedDict
from django.db import models
from helmholtz.core.shortcuts import find_file
class CommunicationProtocol(models.Model):
"""Existing communication protocols."""
name = models.TextField(primary_key=True)
initials = models.CharField(max_length=10)
def __unicode__(self):
return self.initials
class MimeType(models.Model):
"""Existing type of :class:`File`."""
extension = models.CharField(primary_key=True, max_length=8)
name = models.CharField(max_length=32)
def __unicode__(self):
return self.name
shortname = property(__unicode__)
class FileServer(models.Model):
"""Physical storage where a :class:`File` could be stored."""
label = models.CharField(max_length=16)
ip_address = models.IPAddressField(default="127.0.0.1")
protocol = models.ForeignKey(CommunicationProtocol, null=True, blank=True)
port = models.PositiveIntegerField(null=True, blank=True)
def get_url(self):
"""Reconstruct :class:`FileServer` URL from attributes."""
url = ''
if self.protocol and self.ip_address :
url += "%s://%s%s/" % (self.protocol.initials.lower(), self.ip_address, '' if not self.port else ":%s" % self.port)
return url
url = property(get_url)
def __unicode__(self):
return self.label
class Meta:
ordering = ['protocol', 'ip_address', 'port']
class FileLocation(models.Model):
"""Path on a :class:`FileServer` where a :class:`File` is located."""
server = models.ForeignKey(FileServer)
drive = models.CharField(max_length="2", null=True, blank=True)
root = models.TextField(null=True, blank=True)
path = models.TextField()
def get_path(self):
"""Reconstruct :class:`FileLocation` path from attributes."""
st = ''
if self.drive :
st += self.drive
if self.root :
slashing = "/" if not self.drive else "\\"
st += (self.root + slashing)
st += self.path
return st
hdd_path = property(get_path)
def get_url(self):
"""Reconstruct :class:`FileLocation` URL from attributes."""
url = self.server.url + self.hdd_path
return url
url = property(get_url)
def __unicode__(self):
return self.url
class Meta:
ordering = ['server', 'root', 'path']
class File(models.Model) :
"""File containing data."""
name = models.TextField()
location = models.ForeignKey(FileLocation, null=True)
mimetype = models.ForeignKey(MimeType, null=True)
original = models.NullBooleanField(null=True, blank=True)
creation_date = models.DateTimeField(null=True, blank=True)
size = models.IntegerField(null=True, blank=True)
notes = models.TextField(null=True, blank=True)
def get_filename(self):
"""Get the complete filename of a :class:`File`."""
st = self.name
if self.mimetype :
st += '.' + self.mimetype.extension
return st
filename = property(get_filename)
def is_available(self):
"""Tell if a :class:`File` is actually on hard disk drive."""
results = find_file(self.location.hdd_path, self.filename)
return (len(results) == 1)
def get_all_file_formats(self):
"""Get all available file formats for a :class:`File`."""
dct = SortedDict()
pattern = "%s.*" % self.name
results = find_file(self.location.hdd_path, pattern)
for path in results :
format = path.split('.')[-1].lower()
dct[format] = path
return dct
formats = property(get_all_file_formats)
def get_protocol(self):
"""Get executed protocol name that has generated a :class:`File`."""
signals = self.signal_set.filter(protocol__isnull=False).distinct()
if signals.count() :
return signals[0].protocol
else :
return None
protocol = property(get_protocol)
def get_protocols(self):
"""Get :class:`ProtocolRecording` objects relative to a :class:`File`."""
protocols = self.signal_set.filter(channel__protocol__isnull=False).distinct()
return protocols
protocols = property(get_protocols)
def get_protocols_by_type(self):
"""Store protocols relative to the block by protocol type."""
protocols = {}
for protocol in self.get_protocols() :
name = protocol.protocol_type.label
if not (name in protocols) :
protocols[name] = []
protocols[name].append(protocol)
#transform each list of protocol into a QuerySet
for protocol in protocols :
protocols[protocol] = self.protocolrecording_set.model.objects.filter(pk__in=[k.pk for k in protocols[protocol]])
return protocols
def get_protocol_types(self):
"""Get all protocol types."""
protocols = self.get_protocols_by_type().keys()
return protocols
distinct_protocols = property(get_protocols)
def _protocols(self):
"""Get comma separated protocol names."""
protocols = self.get_protocols()
return ','.join(protocols) if protocols else None
protocol_names = property(_protocols)
def get_path(self, format=None):
"""Get the actual path to a :class:`File`."""
if not format :
return "%s/%s.%s" % (self.location.hdd_path, self.name, self.mimetype.extension)
else :
return self.formats.get(format.lower(), None)
hdd_path = property(get_path)
def is_available_as(self, format):
"""Tell if the :class:`File` is available as the specified format."""
return format.lower() in self.formats
def __unicode__(self):
return self.filename
class Meta:
ordering = ['name', 'mimetype']
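# Usage sketch (illustrative only; assumes a configured Django project with this app
# installed and at least one File row present):
#
#   from helmholtz.storage.models import File
#
#   f = File.objects.all()[0]
#   print(f.filename)              # name plus mimetype extension, e.g. "rec01.dat"
#   print(f.hdd_path)              # location path joined with the filename
#   if f.is_available_as('h5'):
#       print(f.get_path('h5'))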
# /GeoNode-3.2.0-py3-none-any.whl/geonode/services/tasks.py
"""Celery tasks for geonode.services"""
import time
import logging
from hashlib import md5
from django.template.defaultfilters import slugify
from . import models
from . import enumerations
from .serviceprocessors import base, get_service_handler
from geonode.celery_app import app
from geonode.layers.models import Layer
from geonode.tasks.tasks import AcquireLock
logger = logging.getLogger(__name__)
@app.task(
bind=True,
name='geonode.services.tasks.harvest_resource',
queue='upload',
expires=600,
acks_late=False,
autoretry_for=(Exception, ),
retry_kwargs={'max_retries': 3, 'countdown': 10},
retry_backoff=True,
retry_backoff_max=700,
retry_jitter=True)
def harvest_resource(self, harvest_job_id):
harvest_job = models.HarvestJob.objects.get(pk=harvest_job_id)
harvest_job.update_status(
status=enumerations.IN_PROCESS, details="Harvesting resource...")
result = False
details = ""
try:
handler = get_service_handler(
base_url=harvest_job.service.base_url,
proxy_base=harvest_job.service.proxy_base,
service_type=harvest_job.service.type
)
logger.debug("harvesting resource...")
handler.harvest_resource(
harvest_job.resource_id, harvest_job.service)
logger.debug("Resource harvested successfully")
workspace = base.get_geoserver_cascading_workspace(create=False)
_cnt = 0
while _cnt < 5 and not result:
try:
layer = None
if Layer.objects.filter(alternate=f"{harvest_job.resource_id}").count():
layer = Layer.objects.get(
alternate=f"{harvest_job.resource_id}")
else:
layer = Layer.objects.get(
alternate=f"{workspace.name}:{harvest_job.resource_id}")
layer.save(notify=True)
result = True
except Exception as e:
_cnt += 1
logger.error(
f"Notfiy resource {workspace.name}:{harvest_job.resource_id} tentative {_cnt}: {e}")
try:
layer = Layer.objects.get(
alternate=f"{slugify(harvest_job.service.base_url)}:{harvest_job.resource_id}")
layer.save(notify=True)
result = True
except Exception as e:
logger.error(
"Notfiy resource "
f"{slugify(harvest_job.service.base_url)}:{harvest_job.resource_id} "
f"tentative {_cnt}: {e}")
time.sleep(1.0)
except Exception as err:
logger.exception(msg="An error has occurred while harvesting "
f"resource {harvest_job.resource_id}")
details = str(err) # TODO: pass more context about the error
finally:
if not details:
result = True
harvest_job.update_status(
status=enumerations.PROCESSED if result else enumerations.FAILED,
details=details
)
@app.task(
bind=True,
name='geonode.services.tasks.probe_services',
queue='geonode',
expires=600,
acks_late=False,
autoretry_for=(Exception, ),
retry_kwargs={'max_retries': 1, 'countdown': 10},
retry_backoff=True,
retry_backoff_max=700,
retry_jitter=True)
def probe_services(self):
# The cache key consists of the task name and the MD5 digest
# of the name.
name = b'probe_services'
hexdigest = md5(name).hexdigest()
lock_id = f'{name.decode()}-lock-{hexdigest}'
with AcquireLock(lock_id) as lock:
if lock.acquire() is True:
for service in models.Service.objects.all():
try:
service.probe = service.probe_service()
service.save()
except Exception as e:
logger.error(e)
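# Dispatch sketch (illustrative only): the functions above are ordinary Celery tasks,
# so a HarvestJob can be processed asynchronously with something like
#
#   from geonode.services import models, tasks
#
#   job = models.HarvestJob.objects.create(service=service, resource_id=resource_id)
#   tasks.harvest_resource.apply_async((job.id,))
#
# The exact HarvestJob creation fields are assumptions here; ``apply_async`` is the
# standard Celery entry point.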
# /ImSwitchUC2-2.1.0.tar.gz/ImSwitchUC2-2.1.0/imswitch/imcontrol/model/interfaces/piezoPiezoconceptZ_mock.py
from lantz import Action, Feat, Driver
from imswitch.imcommon.model import initLogger
class MockPCZPiezo(Driver):
"""Mock driver for the PiezoConcept Z-piezo."""
def __init__(self):
super().__init__()
self.__logger = initLogger(self, tryInheritParent=True)
@Feat(read_once=True)
def idn(self):
"""Get information of device"""
# return self.query('INFOS')
dummyquery = 'dummy zpiezo answer'
return dummyquery
def initialize(self):
pass
# Z-MOVEMENT
@Feat()
def absZ(self):
""" Absolute Z position. """
return 2.0
@absZ.setter
def absZ(self, value):
""" Absolute Z position movement, in um. """
self.__logger.debug(f"setting Z position to {value} um")
def relZ(self, value):
""" Relative Z position movement, in um. """
self.__logger.debug(f"Moving Z position {value} um")
if abs(float(value)) > 0.5:
self.__logger.warning('Warning: Step bigger than 500 nm')
@Action()
def move_relZ(self, value):
""" Relative Z position movement, in um. """
self.__logger.debug(f"Moving Z position {value} um")
if abs(float(value)) > 0.5:
self.__logger.warning('Warning: Step bigger than 500 nm')
@Action(limits=(100,))
def move_absZ(self, value):
""" Absolute Z position movement, in um. """
self.__logger.debug(f"Setting Z position to {value} um")
# CONTROL/STATUS
@Feat()
def timeStep(self):
""" Get the time between each points sent by the RAM of the USB
interface to the nanopositioner. """
return 1
@timeStep.setter
def timeStep(self, value):
""" Set the time between each points sent by the RAM of the USB
interface to the nanopositioner, in ms. """
pass
def close(self):
pass
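# Minimal usage sketch (values are illustrative):
#   piezo = MockPCZPiezo()
#   piezo.absZ = 1.5       # logs a mock absolute move to 1.5 um
#   piezo.move_relZ(0.1)   # logs a mock relative move of 0.1 um
#   piezo.close()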
# Copyright (C) 2020-2021 ImSwitch developers
# This file is part of ImSwitch.
#
# ImSwitch is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ImSwitch is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>. | PypiClean |
/NEURON-9.0a0-cp311-cp311-macosx_10_15_x86_64.whl/neuron/.data/bin/nrnpyenv.sh |
# eval "`sh nrnpyenv.sh`"
# will set bash environment variables so that nrniv -python has same
# environment as python
# May specify the python executable with explicit first argument.
# Overcome environment issues when --with-nrnpython=dynamic .
# The problems might be an immediate exit due to 'No module named site',
# inability to find common modules and shared libraries that support them,
# and not loading the correct python library.
#Run python and generate the following on stdout
#export PYTHONPATH=...
#export LD_LIBRARY_PATH=...
#export PATH=...
#export NRN_PYLIB=...
#with NRN_PYLIB as a full path to the Python library,
#it may not be necessary to change LD_LIBRARY_PATH
#Some python installations, such as enthought canopy, do not have site
#in a subfolder of prefix. In that case, the site folder defines home and
#if there is a site-packages subfolder of prefix, that is added to the
#pythonpath. Also the lib under home is added to the ld_library_path.
# This script is useful for linux, mingw, and mac versions.
# Append the output to your .bashrc file.
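# For example, the generated output may look like this (paths are
# illustrative; the real values depend on the Python found on this machine):
#   export NRN_PYTHONEXE="/usr/bin/python3.11"
#   export NRN_PYTHONVERSION="3.11"
#   export NRN_PYLIB="/usr/lib/x86_64-linux-gnu/libpython3.11.so.1.0"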
export originalPATH="$PATH"
export originalPYTHONPATH="$PYTHONPATH"
export originalLDLIBRARYPATH="$LD_LIBRARY_PATH"
# list the Python versions that this NEURON build supports
# the format is {major}.{minor}, i.e. 3.8, 3.9, 3.10, ...
_this_neuron_py_versions=(3.11)
# order of preference is: -pyexe, $NRN_PYTHONEXE, python, python3,
# pythonX..pythonY, where X..Y are the versions in
# _this_neuron_py_versions. In this order, we accept the first one yielding a
# version we support
_pythons_to_try=(python python3)
for _ver_with_dot in "${_this_neuron_py_versions[@]}"; do
_pythons_to_try+=("python${_ver_with_dot}")
done
# get the last argument
for last; do true; done
# if the last argument begins with --NEURON_HOME=
if [[ $last == "--NEURON_HOME="* ]] ; then
last=${last#"--NEURON_HOME="}
export PATH=/cygdrive/${last//:/}/mingw/usr/bin:/cygdrive/${last//:/}/mingw/mingw64/bin:$PATH
# remove the last argument
set -- "${@:1:$(($#-1))}"
fi
# The first argument passed to this script is the value of nrniv -pyexe
pyexe_arg="$1"
WHICH=which
function trypy {
a=`ls "$1" |grep "$2"`
if test "$a" != "" ; then
b=`cygpath -U "$1/$a/$3"`
c=`nrnbinstr "$4" "$b" 2> /dev/null`
if test "$c" != "" ; then
c=`cygpath -U "$c"`
c=`dirname "$c"`
c=`dirname "$c"`
c="$c/python"
if $WHICH "$c" >& /dev/null ; then
PYTHON=`$WHICH "$c"`
# if python.exe not in PATH then cygcheck may not find the library
PYTHON=`cygpath -U "$PYTHON"`
export PATH=`dirname "$PYTHON"`:"$PATH"
PYTHON=`basename "$PYTHON"`
fi
fi
fi
}
unset PYTHON
unset PYTHON_VERSION
function try_python {
cmd_name="$1"
if [ -z "${cmd_name}" ]; then
return 1
fi
ver_and_path=$("${cmd_name}" -c "import sys; print('{}.{} {}'.format(*sys.version_info[:2], sys.executable))" 2>&1)
code="$?"
if [ $code -ne 0 ]; then
echo "# failed to run ${cmd_name} (${ver_and_path})"
if [ $code -eq 128 ]; then
PYTHON_COMMANDS_THAT_RETURNED_CODE_128+=("${cmd_name}")
fi
return 1
fi
full_path=${ver_and_path#* }
version=${ver_and_path%" ${full_path}"}
if [[ ! " ${_this_neuron_py_versions[*]} " =~ " ${version} " ]]; then
echo "# ran ${cmd_name} (${full_path}) but Python ${version} is not supported by this NEURON installation (supported: ${_this_neuron_py_versions[*]})."
return 1
fi
PYTHON="${full_path}"
PYTHON_VERSION="${version}"
}
# If either -pyexe or NRN_PYTHONEXE was set, it is an immediate hard error if
# they do not point to a a valid Python
_explicit_pythons_to_try=("${pyexe_arg}" "${NRN_PYTHONEXE}")
for _python in "${_explicit_pythons_to_try[@]}"; do
if [ -z "${_python}" ]; then
# don't bother distinguishing between "not set" and "set to empty string"
continue
fi
if ! try_python "${_python}"; then
echo "Given the explicit instructions:"
echo " -pyexe=${pyexe_arg}"
echo " NRN_PYTHONEXE=${NRN_PYTHONEXE}"
echo "we determined that '${_python}' is not valid."
echo "Because this was an explicit request, this script is returning an"
echo "error code instead of falling back to other search strategies..."
exit 1
fi
done
# Fall back to PATH-based searches if this explicit approach didn't work
if [ -z "${PYTHON}" ]; then
# On some windows systems python is an empty executable which, when
# launched in a Command Prompt, directs the user to the Microsoft Store.
# With bash, it returns a 128 exit status. So we loop until we
# find a working python (or no python). Each time a python is non-working
# we remove that path from the PATH. If not Windows, break out after first
# attempt at finding a Python.
while true ; do
# _pythons_to_try is a list of command names to be looked up in $PATH
PYTHON_COMMANDS_THAT_RETURNED_CODE_128=() # hack for Windows, see below
for _python in "${_pythons_to_try[@]}"; do
if try_python "${_python}"; then
break 2 # break out of the inner `for` and the outer `while`
fi
done
# do not do the following craziness if not Windows.
if test "$OS" != "Windows_NT" ; then
break
fi
if [ ${#PYTHON_COMMANDS_THAT_RETURNED_CODE_128[@]} -eq 0 ]; then
# Don't bother messing with PATH if we didn't get any of the 128 status
# codes referred to above
break
fi
# try and remove from PATH the location of the first command we tried that
# returned code 128
echo "# ${PYTHON_COMMANDS_THAT_RETURNED_CODE_128[@]} returned code 128"
oldpath="${PATH}"
a=$($WHICH "${PYTHON_COMMANDS_THAT_RETURNED_CODE_128[0]}")
b=$(dirname "$a")
echo "# trying to remove ${b} from the PATH"
PATH="`echo \"$PATH\" | sed \"s,:$b:,:,\"`" #remove b from path if internal
PATH="`echo \"$PATH\" | sed \"s,^$b:,,\"`" #remove b from path if begin
PATH="`echo \"$PATH\" | sed \"s,:$b\$,\",`" #remove b from path if end
export PATH
if [ "$oldpath" = "$PATH" ]; then
echo "\"$b\", that contained a failing Python, did not get removed from PATH=\"$PATH\"" 1>&2
exit 1
fi
unset PYTHON_COMMANDS_THAT_RETURNED_CODE_128
done
fi
# Searching PATH didn't work; there are even more hacks to try on Windows
if [ -z "${PYTHON}" -a "${OS}" = "Windows_NT" -a -n "${APPDATA}" ]; then
# Often people install Anaconda on Windows without adding it to PATH
smenu="${APPDATA}/Microsoft/Windows/Start Menu/Programs"
trypy "${smenu}" "Anaconda3 (64-bit)" "Anaconda Prompt (anaconda3).lnk" activate.bat
# Anaconda3 2020 may need more PATH for numpy to work.
if test "$PYTHON" != "" ; then
if ! $PYTHON -c 'import numpy' >& /dev/null ; then
# first item added in trypy
a="`echo $PATH | sed 's/:.*//'`"
export PATH="$PATH:$a/Library/mingw-w64/bin:$a/Library/usr/bin:$a/Library/bin:$a/Scripts:$a/bin:$a/condabin"
# Actually get this PATH when scripts do a -- eval "`nrnpyenv.sh`"
echo "export PATH=\"$PATH\""
fi
fi
if test "$PYTHON" = "" ; then
trypy "$smenu" Anaconda3 "Anaconda Prompt.lnk" activate.bat
fi
if test "$PYTHON" = "" ; then
trypy "$smenu" Anaconda2 "Anaconda Prompt.lnk" activate.bat
fi
if test "$PYTHON" = "" ; then
trypy "$smenu" Anaconda "Anaconda Prompt.lnk" activate.bat
fi
if test "$PYTHON" = "" ; then #brittle but try Enthought
a=`cygpath -U "$APPDATA/../local/enthought/canopy/edm/envs/user"`
if test -d "$a" ; then
export PATH="$a":"$PATH"
PYTHON=python
fi
fi
if [ -n "${PYTHON}" -a -z "${PYTHON_VERSION}"]; then
# In case one of the last-resort Windows hacks worked
PYTHON_VERSION=$("${PYTHON}" -c "import sys; print(\"{}.{}\".format(*sys.version_info[:2]))")
fi
fi
if test "$PYTHON" = "" ; then
echo "Cannot find a Python in ${_pythons_to_try[@]} that matches the versions this NEURON installation supports: ${_this_neuron_py_versions[@]}" 1>&2
exit 1;
fi
echo "export NRN_PYTHONEXE=\"${PYTHON}\""
echo "export NRN_PYTHONVERSION=\"${PYTHON_VERSION}\""
# what is the python library for Darwin
nrnpylib_provenance=""
nrn_pylib=""
kernel_name=''
if type -P uname > /dev/null ; then
kernel_name=`uname`
fi
if test "$kernel_name" = "Darwin" ; then
python_path=`$WHICH $PYTHON`
pyexedir=`dirname $python_path`
# Get the python lib dir in an official way, working with virtualenv
PYLIB=$($python_path -c 'import sysconfig; print(sysconfig.get_config_var("LIBDIR"))')
for path in $PYLIB/libpython*.dylib; do
if test -f "$path"; then
nrn_pylib="$path"
break
fi
done
if test -f "$nrn_pylib" ; then
unset python_path
unset pyexedir
nrnpylib_provenance="sysconfig LIBDIR"
fi
if test "$nrn_pylib" = "" ; then
nrn_pylib=$($python_path -c '
try:
from neuron import h
shlib=h.libpython_path()
shlib = shlib if ".dylib" in shlib else ""
print(shlib)
except:
print("")
')
if test "$nrn_pylib" != "" ; then
nrnpylib_provenance="h.libpython_path()"
fi
fi
if test "$nrn_pylib" = "" ; then
DYLD_PRINT_LIBRARIES=1
export DYLD_PRINT_LIBRARIES
nrn_pylib=`$PYTHON -c 'quit()' 2>&1 | sed -n 's/^dyld: loaded: //p' | sed -n /libpython/p`
if test "$nrn_pylib" = "" ; then
nrn_pylib=`$PYTHON -c 'quit()' 2>&1 | sed -n 's/^dyld: loaded: //p' | sed -n 2p`
fi
unset DYLD_PRINT_LIBRARIES
if test "$nrn_pylib" != "" ; then
nrnpylib_provenance=DYLD_PRINT_LIBRARIES
fi
fi
if test -f "$nrn_pylib" ; then
PYLIB_DARWIN=$nrn_pylib
else
PYLIB_DARWIN=""
fi
export nrnpylib_provenance
export PYLIB_DARWIN
fi
$PYTHON << 'here'
###########################################
import sys, os, site
usep = "/"
upathsep = ":"
nrnpylib_provenance = "not found"
nrnpyhome_provenance = "not found"
def upath(path):
#return linux path
if path is None:
return ""
import posixpath, sys
plist = path.split(os.pathsep)
for i, p in enumerate(plist):
p = os.path.splitdrive(p)
if p[0]:
p = "/cygdrive/" + p[0][:p[0].rfind(":")] + usep + p[1].replace(os.sep, usep)
else:
p = p[1].replace(os.sep, usep)
p = posixpath.normpath(p)
plist[i] = p
p = upathsep.join(plist)
return p
def u2d(p):
if "darwin" not in sys.platform and "win" in sys.platform:
p = p.split(usep)
if "cygdrive" == p[1]:
p = p[2] + ':/' + usep.join(p[3:])
else:
p = usep.join(p)
return p
#a copy of nrnpylib_linux() but with some os x specific modifications
def nrnpylib_darwin_helper():
global nrnpylib_provenance
import os, sys, re, subprocess
#in case it was dynamically loaded by python
pid = os.getpid()
cmd = "lsof -p %d"%pid
f = []
try: # in case lsof does not exist
f = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE, stderr=subprocess.STDOUT).stdout
except:
pass
nrn_pylib = None
cnt = 0
for bline in f:
fields = bline.decode().split()
if len(fields) > 8:
line = fields[8]
if re.search(r'libpython.*\.[ds]', line):
print ("# nrn_pylib from lsof: %s" % line)
nrn_pylib = line.strip()
nrnpylib_provenance = "lsof search for libpython..."
return nrn_pylib
if re.search(r'[Ll][Ii][Bb].*[Pp]ython', line):
cnt += 1
if cnt == 1: # skip 1st since it is the python executable
continue
if re.search(r'[Pp]ython', line.split('/')[-1]):
print ("# nrn_pylib from lsof: %s" % line)
candidate = line.strip()
# verify the file defines a PyRun_SimpleString
cmd = r'nm %s | grep PyRun_SimpleString' % candidate
try:
f = os.popen(cmd)
i=0
for line in f:
i += 1
if i == 0:
continue
except:
continue
nrn_pylib = candidate
nrnpylib_provenance = 'lsof search for occurrence of [Ll][Ii][Bb].*[Pp]ython defineing PyRun_SimpleString'
return nrn_pylib
else: # figure it out from the os path
p = os.path.sep.join(os.__file__.split(os.path.sep)[:-1])
name = "libpython%d.%d" % (sys.version_info[0], sys.version_info[1])
cmd = r'find %s -name %s\*.dylib' % (p, name)
print ('# %s'%cmd)
f = os.popen(cmd)
libs = []
for line in f:
libs.append(line.strip())
if len(libs) == 0: # try again searching the parent folder
p = os.path.sep.join(os.__file__.split(os.path.sep)[:-2])
cmd = r'find %s -name %s\*.dylib' % (p, name)
print ('# %s'%cmd)
f = os.popen(cmd)
for line in f:
libs.append(line.strip())
print ('# %s'%str(libs))
if len(libs) == 1:
nrnpylib_provenance="search based on os.__file__, found unique"
print ("# nrn_pylib from os.path %s"%str(libs[0]))
return libs[0]
if len(libs) > 1:
# which one do we want? Check the name of an imported shared object
try:
import _ctypes
except:
import ctypes
for i in sys.modules.values():
try:
s = i.__file__
if s.endswith('.dylib'):
match = re.search(r'-%d%d([^-]*)-' % (sys.version_info[0], sys.version_info[1]), s)
if match:
name = name + match.group(1) + '.dylib'
break
elif s.endswith('.so'):
match = re.search(r'-%d%d([^-]*)-' % (sys.version_info[0], sys.version_info[1]), s)
if match:
name = name + match.group(1) + '.so'
break
except:
pass
for i in libs:
if name in i:
print ("# nrn_pylib from os.path %s" % i)
nrnpylib_provenance='search based on os.__file__, found one with version in name'
return i
print ("# nrn_pylib from os.path %s" % str(nrn_pylib))
return nrn_pylib
def nrnpylib_darwin():
global nrnpylib_provenance
import os
nrn_pylib = os.getenv("PYLIB_DARWIN")
if nrn_pylib != "":
print ("# nrn_pylib from PYLIB_DARWIN %s"%nrn_pylib)
nrnpylib_provenance = os.getenv("nrnpylib_provenance")
return nrn_pylib
return nrnpylib_darwin_helper()
def nrnpylib_mswin():
global nrnpylib_provenance
import os, sys, re
e = '/'.join(sys.executable.split(os.path.sep))
cmd = 'cygcheck "%s"' % e
f = os.popen(cmd)
nrn_pylib = None
for line in f:
if re.search('ython[a-zA-Z0-9_.]*\.dll', line):
nrn_pylib = '/'.join(line.split(os.path.sep)).strip()
nrnpylib_provenance="cygcheck"
return nrn_pylib
def nrnpylib_linux():
global nrnpylib_provenance
import os, sys, re, subprocess
# Try the official way first
import sysconfig
libdir=sysconfig.get_config_var("LIBDIR")
try:
from os.path import isfile, join
ver = "%d.%d"%(sys.version_info[0], sys.version_info[1])
for f in os.listdir(libdir):
if 'libpython' in f and '.so' in f and ver in f:
nrn_pylib = join(libdir, f)
nrnpylib_provenance='sysconfig LIBDIR'
return nrn_pylib
except:
pass
#in case it was dynamically loaded by python
try:
from neuron import h
s=h.libpython_path()
s = s if ".so" in s else ""
if (s != ""):
nrnpylib_provenance="h.libpython_path()"
return s
except:
print("")
pid = os.getpid()
cmd = "lsof -p %d"%pid
f = []
try: # in case lsof does not exist
f = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE, stderr=subprocess.STDOUT).stdout
except:
pass
nrn_pylib = None
for bline in f:
fields = bline.decode().split()
if len(fields) > 8:
line = fields[8]
if re.search(r'libpython.*\.so', line):
print ("# from lsof: %s" % line)
nrn_pylib = line.strip()
nrnpylib_provenance = 'lsof search for libpython.*\.so'
return nrn_pylib
else: # figure it out from the os path
p = os.path.sep.join(os.__file__.split(os.path.sep)[:-1])
name = "libpython%d.%d" % (sys.version_info[0], sys.version_info[1])
cmd = r'find %s -name %s\*.so' % (p, name)
print ('# %s'%cmd)
f = os.popen(cmd)
libs = []
for line in f:
libs.append(line.strip())
if len(libs) == 0: # try again searching the parent folder
p = os.path.sep.join(os.__file__.split(os.path.sep)[:-2])
cmd = r'find %s -name %s\*.so' % (p, name)
print ('# %s'%cmd)
f = os.popen(cmd)
for line in f:
libs.append(line.strip())
print ('# %s'%str(libs))
if len(libs) == 1:
nrnpylib_provenance="search based on os.__file__, found unique"
return libs[0]
if len(libs) > 1:
# which one do we want? Check the name of an imported shared object
try:
import _ctypes
except:
import ctypes
for i in sys.modules.values():
try:
s = i.__file__
if s.endswith('.so'):
match = re.search(r'-%d%d([^-]*)-' % (sys.version_info[0], sys.version_info[1]), s)
if match:
name = name + match.group(1) + '.so'
break
except:
pass
for i in libs:
if name in i:
nrnpylib_provenance='search based on os.__file__, found one with version in name'
return i
return nrn_pylib
nrn_pylib = None
if 'darwin' in sys.platform:
nrn_pylib = nrnpylib_darwin()
elif 'win' in sys.platform:
nrn_pylib = nrnpylib_mswin()
elif 'linux' in sys.platform:
nrn_pylib = nrnpylib_linux()
#Use sys.base_prefix for PYTHONHOME if available, otherwise sys.prefix
try:
sp = upath(sys.base_prefix)
spname='sys.base_prefix'
base=True
except:
sp = upath(sys.prefix)
spname='sys.prefix'
base=False
#there is a question about whether to use sys.prefix for PYTHONHOME
#or whether to derive from site.__file__.
#to help answer, ask how many sys.path items begin with sys.prefix and
#how many begin with site.__file__ - 3
p = [upath(i) for i in sys.path]
print ("# items in sys.path = " + str(len(p)))
print ("# beginning with sys.prefix = " + str(len([i for i in p if sp in i])))
s = usep.join(upath(site.__file__).split(usep)[:-3])
if s == sp:
print ("# site-3 same as " + spname)
else:
print ("# beginning with site-3 = " + str(len([i for i in p if s in i])))
foo = [i for i in p if sp not in i]
foo = [i for i in foo if s not in i]
print ("# in neither location " + str(foo))
print ("# " + spname + " = " + sp)
print ("# site-3 = " + s)
if "darwin" in sys.platform or "linux" in sys.platform or "win" in sys.platform:
# What, if anything, did python prepend to PATH
path=""
oldpath = upath(os.getenv("originalPATH"))
newpath = upath(os.getenv("PATH"))
i = newpath.find(oldpath)
if i > 1:
path = newpath[:i]
pythonhome = upath(sp)
print ("#pythonhome=" + pythonhome)
pythonpath = upath(os.getenv("PYTHONPATH"))
ldpath = ""
oldldpath = upath(os.getenv("originalLD_LIBRARY_PATH"))
newldpath = upath(os.getenv("LD_LIBRARY_PATH"))
i = newldpath.find(oldldpath)
if i > 1:
ldpath = newldpath[:i]
sitedir = usep.join(upath(site.__file__).split(usep)[:-1])
# if sitedir is not a subfolder of pythonhome, add to pythonpath
if not pythonhome in sitedir:
if not sitedir in pythonpath:
pythonpath = (pythonpath + upathsep if pythonpath else "") + sitedir
# add the parent of sitedir to LD_LIBRARY_PATH
ldp = usep.join(sitedir.split(usep)[:-1])
if ldp not in oldldpath:
ldpath = (ldpath + upathsep if ldpath else "") + ldp
try:
#if a representative shared libary not under pythonhome, add to pythonpath
import _ctypes
f = usep.join(upath(_ctypes.__file__).split(usep)[:-1])
if f.find(pythonhome) == -1:
pythonpath = (pythonpath + upathsep if pythonpath else "") + f
except:
pass
dq = "\""
if pythonpath:
print ("\n# if launch python, then need:")
print ("export PYTHONPATH=" + dq + pythonpath + dq)
if path:
print ("\n#PYTHON prepended the following to PATH")
print ("export PATH=" + dq + path + "$PATH" + dq)
print("\n#NRN_PYLIB provenance: " + str(nrnpylib_provenance))
print ("\n# if launch nrniv, then likely need:")
if ldpath and nrn_pylib is None:
print ("export LD_LIBRARY_PATH=" + dq + ldpath + upathsep + "$LD_LIBRARY_PATH" + dq)
if nrn_pylib is not None:
print ('export NRN_PYLIB="%s"' % nrn_pylib)
quit()
###################################
here | PypiClean |
/Comet-3.1.0.tar.gz/Comet-3.1.0/comet/utility/voevent.py |
# Python standard library
import re
from datetime import datetime
# XML parsing using lxml
import lxml.etree as ElementTree
from comet import __version__, __url__
import comet.log as log
from comet.utility.xml import xml_document
__all__ = ["parse_ivoid", "broker_test_message"]
ElementTree.register_namespace("voe", "http://www.ivoa.net/xml/VOEvent/v2.0")
IVOID_RE = re.compile("""ivo://
(?P<auth>[a-zA-Z0-9][\w\-.~*'()]{2,}) # Authority
(?P<rsrc>/[\w\-\.~\*'()/]*)? \#? # Resource name
(?P<localID>[\w\-\.~\*'()\+=/%!$&,;:@?]*) $ # Fragment
""", re.VERBOSE)
def parse_ivoid(ivoid):
"""
Takes an IVOID of the form
ivo://[authorityID][resourceKey]#[local_ID]
and returns (authorityID, resourceKey, local_ID). Raise if that isn't
possible.
Note that the resourceKey will normally start with a slash. This is part
of the key, and this function will not trim it.
Refer to the IVOA Identifiers Recommendation (2.0) for details.
"""
try:
groups = IVOID_RE.match(ivoid).groups()
        # If there's no resource key, fall back to an empty string
rsrc = groups[1] if groups[1] is not None else ""
# These may not appear in the resource key per IVOA Identifiers
# Version 2.0 \S2.3.3.
for forbidden in ['//', '/../', '/./']:
assert(forbidden not in rsrc)
assert(not rsrc.endswith('/'))
return groups[0], rsrc, groups[2]
except (AttributeError, AssertionError) as e:
log.debug("Failed to parse as IVOID: ", str(e))
raise Exception("Invalid IVOID: %s" % (ivoid,))
def broker_test_message(ivo):
"""
Test message which is regularly broadcast to all subscribers.
"""
root_element = ElementTree.Element("{http://www.ivoa.net/xml/VOEvent/v2.0}VOEvent",
attrib={
"ivorn": ivo + "#TestEvent-%s" % datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ"),
"role": "test",
"version": "2.0",
"{http://www.w3.org/2001/XMLSchema-instance}schemaLocation": "http://www.ivoa.net/xml/VOEvent/v2.0 http://www.ivoa.net/xml/VOEvent/VOEvent-v2.0.xsd"
}
)
who = ElementTree.SubElement(root_element, "Who")
author_ivoid = ElementTree.SubElement(who, "AuthorIVORN")
author_ivoid.text = ivo
date = ElementTree.SubElement(who, "Date")
date.text = datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")
what = ElementTree.SubElement(root_element, "What")
description = ElementTree.SubElement(what, "Description")
description.text = "Broker test event generated by Comet %s." % (__version__,)
ElementTree.SubElement(what, "Reference", uri=__url__)
return xml_document(root_element) | PypiClean |
/GWlal-0.0.4.tar.gz/GWlal-0.0.4/lal/python/lal/utils/series.py | import numpy
import re
try:
from .. import lal
except ImportError:
raise ImportError("The SWIG-wrappings of LAL cannot be imported.")
from .. import git_version
__author__ = "Duncan Macleod <[email protected]>"
__version__ = git_version.id
__date__ = git_version.date
# -----------------------------------------------------------------------------
# utility constants
# map LAL type codes to strings (for function factory)
LAL_TYPE_STR = {
lal.I2_TYPE_CODE: 'INT2',
lal.I4_TYPE_CODE: 'INT4',
lal.I8_TYPE_CODE: 'INT8',
lal.U2_TYPE_CODE: 'UINT2',
lal.U4_TYPE_CODE: 'UINT4',
lal.U8_TYPE_CODE: 'UINT8',
lal.S_TYPE_CODE: 'REAL4',
lal.D_TYPE_CODE: 'REAL8',
lal.C_TYPE_CODE: 'COMPLEX8',
lal.Z_TYPE_CODE: 'COMPLEX16',
}
LAL_TYPE_FROM_STR = dict((v, k) for k, v in LAL_TYPE_STR.items())
LAL_TYPE_STR_REGEX = re.compile(
'(?P<dtype>(%s))' % ('|'.join(LAL_TYPE_FROM_STR.keys())), re.I)
# map numpy dtypes to LAL type codes
LAL_TYPE_FROM_NUMPY = {
numpy.int16: lal.I2_TYPE_CODE,
numpy.int32: lal.I4_TYPE_CODE,
numpy.int64: lal.I8_TYPE_CODE,
numpy.uint16: lal.U2_TYPE_CODE,
numpy.uint32: lal.U4_TYPE_CODE,
numpy.uint64: lal.U8_TYPE_CODE,
numpy.float32: lal.S_TYPE_CODE,
numpy.float64: lal.D_TYPE_CODE,
numpy.complex64: lal.C_TYPE_CODE,
numpy.complex128: lal.Z_TYPE_CODE,
}
NUMPY_TYPE_FROM_LAL = dict((v, k) for k, v in LAL_TYPE_FROM_NUMPY.items())
# structure definers
SERIES_OPERATIONS = ['create', 'destroy', 'cut', 'resize', 'shrink', 'add']
SERIES_TYPES = ['Time', 'Frequency']
STRUCT_TYPES = ['Sequence', 'Vector']
SERIES_REGEX = re.compile(
'%s(?P<stype>(%s))Series\Z'
% (LAL_TYPE_STR_REGEX.pattern, '|'.join(SERIES_TYPES)), re.I)
ARRAY_REGEX = re.compile(
'%sArray(?:(?P<dir>(L|V))?)' % LAL_TYPE_STR_REGEX.pattern, re.I)
STRUCT_REGEX = re.compile(
'%s(?P<struct>(%s))\Z'
% (LAL_TYPE_STR_REGEX.pattern, '|'.join(STRUCT_TYPES)), re.I)
# -----------------------------------------------------------------------------
# utility methods
def func_factory(operation, dtype):
"""Returns the LAL function to perform the given operation for the
relevant data type.
Example::
>>> create = func_factory('create', 'real8timeseries')
>>> create
lal.CreateREAL8TimeSeries
>>> ts = create(name, epoch, f0, deltaT, sampleUnits, length)
>>> func_factory('resize', ts)
lal.ResizeREAL8TimeSeries
"""
# verify operation
try:
SERIES_OPERATIONS.index(operation.lower())
except ValueError as e:
e.args("Operation '%s' not understood for LAL series. "
"Please select one of: %s"
% (operation, ", ".join(SERIES_OPERATIONS)),)
raise e
# verify data type
struct = get_struct_name(dtype)
return getattr(lal, ''.join([operation.title(), struct]))
def get_struct_name(series):
"""Format a structure name into the understood type for LAL
Example::
>>> get_struct_name('real8timeseries')
'REAL8TimeSeries'
"""
# get name of object
if isinstance(series, basestring):
typestr = series
else:
typestr = type(series).__name__
# attempt to match as a series type
try:
match = SERIES_REGEX.match(typestr).groupdict()
except AttributeError:
pass
else:
return '%s%sSeries' % (match['dtype'].upper(), match['stype'].title())
# attempt to match as an array (with optional dimension)
try:
match = ARRAY_REGEX.match(typestr).groupdict()
except AttributeError:
pass
else:
return '%sArray%s' % (match['dtype'].upper(),
match['dir'] and match['dir'].upper() or '')
# attempt to match as a structure
try:
match = STRUCT_REGEX.match(typestr).groupdict()
except AttributeError:
raise ValueError(
"Input %s cannot be parsed into LAL struct name" % series)
else:
return '%s%s' % (match['dtype'].upper(), match['struct'].title())
def get_series_type(series):
"""Find the LAL type enum for this series
@param series
a LAL series object (e.g. REAL8TimeSeries)
@returns the LAL type enum (integer) for the series
"""
try:
        match = SERIES_REGEX.match(type(series).__name__).groupdict()
except AttributeError:
raise ValueError("Data type for series type %r unknown."
% type(series).__name__)
else:
return get_lal_type(match['dtype'].upper())
def get_lal_type_str(datatype):
"""Return the LAL type str for the given `datatype`
@param datatype
a dtype representation, normally a string, or a python/numpy type
object
@returns the LAL type str for the given datatype
Example::
>>> get_lal_type_str('uint32')
'UINT4'
>>> get_lal_type_str(float)
'REAL8'
"""
return LAL_TYPE_STR[get_lal_type(datatype)]
def get_lal_type(datatype):
"""Return the LAL type enum for the given `datatype`
@param datatype
a dtype representation, normally a string, or a python/numpy type
object
@returns the LAL type enum (integer) for the given datatype
Example::
>>> get_lal_type('uint32')
34
>>> get_lal_type(float)
11
"""
# parse a LAL type enum
try:
LAL_TYPE_STR[datatype]
except KeyError:
pass
else:
return datatype
# map a LAL type string
try:
return LAL_TYPE_FROM_STR[datatype]
except KeyError:
pass
# test again for 'real4' or 'real8' (lower-case)
# can't do this with others because they match numpy names
if re.match('real(4|8)\Z', str(datatype), re.I):
return LAL_TYPE_FROM_STR[datatype.upper()]
# format as a numpy data type and parse
try:
dtype = numpy.dtype(datatype).type
except TypeError:
pass
else:
try:
return LAL_TYPE_FROM_NUMPY[dtype]
except KeyError as e:
e.args = ('LAL has no support for numpy.%s' % dtype.__name__,)
raise
raise ValueError("Cannot interpret datatype %r" % datatype)
def get_numpy_type(datatype):
"""Return the numpy type for the given `datatype`
@param datatype
a dtype representation, normally a LAL type enum (int),
a LAL type string, or a python/numpy type object
@returns the numpy type corresponding to the given datatype
Example::
>>> get_numpy_type(float)
numpy.float64
>>> get_numpy_type('REAL8')
numpy.float64
>>> get_numpy_type(11)
numpy.float64
"""
try:
return NUMPY_TYPE_FROM_LAL[get_lal_type(datatype)]
except KeyError as e:
        e.args = ('numpy has no support for %s'
                  % get_lal_type_str(datatype),)
raise
def duplicate(series):
"""
Duplicate a TimeSeries or FrequencySeries.
Arguments:
series : [ TimeSeries | FrequencySeries ]
input series to duplicate
"""
create = func_factory('create', series)
stype = series.__class__.__name__
if stype.endswith('FrequencySeries'):
out = create(series.name, series.epoch, series.f0, series.deltaF,
series.sampleUnits, series.data.length)
elif stype.endswith('TimeSeries'):
out = create(series.name, series.epoch, series.f0, series.deltaT,
series.sampleUnits, series.data.length)
else:
raise NotImplementedError("A duplicator for the %s has not been "
"implemented: here's your chance!" % series)
out.data.data = series.data.data
return out | PypiClean |
/ActiveReign-1.0.5.tar.gz/ActiveReign-1.0.5/README.md | # ActiveReign
<p align="center">
<img src="https://user-images.githubusercontent.com/13889819/62736481-6f7e7880-b9fb-11e9-92d6-47b650fdb84b.png"/>
<br>
<img src="https://img.shields.io/badge/Python-3.7-blue.svg"/>
<img src="https://img.shields.io/badge/License-GPLv3-green.svg">
<a href="https://www.youtube.com/channel/UC6-HLpd0rpPXmpJIhED8qTw">
<img src="https://img.shields.io/badge/Demo-Youtube-red.svg"/></a>
<a href="https://twitter.com/intent/follow?screen_name=m8r0wn">
<img src="https://img.shields.io/twitter/follow/m8r0wn?style=social&logo=twitter" alt="follow on Twitter"></a>
</p>
### Background
A while back I was challenged to write a discovery tool with Python3 that could automate the process of finding sensitive information on network file shares. After writing the entire tool with pysmb, and adding features such as the ability to open and scan docx and xlsx files, I slowly started adding functionality from the awesome [Impacket](https://github.com/SecureAuthCorp/impacket) library; just simple features I wanted to see in an internal penetration testing tool. The more I added, the more it looked like a Python3 rewrite of [CrackMapExec](https://github.com/byt3bl33d3r/CrackMapExec) created from scratch.
If you are doing a direct comparison, [CME](https://github.com/byt3bl33d3r/CrackMapExec) is an amazing tool that has way more features than are currently implemented here. However, I added a few new features and modifications that may come in handy during an assessment.
### For more documentation checkout the project [wiki](https://github.com/m8r0wn/ActiveReign/wiki)
### Operational Modes
* db - Query or insert values in to the ActiveReign database
* enum - System enumeration & module execution
* shell - Spawn a simulated shell on the target system and perform command execution
* spray - Domain password spraying and brute force
* query - Perform LDAP queries on the domain
### Key Features
* Automatically extract domain information via LDAP and incorporate into network enumeration.
* Perform Domain password spraying using LDAP to remove users close to lockout thresholds.
* Local and remote command execution, for use on multiple starting points throughout the network.
* Simulated interactive shell on target system, with file upload and download capabilities.
* Data discovery capable of scanning xlsx and docx files.
* Various modules to add and extend capabilities.
### Acknowledgments
There were many intended and unintended contributors that made this project possible. If I am missing any, I apologize, it was in no way intentional. Feel free to contact me and we can make sure they get the credit they deserve ASAP!
* [@byt3bl33d3r](https://github.com/byt3bl33d3r) - [CrackMapExec](https://github.com/byt3bl33d3r/CrackMapExec)
* [@SecureAuthCorp](https://github.com/SecureAuthCorp) - [Impacket](https://github.com/SecureAuthCorp/impacket)
* [@the-useless-one](https://github.com/the-useless-one) - [pywerview](https://github.com/the-useless-one/pywerview)
* [@dirkjanm](https://github.com/dirkjanm) - [ldapdomaindump](https://github.com/dirkjanm/ldapdomaindump)
### Final Thoughts
Writing this tool and testing on a variety of networks/systems has taught me that execution method matters, and depends on the configuration of the system. If a specific module or feature does not work, determine if it is actually the program, target system, configuration, or even network placement before creating an issue.
To help this investigation process, I have created a ```test_execution``` module to run against a system with known admin privileges. This will cycle through all execution methods and provide a status report to determine the best method to use:
```bash
$ activereign enum -u administrator -p Password123 --local-auth -M test_execution 192.168.1.1
[*] Lockout Tracker Threshold extracted from database: 5
[*] Enum Authentication \administrator (Password: P****) (Hash: False)
[+] DC01 192.168.1.1 ENUM Windows Server 2008 R2 Standard 7601 Service Pack 1 (Domain: DEMO) (Signing: True) (SMBv1: True) (Adm!n)
[*] DC01 192.168.1.1 TEST_EXECUTION Testing execution methods
[*] DC01 192.168.1.1 TEST_EXECUTION Execution Method: WMIEXEC Fileless: SUCCESS Remote (Defualt): SUCCESS
[*] DC01 192.168.1.1 TEST_EXECUTION Execution Method: SMBEXEC Fileless: SUCCESS Remote (Defualt): SUCCESS
[*] DC01 192.168.1.1 TEST_EXECUTION Execution Method: ATEXEC Fileless: SUCCESS Remote (Defualt): SUCCESS
[*] DC01 192.168.1.1 TEST_EXECUTION Execution Method: WINRM Fileless: N/A Remote (Defualt): SUCCESS
``` | PypiClean |
/AMAS_sb-1.0.1-py3-none-any.whl/AMAS/recommend_reactions.py |
# recommend_reactions.py
"""
Predicts annotations of reaction(s) using a local XML file
and the reaction ID.
Usage: python recommend_reactions.py files/BIOMD0000000190.xml --cutoff 0.6 --outfile res.csv
"""
import argparse
import os
from os.path import dirname, abspath
import pandas as pd
import sys
sys.path.insert(0, dirname(dirname(abspath(__file__))))
from AMAS import constants as cn
from AMAS import recommender
def main():
parser = argparse.ArgumentParser(description='Recommend reaction annotations of an SBML model and save results')
parser.add_argument('model', type=str, help='SBML model file (.xml)')
# One or more reaction IDs can be given
parser.add_argument('--reactions', type=str, help='ID(s) of reaction(s) to be recommended. ' +\
'If not provided, all reactions will be used', nargs='*')
parser.add_argument('--min_len', type=int, help='Minimum number of reaction components (reactants and products) ' +\
'to be used for prediction. ' +\
'Reactions with fewer components than this value ' +\
'will be ignored. Default is zero.', nargs='?', default=0)
parser.add_argument('--cutoff', type=float, help='Match score cutoff.', nargs='?', default=0.0)
parser.add_argument('--mssc', type=str,
help='Match score selection criteria (MSSC). ' +\
'Choose either "top" or "above". "top" recommends ' +\
'the best candidates that are above the cutoff, ' +\
'and "above" recommends all candidates that are above ' +\
'the cutoff. Default is "top"',
nargs='?',
default='top')
parser.add_argument('--outfile', type=str, help='File path to save recommendation.', nargs='?',
default=os.path.join(os.getcwd(), 'reaction_rec.csv'))
args = parser.parse_args()
one_fpath = args.model
reacts = args.reactions
min_len = args.min_len
cutoff = args.cutoff
mssc = args.mssc.lower()
outfile = args.outfile
#
recom = recommender.Recommender(libsbml_fpath=one_fpath)
    # if nothing is given, predict all IDs
if reacts is None:
reacts = recom.getReactionIDs()
print("...\nAnalyzing %d reaction(s)...\n" % len(reacts))
res_tab = recom.recommendReactions(ids=reacts,
mssc=mssc,
cutoff=cutoff,
min_len=min_len,
outtype='table')
recom.saveToCSV(res_tab, outfile)
if isinstance(res_tab, pd.DataFrame):
print("Recommendations saved as:\n%s\n" % os.path.abspath(outfile))
if __name__ == '__main__':
main() | PypiClean |
/ISYlib-0.1.20150912c.tar.gz/ISYlib-0.1.20150912c/bin/isy_nestset.py | __author__ = "Peter Shipley"
import nest
import sys
import os
from warnings import warn
# import time
import pprint
from optparse import OptionParser
import ISY
from ISY.IsyExceptionClass import IsyValueError, IsyResponseError, IsyPropertyError
uuser=os.getenv('NEST_USER', None)
upass=os.getenv('NEST_PASS', None)
temperature_vars = ( "away_temperature_high", "away_temperature_low",
"current_temperature", "target_temperature",
"target_temperature_high", "target_temperature_low",
"temperature_lock_high_temp", "temperature_lock_low_temp",
"upper_safety_temp", "lower_safety_temp" )
#
# Some of the following code was outright copy/pasted from pynest
#
def main() :
parser = create_parser()
(opts, args) = parser.parse_args()
if (len(args)==0) or (args[0]=="help"):
help_txt()
sys.exit(-1)
if (not opts.uuser) or (not opts.upass):
print "a --user and --password are needed"
sys.exit(-1)
# get Nest Values
n = nest.Nest(opts.uuser, opts.upass, None, 0, "F")
n.login()
n.get_status()
if (args[0]=="show") :
n.show_status()
exit(0)
# consolidate data into a single dict
nest_values = dict ( )
nest_values.update(n.status["structure"][n.structure_id])
nest_values.update(n.status["shared"][n.serial] )
nest_values.update(n.status["device"][n.serial] )
if (args[0]=="dump") :
pprint.pprint(nest_values)
exit(0)
# faststart=1 don't load node data ( we not going to use it )
myisy= ISY.Isy(debug=0, faststart=1)
# not really needed but will speed up the first call
#myisy.load_vars()
auto_args = ( "nest_temp=current_temperature",
"nest_humidity=current_humidity",
"nest_away=away")
if (args[0]=="auto") :
args.pop(0)
args.extend(auto_args)
for var_arg in args :
(isy_var, src_var) = var_arg.split('=')
# check we got two value names
if (not isy_var) or (not src_var):
warn("Invalid arg : {0}".format(var_arg), RuntimeWarning)
continue
        # check if the nest value name is valid
if src_var not in nest_values:
warn("Invalid Nest Value : {0}".format(isy_var), RuntimeWarning)
continue
# convert temperature to F
# we can't convert in place since the value may be used twice
if src_var in temperature_vars and not opts.celsius:
set_value = nest_values[src_var] *1.8 + 32.0
else :
set_value = nest_values[src_var]
try :
# this will raise an error if there is a problem with name or set_value
myisy.var_set_value(isy_var, int(set_value))
except IsyPropertyError :
warn("Invalid Isy Var : {0}".format(isy_var), RuntimeWarning)
continue
except (IsyValueError , ValueError):
print "invalid value :", nest_values[src_var]
warn("Invalid value for ISY var: {0}".format(set_value),
RuntimeWarning)
continue
except :
print("Unexpected error:", sys.exc_info()[0])
warn("Unexpected error: {0}".format(sys.exc_info()[0]),
RuntimeWarning)
exit(0)
else :
if opts.verbose :
print isy_var,"=", int(set_value)
# end of main
return
    # convert time stamp into something we can pass along
# if src_var == "$timestamp" :
# ti = nest_values["$timestamp"] // 1000
# set_value = time.strftime("%m%d%H%M%S", time.localtime(ti)).lstrip('0')
# print "shared timestamp", nest_values["$timestamp"],
# time.ctime(ti), set_value
#
def create_parser():
parser = OptionParser(usage="isy_nest [options] isy_var=nest_var ",
description="Commands: fan temp",
version="unknown")
parser.add_option("-u", "--user", dest="uuser",
help="username for nest.com", metavar="USER", default=uuser)
parser.add_option("-p", "--password", dest="upass",
help="password for nest.com", metavar="PASSWORD", default=upass)
parser.add_option("-c", "--celsius", dest="celsius", action="store_true", default=False,
help="use celsius instead of farenheit")
parser.add_option("-v", "--verbose", dest="verbose", action="store_true", default=False,
help="Be verbose")
parser.add_option("-s", "--serial", dest="serial", default=None,
help="optional, specify serial number of nest thermostat to talk to")
parser.add_option("-i", "--index", dest="index", default=0, type="int",
help="optional, specify index number of nest to talk to")
return parser
def help_txt():
print "syntax: isy_nestset [options] isyvar=nestvar .... "
print "options:"
print " --user <username> ... username on nest.com"
print " --password <password> ... password on nest.com"
print " --celsius ... use celsius (the default is farenheit)"
print " --serial <number> ... optional, specify serial number of nest to use"
print " --index <number> ... optional, 0-based index of nest"
print " (use --serial or --index, but not both)"
print
print "commands: isyvar=nestvar, show, help"
print " show ... show available nest vars"
print " help ... print this help"
print
print " home_temp=current_temperature"
print " ... set the var on the isy named 'home_temp'"
print " to the value of the nest current_temperature"
print " Note: the varable has to preexist on the ISY device "
print
print "examples:"
print " nest.py --user [email protected] --password swordfish home_temp=current_temperature"
print " nest.py --user [email protected] --password swordfish show"
# end of help
return
if __name__=="__main__":
main()
exit(0) | PypiClean |
/CoAPthon3-1.0.2.tar.gz/CoAPthon3-1.0.2/coapthon/messages/request.py | from coapthon import defines
from coapthon.messages.message import Message
from coapthon.messages.option import Option
__author__ = 'Giacomo Tanganelli'
class Request(Message):
"""
Class to handle the Requests.
"""
def __init__(self):
"""
Initialize a Request message.
"""
super(Request, self).__init__()
@property
def uri_path(self):
"""
Return the Uri-Path of a request
:rtype : String
:return: the Uri-Path
"""
value = []
for option in self.options:
if option.number == defines.OptionRegistry.URI_PATH.number:
value.append(str(option.value) + '/')
value = "".join(value)
value = value[:-1]
return value
@uri_path.setter
def uri_path(self, path):
"""
Set the Uri-Path of a request.
:param path: the Uri-Path
"""
path = path.strip("/")
tmp = path.split("?")
path = tmp[0]
paths = path.split("/")
for p in paths:
option = Option()
option.number = defines.OptionRegistry.URI_PATH.number
option.value = p
self.add_option(option)
if len(tmp) > 1:
query = tmp[1]
self.uri_query = query
@uri_path.deleter
def uri_path(self):
"""
Delete the Uri-Path of a request.
"""
self.del_option_by_number(defines.OptionRegistry.URI_PATH.number)
@property
def uri_query(self):
"""
Get the Uri-Query of a request.
        :rtype : String
        :return: the Uri-Query string
"""
value = []
for option in self.options:
if option.number == defines.OptionRegistry.URI_QUERY.number:
value.append(str(option.value))
return "&".join(value)
@uri_query.setter
def uri_query(self, value):
"""
Adds a query.
:param value: the query
"""
del self.uri_query
queries = value.split("&")
for q in queries:
option = Option()
option.number = defines.OptionRegistry.URI_QUERY.number
option.value = str(q)
self.add_option(option)
@uri_query.deleter
def uri_query(self):
"""
Delete a query.
"""
self.del_option_by_number(defines.OptionRegistry.URI_QUERY.number)
@property
def accept(self):
"""
Get the Accept option of a request.
:return: the Accept value or None if not specified by the request
:rtype : String
"""
for option in self.options:
if option.number == defines.OptionRegistry.ACCEPT.number:
return option.value
return None
@accept.setter
def accept(self, value):
"""
Add an Accept option to a request.
:param value: the Accept value
"""
if value in list(defines.Content_types.values()):
option = Option()
option.number = defines.OptionRegistry.ACCEPT.number
option.value = value
self.add_option(option)
@accept.deleter
def accept(self):
"""
Delete the Accept options of a request.
"""
self.del_option_by_number(defines.OptionRegistry.ACCEPT.number)
@property
def if_match(self):
"""
Get the If-Match option of a request.
:return: the If-Match values or [] if not specified by the request
:rtype : list
"""
value = []
for option in self.options:
if option.number == defines.OptionRegistry.IF_MATCH.number:
value.append(option.value)
return value
@if_match.setter
def if_match(self, values):
"""
Set the If-Match option of a request.
:param values: the If-Match values
:type values : list
"""
assert isinstance(values, list)
for v in values:
option = Option()
option.number = defines.OptionRegistry.IF_MATCH.number
option.value = v
self.add_option(option)
@if_match.deleter
def if_match(self):
"""
Delete the If-Match options of a request.
"""
self.del_option_by_number(defines.OptionRegistry.IF_MATCH.number)
@property
def if_none_match(self):
"""
Get the if-none-match option of a request.
:return: True, if if-none-match is present
:rtype : bool
"""
for option in self.options:
if option.number == defines.OptionRegistry.IF_NONE_MATCH.number:
return True
return False
@if_none_match.setter
def if_none_match(self, value):
"""
        Set the If-None-Match option of a request.
        :param value: True/False
        :type value : bool
"""
assert isinstance(value, bool)
option = Option()
option.number = defines.OptionRegistry.IF_NONE_MATCH.number
option.value = None
self.add_option(option)
def add_if_none_match(self):
"""
Add the if-none-match option to the request.
"""
option = Option()
option.number = defines.OptionRegistry.IF_NONE_MATCH.number
option.value = None
self.add_option(option)
def add_no_response(self):
"""
Add the no-response option to the request
# https://tools.ietf.org/html/rfc7967#section-2.1
"""
option = Option()
option.number = defines.OptionRegistry.NO_RESPONSE.number
option.value = 26
self.add_option(option)
@if_none_match.deleter
def if_none_match(self):
"""
Delete the if-none-match option in the request.
"""
self.del_option_by_number(defines.OptionRegistry.IF_NONE_MATCH.number)
@property
def proxy_uri(self):
"""
Get the Proxy-Uri option of a request.
:return: the Proxy-Uri values or None if not specified by the request
:rtype : String
"""
for option in self.options:
if option.number == defines.OptionRegistry.PROXY_URI.number:
return option.value
return None
@proxy_uri.setter
def proxy_uri(self, value):
"""
Set the Proxy-Uri option of a request.
:param value: the Proxy-Uri value
"""
option = Option()
option.number = defines.OptionRegistry.PROXY_URI.number
option.value = str(value)
self.add_option(option)
@proxy_uri.deleter
def proxy_uri(self):
"""
Delete the Proxy-Uri option of a request.
"""
self.del_option_by_number(defines.OptionRegistry.PROXY_URI.number)
@property
def proxy_schema(self):
"""
Get the Proxy-Schema option of a request.
:return: the Proxy-Schema values or None if not specified by the request
:rtype : String
"""
for option in self.options:
if option.number == defines.OptionRegistry.PROXY_SCHEME.number:
return option.value
return None
@proxy_schema.setter
def proxy_schema(self, value):
"""
Set the Proxy-Schema option of a request.
:param value: the Proxy-Schema value
"""
option = Option()
option.number = defines.OptionRegistry.PROXY_SCHEME.number
option.value = str(value)
self.add_option(option)
@proxy_schema.deleter
def proxy_schema(self):
"""
Delete the Proxy-Schema option of a request.
"""
self.del_option_by_number(defines.OptionRegistry.PROXY_SCHEME.number) | PypiClean |
/Ciw-3.0.0.tar.gz/Ciw-3.0.0/docs/Background/kendall.rst | .. _kendall-notation:
==================
Kendall's Notation
==================
Kendall's notation is used as shorthand to denote single node queueing systems [WS09]_.
A queue is characterised by:
.. math::
A/B/C/X/Y/Z
where:
+ :math:`A` denotes the distribution of inter-arrival times
+ :math:`B` denotes the distribution of service times
+ :math:`C` denotes the number of servers
+ :math:`X` denotes the queueing capacity
+ :math:`Y` denotes the size of the population of customers
+ :math:`Z` denotes the queueing discipline
For the parameters :math:`A` and :math:`B`, a number of shorthand notations are available. For example:
+ :math:`M`: Markovian or Exponential distribution
+ :math:`E`: Erlang distribution (a special case of the Gamma distribution)
+ :math:`C_k`: Coxian distribution of order :math:`k`
+ :math:`D`: Deterministic distribution
+ :math:`G` / :math:`GI`: General / General independent distribution
The parameters :math:`X`, :math:`Y` and :math:`Z` are optional, and are assumed to be :math:`\infty`, :math:`\infty`, and First In First Out (FIFO) respectively.
Other options for the queueing discipline :math:`Z` include SIRO (Service In Random Order), LIFO (Last In First Out), and PS (Processor Sharing).
Some examples:
+ :math:`M/M/1`:
+ Exponential inter-arrival times
+ Exponential service times
+ 1 server
+ Infinite queueing capacity
+ Infinite population
+ First in first out
+ :math:`M/D/\infty/\infty/1000`:
+ Exponential inter-arrival times
+ Deterministic service times
+ Infinite servers
+ Infinite queueing capacity
+ Population of 1000 customers
+ First in first out
+ :math:`G/G/1/\infty/\infty/\text{SIRO}`:
+ General distribution for inter-arrival times
+ General distribution for service times
+ 1 server
+ Infinite queueing capacity
+ Infinite population
+ Service in random order
+ :math:`M/M/4/5`:
+ Exponential inter-arrival times
+ Exponential service times
+ 4 servers
+ Queueing capacity of 5
+ Infinite population
+ First in first out
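For reference, the first example above (the :math:`M/M/1` queue) could be built and simulated in Ciw roughly as follows; the arrival and service rates used here are illustrative only::

    import ciw

    # M/M/1: Exponential arrivals and services, one server,
    # infinite queueing capacity and population, FIFO discipline (the default)
    N = ciw.create_network(
        arrival_distributions=[ciw.dists.Exponential(rate=3)],
        service_distributions=[ciw.dists.Exponential(rate=5)],
        number_of_servers=[1],
    )
    ciw.seed(0)
    Q = ciw.Simulation(N)
    Q.simulate_until_max_time(100)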
| PypiClean |
/125softNLP-0.0.1-py3-none-any.whl/pysoftNLP/kashgari/processors/labeling_processor.py |
# author: BrikerMan
# contact: [email protected]
# blog: https://eliyar.biz
# version: 1.0
# license: Apache Licence
# file: corpus.py
# time: 2019-05-17 11:28
import collections
import logging
import operator
from typing import List, Dict, Optional
import numpy as np
from tensorflow.python.keras.preprocessing.sequence import pad_sequences
from tensorflow.python.keras.utils import to_categorical
import pysoftNLP.kashgari as kashgari
from pysoftNLP.kashgari import utils
from pysoftNLP.kashgari.processors.base_processor import BaseProcessor
class LabelingProcessor(BaseProcessor):
"""
Corpus Pre Processor class
"""
def info(self):
info = super(LabelingProcessor, self).info()
info['task'] = kashgari.LABELING
return info
def _build_label_dict(self,
label_list: List[List[str]]):
"""
Build label2idx dict for sequence labeling task
Args:
label_list: corpus label list
"""
        label2idx: Dict[str, int] = {
self.token_pad: 0
}
token2count = {}
for sequence in label_list:
for label in sequence:
count = token2count.get(label, 0)
token2count[label] = count + 1
sorted_token2count = sorted(token2count.items(),
key=operator.itemgetter(1),
reverse=True)
token2count = collections.OrderedDict(sorted_token2count)
for token in token2count.keys():
if token not in label2idx:
label2idx[token] = len(label2idx)
self.label2idx = label2idx
self.idx2label = dict([(value, key)
for key, value in self.label2idx.items()])
logging.debug(f"build label2idx dict finished, contains {len(self.label2idx)} labels.")
def process_y_dataset(self,
data: List[List[str]],
max_len: Optional[int] = None,
subset: Optional[List[int]] = None) -> np.ndarray:
if subset is not None:
target = utils.get_list_subset(data, subset)
else:
target = data[:]
numerized_samples = self.numerize_label_sequences(target)
padded_seq = pad_sequences(
numerized_samples, max_len, padding='post', truncating='post')
return to_categorical(padded_seq, len(self.label2idx))
def numerize_token_sequences(self,
sequences: List[List[str]]):
result = []
for seq in sequences:
if self.add_bos_eos:
seq = [self.token_bos] + seq + [self.token_eos]
unk_index = self.token2idx[self.token_unk]
result.append([self.token2idx.get(token, unk_index) for token in seq])
return result
def numerize_label_sequences(self,
sequences: List[List[str]]) -> List[List[int]]:
result = []
for seq in sequences:
if self.add_bos_eos:
seq = [self.token_pad] + seq + [self.token_pad]
result.append([self.label2idx[label] for label in seq])
return result
def reverse_numerize_label_sequences(self,
sequences,
lengths=None):
result = []
for index, seq in enumerate(sequences):
labels = []
if self.add_bos_eos:
seq = seq[1:]
for idx in seq:
labels.append(self.idx2label[idx])
if lengths is not None:
labels = labels[:lengths[index]]
result.append(labels)
return result
if __name__ == "__main__":
from kashgari.corpus import ChineseDailyNerCorpus
x, y = ChineseDailyNerCorpus.load_data()
p = LabelingProcessor()
p.analyze_corpus(x, y)
r = p.process_x_dataset(x, subset=[10, 12, 20])
print(r) | PypiClean |
/Dickens-2.1.1.tar.gz/Dickens-2.1.1/README.rst | =======
Dickens
=======
Additional Python decorators implementing the descriptor interface.
Use cases
=========
Like the built-in decorator, ``property``, these classes are initialized by and wrap a function, generally within the context of a class, in order to modify its behavior.
cached property
---------------
This decorator functions much like a read-only ``property``, with the difference that, upon first access, it records its result in the instance's data dictionary; because that entry takes precedence in attribute look-ups, the property effectively replaces itself for that object::
from descriptors import cachedproperty
@cachedproperty
def circumference(self):
return 2 * math.pi * self.radius
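For example, given a simple ``Circle`` class (illustrative only), the computation runs at most once per instance::

    import math

    from descriptors import cachedproperty

    class Circle:

        def __init__(self, radius):
            self.radius = radius

        @cachedproperty
        def circumference(self):
            print('computing...')
            return 2 * math.pi * self.radius

    circle = Circle(1)
    circle.circumference  # prints 'computing...' and returns ~6.283
    circle.circumference  # served from circle.__dict__; nothing is recomputed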
class property
--------------
A read-only ``property`` for class methods::
from descriptors import classproperty
@classproperty
def badpi(cls):
return 22 / 7
cached class property
---------------------
A class ``property``, which caches its result in the data dictionary of the class from which it was invoked, (under another name, so as not to interfere with inheritance of the property)::
from descriptors import cachedclassproperty
@cachedclassproperty
def badpi(cls):
return 22 / 7
class-only method
-----------------
A class method that **cannot** be accessed as an instance method::
from descriptors import classonlymethod
@classonlymethod
def circumference(cls, radius):
return 2 * cls.pi * radius
The class-only method *may* be overshadowed by instance data set under the same name.
Otherwise, instance access raises ``AttributeError`` (which is forwarded to, and *may* be handled by, the instance's ``__getattr__``).
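For illustration, a sketch of both access paths (the class and values here are made up)::

    from descriptors import classonlymethod

    class Geometry:

        pi = 3.14159

        @classonlymethod
        def circumference(cls, radius):
            return 2 * cls.pi * radius

    Geometry.circumference(1)    # class access: returns roughly 6.28
    Geometry().circumference(1)  # instance access: raises AttributeError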
Installation
============
Dickens is a Python distribution, which may be installed via ``easy_install`` or ``pip``, *e.g.*::
pip install Dickens
...or, from source::
python setup.py install
| PypiClean |
/GooeyDev-1.0.8b2.tar.gz/GooeyDev-1.0.8b2/gooey/gui/components/header.py | import wx
from gooey.gui import imageutil, image_repository
from gooey.gui.util import wx_util
from gooey.gui.three_to_four import bitmapFromImage
from gooey.util.functional import getin
from gooey.gui.components.mouse import notifyMouseEvent
PAD_SIZE = 10
class FrameHeader(wx.Panel):
def __init__(self, parent, buildSpec, **kwargs):
wx.Panel.__init__(self, parent, **kwargs)
self.SetDoubleBuffered(True)
self.buildSpec = buildSpec
self._header = None
self._subheader = None
self.settings_img = None
self.running_img = None
self.check_mark = None
self.error_symbol = None
self.images = []
self.layoutComponent()
self.bindMouseEvents()
def setTitle(self, title):
self._header.SetLabel(title)
def setSubtitle(self, subtitle):
self._subheader.SetLabel(subtitle)
def setImage(self, image):
for img in self.images:
img.Show(False)
getattr(self, image).Show(True)
self.Layout()
def layoutComponent(self):
self.SetBackgroundColour(self.buildSpec['header_bg_color'])
self.SetSize((30, self.buildSpec['header_height']))
self.SetMinSize((120, self.buildSpec['header_height']))
self._header = wx_util.h1(self, label=self.buildSpec['program_name'])
self._subheader = wx.StaticText(self, label=self.buildSpec['program_description'])
images = self.buildSpec['images']
targetHeight = self.buildSpec['header_height'] - 10
self.settings_img = self._load_image(images['configIcon'], targetHeight)
self.running_img = self._load_image(images['runningIcon'], targetHeight)
self.check_mark = self._load_image(images['successIcon'], targetHeight)
self.error_symbol = self._load_image(images['errorIcon'], targetHeight)
self.images = [
self.settings_img,
self.running_img,
self.check_mark,
self.error_symbol
]
vsizer = wx.BoxSizer(wx.VERTICAL)
sizer = wx.BoxSizer(wx.HORIZONTAL)
headings_sizer = self.build_heading_sizer()
sizer.Add(headings_sizer, 1,
wx.ALIGN_LEFT | wx.EXPAND | wx.LEFT,
PAD_SIZE)
sizer.Add(self.settings_img, 0, wx.EXPAND | wx.RIGHT, PAD_SIZE)
sizer.Add(self.running_img, 0, wx.EXPAND | wx.RIGHT, PAD_SIZE)
sizer.Add(self.check_mark, 0, wx.EXPAND | wx.RIGHT, PAD_SIZE)
sizer.Add(self.error_symbol, 0, wx.EXPAND | wx.RIGHT, PAD_SIZE)
self.running_img.Hide()
self.check_mark.Hide()
self.error_symbol.Hide()
vsizer.Add(sizer, 1, wx.EXPAND)
self.SetSizer(vsizer)
def _load_image(self, imgPath, targetHeight):
rawImage = imageutil.loadImage(imgPath)
sizedImage = imageutil.resizeImage(rawImage, targetHeight)
return imageutil.wrapBitmap(sizedImage, self)
def build_heading_sizer(self):
sizer = wx.BoxSizer(wx.VERTICAL)
sizer.AddStretchSpacer(1)
if self.buildSpec['header_show_title']:
sizer.Add(self._header, 0)
else:
self._header.Hide()
if self.buildSpec['header_show_subtitle']:
sizer.Add(self._subheader, 0)
else:
self._subheader.Hide()
sizer.AddStretchSpacer(1)
return sizer
def bindMouseEvents(self):
"""
Manually binding all LEFT_DOWN events.
See: gooey.gui.mouse for background.
"""
self.Bind(wx.EVT_LEFT_DOWN, notifyMouseEvent)
self._header.Bind(wx.EVT_LEFT_DOWN, notifyMouseEvent)
self._subheader.Bind(wx.EVT_LEFT_DOWN, notifyMouseEvent)
for image in self.images:
image.Bind(wx.EVT_LEFT_DOWN, notifyMouseEvent) | PypiClean |
/0x0-python-0.5.tar.gz/0x0-python-0.5/README.md | ### Available functions:
- `upload_file_url(url, expires, secret)`: Upload a file from a link; url = the link, expires = file retention time in hours (may be left empty), secret = makes the resulting link longer (may be left empty).
- `upload_file_path(path, expires, secret)`: Same as upload_file_url, except that a path to a local file is given.
- `delete_file(token, url)`: Deletes a file; token = the token, url = the link to the file on 0x0.
- `change_expires(url, expires, token)`: Changes the file retention time; token = the token, url = the link to the file on 0x0, expires = the new retention time in hours. | PypiClean |
/GMMA-1.2.2.tar.gz/GMMA-1.2.2/gamma/_gaussian_mixture.py |
# Author: Wei Xue <[email protected]>
# Modified by Thierry Guillemot <[email protected]>
# License: BSD 3 clause
import numpy as np
from scipy import linalg
from sklearn.utils import check_array
from sklearn.utils.extmath import row_norms
from sklearn.utils.validation import _deprecate_positional_args
from .seismic_ops import *
from ._base import BaseMixture, _check_shape
###############################################################################
# Gaussian mixture shape checkers used by the GaussianMixture class
def _check_weights(weights, n_components):
"""Check the user provided 'weights'.
Parameters
----------
weights : array-like of shape (n_components,)
The proportions of components of each mixture.
n_components : int
Number of components.
Returns
-------
weights : array, shape (n_components,)
"""
weights = check_array(weights, dtype=[np.float64, np.float32],
ensure_2d=False)
_check_shape(weights, (n_components,), 'weights')
# check range
if (any(np.less(weights, 0.)) or
any(np.greater(weights, 1.))):
raise ValueError("The parameter 'weights' should be in the range "
"[0, 1], but got max value %.5f, min value %.5f"
                         % (np.max(weights), np.min(weights)))
# check normalization
if not np.allclose(np.abs(1. - np.sum(weights)), 0.):
raise ValueError("The parameter 'weights' should be normalized, "
"but got sum(weights) = %.5f" % np.sum(weights))
return weights
def _check_means(means, n_components, n_samples, n_features):
"""Validate the provided 'means'.
Parameters
----------
    means : array-like of shape (n_components, n_samples, n_features)
        The centers of the current components.
    n_components : int
        Number of components.
    n_samples : int
        Number of samples.
    n_features : int
        Number of features.
Returns
-------
    means : array, (n_components, n_samples, n_features)
"""
    means = check_array(means, dtype=[np.float64, np.float32], ensure_2d=False, allow_nd=True)
_check_shape(means, (n_components, n_samples, n_features), 'means')
return means
def _check_precision_positivity(precision, covariance_type):
"""Check a precision vector is positive-definite."""
if np.any(np.less_equal(precision, 0.0)):
raise ValueError("'%s precision' should be "
"positive" % covariance_type)
def _check_precision_matrix(precision, covariance_type):
"""Check a precision matrix is symmetric and positive-definite."""
if not (np.allclose(precision, precision.T) and
np.all(linalg.eigvalsh(precision) > 0.)):
raise ValueError("'%s precision' should be symmetric, "
"positive-definite" % covariance_type)
def _check_precisions_full(precisions, covariance_type):
"""Check the precision matrices are symmetric and positive-definite."""
for prec in precisions:
_check_precision_matrix(prec, covariance_type)
def _check_precisions(precisions, covariance_type, n_components, n_features):
"""Validate user provided precisions.
Parameters
----------
precisions : array-like
'full' : shape of (n_components, n_features, n_features)
'tied' : shape of (n_features, n_features)
'diag' : shape of (n_components, n_features)
'spherical' : shape of (n_components,)
covariance_type : string
n_components : int
Number of components.
n_features : int
Number of features.
Returns
-------
precisions : array
"""
precisions = check_array(precisions, dtype=[np.float64, np.float32],
ensure_2d=False,
allow_nd=covariance_type == 'full')
precisions_shape = {'full': (n_components, n_features, n_features),
'tied': (n_features, n_features),
'diag': (n_components, n_features),
'spherical': (n_components,)}
_check_shape(precisions, precisions_shape[covariance_type],
'%s precision' % covariance_type)
_check_precisions = {'full': _check_precisions_full,
'tied': _check_precision_matrix,
'diag': _check_precision_positivity,
'spherical': _check_precision_positivity}
_check_precisions[covariance_type](precisions, covariance_type)
return precisions
###############################################################################
# Gaussian mixture parameters estimators (used by the M-Step)
def _estimate_gaussian_covariances_full(resp, X, nk, means, reg_covar):
"""Estimate the full covariance matrices.
Parameters
----------
resp : array-like of shape (n_samples, n_components)
X : array-like of shape (n_samples, n_features)
nk : array-like of shape (n_components,)
means : array-like of shape (n_components, n_features)
reg_covar : float
Returns
-------
covariances : array, shape (n_components, n_features, n_features)
The covariance matrix of the current components.
"""
n_components, _, n_features = means.shape
covariances = np.empty((n_components, n_features, n_features))
for k in range(n_components):
diff = X - means[k]
covariances[k] = np.dot(resp[:, k] * diff.T, diff) / nk[k]
covariances[k].flat[::n_features + 1] += reg_covar
return covariances
def _estimate_gaussian_covariances_tied(resp, X, nk, means, reg_covar):
"""Estimate the tied covariance matrix.
Parameters
----------
resp : array-like of shape (n_samples, n_components)
X : array-like of shape (n_samples, n_features)
nk : array-like of shape (n_components,)
means : array-like of shape (n_components, n_features)
reg_covar : float
Returns
-------
covariance : array, shape (n_features, n_features)
The tied covariance matrix of the components.
"""
avg_X2 = np.dot(X.T, X)
avg_means2 = np.dot(nk * means.T, means)
covariance = avg_X2 - avg_means2
covariance /= nk.sum()
covariance.flat[::len(covariance) + 1] += reg_covar
return covariance
def _estimate_gaussian_covariances_diag(resp, X, nk, means, reg_covar):
"""Estimate the diagonal covariance vectors.
Parameters
----------
responsibilities : array-like of shape (n_samples, n_components)
X : array-like of shape (n_samples, n_features)
nk : array-like of shape (n_components,)
means : array-like of shape (n_components, n_features)
reg_covar : float
Returns
-------
covariances : array, shape (n_components, n_features)
The covariance vector of the current components.
"""
avg_X2 = np.dot(resp.T, X * X) / nk[:, np.newaxis]
avg_means2 = means ** 2
avg_X_means = means * np.dot(resp.T, X) / nk[:, np.newaxis]
return avg_X2 - 2 * avg_X_means + avg_means2 + reg_covar
# n_components, _, n_features = means.shape
# covariances = np.empty((n_components, n_features))
# for k in range(n_components):
# diff = X - means[k]
# covariances[k] = np.diag(np.dot(resp[:, k] * diff.T, diff)) / nk[k]
# covariances[k] += reg_covar
# return covariances
def _estimate_gaussian_covariances_spherical(resp, X, nk, means, reg_covar):
"""Estimate the spherical variance values.
Parameters
----------
responsibilities : array-like of shape (n_samples, n_components)
X : array-like of shape (n_samples, n_features)
nk : array-like of shape (n_components,)
means : array-like of shape (n_components, n_features)
reg_covar : float
Returns
-------
variances : array, shape (n_components,)
The variance values of each components.
"""
return _estimate_gaussian_covariances_diag(resp, X, nk,
means, reg_covar).mean(1)
def _estimate_gaussian_parameters(X, resp, reg_covar, covariance_type, station_locs, phase_type,
vel={"p":6.0, "s":6.0/1.75}, loss_type="l2", centers_prev=None, bounds=None, eikonal=None):
"""Estimate the Gaussian distribution parameters.
Parameters
----------
X : array-like of shape (n_samples, n_features)
The input data array.
resp : array-like of shape (n_samples, n_components)
The responsibilities for each data sample in X.
reg_covar : float
The regularization added to the diagonal of the covariance matrices.
covariance_type : {'full', 'tied', 'diag', 'spherical'}
The type of precision matrices.
centers_prev: (stations(x, y, ...), time, amp, ...)
Returns
-------
nk : array-like of shape (n_components,)
The numbers of data samples in the current components.
means : array-like of shape (n_components, n_features)
The centers of the current components.
covariances : array-like
The covariance matrix of the current components.
The shape depends of the covariance_type.
"""
nk = resp.sum(axis=0) + 10 * np.finfo(resp.dtype).eps
# means = np.dot(resp.T, X) / nk[:, np.newaxis]
# means = np.tile(means, [X.shape[0],1,1]).transpose((1,0,2))
n_features = X.shape[1]
if centers_prev is None:
centers_prev = np.dot(resp.T, np.hstack([station_locs, X])) / nk[:, np.newaxis]
centers = np.zeros_like(centers_prev) #x, y, z, t, amp, ...
for i in range(len(centers_prev)):
if n_features == 1:
loc, loss = calc_loc(X[:,:1], phase_type, station_locs, resp[:, i:i+1], centers_prev[i:i+1, :], vel=vel, bounds=bounds, eikonal=eikonal)
centers[i:i+1, :] = loc
elif n_features == 2:
loc, loss = calc_loc(X[:,:1], phase_type, station_locs, resp[:, i:i+1], centers_prev[i:i+1, :-1], vel=vel, bounds=bounds, eikonal=eikonal)
centers[i:i+1, :-1] = loc
centers[i:i+1, -1:] = calc_mag(X[:,1:2], centers[i:i+1,:-1], station_locs, resp[:,i:i+1])
else:
raise ValueError(f"n_features = {n_features} > 2!")
means = np.zeros([resp.shape[1], X.shape[0], X.shape[1]])
for i in range(len(centers)):
if n_features == 1:
means[i, :, :] = calc_time(centers[i:i+1, :], station_locs, phase_type, vel=vel, eikonal=eikonal)
elif n_features == 2:
means[i, :, 0:1] = calc_time(centers[i:i+1, :-1], station_locs, phase_type, vel=vel, eikonal=eikonal)
means[i, :, 1:2] = calc_amp(centers[i:i+1, -1:], centers[i:i+1, :-1], station_locs)
else:
raise ValueError(f"n_features = {n_features} > 2!")
covariances = {"full": _estimate_gaussian_covariances_full,
"tied": _estimate_gaussian_covariances_tied,
"diag": _estimate_gaussian_covariances_diag,
"spherical": _estimate_gaussian_covariances_spherical
}[covariance_type](resp, X, nk, means, reg_covar)
return nk, means, covariances, centers
def _compute_precision_cholesky(covariances, covariance_type, max_covar=None):
"""Compute the Cholesky decomposition of the precisions.
Parameters
----------
covariances : array-like
The covariance matrix of the current components.
The shape depends of the covariance_type.
covariance_type : {'full', 'tied', 'diag', 'spherical'}
The type of precision matrices.
Returns
-------
precisions_cholesky : array-like
The cholesky decomposition of sample precisions of the current
components. The shape depends of the covariance_type.
"""
estimate_precision_error_message = (
"Fitting the mixture model failed because some components have "
"ill-defined empirical covariance (for instance caused by singleton "
"or collapsed samples). Try to decrease the number of components, "
"or increase reg_covar.")
if covariance_type == 'full':
n_components, n_features, _ = covariances.shape
precisions_chol = np.empty((n_components, n_features, n_features))
for k, covariance in enumerate(covariances):
try:
cov_chol = linalg.cholesky(covariance, lower=True)
except linalg.LinAlgError:
raise ValueError(estimate_precision_error_message)
precisions_chol[k] = linalg.solve_triangular(cov_chol,
np.eye(n_features),
lower=True).T
elif covariance_type == 'tied':
_, n_features = covariances.shape
try:
cov_chol = linalg.cholesky(covariances, lower=True)
except linalg.LinAlgError:
raise ValueError(estimate_precision_error_message)
precisions_chol = linalg.solve_triangular(cov_chol, np.eye(n_features),
lower=True).T
else:
if np.any(np.less_equal(covariances, 0.0)):
raise ValueError(estimate_precision_error_message)
precisions_chol = 1. / np.sqrt(covariances)
if max_covar is not None:
non_zero = (np.abs(precisions_chol) != 0.0)
precisions_chol[non_zero] = 1.0/(np.sqrt(max_covar) * np.tanh(1.0/precisions_chol[non_zero]/np.sqrt(max_covar)))
precisions_chol[~non_zero] = 1.0/np.sqrt(max_covar)
return precisions_chol
###############################################################################
# Gaussian mixture probability estimators
def _compute_log_det_cholesky(matrix_chol, covariance_type, n_features):
"""Compute the log-det of the cholesky decomposition of matrices.
Parameters
----------
matrix_chol : array-like
Cholesky decompositions of the matrices.
'full' : shape of (n_components, n_features, n_features)
'tied' : shape of (n_features, n_features)
'diag' : shape of (n_components, n_features)
'spherical' : shape of (n_components,)
covariance_type : {'full', 'tied', 'diag', 'spherical'}
n_features : int
Number of features.
Returns
-------
log_det_precision_chol : array-like of shape (n_components,)
The determinant of the precision matrix for each component.
"""
if covariance_type == 'full':
n_components, _, _ = matrix_chol.shape
log_det_chol = (np.sum(np.log(
matrix_chol.reshape(
n_components, -1)[:, ::n_features + 1]), 1))
elif covariance_type == 'tied':
log_det_chol = (np.sum(np.log(np.diag(matrix_chol))))
elif covariance_type == 'diag':
log_det_chol = (np.sum(np.log(matrix_chol), axis=1))
else:
log_det_chol = n_features * (np.log(matrix_chol))
return log_det_chol
def _estimate_log_gaussian_prob(X, means, precisions_chol, covariance_type):
"""Estimate the log Gaussian probability.
Parameters
----------
X : array-like of shape (n_samples, n_features)
means : array-like of shape (n_components, n_features)
precisions_chol : array-like
Cholesky decompositions of the precision matrices.
'full' : shape of (n_components, n_features, n_features)
'tied' : shape of (n_features, n_features)
'diag' : shape of (n_components, n_features)
'spherical' : shape of (n_components,)
covariance_type : {'full', 'tied', 'diag', 'spherical'}
Returns
-------
log_prob : array, shape (n_samples, n_components)
"""
n_samples, n_features = X.shape
n_components, _, _ = means.shape
# det(precision_chol) is half of det(precision)
log_det = _compute_log_det_cholesky(
precisions_chol, covariance_type, n_features)
if covariance_type == 'full':
log_prob = np.empty((n_samples, n_components))
for k, (mu, prec_chol) in enumerate(zip(means, precisions_chol)):
y = np.dot(X, prec_chol) - np.dot(mu, prec_chol)
log_prob[:, k] = np.sum(np.square(y), axis=1)
elif covariance_type == 'tied':
log_prob = np.empty((n_samples, n_components))
for k, mu in enumerate(means):
y = np.dot(X, precisions_chol) - np.dot(mu, precisions_chol)
log_prob[:, k] = np.sum(np.square(y), axis=1)
elif covariance_type == 'diag':
precisions = precisions_chol ** 2
log_prob = (np.sum((means ** 2 * precisions), 1) -
2. * np.dot(X, (means * precisions).T) +
np.dot(X ** 2, precisions.T))
# log_prob = np.empty((n_samples, n_components))
# for k, (mu, prec_chol) in enumerate(zip(means, precisions_chol)):
# y = np.dot(X, prec_chol) - np.dot(mu, prec_chol)
# log_prob[:, k] = np.square(y)
elif covariance_type == 'spherical':
precisions = precisions_chol ** 2
log_prob = (np.sum(means ** 2, 1) * precisions -
2 * np.dot(X, means.T * precisions) +
np.outer(row_norms(X, squared=True), precisions))
return -.5 * (n_features * np.log(2 * np.pi) + log_prob) + log_det
class GaussianMixture(BaseMixture):
"""Gaussian Mixture.
Representation of a Gaussian mixture model probability distribution.
This class allows to estimate the parameters of a Gaussian mixture
distribution.
Read more in the :ref:`User Guide <gmm>`.
.. versionadded:: 0.18
Parameters
----------
n_components : int, default=1
The number of mixture components.
covariance_type : {'full', 'tied', 'diag', 'spherical'}, default='full'
String describing the type of covariance parameters to use.
Must be one of:
'full'
each component has its own general covariance matrix
'tied'
all components share the same general covariance matrix
'diag'
each component has its own diagonal covariance matrix
'spherical'
each component has its own single variance
tol : float, default=1e-3
The convergence threshold. EM iterations will stop when the
lower bound average gain is below this threshold.
reg_covar : float, default=1e-6
Non-negative regularization added to the diagonal of covariance.
Allows to assure that the covariance matrices are all positive.
max_iter : int, default=100
The number of EM iterations to perform.
n_init : int, default=1
The number of initializations to perform. The best results are kept.
init_params : {'kmeans', 'random'}, default='kmeans'
The method used to initialize the weights, the means and the
precisions.
Must be one of::
'kmeans' : responsibilities are initialized using kmeans.
'random' : responsibilities are initialized randomly.
weights_init : array-like of shape (n_components, ), default=None
The user-provided initial weights.
If it is None, weights are initialized using the `init_params` method.
means_init : array-like of shape (n_components, n_features), default=None
The user-provided initial means,
If it is None, means are initialized using the `init_params` method.
precisions_init : array-like, default=None
The user-provided initial precisions (inverse of the covariance
matrices).
If it is None, precisions are initialized using the 'init_params'
method.
The shape depends on 'covariance_type'::
(n_components,) if 'spherical',
(n_features, n_features) if 'tied',
(n_components, n_features) if 'diag',
(n_components, n_features, n_features) if 'full'
random_state : int, RandomState instance or None, default=None
Controls the random seed given to the method chosen to initialize the
parameters (see `init_params`).
In addition, it controls the generation of random samples from the
fitted distribution (see the method `sample`).
Pass an int for reproducible output across multiple function calls.
See :term:`Glossary <random_state>`.
warm_start : bool, default=False
If 'warm_start' is True, the solution of the last fitting is used as
initialization for the next call of fit(). This can speed up
convergence when fit is called several times on similar problems.
In that case, 'n_init' is ignored and only a single initialization
occurs upon the first call.
See :term:`the Glossary <warm_start>`.
verbose : int, default=0
Enable verbose output. If 1 then it prints the current
initialization and each iteration step. If greater than 1 then
it prints also the log probability and the time needed
for each step.
verbose_interval : int, default=10
Number of iteration done before the next print.
Attributes
----------
weights_ : array-like of shape (n_components,)
The weights of each mixture components.
means_ : array-like of shape (n_components, n_features)
The mean of each mixture component.
covariances_ : array-like
The covariance of each mixture component.
The shape depends on `covariance_type`::
(n_components,) if 'spherical',
(n_features, n_features) if 'tied',
(n_components, n_features) if 'diag',
(n_components, n_features, n_features) if 'full'
precisions_ : array-like
The precision matrices for each component in the mixture. A precision
matrix is the inverse of a covariance matrix. A covariance matrix is
symmetric positive definite so the mixture of Gaussian can be
equivalently parameterized by the precision matrices. Storing the
precision matrices instead of the covariance matrices makes it more
efficient to compute the log-likelihood of new samples at test time.
The shape depends on `covariance_type`::
(n_components,) if 'spherical',
(n_features, n_features) if 'tied',
(n_components, n_features) if 'diag',
(n_components, n_features, n_features) if 'full'
precisions_cholesky_ : array-like
The cholesky decomposition of the precision matrices of each mixture
component. A precision matrix is the inverse of a covariance matrix.
A covariance matrix is symmetric positive definite so the mixture of
Gaussian can be equivalently parameterized by the precision matrices.
Storing the precision matrices instead of the covariance matrices makes
it more efficient to compute the log-likelihood of new samples at test
time. The shape depends on `covariance_type`::
(n_components,) if 'spherical',
(n_features, n_features) if 'tied',
(n_components, n_features) if 'diag',
(n_components, n_features, n_features) if 'full'
converged_ : bool
True when convergence was reached in fit(), False otherwise.
n_iter_ : int
Number of step used by the best fit of EM to reach the convergence.
lower_bound_ : float
Lower bound value on the log-likelihood (of the training data with
respect to the model) of the best fit of EM.
Examples
--------
>>> import numpy as np
>>> from sklearn.mixture import GaussianMixture
>>> X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])
>>> gm = GaussianMixture(n_components=2, random_state=0).fit(X)
>>> gm.means_
array([[10., 2.],
[ 1., 2.]])
>>> gm.predict([[0, 0], [12, 3]])
array([1, 0])
See Also
--------
BayesianGaussianMixture : Gaussian mixture model fit with a variational
inference.
"""
@_deprecate_positional_args
def __init__(self, n_components=1, *, covariance_type='full', tol=1e-3,
reg_covar=1e-6, max_iter=100, n_init=1, init_params='kmeans',
weights_init=None, means_init=None, precisions_init=None, centers_init=None,
random_state=None, warm_start=False,
station_locs=None, phase_type=None, phase_weight=None,
vel={"p":6.0, "s":6.0/1.75}, eikonal=None,
dummy_comp=False, dummy_prob=0.01, dummy_quantile=0.1,
loss_type="l1", bounds=None, max_covar=None,
verbose=0, verbose_interval=10):
super().__init__(
n_components=n_components, tol=tol, reg_covar=reg_covar,
max_iter=max_iter, n_init=n_init, init_params=init_params,
random_state=random_state, warm_start=warm_start,
dummy_comp=dummy_comp, dummy_prob=dummy_prob, dummy_quantile=dummy_quantile,
verbose=verbose, verbose_interval=verbose_interval)
self.covariance_type = covariance_type
self.weights_init = weights_init
self.means_init = means_init
self.precisions_init = precisions_init
self.centers_init = centers_init
if station_locs is None:
            raise ValueError("Missing: station_locs")
if phase_type is None:
            raise ValueError("Missing: phase_type")
if phase_weight is None:
phase_weight = np.ones([len(phase_type),1])
self.vel = vel
self.station_locs = station_locs
self.phase_type = np.squeeze(phase_type)
self.phase_weight = np.squeeze(phase_weight)
self.loss_type = loss_type
self.bounds = bounds
self.max_covar = max_covar
self.eikonal = eikonal
def _check_parameters(self, X):
"""Check the Gaussian mixture parameters are well defined."""
n_samples, n_features = X.shape
if self.covariance_type not in ['spherical', 'tied', 'diag', 'full']:
raise ValueError("Invalid value for 'covariance_type': %s "
"'covariance_type' should be in "
"['spherical', 'tied', 'diag', 'full']"
% self.covariance_type)
if self.weights_init is not None:
self.weights_init = _check_weights(self.weights_init,
self.n_components)
if self.means_init is not None:
            self.means_init = _check_means(self.means_init,
                                           self.n_components,
                                           n_samples, n_features)
if self.precisions_init is not None:
self.precisions_init = _check_precisions(self.precisions_init,
self.covariance_type,
self.n_components,
n_features)
if n_features > 2:
raise ValueError(f"n_features = {n_features} > 2! Only support 2 features (time, amplitude)")
assert(self.covariance_type=='full')
assert(self.station_locs.shape[0] == n_samples)
assert(self.loss_type in ["l1", "l2"])
_check_shape(self.phase_type, (n_samples, ), 'phase_type')
        _check_shape(self.phase_weight, (n_samples, ), 'phase_weight')
if self.init_params == "centers":
assert(self.centers_init is not None)
# if self.centers_init is not None:
# _check_shape(self.centers_init, (self.n_components, self.station_locs.shape[1] + n_features), 'centers_init')
def _initialize(self, X, resp):
"""Initialization of the Gaussian mixture parameters.
Parameters
----------
X : array-like of shape (n_samples, n_features)
resp : array-like of shape (n_samples, n_components)
"""
n_samples, _ = X.shape
weights, means, covariances, centers = _estimate_gaussian_parameters(
X, resp, self.reg_covar, self.covariance_type,
self.station_locs, self.phase_type, vel=self.vel, loss_type=self.loss_type,
centers_prev=self.centers_init, bounds=self.bounds, eikonal=self.eikonal)
weights /= n_samples
# self.weights_ = (weights if self.weights_init is None else self.weights_init)
# self.means_ = (means if self.means_init is None else self.means_init)
# self.centers_ = (centers if self.centers_init is None else self.centers_init)
self.weights_ = weights
self.means_ = means
self.centers_ = centers
if self.precisions_init is None:
self.covariances_ = covariances
self.precisions_cholesky_ = _compute_precision_cholesky(
covariances, self.covariance_type, self.max_covar)
elif self.covariance_type == 'full':
self.precisions_cholesky_ = np.array(
[linalg.cholesky(prec_init, lower=True)
for prec_init in self.precisions_init])
elif self.covariance_type == 'tied':
self.precisions_cholesky_ = linalg.cholesky(self.precisions_init,
lower=True)
else:
self.precisions_cholesky_ = self.precisions_init
def _m_step(self, X, log_resp):
"""M step.
Parameters
----------
X : array-like of shape (n_samples, n_features)
log_resp : array-like of shape (n_samples, n_components)
Logarithm of the posterior probabilities (or responsibilities) of
the point of each sample in X.
"""
n_samples, _ = X.shape
self.weights_, self.means_, self.covariances_, self.centers_ = (
_estimate_gaussian_parameters(
X, np.exp(log_resp), self.reg_covar, self.covariance_type,
self.station_locs, self.phase_type, vel=self.vel, loss_type=self.loss_type,
centers_prev=self.centers_, bounds=self.bounds, eikonal=self.eikonal))
self.weights_ /= n_samples
self.precisions_cholesky_ = _compute_precision_cholesky(
self.covariances_, self.covariance_type, self.max_covar)
def _estimate_log_prob(self, X):
prob = _estimate_log_gaussian_prob(X, self.means_, self.precisions_cholesky_, self.covariance_type)
if self.dummy_comp:
# print(np.quantile(np.max(prob[:,:-1], axis=1), self.dummy_quantile), np.log(self.dummy_prob))
prob[:,-1] = min(np.quantile(np.max(prob[:,:-1], axis=1), self.dummy_quantile), np.log(self.dummy_prob))
return prob + np.log(self.phase_weight)[:,np.newaxis]
def _estimate_log_weights(self):
if self.dummy_comp:
score = 0.1 #1.0/len(self.weights_)
if self.weights_[-1] >= score:
self.weights_[:-1] /= np.sum(self.weights_[:-1]) / (1-score)
self.weights_[-1] = score
return np.log(self.weights_)
def _compute_lower_bound(self, _, log_prob_norm):
return log_prob_norm
def _get_parameters(self):
return (self.weights_, self.means_, self.covariances_,
self.precisions_cholesky_)
def _set_parameters(self, params):
(self.weights_, self.means_, self.covariances_,
self.precisions_cholesky_) = params
# Attributes computation
_, _, n_features = self.means_.shape
if self.covariance_type == 'full':
self.precisions_ = np.empty(self.precisions_cholesky_.shape)
for k, prec_chol in enumerate(self.precisions_cholesky_):
self.precisions_[k] = np.dot(prec_chol, prec_chol.T)
elif self.covariance_type == 'tied':
self.precisions_ = np.dot(self.precisions_cholesky_,
self.precisions_cholesky_.T)
else:
self.precisions_ = self.precisions_cholesky_ ** 2
def _n_parameters(self):
"""Return the number of free parameters in the model."""
_, _, n_features = self.means_.shape
if self.covariance_type == 'full':
cov_params = self.n_components * n_features * (n_features + 1) / 2.
elif self.covariance_type == 'diag':
cov_params = self.n_components * n_features
elif self.covariance_type == 'tied':
cov_params = n_features * (n_features + 1) / 2.
elif self.covariance_type == 'spherical':
cov_params = self.n_components
mean_params = n_features * self.n_components
return int(cov_params + mean_params + self.n_components - 1)
def bic(self, X):
"""Bayesian information criterion for the current model on the input X.
Parameters
----------
X : array of shape (n_samples, n_dimensions)
Returns
-------
bic : float
The lower the better.
"""
return (-2 * self.score(X) * X.shape[0] +
self._n_parameters() * np.log(X.shape[0]))
def aic(self, X):
"""Akaike information criterion for the current model on the input X.
Parameters
----------
X : array of shape (n_samples, n_dimensions)
Returns
-------
aic : float
The lower the better.
"""
return -2 * self.score(X) * X.shape[0] + 2 * self._n_parameters() | PypiClean |
/KratosDelaunayMeshingApplication-9.1.3-1-cp37-cp37m-win_amd64.whl/KratosMultiphysics/DelaunayMeshingApplication/domain_utilities.py | from __future__ import print_function, absolute_import, division #makes KratosMultiphysics backward compatible with python 2.6 and 2.7
import KratosMultiphysics
import KratosMultiphysics.DelaunayMeshingApplication as KratosDelaunay
class DomainUtilities(object):
#
def __init__(self):
pass
#
def InitializeDomains(self, model_part, echo_level):
if( model_part.ProcessInfo[KratosDelaunay.INITIALIZED_DOMAINS] == False ):
# initialize the mesher
print("::[--Domain Utilities-]:: Initialize", model_part.Name)
# find node neighbours
self.SearchNodeNeighbours(model_part, echo_level)
# find element neighbours
self.SearchElementNeighbours(model_part, echo_level)
# set mesher utilities
mesher_utils = KratosDelaunay.MesherUtilities()
# set the domain labels to conditions
mesher_utils.SetModelPartNameToConditions(model_part)
# set the domain labels to elements
mesher_utils.SetModelPartNameToElements(model_part)
# find skin and boundary normals
if( model_part.ProcessInfo[KratosMultiphysics.IS_RESTARTED] == False ):
# build boundary of a volumetric body domain
self.BuildModelPartBoundary(model_part, echo_level)
# search nodal h
self.SearchNodalH(model_part, echo_level)
# add rigid and solid boundary nodes to fluid domains:
self.AddBoundaryNodesToFluidDomains(model_part)
# set the domain labels to nodes
mesher_utils.SetModelPartNameToNodes(model_part)
model_part.ProcessInfo.SetValue(KratosDelaunay.INITIALIZED_DOMAINS, True)
if( echo_level > 0 ):
print("::[--Domain Utilities-]:: Resultant ModelPart")
print(model_part)
#
@classmethod
def SearchNodeNeighbours(self, model_part, echo_level):
# set search options:
number_of_avg_elems = 10
number_of_avg_nodes = 10
# define search utility
nodal_neighbour_search = KratosDelaunay.NodalNeighboursSearch(model_part, echo_level, number_of_avg_elems, number_of_avg_nodes)
# execute search:
nodal_neighbour_search.Execute()
print("::[--Domain Utilities-]:: Nodal Search executed ")
#
@classmethod
def SearchElementNeighbours(self, model_part, echo_level):
dimension = model_part.ProcessInfo[KratosMultiphysics.SPACE_DIMENSION]
# set search options:
number_of_avg_elems = 10
# define search utility
elemental_neighbour_search = KratosDelaunay.ElementalNeighboursSearch(model_part, dimension, echo_level, number_of_avg_elems)
# execute search:
elemental_neighbour_search.Execute()
if( echo_level > 0 ):
print("::[--Domain Utilities-]:: Elemental Search executed ")
#
@classmethod
def BuildModelPartBoundary(self, model_part, echo_level):
print("::[--Domain Utilities-]:: Build Mesh Boundary ")
# set building options:
# define building utility
skin_build = KratosDelaunay.BuildModelPartBoundary(model_part, model_part.Name, echo_level)
# execute building:
skin_build.Execute()
# search condition masters: (check)
# skin_build.SearchConditionMasters()
if( echo_level > 0 ):
print("::[--Domain Utilities-]:: Mesh Boundary Build executed ")
###
#
@classmethod
def SearchNodalH(self, model_part, echo_level):
# define search utility
nodal_h_search = KratosMultiphysics.FindNodalHProcess(model_part)
# execute search:
nodal_h_search.Execute()
# for node in self.main_model_part.Nodes:
# nodal_h = node.GetSolutionStepValue(NODAL_H);
# print "nodal_h:",nodal_h
if( echo_level > 0 ):
print("::[--Domain Utilities-]:: Nodal H Search executed ")
#
@classmethod
def ComputeBoundaryNormals(self, model_part, echo_level):
# define calculation utility
normals_calculation = KratosDelaunay.BoundaryNormalsCalculation()
# execute calculation:
#(scaled normals)
normals_calculation.CalculateWeightedBoundaryNormals(model_part, echo_level)
#(unit normals)
# normals_calculation.CalculateUnitBoundaryNormals(model_part, self.echo_level)
if( echo_level > 0 ):
print("::[--Domain Utilities-]:: Boundary Normals computed ")
#
@classmethod
def AddBoundaryNodesToFluidDomains(self,model_part):
exist_fluid_domain = False
for part in model_part.SubModelParts:
if part.Is(KratosMultiphysics.FLUID):
exist_fluid_domain = True
break
if( exist_fluid_domain ):
print("::[--Domain Utilities-]:: Add boundary nodes to fluid domains ")
transfer_flags = [KratosMultiphysics.BOUNDARY, (KratosMultiphysics.FLUID).AsFalse()]
entity_type = "Nodes"
for fluid_part in model_part.SubModelParts:
if (fluid_part.IsNot(KratosMultiphysics.ACTIVE) and fluid_part.Is(KratosMultiphysics.FLUID)):
for part in model_part.SubModelParts:
if part.IsNot(KratosMultiphysics.ACTIVE):
if( part.Is(KratosMultiphysics.SOLID) or part.Is(KratosMultiphysics.RIGID) ):
transfer_process = KratosDelaunay.TransferEntitiesProcess(fluid_part,part,entity_type,transfer_flags)
transfer_process.Execute()
#
@classmethod
def GetVariables(self):
nodal_variables = ['NORMAL', 'NODAL_H', 'SHRINK_FACTOR']
return nodal_variables | PypiClean |
/MTfit-1.0.6a5.tar.gz/MTfit-1.0.6a5/docs/source/run_base.rst | *******************************
Running MTfit
*******************************
There are several ways to run :mod:`MTfit`, and these are described here.
Command Line
===============================
:mod:`MTfit` can be run from the command line. A script should have been installed onto the path during installation and should be callable as::
$ MTfit
However it may be necessary to install the script manually. This is platform dependent.
Script Installation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Linux
-------------------------------
Add this python script to a directory in the $PATH environmental variable::
#!/usr/bin/env python
import MTfit
MTfit.__run__()
And make sure it is executable.
Windows
--------------------------------
Add the linux script (above) to the path, or, if using PowerShell, edit the PowerShell profile (usually found in *Documents/WindowsPowerShell/*; if it is not present, use ``$PROFILE|Format-List -Force`` to locate it, and note that it may be necessary to create the profile) and add::
function MTfit{
$script={
python -c "import MTfit;MTfit.__run__()" $args
}
Invoke-Command -ScriptBlock $script -ArgumentList $args
}
Windows PowerShell does seem to have some issues with command-line arguments; if necessary, arguments should be enclosed in quotation marks, e.g. "-d=datafile.inv".
Command Line Options
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When running :mod:`MTfit` from the command line, there are many options available, and these can be listed using::
$ MTfit -h
.. only:: latex
For a description of these options see Chapter :latex:`\ref{cli::doc}`.
.. only:: not latex
For a description of these options see :doc:`cli`.
The command line defaults can be set using a defaults file. This is recursively checked in 3 locations:
1. ``MTFITDEFAULTSPATH`` environmental variable (could be a system level setting)
2. ``.MTfitdefaults`` file in the users home directory
3. ``.MTfitdefaults`` file in the current working directory
If the files conflict, the file with the higher number in this list overrides the defaults set in the lower-numbered files.
The structure of the defaults file is simply::
key:attr
e.g.::
dc:True
algorithm:iterate
Python Interpreter
=================================
Running MTfit from the python interpreter is done as::
>>> import MTfit
>>> args=['-o','-d']
>>> MTfit.__run__(args)
.. only:: latex
Where args correspond to the command line arguments (see Chapter :latex:`\ref{cli::doc}`.
.. only:: not latex
Where args correspond to the command line arguments (see :doc:`cli`).
It is also possible to create the :class:`~MTfit.inversion.Inversion` object::
>>> import MTfit
>>> inversion=MTfit.Inversion(*args,**kwargs)
>>> inversion.forward()
.. only:: latex
The descriptions of the :class:`~MTfit.inversion.Inversion` initialisation arguments can be found in the :class:`~MTfit.inversion.Inversion.__init__` docstrings, and :latex:`\ref{inversion::doc}`.
.. only:: not latex
The descriptions of the :class:`~MTfit.inversion.Inversion` initialisation arguments can be found in the :class:`~MTfit.inversion.Inversion.__init__` docstrings, and :doc:`inversion`.
| PypiClean |
/HalRing_Lib-1.1.0-py3-none-any.whl/halring/jenkins_lib/halring_jenkins.py | import traceback
import jenkins
import requests
import json
from halring.log.halring_logging import LoggingUtil
from time import sleep
from requests import exceptions
class JenkinsUtil(object):
def __init__(self, server_ip, user, password):
self._user = user
self._password = password
self._server_ip = server_ip
def jenkins_login(self):
self._server = jenkins.Jenkins(self._server_ip, username=self._user, password=self._password)
def check_job_queue_exist(self, job_name):
"""
        :param job_name: name of the Jenkins job
        :return: True if the job currently has queued items,
                 False if the queue is empty
"""
queue_item = self._server.get_job_info(job_name)["queueItem"]
result = True if queue_item else False
return result
def check_job_in_building(self, job_name):
"""
        :param job_name: name of the Jenkins job
        :return: True if a build of the job is currently in progress,
                 False otherwise
"""
last_build_obj = self._server.get_job_info(job_name)["lastBuild"]
if last_build_obj:
previous_build_number =last_build_obj["number"]
previous_build_flag = self._server.get_build_info(job_name, previous_build_number)["building"]
else:
previous_build_flag = False
return previous_build_flag
def trigger_job_with_file_parameter(self, job_name, job_params_dict, file_params_dict):
"""
        Trigger a Jenkins build that uses file parameters.
        Args:
            job_name: name of the Jenkins job
            job_params_dict: ordinary job parameters, passed as a dict
            file_params_dict: file parameters of the job, passed as a dict
        Returns: the build_number of the triggered build
"""
try:
next_build_number = self._server.get_job_info(job_name)["nextBuildNumber"]
'''
            trigger the build by POSTing directly with requests
'''
url = "{0}/job/{1}/build".format(self._server_ip, job_name)
temp_params = []
for k1 in job_params_dict:
tmp1 = {"name": k1, "value": job_params_dict[k1]}
temp_params.append(tmp1)
# file_name_list=["file"+str(i) for i in range(len(file_params_dict))]
temp_file_payload_list = []
for k2 in file_params_dict:
file_name = k2 + ".tmp"
tmp2 = {"name": k2, "file": file_name}
temp_params.append(tmp2)
temp_file_payload_list.append((file_name, open(file_params_dict[k2], "rb")))
temp_params = {"parameter": temp_params}
temp_file_payload_list.append(("json", json.dumps(temp_params)))
payload = tuple(temp_file_payload_list)
response = requests.post(url, auth=(self._user, self._password), files=payload)
build_number = next_build_number
return build_number
except Exception as e:
LoggingUtil().error(traceback.format_exc())
return 0
def trigger_job(self, job_name, job_params_dict):
"""
        Trigger a Jenkins build.
        @:param job_name  name of the Jenkins job
        @:param job_params_dict  job parameters, passed as a dict
        :return: the build_number of the triggered build
"""
        # return the build_number
try:
next_build_number = self._server.get_job_info(job_name)["nextBuildNumber"]
queue_number = self._server.build_job(job_name, job_params_dict)
# queue number is only valid for 5 min
build_number = next_build_number
return build_number
except jenkins.JenkinsException as e:
LoggingUtil().error(e.args[0])
return 0
def build_job(self, job_name, job_params_dict, timeout=1800, interval=10, max_retry_times=3):
"""
        Build a Jenkins job and wait for it to finish.
        @:param job_name  name of the Jenkins job
        @:param job_params_dict  job parameters, passed as a dict
        @:param timeout  build timeout in seconds, default 1800 s = 30 min
        @:param interval  polling interval (in seconds) used to check whether the build has finished, default 10 s
        :return: a tuple (build_number, build_result);
                 build_result is one of SUCCESS, ABORTED, TIMEOUT, FAILURE
"""
return self.__build_job_base(job_name, job_params_dict, {}, "common",timeout, interval, max_retry_times)
def build_job_with_file_parameter(self,job_name, job_params_dict, file_params_dict, timeout=1800, interval=10, max_retry_times=3):
"""
        Build a Jenkins job that takes file parameters and wait for it to finish.
        @:param job_name  name of the Jenkins job
        @:param job_params_dict  job parameters other than the file parameters, passed as a dict
        @:param file_params_dict  file parameters of the job, passed as a dict
        @:param timeout  build timeout in seconds, default 1800 s = 30 min
        @:param interval  polling interval (in seconds) used to check whether the build has finished, default 10 s
        :return: a tuple (build_number, build_result);
                 build_result is one of SUCCESS, ABORTED, TIMEOUT, FAILURE
"""
return self.__build_job_base(job_name, job_params_dict, file_params_dict, "file", timeout, interval, max_retry_times)
def __build_job_base(self, job_name, job_params_dict, file_params_dict, trigger_type,timeout=1800, interval=10, max_retry_times=3):
interval = interval
i_retry_time = 0
        # wait for the queue to drain so that queueItem is None
        # once the queue is empty, wait for the previous build to finish
build_number = 0
waiting = 0
check_job_queue_exist_result = self.check_job_queue_exist(job_name)
while check_job_queue_exist_result and i_retry_time <= max_retry_times:
try:
check_job_queue_exist_result = self.check_job_queue_exist(job_name)
i_retry_time = 0
except exceptions.ConnectionError:
i_retry_time = i_retry_time + 1
                log_content = "connection lost, retry attempt: " + str(i_retry_time)
check_job_queue_exist_result = "connection aborted"
LoggingUtil().warning(log_content)
continue
LoggingUtil().info("Waiting previous queue item build complete")
sleep(interval)
waiting = waiting + interval
if waiting >= timeout:
LoggingUtil().error("Previous queue item build timeout")
return (0, "TIMEOUT")
if check_job_queue_exist_result == "connection aborted":
return (0, "TIMEOUT")
i_retry_time = 0
check_job_in_building_result = self.check_job_in_building(job_name)
while check_job_in_building_result and i_retry_time <= max_retry_times:
try:
check_job_in_building_result = self.check_job_in_building(job_name)
i_retry_time = 0
except exceptions.ConnectionError:
i_retry_time = i_retry_time + 1
                log_content = "connection lost, retry attempt: " + str(i_retry_time)
check_job_in_building_result = "connection aborted"
LoggingUtil().warning(log_content)
continue
LoggingUtil().info("Waiting previous job build complete")
sleep(interval)
waiting = waiting + interval
if waiting >= timeout:
LoggingUtil().error("Previous job build timeout")
return (0, "TIMEOUT")
if check_job_in_building_result == "connection aborted":
return (0, "TIMEOUT")
        # trigger the build, retrying up to max_retry_times (default 3) times on connection errors
i_retry_time = 0
while i_retry_time <= max_retry_times:
try:
if trigger_type=="file":
build_number = self.trigger_job_with_file_parameter(job_name, job_params_dict, file_params_dict)
else:
build_number = self.trigger_job(job_name, job_params_dict)
i_retry_time = 0
break
except exceptions.ConnectionError:
i_retry_time = i_retry_time + 1
                log_content = "connection lost, retry attempt: " + str(i_retry_time)
LoggingUtil().warning(log_content)
continue
        # if i_retry_time exceeds the maximum retry count, the server could not be reached and the build was never triggered
if i_retry_time > max_retry_times:
build_number = 0
        # if triggering exited unexpectedly for any other reason, build_number is still 0
if build_number == 0:
LoggingUtil().error("Trigger failed")
return (build_number, "ERROR")
LoggingUtil().info("Start building:" + job_name)
LoggingUtil().info("Build number:" + str(build_number))
        # wait until the build info (result) can be retrieved
result = self.jenkins_get_build_info_with_waiting(job_name, build_number)
if result == "timeout":
LoggingUtil().error("Get build info failed")
return (build_number, "ERROR")
        # the initial building state is assumed to be True
building_flag = True
while building_flag and i_retry_time <= max_retry_times:
try:
building_flag = self._server.get_build_info(job_name, build_number)["building"]
i_retry_time = 0
# not subClass of OSError ConnectionError
except exceptions.ConnectionError as e:
i_retry_time = i_retry_time + 1
                log_content = "connection lost, retry attempt: " + str(i_retry_time)
LoggingUtil().warning(log_content)
continue
building_status = "building" if building_flag else "finish"
sleep(interval)
waiting = waiting + interval
LoggingUtil().info("Check job build status:" + building_status)
if waiting >= timeout:
LoggingUtil().error("Job builds timeout")
return (build_number, "TIMEOUT")
        # the build has finished; return its result
if not building_flag:
build_result = self._server.get_build_info(job_name, build_number)["result"]
# SUCCESS
# ABORTED
# FAILURE
return (build_number, build_result)
def check_job_in_building_with_retry(self, job_name, retry_times=3):
"""
        :param job_name: name of the Jenkins job
        :param retry_times: number of retries when the connection fails, default 3
        :return: True if a build of the job is currently in progress,
                 False otherwise,
                 "timeout" if the retry limit is exceeded
"""
return self.function_with_retry(retry_times, self.check_job_in_building, job_name=job_name)
def check_job_queue_exist_with_retry(self, job_name, retry_times=3):
"""
        :param job_name: name of the Jenkins job
        :param retry_times: number of retries when the connection fails, default 3
        :return: True if the job currently has queued items,
                 False if the queue is empty,
                 "timeout" if the retry limit is exceeded
"""
return self.function_with_retry(retry_times, self.check_job_queue_exist, job_name=job_name)
def jenkins_get_build_info_with_retry(self, job_name, job_build, retry_times=3):
"""
        Get the build info of a job.
        @:param job_name: name of the Jenkins job
        @:param job_build: build number of the Jenkins job
        :return: same as get_build_info,
                 or "timeout" if the retry limit is exceeded
"""
return self.function_with_retry(retry_times, self.jenkins_get_build_info, job_name=job_name, job_build=job_build)
def jenkins_get_job_info_with_retry(self, job_name, retry_times=3):
"""
        Get the job info.
:param job_name:
:param retry_times:
:return:
"""
return self.function_with_retry(retry_times, self.jenkins_get_job_info, job_name=job_name)
@staticmethod
def function_with_retry(retry_times, fun, **kwargs):
i_retry_time = 0
while i_retry_time <= retry_times:
try:
return fun(**kwargs)
except exceptions.ConnectionError:
i_retry_time = i_retry_time + 1
                log_content = "connection lost, retry attempt: {0}.".format(i_retry_time)
LoggingUtil().info(log_content)
continue
return "timeout"
def jenkins_get_job_info(self, job_name):
"""
        Get the job info.
        @:param job_name  name of the Jenkins job
        :return: same as get_job_info
"""
return self._server.get_job_info(job_name)
def jenkins_get_build_info(self, job_name, job_build):
"""
        Get the build info of a job.
        @:param job_name: name of the Jenkins job
        @:param job_build: build number of the Jenkins job
        :return: same as get_build_info
"""
return self._server.get_build_info(job_name, job_build)
def jenkins_get_queue_info(self):
"""
        Get information on all items waiting in the Jenkins server queue.
        :return: same as get_queue_info
"""
return self._server.get_queue_info()
def jenkins_get_build_console_output(self, job_name, job_build):
"""
        :param job_name: name of the Jenkins job
        :param job_build: build number of the Jenkins job
        :return: same as get_build_console_output (if the slave machine runs Windows, the returned log may contain garbled characters)
"""
return self._server.get_build_console_output(job_name, job_build)
def jenkins_get_build_console_output_url(self, job_url):
"""
        :param job_url: URL of the Jenkins build whose console log is wanted
        :return: same as get_build_console_output (if the slave machine runs Windows, the returned log may contain garbled characters)
"""
        # request the consoleText endpoint directly
url = job_url + "/consoleText"
payload = ""
response = requests.request("GET", url, data=payload, auth=(self._user, self._password))
return response.text
def jenkins_get_build_info_with_waiting(self, job_name, build_number, interval=3, timeout=180):
waiting = 0
while waiting < timeout:
try:
result = self.jenkins_get_build_info_with_retry(job_name, build_number)
return result
except jenkins.JenkinsException as e:
                # if the requested build cannot be found yet, check whether it is still queued
job_info = self.jenkins_get_job_info_with_retry(job_name)
current_build_number = job_info.get("lastBuild").get("number")
queue_item_number = int(len(job_info.get("queueItem"))) if job_info.get("queueItem") else 0
if build_number > current_build_number + queue_item_number:
raise e
else:
                    # keep waiting for the queue
sleep(interval)
waiting = waiting + interval
continue
return "timeout"
def sub_find_string_after_key(self, text, key):
text_list = text.split("\n")
for text_line in text_list:
if key in text_line:
string_after_key = text_line.partition(key)[-1]
return string_after_key
        # return False if the key is not found
return False
def find_str_after_key_from_console_output(self, job_url, key):
text = self.jenkins_get_build_console_output_url(job_url)
string_after_key = self.sub_find_string_after_key(text, key)
        # string_after_key is False if the key was not found
return string_after_key
def get_all_jobs_by_views(self, return_type="1"):
"""
Args:
return_type:
                1: return {"view_name": [job1, job2, ...]}
                2: return [job1, job2, ...]
        Returns:
            the jobs grouped according to return_type
"""
view_list = self.get_all_views()
all_tag = "all"
personal_config_tag = "Personal Config"
if all_tag in view_list:
view_list.remove(all_tag)
if personal_config_tag in view_list:
view_list.remove(personal_config_tag)
view_dict = {}
jobs_in_view = []
for item in view_list:
result = self._server._get_view_jobs(item)
view_dict[item]=[]
for jtem in result:
view_dict[item].append(jtem.get("name"))
jobs_in_view.append(jtem.get("name"))
if return_type =="1":
return view_dict
else:
return jobs_in_view
def get_jobs_not_in_views(self):
"""
Returns:
            all jobs that do not belong to any view
"""
all_jobs = self._server._get_view_jobs('all')
all_jobs_list = []
jobs_in_views = self.get_all_jobs_by_views("2")
for item in all_jobs:
if item.get("name") not in jobs_in_views:
all_jobs_list.append(item.get("name"))
return {"not in views": all_jobs_list}
def get_all_views(self):
"""
Returns:
[view_name1, view_name2,...]
"""
views = self._server.get_views()
view_list = []
for item in views:
view_list.append(item.get("name"))
return view_list
def get_jobs_time_(self, job_name_list):
job_name_timing_dict={}
for item in job_name_list:
self.jenkins_get_job_info(item)
job_name_timing_dict[item] = ""
if __name__ == '__main__':
jk_util = JenkinsUtil("http://10.112.6.207:8080", "admin", "Cvsopuser@2019")
result = jk_util.jenkins_login()
a = jk_util.build_job_with_file_parameter("4test", {"test": "555"}, {
"file.txt": "C:\\Users\\ygong.SSE\\Downloads\\package.info",
"file1.txt": "C:\\Users\\ygong.SSE\\Downloads\\packtools.sh",
})
print(a) | PypiClean |
/OTLModel/Classes/Onderdeel/HSBeveiligingscel.py | from OTLMOW.OTLModel.BaseClasses.OTLAttribuut import OTLAttribuut
from OTLMOW.OTLModel.Classes.ImplementatieElement.AIMNaamObject import AIMNaamObject
from OTLMOW.OTLModel.Datatypes.BooleanField import BooleanField
from OTLMOW.OTLModel.Datatypes.DtcDocument import DtcDocument
from OTLMOW.OTLModel.Datatypes.KlHSBeveiligingscelHoogspanningszekering import KlHSBeveiligingscelHoogspanningszekering
from OTLMOW.OTLModel.Datatypes.KlHSBeveiligingscelMerk import KlHSBeveiligingscelMerk
from OTLMOW.OTLModel.Datatypes.KlHSBeveiligingscelModelnaam import KlHSBeveiligingscelModelnaam
from OTLMOW.OTLModel.Datatypes.KlHSBeveiligingscelOverstroombeveiligingVermogenschakelaar import KlHSBeveiligingscelOverstroombeveiligingVermogenschakelaar
from OTLMOW.OTLModel.Datatypes.KlHSBeveiligingscelSchakelmateriaalKlasse import KlHSBeveiligingscelSchakelmateriaalKlasse
from OTLMOW.OTLModel.Datatypes.KlHSBeveiligingscelSchakelmateriaalType import KlHSBeveiligingscelSchakelmateriaalType
from OTLMOW.OTLModel.Datatypes.KwantWrdInAmpere import KwantWrdInAmpere
from OTLMOW.OTLModel.Datatypes.KwantWrdInJaar import KwantWrdInJaar
from OTLMOW.OTLModel.Datatypes.StringField import StringField
from OTLMOW.GeometrieArtefact.PuntGeometrie import PuntGeometrie
# Generated with OTLClassCreator. To modify: extend, do not edit
class HSBeveiligingscel(AIMNaamObject, PuntGeometrie):
"""Cel die de hoogspanningsschakelinrichting omvat. Hieronder vallen onder meer de lastscheidingsschakelaar, smeltveiligheden, aardingsschakelaar,..."""
typeURI = 'https://wegenenverkeer.data.vlaanderen.be/ns/onderdeel#HSBeveiligingscel'
"""De URI van het object volgens https://www.w3.org/2001/XMLSchema#anyURI."""
def __init__(self):
AIMNaamObject.__init__(self)
PuntGeometrie.__init__(self)
self._elektrischSchema = OTLAttribuut(field=DtcDocument,
naam='elektrischSchema',
label='elektrisch schema',
objectUri='https://wegenenverkeer.data.vlaanderen.be/ns/onderdeel#HSBeveiligingscel.elektrischSchema',
definition='Elektrisch aansluitschema van de HS beveiligingscel.',
owner=self)
self._heeftreserveZekering = OTLAttribuut(field=BooleanField,
naam='heeftreserveZekering',
label='heeft reserve zekering',
objectUri='https://wegenenverkeer.data.vlaanderen.be/ns/onderdeel#HSBeveiligingscel.heeftreserveZekering',
definition='Is er een reserve zekering aanwezig?',
owner=self)
self._hoogspanningszekering = OTLAttribuut(field=KlHSBeveiligingscelHoogspanningszekering,
naam='hoogspanningszekering',
label='hoogspanningszekering',
objectUri='https://wegenenverkeer.data.vlaanderen.be/ns/onderdeel#HSBeveiligingscel.hoogspanningszekering',
definition='Waarde van de hoogspanningszekering.',
owner=self)
self._keuringsfrequentie = OTLAttribuut(field=KwantWrdInJaar,
naam='keuringsfrequentie',
label='keuringsfrequentie',
objectUri='https://wegenenverkeer.data.vlaanderen.be/ns/onderdeel#HSBeveiligingscel.keuringsfrequentie',
definition='Frequentie (in jaar) waarmee de installatie moet onderworpen worden aan een periodieke keuring door een externe dienst voor technische controle.',
owner=self)
self._merk = OTLAttribuut(field=KlHSBeveiligingscelMerk,
naam='merk',
label='merk',
objectUri='https://wegenenverkeer.data.vlaanderen.be/ns/onderdeel#HSBeveiligingscel.merk',
definition='Merk van de HS beveiligingscel.',
owner=self)
self._modelnaam = OTLAttribuut(field=KlHSBeveiligingscelModelnaam,
naam='modelnaam',
label='modelnaam',
objectUri='https://wegenenverkeer.data.vlaanderen.be/ns/onderdeel#HSBeveiligingscel.modelnaam',
definition='Modelnaam van de HS beveiligingscel.',
owner=self)
self._overstroombeveiligingInstelwaarde = OTLAttribuut(field=KwantWrdInAmpere,
naam='overstroombeveiligingInstelwaarde',
label='overstroombeveiliging instelwaarde',
objectUri='https://wegenenverkeer.data.vlaanderen.be/ns/onderdeel#HSBeveiligingscel.overstroombeveiligingInstelwaarde',
definition='Instelwaarde van de overstroombeveiliging.',
owner=self)
self._overstroombeveiligingType = OTLAttribuut(field=StringField,
naam='overstroombeveiligingType',
label='overstroombeveiliging type',
objectUri='https://wegenenverkeer.data.vlaanderen.be/ns/onderdeel#HSBeveiligingscel.overstroombeveiligingType',
definition='Type overstroombeveiliging.',
owner=self)
self._overstroombeveiligingVermogenschakelaar = OTLAttribuut(field=KlHSBeveiligingscelOverstroombeveiligingVermogenschakelaar,
naam='overstroombeveiligingVermogenschakelaar',
label='overstroombeveiliging vermogenschakelaar',
objectUri='https://wegenenverkeer.data.vlaanderen.be/ns/onderdeel#HSBeveiligingscel.overstroombeveiligingVermogenschakelaar',
definition='Directe of indirecte overstroombeveiliging van de vermogenschakelaar.',
owner=self)
self._schakelmateriaalKlasse = OTLAttribuut(field=KlHSBeveiligingscelSchakelmateriaalKlasse,
naam='schakelmateriaalKlasse',
label='schakelmateriaal klasse',
objectUri='https://wegenenverkeer.data.vlaanderen.be/ns/onderdeel#HSBeveiligingscel.schakelmateriaalKlasse',
definition='Klasse van het schakelmateriaal volgens Synergrid.',
owner=self)
self._schakelmateriaalType = OTLAttribuut(field=KlHSBeveiligingscelSchakelmateriaalType,
naam='schakelmateriaalType',
label='schakelmateriaal type',
objectUri='https://wegenenverkeer.data.vlaanderen.be/ns/onderdeel#HSBeveiligingscel.schakelmateriaalType',
definition='Type van schakelmateriaal.',
owner=self)
@property
def elektrischSchema(self):
"""Elektrisch aansluitschema van de HS beveiligingscel."""
return self._elektrischSchema.get_waarde()
@elektrischSchema.setter
def elektrischSchema(self, value):
self._elektrischSchema.set_waarde(value, owner=self)
@property
def heeftreserveZekering(self):
"""Is er een reserve zekering aanwezig?"""
return self._heeftreserveZekering.get_waarde()
@heeftreserveZekering.setter
def heeftreserveZekering(self, value):
self._heeftreserveZekering.set_waarde(value, owner=self)
@property
def hoogspanningszekering(self):
"""Waarde van de hoogspanningszekering."""
return self._hoogspanningszekering.get_waarde()
@hoogspanningszekering.setter
def hoogspanningszekering(self, value):
self._hoogspanningszekering.set_waarde(value, owner=self)
@property
def keuringsfrequentie(self):
"""Frequentie (in jaar) waarmee de installatie moet onderworpen worden aan een periodieke keuring door een externe dienst voor technische controle."""
return self._keuringsfrequentie.get_waarde()
@keuringsfrequentie.setter
def keuringsfrequentie(self, value):
self._keuringsfrequentie.set_waarde(value, owner=self)
@property
def merk(self):
"""Merk van de HS beveiligingscel."""
return self._merk.get_waarde()
@merk.setter
def merk(self, value):
self._merk.set_waarde(value, owner=self)
@property
def modelnaam(self):
"""Modelnaam van de HS beveiligingscel."""
return self._modelnaam.get_waarde()
@modelnaam.setter
def modelnaam(self, value):
self._modelnaam.set_waarde(value, owner=self)
@property
def overstroombeveiligingInstelwaarde(self):
"""Instelwaarde van de overstroombeveiliging."""
return self._overstroombeveiligingInstelwaarde.get_waarde()
@overstroombeveiligingInstelwaarde.setter
def overstroombeveiligingInstelwaarde(self, value):
self._overstroombeveiligingInstelwaarde.set_waarde(value, owner=self)
@property
def overstroombeveiligingType(self):
"""Type overstroombeveiliging."""
return self._overstroombeveiligingType.get_waarde()
@overstroombeveiligingType.setter
def overstroombeveiligingType(self, value):
self._overstroombeveiligingType.set_waarde(value, owner=self)
@property
def overstroombeveiligingVermogenschakelaar(self):
"""Directe of indirecte overstroombeveiliging van de vermogenschakelaar."""
return self._overstroombeveiligingVermogenschakelaar.get_waarde()
@overstroombeveiligingVermogenschakelaar.setter
def overstroombeveiligingVermogenschakelaar(self, value):
self._overstroombeveiligingVermogenschakelaar.set_waarde(value, owner=self)
@property
def schakelmateriaalKlasse(self):
"""Klasse van het schakelmateriaal volgens Synergrid."""
return self._schakelmateriaalKlasse.get_waarde()
@schakelmateriaalKlasse.setter
def schakelmateriaalKlasse(self, value):
self._schakelmateriaalKlasse.set_waarde(value, owner=self)
@property
def schakelmateriaalType(self):
"""Type van schakelmateriaal."""
return self._schakelmateriaalType.get_waarde()
@schakelmateriaalType.setter
def schakelmateriaalType(self, value):
self._schakelmateriaalType.set_waarde(value, owner=self) | PypiClean |
/AI%20%20Education%20Tools-1.9.11.tar.gz/AI Education Tools-1.9.11/src/pet/textbook1/chapter.09.文本数据处理与分析.ipynb | Use a regular expression for string pattern validation. For example: a bank only allows a 6-digit number as a credit-card password; if the password contains non-digits, or has more or fewer than 6 digits, report that the password is invalid.
```
import re
pwd=input('请输入密码(6位数字):')
result = re.match("^\d{6}$",pwd)
print('密码无效!') if result==None else print('密码合法!')
import re
s='''
Select the exam:
(A)GRE (B)TOEFL (C) IELTS (D)PETS
'''
re.findall('\s?\([A-D]\)\s*(\w*)\s?',s)
import re
s='''
Select the exam:
(A)GRE (B)TOEFL (C) IELTS (D)PETS
'''
re.split('\s?\([A-D]\)\s*',s)[1:]
```
Extract specific information from a string. For example: find the precipitation figures in a regional weather-forecast message.
Approach: first, determine the regular expression that matches precipitation values, '\d+~\d+毫米'; then apply it with the re module to locate the matching substrings.
```
import re
s='预计,7月22日08时至23日08时,陕西南部、河南中东部、山东中南部、安徽北部、江苏北部、湖北西部、重庆北部等地部分地区有大到暴雨(60~90毫米),' \
'其中,山东中部等地局地有大暴雨(100~140毫米)。'
p=re.compile(r'\d+~\d+毫米')
print(f'{p.search(s)=}')
print(f'{p.findall(s)}')
for i in p.finditer(s):
print('\t',i)
```
Demonstrate splitting a string with a regular expression. For example: split a string on spaces, colons, commas and Chinese enumeration commas at the same time, using the pattern '[\s:,、]'.
```
content='''原始数据 来源:单位,公交集团,数据描述:线路名称、方向、站点序号、站点名称'''
import re
p=re.compile('[\s:,、]')
d=p.split(content)
print(d,len(d))
```
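When delimiters occur back to back, re.split can leave empty strings in the result. A small follow-up sketch, reusing the content string and the compiled pattern p from above, that drops the empty fields:
```
d = [field for field in p.split(content) if field]
print(d, len(d))
```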
A string holds Xiaoming's exam scores for each subject, with the total still to be filled in.
Given: the string '小明的考试成绩,语文:65.5 数学:72 英语:83.5 总分:{}' (Xiaoming's results: Chinese 65.5, Maths 72, English 83.5, total {}).
Use regular expressions to compute Xiaoming's total score and fill it in.
Solving this requires two regular expressions: one to extract each subject's score, and another to match the total's placeholder "{}"; the computed total then replaces that "{}" placeholder.
```
content='小明的考试成绩,语文:65.5 数学:72 英语:83.5 总分:{}'
import re
p=re.compile(r'\d+\.?\d')
marks=map(float,p.findall(content))
total=sum(marks)
pp=re.compile(r'\{\}')
pp.sub(str(total),content)
```
Examples of using flags follow. For instance, when searching for a character you can choose case-insensitive matching.
```
import re
result = re.findall("c","ICCC")
print(result)
import re
result = re.findall("c","ICCC",re.I)
print(result)
```
An example with and without Unicode-aware matching:
```
import re
# match word characters, including Unicode characters
target_str = "中国 China 世界 are friends"
# without re.A / re.ASCII
result = re.findall(r"\b\w{2,}\b", target_str)
print(result)
# ASCII-only matching with re.A / re.ASCII
result = re.findall(r"\b\w{2,}\b", target_str, re.A)
print(result)
```
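Another commonly used flag is re.M (re.MULTILINE), which makes ^ and $ match at the beginning and end of every line instead of only the whole string. A small illustrative sketch:
```
import re
text = '语文:65.5\n数学:72\n英语:83.5'
print(re.findall(r'^\w+', text))        # without re.M: only the first line matches
print(re.findall(r'^\w+', text, re.M))  # with re.M: every line matches
```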
Demonstrate processing a search result with the regular-expression Match object.
```
import re
string = "Shnu is guilin road.100, and postcode is 200234"
p=re.compile(r"(\d+).+ (\d+)")
match = p.search( string)
print(match.expand(r"shnu 在桂林路: \1 and 邮编 : \2"))
```
Appending "?" to a quantifier in a regular expression makes the match non-greedy.
```
import re
re.findall('a{2,3}','a aa aaa aaaa')
import re
re.findall('a{2,3}?','a aa aaa aaaa')
```
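The same non-greedy behaviour applies to * and +; the classic case is .* versus .*? when extracting text between delimiters. A short sketch:
```
import re
s = '<b>Python</b> and <b>regex</b>'
print(re.findall('<b>.*</b>', s))   # greedy: one long match
print(re.findall('<b>.*?</b>', s))  # non-greedy: two separate matches
```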
Filter the punctuation characters out of a string.
```
import re
s = "str中333_国。人们、\ing,、. With. Pun~ctuation?"
# to filter whitespace as well, use r'[^\w]'
s = re.sub(r'[^\w\s]','',s)
print(s)
```
To use a Pattern object, first compile the regular expression, then use the compiled object to match, search, split or substitute in strings. For example:
```
import re
string = "Shnu is guilin road.100, and postcode is 200234"
p=re.compile(r"(\d+).+ (\d+)")
match = p.search( string)
print(match.expand(r"shnu 在桂林路: \1 and 邮编 : \2"))
```
With the re module's functions there is no need to compile the regular expression first: the two steps above are merged into one, and the pattern plus its arguments are passed straight to the function. For example, the code above can be rewritten as:
```
import re
string = "Shnu is guilin road.100, and postcode is 200234"
match = re.search(r"(\d+).+ (\d+)", string)
print(match.expand(r"shnu 在桂林路: \1 and 邮编 : \2"))
```
Use the re module functions to do the same task. Given the string '小明的考试成绩,语文:65.5 数学:72 英语:83.5 总分:{}', where the total is still to be filled in, use regular expressions to compute Xiaoming's total and fill it in.
Solving this requires two regular expressions: '\d+\.?\d' to extract each subject's score, and '\{\}' to locate the placeholder {} where the total is written.
```
content='小明的考试成绩,语文:65.5 数学:72 英语:83.5 总分:{}'
import re
marks=map(float,re.findall(r'\d+\.?\d',content))
total=sum(marks)
re.sub(r'\{\}',str(total),content)
```
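Because the placeholder here happens to be the literal "{}", the same result can also be obtained without the second regular expression by calling str.format on the original string; a short sketch:
```
import re
content='小明的考试成绩,语文:65.5 数学:72 英语:83.5 总分:{}'
total=sum(map(float,re.findall(r'\d+\.?\d',content)))
print(content.format(total))
```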
Extract specific information from complex text. Below is a vote-chain message from a residential compound's WeChat group; use regular expressions to extract the vote number, the owner's unit number and the vote itself. The message follows (it is long, so only part of the data is shown):
```
votes= '''
#接龙
投票接龙
1. 房号+姓名+反对/赞成/弃权
2. 100 神仆 赞成
3. 184朱良 赞成
4. 118号 反对
5. 97号 弃权
6. 62号(不能退钱就赞成,可以退钱就算了,不想烦)
7. 174号 赞成
8. 86-海鱼 反对(1来历尴尬;2过于破旧,维修维护成本未知,建议及时止损。如果无法退款,已花的费用众筹算我一份)
9. 223 九凤 赞同
10. 126一郑桂华 赞同
11. 247 大卫林 赞同
12. 128号孙伟 弃权(照顾个别业主,可以放到不显眼处)
13. 禾亮188 赞同
14. 168茅 赞同
15. 229 亚梅 赞同
16. 109-21赞同
17. 233林 赞同 (为了照顾少数人位置重新协商)
18. 129号 赞同
19. 136号 赞成
20. Xing 31号 赞同 希望小区越来越好,支持所有正能量的行为!
21. 120号 赞成(位置为照顾个别人想法,可以协商)
22. 42号ringing 反对,和小区建筑风格不符
23. 245号 赞成
24. 83小宝 反对
25. 3号 反对
26. 242 赞成、英雄不问出处,正能压邪!
27. 瑞华1号 赞成
28. 108-301 赞同
29. 227赞成
30. 224严,赞同!墓区边的房子都买了,还怕这个!就算从风水讲,墓区的东西面还是好风水。原先比当今小区还要乱的时候,就有热心的业主捐了五六块镜子,放在转角处,改善小区道路行车安全,经过几届业委会和全体正常交物业管理费业主的共同努力,小区面貌已有较大的改善,愿意为小区建设奉献的行为理应得到鼓励和支持!
31. 青青翠竹 赞同
32. 青青翠竹 赞同88号 南赞同
33. 南88 赞同
34. 78-安妮 弃权(既然已经来了后续协商更新外观或者位置就行)
35. 139-常 赞同
36. 143徐 赞同
37. 157号 赞同
38. 19-rongying 反对,和小区风格不搭
39. 106- 赞同 喜欢马车 无论来自哪里都喜欢
40. 62号叶师傅 赞同
41. 241~赵永 弃权(出发点是好的,但随意性强,没有遵循小区基本的议事规则,没有事先征询大多数业主意见。)
42. 127-凌耀初 赞同!(由于马儿和马车锈烂严重,希望好好修补。另,来历也确实是有点尴尬,建议修复时颜色重新考虑)。通过这件事情如能形成小区的议事规则,如能形成网络投票的新机制,那将大大提高业主大会和业委会的决策效率,那是一件大好事!我们小区急需做的大事还有不少~
43. 108-402陈 弃权(不论结果怎么样,至少体现了办事透明度和业主参与度,是好事。)
44. 110-401可可 赞成(本来就是业委会牵头做的事情,也是为了改善小区环境,如果每样小事都需要全体业主投票,业主们就太累了)
45. 72号 赞同
46. 76号 赞同
47. 华爷140 弃权
48. 74号陆 赞同
49. 185-麻辣面 弃权
50. 202号王焱 赞成
51. 61-芊茉 赞同
52. 151田 赞同
53. 21-夏 赞同
54. 117 赞同
55. 9号 弃权 虽然参加了众筹,但是的确不知道还有那么多邻居没有进新群,不知道众筹这个事;虽然初心是为了美丽家园做出贡献,但的确不知道青博馆大门开在海湾园内;虽然放在海湾园里的东西肯定不会全是祭品(比如园区办公室的办公用品、摆设等等),但他的确是海湾园里出来的;虽然我不信邪,但的确有人会觉得这个晦气。
56. 115-402 赞同 心中为阳处处阳,心中为阴处处阴,心灵纯洁一点就不会有那么多的事情了
57. 静80 反对放在大门口,可以改个地方放吗?听说是海湾园里出来的的确会让人觉得晦气。
58. 艺嘉 赞同
59. 114-402 赞同
60. 219号戴 赞同。
61. 8-陈 赞同(既来之则安之)
62. 172杰 赞同(是饰品非祭品)
63. 148号艺嘉 赞成
64. 152CQ 赞成
65. 211号 赞成
66. 10-嘟嘟爸 赞成
67. 135 反对。这种材质注定了保养翻新不会只有一次,这一次大家众筹了那么下次呢?如果不翻新,那么一到小区门口就会感到这个小区的破败,如果翻新,那么钱从哪里出?因为不赞同,所以后续费用也不愿意承担。桃花岛上的亭子想要翻新我看大家都想选一劳永逸的材质,为什么在小区门口要放一个需要反复翻新的?
68. 178-冰姐 赞成,小区要做成一件事太难了
69. 217 赞同
70. 15洪虹 弃权
71. 55号 赞成
认知的差异性产生了多样性的思想碰撞现象,我思故我在
72. 105号301 赞成
73. 84-wang 弃权
'''
import re
import pandas as pd
from pet.data import generator
votes=generator.votes
votes=re.sub('赞同', '赞成', votes)
results=re.findall('(\d+)\.\s\D{,6}\s*(\d+[--号]?\d*).*(反对|赞成|弃权|赞同)+', votes, re.MULTILINE)
print(results)
print(len(results))
df = pd.DataFrame(results, columns=['序号','门牌号', '投票'])
with pd.ExcelWriter('小区投票与统计.xlsx') as writer:
df.to_excel(writer, sheet_name='投票结果')
df['投票'].value_counts().to_excel(writer,sheet_name='统计结果')
```
A demonstration of jieba's three Chinese word-segmentation modes.
```
import jieba
content="老王在阳光海岸小区写信用卡消费记录。"
seg_list = jieba.cut(content, cut_all=False)
print(f"精准模式(默认): " + "/".join(seg_list))
seg_list = jieba.cut(content, cut_all=True)
print("全模式: " + "/ ".join(seg_list))
seg_list = jieba.cut_for_search(content)
print("搜索引擎模式: " + "/ ".join(seg_list))
```
To use a user dictionary, load it before segmenting with jieba.load_userdict("userdict1.txt"), or add words dynamically with jieba.add_word('阳光海岸'). To delete a word dynamically, use jieba.del_word(word), which removes a user-defined word.
```
import jieba
content="老王在阳光海岸小区写信用卡消费记录。"
jieba.add_word('阳光海岸')
seg_list = jieba.cut(content, cut_all=False)
print(f"精准模式(默认): " + "/".join(seg_list))
seg_list = jieba.cut(content, cut_all=True)
print("全模式: " + "/ ".join(seg_list))
seg_list = jieba.cut_for_search(content)
print("搜索引擎模式: " + "/ ".join(seg_list))
t='''
教育部关于2020年春季学期延期开学的通知
经研究决定,2020年春季学期延期开学,具体通知如下。
一、部属各高等学校适当推迟2020年春季学期开学时间,具体开学时间与当地高校开学时间保持一致,并报教育部备案。春节返乡学生未经学校批准不要提前返校。其他中央部门所属高校可参照执行。
二、地方所属院校、中小学校、幼儿园等学校春季学期开学时间,由当地教育行政部门按照地方党委和政府统一部署确定。
三、各类学校要加强寒假期间对学生学习、生活的指导,要求在家不外出、不聚会、不举办和参加集中性活动。对寒假在校和自行返校的学生,要切实做好疫情防控工作。要做好开学后疫情防控工作预案,建立师生流动台账,明确防控工作要求,加大环境卫生整治力度,全面做好疫情防控工作。'''
import jieba.analyse
jieba.analyse.extract_tags(t, topK=5, withWeight=True, allowPOS=())
```
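A minimal sketch of the other two user-dictionary operations mentioned above; userdict1.txt is a hypothetical file with one custom word per line, so the load_userdict call is left commented out:
```
import jieba
# jieba.load_userdict('userdict1.txt')  # load a custom dictionary file (hypothetical path)
jieba.add_word('阳光海岸')
print('/'.join(jieba.cut('老王在阳光海岸小区写信用卡消费记录。')))
jieba.del_word('阳光海岸')  # remove the dynamically added word again
print('/'.join(jieba.cut('老王在阳光海岸小区写信用卡消费记录。')))
```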
Generate a word-cloud image from the raw string content.
```
import wordcloud
w=wordcloud.WordCloud(width=800,
height=600,
background_color='white',
font_path='msyh.ttc')
content="""
新冠肺炎疫情正在美国各州蔓延,美国总统特朗普期望美国经济能够在复活节到来前得到“重启”。这一言论受到了民主党总统候选人、前副总统拜登的批评,拜登在接受采访时对特朗普的言论评价道:“如果你想长期破坏经济,那就让这(疫情)再度暴发吧。我们现在甚至还没有减缓疫情增长的趋势,听到总统这样说真是令人失望。他还是不要再说话了,多听专家的意见吧。”拜登还调侃道:“如果可能的话,我还想明天就进政府当上总统呢。”
拜登指出,目前美国疫情形势加重是因为“在应该响应的时候没有做出行动”,并呼吁特朗普把民众的健康作为工作重心。同时,拜登还建议特朗普政府多遵循国家过敏症与传染病研究所主任福西等医疗专家的建议,让民众保持社交距离,并且为控制疫情做好充分工作。
据美国约翰斯·霍普金斯大学数据显示,截至北京时间2020年3月25日12时30分左右,美国累计确诊新冠肺炎病例55222例,累计死亡797例。。
"""
w.generate(content)
w.to_file('result.jpg')
```

Segment the Chinese string with jieba first, then generate the word cloud.
```
import jieba
txtlist = jieba.lcut(content)
string = " ".join(txtlist)
w.generate(string)
# export the word-cloud image to the current folder
w.to_file('result-jieba.jpg')
```

```
import imageio
mk = imageio.imread("usa.jpg")
w= wordcloud.WordCloud(width=600,height=300,
background_color='white',font_path='msyh.ttc',
contour_width=1,contour_color='steelblue',mask=mk)
w.generate(string)
w.to_file('result-jieba-usa.jpg')
```

Use SnowNLP to extract keywords and a summary from a Chinese string.
```
t='''
教育部关于2020年春季学期延期开学的通知
经研究决定,2020年春季学期延期开学,具体通知如下。
一、部属各高等学校适当推迟2020年春季学期开学时间,具体开学时间与当地高校开学时间保持一致,并报教育部备案。春节返乡学生未经学校批准不要提前返校。其他中央部门所属高校可参照执行。
二、地方所属院校、中小学校、幼儿园等学校春季学期开学时间,由当地教育行政部门按照地方党委和政府统一部署确定。
三、各类学校要加强寒假期间对学生学习、生活的指导,要求在家不外出、不聚会、不举办和参加集中性活动。对寒假在校和自行返校的学生,要切实做好疫情防控工作。要做好开学后疫情防控工作预案,建立师生流动台账,明确防控工作要求,加大环境卫生整治力度,全面做好疫情防控工作。'''
import snownlp
s = snownlp.SnowNLP(t)
print('关键词',s.keywords(limit=5))
print('摘要',s.summary(limit=2))
```
Use SnowNLP to run sentiment analysis on a Chinese string.
```
import snownlp
word = snownlp.SnowNLP('为中华崛起而读书!!')
print(word.tf)
print(word.pinyin)
print(word.keywords())
feeling = word.sentiments
print(feeling)
```
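Besides tf, pinyin, keywords and sentiments, a SnowNLP object also exposes word segmentation and sentence splitting; a brief sketch:
```
import snownlp
s = snownlp.SnowNLP('为中华崛起而读书!今天天气很好。')
print(s.words)      # word segmentation
print(s.sentences)  # sentence splitting
```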
Retrieve the saved Wi-Fi passwords from the wireless profiles (on Windows, via netsh).
```
import subprocess
import re
import locale
lcode = locale.getpreferredencoding()
def get_wifi_password_by_profile():
s = subprocess.check_output(['netsh', 'wlan', 'show', 'profile']).decode(lcode)
wifi_ssid=re.findall(':\s(.+)\r',s)
print(wifi_ssid)
info = {}
for i in wifi_ssid:
profile_info = subprocess.check_output(
['netsh', 'wlan', 'show', 'profile', i, 'key=clear']).decode(lcode)
        pwd = re.findall(r'(?:关键内容|Key Content)\s+:\s(\w+)', profile_info)  # group alternation, not a character class
info[i] = pwd
return info
if __name__ == '__main__':
d = get_wifi_password_by_profile()
print(d)
```
| PypiClean |
/Mopidy-Tubeify-0.1.0.tar.gz/Mopidy-Tubeify-0.1.0/mopidy_tubeify/spotify.py | import json
from bs4 import BeautifulSoup as bs
from mopidy_tubeify import logger
from mopidy_tubeify.data import find_in_obj
from mopidy_tubeify.serviceclient import ServiceClient
from mopidy_tubeify.yt_matcher import search_and_get_best_match
class Spotify(ServiceClient):
def get_spotify_headers(self, endpoint=r"https://open.spotify.com/"):
# Getting the access token first to send it with the header to the api endpoint
page = self.session.get(endpoint)
soup = bs(page.text, "html.parser")
logger.debug(f"get_spotify_headers base url: {endpoint}")
access_token_tag = soup.find("script", {"id": "config"})
json_obj = json.loads(access_token_tag.contents[0])
access_token_text = json_obj["accessToken"]
self.session.headers.update(
{
"authorization": f"Bearer {access_token_text}",
"referer": endpoint,
"accept": "application/json",
"app-platform": "WebPlayer",
}
)
return
def get_users_details(self, users):
self.get_spotify_headers()
def job(user):
endpoint = f"https://api.spotify.com/v1/users/{user}"
data = self.session.get(endpoint).json()
data["name"] = data["display_name"]
return data
results = []
[results.append(job(user)) for user in users]
return results
def get_user_playlists(self, user):
endpoint = f"https://api.spotify.com/v1/users/{user}/playlists"
self.get_spotify_headers()
data = self.session.get(endpoint).json()
playlists = data["items"]
return [
{"name": playlist["name"], "id": playlist["id"]}
for playlist in playlists
]
def get_playlists_details(self, playlists):
self.get_spotify_headers()
def job(playlist):
endpoint = f"https://api.spotify.com/v1/playlists/{playlist}"
data = self.session.get(endpoint).json()
playlist_name = data["name"]
return {"name": playlist_name, "id": playlist}
results = []
[results.append(job(playlist)) for playlist in playlists]
return results
def get_playlist_tracks(self, playlist):
endpoint = f"https://api.spotify.com/v1/playlists/{playlist}"
self.get_spotify_headers()
data = self.session.get(endpoint).json()
items = data["tracks"]["items"]
tracks = [
{
"song_name": item["track"]["name"],
"song_artists": [
artist["name"] for artist in item["track"]["artists"]
],
"song_duration": item["track"]["duration_ms"] // 1000,
"isrc": item["track"]["external_ids"].get("isrc"),
}
for item in items
if item["track"]
]
return search_and_get_best_match(tracks, self.ytmusic)
def get_service_homepage(self):
endpoint = r"https://api.spotify.com/v1/views/desktop-home"
self.get_spotify_headers()
data = self.session.get(endpoint).json()
playlists = list(find_in_obj(data, "type", "playlist"))
return [
{"name": playlist["name"], "id": playlist["id"]}
for playlist in playlists
] | PypiClean |
/GAqap-0.0.1.tar.gz/GAqap-0.0.1/qapGA/GeneticAlgorithm.py | import sys
from GeneratePopulation import Generate_Initial_Population
from Fitness import Cost_Function
from Selection import Selection_Function
from Mutation import Mutation_Function
from Crossover import Crossover_Function
def GeneticAlgorithm(problem_size, population_size, distances, flows, number_of_iterations):
# generate initial population
population = Generate_Initial_Population(problem_size, population_size)
solution = int(sys.maxsize)
next_generation = []
n = 0
while n < number_of_iterations:
        # compute the cost (fitness) for each individual in the population
population = Cost_Function(population=population, distances=distances, flows=flows)
# sort population according to fitness score
population.sort(key = lambda x: x[1])
# get fittest data
fittest_data = list.copy(population[0])
# check for the fittest data and print it out
if fittest_data[1] < solution:
result = list.copy(fittest_data)
solution = fittest_data[1]
print("\nSolution for iteration - " + str(n))
print(result)
while len(next_generation) < len(population):
# use selection fucntion to get 2 fit chromosomes
data1 = Selection_Function(population)
data2 = Selection_Function(population)
# crossover the 2 chromosome
crossed_over_data = Crossover_Function(data1, data2)
# mutate both chromosomes
offspring1 = Mutation_Function(crossed_over_data[0])
offspring2 = Mutation_Function(crossed_over_data[1])
# add offsprings to next generation
next_generation.append(offspring1)
next_generation.append(offspring2)
# repeat iteration with new generation
population = next_generation
next_generation = []
n+=1
# print final result
print("Final solution after " + str(n) +" iterations = ")
print(result)
return result | PypiClean |
/OctoBot-Trading-2.4.23.tar.gz/OctoBot-Trading-2.4.23/octobot_trading/personal_data/orders/order_adapter.py | import math
import octobot_trading.constants as constants
import octobot_trading.exchanges as exchanges
import octobot_trading.personal_data as personal_data
from octobot_trading.enums import ExchangeConstantsMarketStatusColumns as Ecmsc
def adapt_price(symbol_market, price):
maximal_price_digits = symbol_market[Ecmsc.PRECISION.value].get(
Ecmsc.PRECISION_PRICE.value,
constants.CURRENCY_DEFAULT_MAX_PRICE_DIGITS)
return trunc_with_n_decimal_digits(price, maximal_price_digits)
def adapt_quantity(symbol_market, quantity):
maximal_volume_digits = symbol_market[Ecmsc.PRECISION.value].get(
Ecmsc.PRECISION_AMOUNT.value, 0)
return trunc_with_n_decimal_digits(quantity, maximal_volume_digits)
def trunc_with_n_decimal_digits(value, digits): # TODO migrate to commons
try:
# force exact representation
return float("{0:.{1}f}".format(math.trunc(value * 10 ** digits) / (10 ** digits), digits if digits > 1 else 1))
except ValueError:
return value
def adapt_order_quantity_because_quantity(limiting_value, max_value, quantity_to_adapt, price, symbol_market):
orders = []
nb_full_orders = limiting_value // max_value
rest_order_quantity = limiting_value % max_value
after_rest_quantity_to_adapt = quantity_to_adapt
if rest_order_quantity > 0:
after_rest_quantity_to_adapt -= rest_order_quantity
valid_last_order_quantity = adapt_quantity(symbol_market, rest_order_quantity)
orders.append((valid_last_order_quantity, price))
other_orders_quantity = (after_rest_quantity_to_adapt + max_value) / (nb_full_orders + 1)
valid_other_orders_quantity = adapt_quantity(symbol_market, other_orders_quantity)
orders += [(valid_other_orders_quantity, price)] * int(nb_full_orders)
return orders
def adapt_order_quantity_because_price(limiting_value, max_value, price, symbol_market):
orders = []
nb_full_orders = limiting_value // max_value
rest_order_cost = limiting_value % max_value
if rest_order_cost > 0:
valid_last_order_quantity = adapt_quantity(symbol_market, rest_order_cost / price)
orders.append((valid_last_order_quantity, price))
other_orders_quantity = max_value / price
valid_other_orders_quantity = adapt_quantity(symbol_market, other_orders_quantity)
orders += [(valid_other_orders_quantity, price)] * int(nb_full_orders)
return orders
def split_orders(total_order_price, max_cost, valid_quantity, max_quantity, price, quantity, symbol_market):
"""
Splits too big orders into multiple ones according to the max_cost and max_quantity
:param total_order_price:
:param max_cost:
:param valid_quantity:
:param max_quantity:
:param price:
:param quantity:
:param symbol_market:
:return:
"""
if max_cost is None and max_quantity is None:
raise RuntimeError("Impossible to split orders with max_cost and max_quantity undefined.")
nb_orders_according_to_cost = None
nb_orders_according_to_quantity = None
if max_cost:
nb_orders_according_to_cost = total_order_price / max_cost
if max_quantity:
nb_orders_according_to_quantity = valid_quantity / max_quantity
if nb_orders_according_to_cost is None:
# can only split using quantity
return adapt_order_quantity_because_quantity(valid_quantity, max_quantity, quantity, price, symbol_market)
elif nb_orders_according_to_quantity is None:
# can only split using price
return adapt_order_quantity_because_price(total_order_price, max_cost, price, symbol_market)
else:
if nb_orders_according_to_cost > nb_orders_according_to_quantity:
return adapt_order_quantity_because_price(total_order_price, max_cost, price, symbol_market)
return adapt_order_quantity_because_quantity(valid_quantity, max_quantity, quantity, price, symbol_market)
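# Illustrative sketch (assumed numbers, not executed): with max_cost=1000, price=100,
# a desired quantity of 25 and no max_quantity, the total cost is 2500, so split_orders
# falls back to adapt_order_quantity_because_price and returns three orders such as
# [(5.0, 100), (10.0, 100), (10.0, 100)], whose quantities sum back to 25, assuming the
# symbol_market precision allows these amounts.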
def check_and_adapt_order_details_if_necessary(quantity, price, symbol_market, fixed_symbol_data=False):
"""
Checks if order attributes are valid and try to fix it if not
:param quantity:
:param price:
:param symbol_market:
:param fixed_symbol_data:
:return:
"""
if math.isnan(quantity) or math.isnan(price) or price == 0:
return []
symbol_market_limits = symbol_market[Ecmsc.LIMITS.value]
limit_amount = symbol_market_limits[Ecmsc.LIMITS_AMOUNT.value]
limit_cost = symbol_market_limits[Ecmsc.LIMITS_COST.value]
limit_price = symbol_market_limits[Ecmsc.LIMITS_PRICE.value]
# case 1: try with data directly from exchange
if personal_data.is_valid(limit_amount, Ecmsc.LIMITS_AMOUNT_MIN.value):
min_quantity = limit_amount.get(Ecmsc.LIMITS_AMOUNT_MIN.value, math.nan)
max_quantity = None
# not all symbol data have a max quantity
if personal_data.is_valid(limit_amount, Ecmsc.LIMITS_AMOUNT_MAX.value):
max_quantity = limit_amount.get(Ecmsc.LIMITS_AMOUNT_MAX.value, math.nan)
# adapt digits if necessary
valid_quantity = adapt_quantity(symbol_market, quantity)
valid_price = adapt_price(symbol_market, price)
total_order_price = valid_quantity * valid_price
if valid_quantity < min_quantity:
# invalid order
return []
# case 1.1: use only quantity and cost
if personal_data.is_valid(limit_cost, Ecmsc.LIMITS_COST_MIN.value):
min_cost = limit_cost.get(Ecmsc.LIMITS_COST_MIN.value, math.nan)
max_cost = None
# not all symbol data have a max cost
if personal_data.is_valid(limit_cost, Ecmsc.LIMITS_COST_MAX.value):
max_cost = limit_cost.get(Ecmsc.LIMITS_COST_MAX.value, math.nan)
# check total_order_price not < min_cost
if not personal_data.check_cost(total_order_price, min_cost):
return []
# check total_order_price not > max_cost and valid_quantity not > max_quantity
elif (max_cost is not None and total_order_price > max_cost) or \
(max_quantity is not None and valid_quantity > max_quantity):
# split quantity into smaller orders
return split_orders(total_order_price, max_cost, valid_quantity,
max_quantity, valid_price, quantity, symbol_market)
else:
# valid order that can be handled by the exchange
return [(valid_quantity, valid_price)]
# case 1.2: use only quantity and price
elif personal_data.is_valid(limit_price, Ecmsc.LIMITS_PRICE_MIN.value):
min_price = limit_price.get(Ecmsc.LIMITS_PRICE_MIN.value, math.nan)
max_price = None
# not all symbol data have a max price
if personal_data.is_valid(limit_price, Ecmsc.LIMITS_PRICE_MAX.value):
max_price = limit_price.get(Ecmsc.LIMITS_PRICE_MAX.value, math.nan)
if (max_price is not None and (max_price <= valid_price)) or valid_price <= min_price:
# invalid order
return []
# check total_order_price not > max_cost and valid_quantity not > max_quantity
elif max_quantity is not None and valid_quantity > max_quantity:
# split quantity into smaller orders
return adapt_order_quantity_because_quantity(valid_quantity, max_quantity,
quantity, valid_price, symbol_market)
else:
                # valid order that can be handled by the exchange
return [(valid_quantity, valid_price)]
if not fixed_symbol_data:
# case 2: try fixing data from exchanges
fixed_data = exchanges.ExchangeMarketStatusFixer(symbol_market, price).market_status
return check_and_adapt_order_details_if_necessary(quantity, price, fixed_data,
fixed_symbol_data=True)
else:
# impossible to check if order is valid: refuse it
return []
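# Illustrative usage sketch (not executed here):
#   orders = check_and_adapt_order_details_if_necessary(quantity, price, symbol_market)
#   for order_quantity, order_price in orders:
#       ...create one order per (quantity, price) pair...
# An empty list means the requested order is invalid for this market and should be dropped.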
def add_dusts_to_quantity_if_necessary(quantity, price, symbol_market, current_symbol_holding):
"""
Adds remaining quantity to the order if the remaining quantity is too small
:param quantity:
:param price:
:param symbol_market:
:param current_symbol_holding:
:return:
"""
if price == 0:
return quantity
remaining_portfolio_amount = float("{1:.{0}f}".format(constants.CURRENCY_DEFAULT_MAX_PRICE_DIGITS,
current_symbol_holding - quantity))
remaining_max_total_order_price = remaining_portfolio_amount * price
symbol_market_limits = symbol_market[Ecmsc.LIMITS.value]
limit_amount = symbol_market_limits[Ecmsc.LIMITS_AMOUNT.value]
limit_cost = symbol_market_limits[Ecmsc.LIMITS_COST.value]
if not (personal_data.is_valid(limit_amount, Ecmsc.LIMITS_AMOUNT_MIN.value) and
personal_data.is_valid(limit_cost, Ecmsc.LIMITS_COST_MIN.value)):
fixed_market_status = exchanges.ExchangeMarketStatusFixer(symbol_market, price).market_status
limit_amount = fixed_market_status[Ecmsc.LIMITS.value][Ecmsc.LIMITS_AMOUNT.value]
limit_cost = fixed_market_status[Ecmsc.LIMITS.value][Ecmsc.LIMITS_COST.value]
min_quantity = limit_amount.get(Ecmsc.LIMITS_AMOUNT_MIN.value, math.nan)
min_cost = limit_cost.get(Ecmsc.LIMITS_COST_MIN.value, math.nan)
# check with 40% more than remaining total not to require huge market moves to sell this asset
min_cost_to_consider = min_cost * 1.4
min_quantity_to_consider = min_quantity * 1.4
if remaining_max_total_order_price < min_cost_to_consider \
or remaining_portfolio_amount < min_quantity_to_consider:
return current_symbol_holding
else:
return quantity | PypiClean |
/GeoNode-3.2.0-py3-none-any.whl/geonode/api/api.py |
import json
import time
from django.apps import apps
from django.db.models import Q
from django.conf.urls import url
from django.contrib.auth import get_user_model
from django.contrib.auth.models import Group
from django.urls import reverse
from django.contrib.contenttypes.models import ContentType
from django.conf import settings
from django.db.models import Count
from django.utils.translation import get_language
from avatar.templatetags.avatar_tags import avatar_url
from geonode import geoserver
from geonode.api.paginator import CrossSiteXHRPaginator
from geonode.api.authorization import GeoNodeStyleAuthorization, ApiLockdownAuthorization, \
GroupAuthorization, GroupProfileAuthorization
from guardian.shortcuts import get_objects_for_user
from tastypie.bundle import Bundle
from geonode.base.models import ResourceBase, ThesaurusKeyword
from geonode.base.models import TopicCategory
from geonode.base.models import Region
from geonode.base.models import HierarchicalKeyword
from geonode.base.models import ThesaurusKeywordLabel
from geonode.layers.models import Layer, Style
from geonode.maps.models import Map
from geonode.geoapps.models import GeoApp
from geonode.documents.models import Document
from geonode.groups.models import GroupProfile, GroupCategory
from django.core.serializers.json import DjangoJSONEncoder
from tastypie.serializers import Serializer
from tastypie import fields
from tastypie.resources import ModelResource
from tastypie.constants import ALL, ALL_WITH_RELATIONS
from tastypie.utils import trailing_slash
from geonode.utils import check_ogc_backend
from geonode.security.utils import get_visible_resources
FILTER_TYPES = {
'layer': Layer,
'map': Map,
'document': Document,
'geoapp': GeoApp
}
class CountJSONSerializer(Serializer):
"""Custom serializer to post process the api and add counts"""
def get_resources_counts(self, options):
if settings.SKIP_PERMS_FILTER:
resources = ResourceBase.objects.all()
else:
resources = get_objects_for_user(
options['user'],
'base.view_resourcebase'
)
resources = get_visible_resources(
resources,
options['user'],
admin_approval_required=settings.ADMIN_MODERATE_UPLOADS,
unpublished_not_visible=settings.RESOURCE_PUBLISHING,
private_groups_not_visibile=settings.GROUP_PRIVATE_RESOURCES)
subtypes = []
if resources and resources.count() > 0:
if options['title_filter']:
resources = resources.filter(title__icontains=options['title_filter'])
if options['type_filter']:
_type_filter = options['type_filter']
for label, app in apps.app_configs.items():
if hasattr(app, 'type') and app.type == 'GEONODE_APP':
if hasattr(app, 'default_model'):
_model = apps.get_model(label, app.default_model)
if issubclass(_model, _type_filter):
subtypes.append(
resources.filter(
polymorphic_ctype__model=_model.__name__.lower()))
if not isinstance(_type_filter, str):
_type_filter = _type_filter.__name__.lower()
resources = resources.filter(polymorphic_ctype__model=_type_filter)
counts = list()
if subtypes:
for subtype in subtypes:
counts.extend(
list(subtype.values(options['count_type']).annotate(count=Count(options['count_type'])))
)
else:
counts = list(resources.values(options['count_type']).annotate(count=Count(options['count_type'])))
return dict(
[(c[options['count_type']], c['count']) for c in counts if c and c['count'] and options['count_type']])
def to_json(self, data, options=None):
options = options or {}
data = self.to_simple(data, options)
counts = self.get_resources_counts(options)
if 'objects' in data:
for item in data['objects']:
item['count'] = counts.get(item['id'], 0)
# Add in the current time.
data['requested_time'] = time.time()
return json.dumps(data, cls=DjangoJSONEncoder, sort_keys=True)
class TypeFilteredResource(ModelResource):
""" Common resource used to apply faceting to categories, keywords, and
regions based on the type passed as query parameter in the form
type:layer/map/document"""
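    # Illustrative note: clients pass the type as a query parameter, e.g.
    # /api/keywords/?type=layer (the exact URL prefix depends on deployment),
    # which restricts the faceting counts to Layer resources (see FILTER_TYPES).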
count = fields.IntegerField()
def build_filters(self, filters=None, ignore_bad_filters=False):
if filters is None:
filters = {}
self.type_filter = None
self.title_filter = None
orm_filters = super(TypeFilteredResource, self).build_filters(filters)
if 'type' in filters and filters['type'] in FILTER_TYPES.keys():
self.type_filter = FILTER_TYPES[filters['type']]
else:
self.type_filter = None
if 'title__icontains' in filters:
self.title_filter = filters['title__icontains']
return orm_filters
def serialize(self, request, data, format, options=None):
if options is None:
options = {}
options['title_filter'] = getattr(self, 'title_filter', None)
options['type_filter'] = getattr(self, 'type_filter', None)
options['user'] = request.user
return super(TypeFilteredResource, self).serialize(request, data, format, options)
class TagResource(TypeFilteredResource):
"""Tags api"""
def serialize(self, request, data, format, options=None):
if options is None:
options = {}
options['count_type'] = 'keywords'
return super(TagResource, self).serialize(request, data, format, options)
class Meta:
queryset = HierarchicalKeyword.objects.all().order_by('name')
resource_name = 'keywords'
allowed_methods = ['get']
filtering = {
'slug': ALL,
}
serializer = CountJSONSerializer()
authorization = ApiLockdownAuthorization()
class ThesaurusKeywordResource(TypeFilteredResource):
"""ThesaurusKeyword api"""
thesaurus_identifier = fields.CharField(null=False)
label_id = fields.CharField(null=False)
def build_filters(self, filters={}, ignore_bad_filters=False):
"""adds filtering by current language"""
_filters = filters.copy()
id = _filters.pop('id', None)
orm_filters = super(ThesaurusKeywordResource, self).build_filters(_filters)
if id is not None:
orm_filters['id__in'] = id
if 'thesaurus' in _filters:
orm_filters['thesaurus__identifier'] = _filters['thesaurus']
return orm_filters
    def serialize(self, request, data, format, options=None):
        if options is None:
            options = {}
        options['count_type'] = 'tkeywords__id'
        return super(ThesaurusKeywordResource, self).serialize(request, data, format, options)
def dehydrate_id(self, bundle):
return bundle.obj.id
def dehydrate_label_id(self, bundle):
return bundle.obj.id
def dehydrate_thesaurus_identifier(self, bundle):
return bundle.obj.thesaurus.identifier
def dehydrate(self, bundle):
lang = get_language()
label = ThesaurusKeywordLabel.objects.filter(keyword=bundle.data['id']).filter(lang=lang)
if label.exists():
bundle.data['label_id'] = label.get().id
bundle.data['label'] = label.get().label
bundle.data['alt_label'] = label.get().label
else:
bundle.data['label'] = bundle.data['alt_label']
return bundle
class Meta:
queryset = ThesaurusKeyword.objects \
.all() \
.order_by('alt_label') \
.select_related('thesaurus')
resource_name = 'thesaurus/keywords'
allowed_methods = ['get']
filtering = {
'id': ALL,
'alt_label': ALL,
'thesaurus': ALL,
}
serializer = CountJSONSerializer()
authorization = ApiLockdownAuthorization()
class RegionResource(TypeFilteredResource):
"""Regions api"""
def serialize(self, request, data, format, options=None):
if options is None:
options = {}
options['count_type'] = 'regions'
return super(RegionResource, self).serialize(request, data, format, options)
class Meta:
queryset = Region.objects.all().order_by('name')
resource_name = 'regions'
allowed_methods = ['get']
filtering = {
'name': ALL,
'code': ALL,
}
if settings.API_INCLUDE_REGIONS_COUNT:
serializer = CountJSONSerializer()
authorization = ApiLockdownAuthorization()
class TopicCategoryResource(TypeFilteredResource):
"""Category api"""
layers_count = fields.IntegerField(default=0)
def dehydrate_layers_count(self, bundle):
request = bundle.request
obj_with_perms = get_objects_for_user(request.user,
'base.view_resourcebase').filter(polymorphic_ctype__model='layer')
filter_set = bundle.obj.resourcebase_set.filter(id__in=obj_with_perms.values('id')).filter(metadata_only=False)
if not settings.SKIP_PERMS_FILTER:
filter_set = get_visible_resources(
filter_set,
request.user if request else None,
admin_approval_required=settings.ADMIN_MODERATE_UPLOADS,
unpublished_not_visible=settings.RESOURCE_PUBLISHING,
private_groups_not_visibile=settings.GROUP_PRIVATE_RESOURCES)
return filter_set.distinct().count()
def serialize(self, request, data, format, options=None):
if options is None:
options = {}
options['count_type'] = 'category'
return super(TopicCategoryResource, self).serialize(request, data, format, options)
class Meta:
queryset = TopicCategory.objects.all()
resource_name = 'categories'
allowed_methods = ['get']
filtering = {
'identifier': ALL,
}
serializer = CountJSONSerializer()
authorization = ApiLockdownAuthorization()
class GroupCategoryResource(TypeFilteredResource):
detail_url = fields.CharField()
member_count = fields.IntegerField()
resource_counts = fields.CharField()
class Meta:
queryset = GroupCategory.objects.all()
allowed_methods = ['get']
include_resource_uri = False
filtering = {'slug': ALL,
'name': ALL}
ordering = ['name']
authorization = ApiLockdownAuthorization()
def apply_filters(self, request, applicable_filters):
filtered = super(
GroupCategoryResource,
self).apply_filters(
request,
applicable_filters)
return filtered
def dehydrate_detail_url(self, bundle):
return bundle.obj.get_absolute_url()
def dehydrate_member_count(self, bundle):
request = bundle.request
user = request.user
filtered = bundle.obj.groups.all()
if not user.is_authenticated or user.is_anonymous:
filtered = filtered.exclude(access='private')
elif not user.is_superuser:
categories_ids = user.group_list_all().values('categories')
filtered = filtered.filter(
Q(id__in=categories_ids) |
~Q(access='private')
)
return filtered.count()
def dehydrate(self, bundle):
"""Provide additional resource counts"""
request = bundle.request
counts = _get_resource_counts(
request,
resourcebase_filter_kwargs={
'group__groupprofile__categories': bundle.obj
}
)
bundle.data.update(resource_counts=counts)
return bundle
class GroupProfileResource(ModelResource):
categories = fields.ToManyField(
GroupCategoryResource,
'categories',
full=True
)
member_count = fields.CharField()
manager_count = fields.CharField()
logo_url = fields.CharField()
detail_url = fields.CharField()
class Meta:
queryset = GroupProfile.objects.all()
resource_name = 'group_profile'
allowed_methods = ['get']
filtering = {
'title': ALL,
'slug': ALL,
'categories': ALL_WITH_RELATIONS,
}
ordering = ['title', 'last_modified']
authorization = GroupProfileAuthorization()
def dehydrate_member_count(self, bundle):
"""Provide relative URL to the geonode UI's page on the group"""
return bundle.obj.member_queryset().count()
def dehydrate_manager_count(self, bundle):
"""Provide relative URL to the geonode UI's page on the group"""
return bundle.obj.get_managers().count()
def dehydrate_detail_url(self, bundle):
"""Return relative URL to the geonode UI's page on the group"""
if bundle.obj.slug:
return reverse('group_detail', args=[bundle.obj.slug])
else:
return None
def dehydrate_logo_url(self, bundle):
return bundle.obj.logo_url
class GroupResource(ModelResource):
group_profile = fields.ToOneField(
GroupProfileResource,
'groupprofile',
full=True,
null=True,
blank=True
)
resource_counts = fields.CharField()
class Meta:
queryset = Group.objects.exclude(groupprofile=None)
resource_name = 'groups'
allowed_methods = ['get']
filtering = {
'name': ALL,
'title': ALL,
'group_profile': ALL_WITH_RELATIONS,
}
ordering = ['name', 'last_modified']
authorization = GroupAuthorization()
def dehydrate(self, bundle):
"""Provide additional resource counts"""
request = bundle.request
counts = _get_resource_counts(
request,
resourcebase_filter_kwargs={'group': bundle.obj, 'metadata_only': False}
)
bundle.data.update(resource_counts=counts)
return bundle
def get_object_list(self, request):
"""
        Overridden in order to exclude the ``anonymous`` group from the list
"""
qs = super(GroupResource, self).get_object_list(request)
return qs.exclude(name="anonymous")
class ProfileResource(TypeFilteredResource):
"""Profile api"""
avatar_100 = fields.CharField(null=True)
profile_detail_url = fields.CharField()
email = fields.CharField(default='')
layers_count = fields.IntegerField(default=0)
maps_count = fields.IntegerField(default=0)
documents_count = fields.IntegerField(default=0)
current_user = fields.BooleanField(default=False)
activity_stream_url = fields.CharField(null=True)
def build_filters(self, filters=None, ignore_bad_filters=False):
"""adds filtering by group functionality"""
if filters is None:
filters = {}
orm_filters = super(ProfileResource, self).build_filters(filters)
if 'group' in filters:
orm_filters['group'] = filters['group']
if 'name__icontains' in filters:
orm_filters['username__icontains'] = filters['name__icontains']
return orm_filters
def apply_filters(self, request, applicable_filters):
"""filter by group if applicable by group functionality"""
group = applicable_filters.pop('group', None)
name = applicable_filters.pop('name__icontains', None)
semi_filtered = super(
ProfileResource,
self).apply_filters(
request,
applicable_filters)
if group is not None:
semi_filtered = semi_filtered.filter(
groupmember__group__slug=group)
if name is not None:
semi_filtered = semi_filtered.filter(
profile__first_name__icontains=name)
return semi_filtered
def dehydrate_email(self, bundle):
email = ''
if bundle.request.user.is_superuser:
email = bundle.obj.email
return email
def dehydrate_layers_count(self, bundle):
obj_with_perms = get_objects_for_user(bundle.request.user,
'base.view_resourcebase').filter(polymorphic_ctype__model='layer')
return bundle.obj.resourcebase_set.filter(id__in=obj_with_perms.values('id')).filter(metadata_only=False)\
.distinct().count()
def dehydrate_maps_count(self, bundle):
obj_with_perms = get_objects_for_user(bundle.request.user,
'base.view_resourcebase').filter(polymorphic_ctype__model='map')
return bundle.obj.resourcebase_set.filter(id__in=obj_with_perms.values('id')).filter(metadata_only=False)\
.distinct().count()
def dehydrate_documents_count(self, bundle):
obj_with_perms = get_objects_for_user(bundle.request.user,
'base.view_resourcebase').filter(polymorphic_ctype__model='document')
return bundle.obj.resourcebase_set.filter(id__in=obj_with_perms.values('id')).filter(metadata_only=False)\
.distinct().count()
def dehydrate_avatar_100(self, bundle):
return avatar_url(bundle.obj, 240)
def dehydrate_profile_detail_url(self, bundle):
return bundle.obj.get_absolute_url()
def dehydrate_current_user(self, bundle):
return bundle.request.user.username == bundle.obj.username
def dehydrate_activity_stream_url(self, bundle):
return reverse(
'actstream_actor',
kwargs={
'content_type_id': ContentType.objects.get_for_model(
bundle.obj).pk,
'object_id': bundle.obj.pk})
def dehydrate(self, bundle):
"""
Protects user's personal information from non staff
"""
is_owner = bundle.request.user == bundle.obj
is_admin = bundle.request.user.is_staff or bundle.request.user.is_superuser
if not (is_owner or is_admin):
bundle.data = dict(
id=bundle.data.get('id', ''),
username=bundle.data.get('username', ''),
first_name=bundle.data.get('first_name', ''),
last_name=bundle.data.get('last_name', ''),
avatar_100=bundle.data.get('avatar_100', ''),
profile_detail_url=bundle.data.get('profile_detail_url', ''),
documents_count=bundle.data.get('documents_count', 0),
maps_count=bundle.data.get('maps_count', 0),
layers_count=bundle.data.get('layers_count', 0),
)
return bundle
def prepend_urls(self):
if settings.HAYSTACK_SEARCH:
return [
url(r"^(?P<resource_name>%s)/search%s$" % (
self._meta.resource_name, trailing_slash()
),
self.wrap_view('get_search'), name="api_get_search"),
]
else:
return []
def serialize(self, request, data, format, options=None):
if options is None:
options = {}
options['count_type'] = 'owner'
return super(ProfileResource, self).serialize(request, data, format, options)
class Meta:
queryset = get_user_model().objects.exclude(Q(username='AnonymousUser') | Q(is_active=False))
resource_name = 'profiles'
allowed_methods = ['get']
ordering = ['username', 'date_joined']
excludes = ['is_staff', 'password', 'is_superuser',
'is_active', 'last_login']
filtering = {
'username': ALL,
}
serializer = CountJSONSerializer()
authorization = ApiLockdownAuthorization()
class OwnersResource(TypeFilteredResource):
"""Owners api, lighter and faster version of the profiles api"""
full_name = fields.CharField(null=True)
def dehydrate_full_name(self, bundle):
return bundle.obj.get_full_name() or bundle.obj.username
def dehydrate_email(self, bundle):
email = ''
if bundle.request.user.is_superuser:
email = bundle.obj.email
return email
def dehydrate(self, bundle):
"""
Protects user's personal information from non staff
"""
is_owner = bundle.request.user == bundle.obj
is_admin = bundle.request.user.is_staff or bundle.request.user.is_superuser
if not (is_owner or is_admin):
bundle.data = dict(id=bundle.obj.id, username=bundle.obj)
return bundle
def serialize(self, request, data, format, options=None):
if options is None:
options = {}
options['count_type'] = 'owner'
return super(OwnersResource, self).serialize(request, data, format, options)
class Meta:
queryset = get_user_model().objects.exclude(username='AnonymousUser')
resource_name = 'owners'
allowed_methods = ['get']
ordering = ['username', 'date_joined']
excludes = ['is_staff', 'password', 'is_superuser',
'is_active', 'last_login']
filtering = {
'username': ALL,
}
serializer = CountJSONSerializer()
authorization = ApiLockdownAuthorization()
class GeoserverStyleResource(ModelResource):
"""Styles API for Geoserver backend."""
body = fields.CharField(
attribute='sld_body',
use_in='detail')
name = fields.CharField(attribute='name')
title = fields.CharField(attribute='sld_title')
# layer_default_style is polymorphic, so it will have many to many
# relation
layer = fields.ManyToManyField(
'geonode.api.resourcebase_api.LayerResource',
attribute='layer_default_style',
null=True)
version = fields.CharField(
attribute='sld_version',
null=True,
blank=True)
style_url = fields.CharField(attribute='sld_url')
workspace = fields.CharField(attribute='workspace', null=True)
type = fields.CharField(attribute='type')
class Meta:
paginator_class = CrossSiteXHRPaginator
queryset = Style.objects.all()
resource_name = 'styles'
detail_uri_name = 'id'
authorization = GeoNodeStyleAuthorization()
allowed_methods = ['get']
filtering = {
'id': ALL,
'title': ALL,
'name': ALL,
'layer': ALL_WITH_RELATIONS
}
def build_filters(self, filters=None, **kwargs):
"""Apply custom filters for layer."""
filters = super(GeoserverStyleResource, self).build_filters(
filters, **kwargs)
# Convert layer__ filters into layer_styles__layer__
updated_filters = {}
for key, value in filters.items():
key = key.replace('layer__', 'layer_default_style__')
updated_filters[key] = value
return updated_filters
def populate_object(self, style):
"""Populate results with necessary fields
:param style: Style objects
:type style: Style
:return:
"""
style.type = 'sld'
return style
def build_bundle(self, obj=None, data=None, request=None, **kwargs):
"""Override build_bundle method to add additional info."""
if obj is None and self._meta.object_class:
obj = self._meta.object_class()
elif obj:
obj = self.populate_object(obj)
return Bundle(
obj=obj,
data=data,
request=request,
**kwargs)
if check_ogc_backend(geoserver.BACKEND_PACKAGE):
class StyleResource(GeoserverStyleResource):
"""Wrapper for Generic Style Resource"""
pass
def _get_resource_counts(request, resourcebase_filter_kwargs):
"""Return a dict with counts of resources of various types
The ``resourcebase_filter_kwargs`` argument should be a dict with a suitable
queryset filter that can be applied to select only the relevant
``ResourceBase`` objects to use when retrieving counts. For example::
_get_resource_counts(
request,
{
'group__slug': 'my-group',
}
)
The above function call would result in only counting ``ResourceBase``
objects that belong to the group that has ``my-group`` as slug
"""
resources = get_visible_resources(
ResourceBase.objects.filter(**resourcebase_filter_kwargs),
request.user,
request=request,
admin_approval_required=settings.ADMIN_MODERATE_UPLOADS,
unpublished_not_visible=settings.RESOURCE_PUBLISHING,
private_groups_not_visibile=settings.GROUP_PRIVATE_RESOURCES)
values = resources.values(
'polymorphic_ctype__model',
'is_approved',
'is_published',
)
qs = values.annotate(counts=Count('polymorphic_ctype__model'))
types = [
'layer',
'document',
'map',
'geoapp',
'all'
]
subtypes = []
for label, app in apps.app_configs.items():
if hasattr(app, 'type') and app.type == 'GEONODE_APP':
if hasattr(app, 'default_model'):
_model = apps.get_model(label, app.default_model)
if issubclass(_model, GeoApp):
types.append(_model.__name__.lower())
subtypes.append(_model.__name__.lower())
counts = {}
for type_ in types:
counts[type_] = {
'total': 0,
'visible': 0,
'published': 0,
'approved': 0,
}
for record in qs:
resource_type = record['polymorphic_ctype__model']
if resource_type in subtypes:
resource_type = 'geoapp'
is_visible = all((record['is_approved'], record['is_published']))
counts['all']['total'] += record['counts']
counts['all']['visible'] += record['counts'] if is_visible else 0
counts['all']['published'] += record['counts'] if record['is_published'] else 0
counts['all']['approved'] += record['counts'] if record['is_approved'] else 0
section = counts.get(resource_type)
if section is not None:
section['total'] += record['counts']
section['visible'] += record['counts'] if is_visible else 0
section['published'] += record['counts'] if record['is_published'] else 0
section['approved'] += record['counts'] if record['is_approved'] else 0
return counts | PypiClean |
/Camelot-13.04.13-gpl-pyqt.tar.gz/Camelot-13.04.13-gpl-pyqt/camelot/core/orm/entity.py | import sys
from sqlalchemy import orm, schema, sql
from sqlalchemy.ext.declarative.api import ( _declarative_constructor,
DeclarativeMeta )
from sqlalchemy.ext import hybrid
from . statements import MUTATORS
from . properties import EntityBuilder, Property
from . import Session, options
class EntityDescriptor(object):
"""
EntityDescriptor holds information about the Entity before it is
passed to Declarative. It is used to search for inverse relations
defined on an Entity before the relation is passed to Declarative.
:param entity_base: The Declarative base class used to subclass the
entity
"""
global_counter = 0
def __init__( self, entity_base ):
self.entity_base = entity_base
self.parent = None
self.relationships = []
self.has_pk = False
self._pk_col_done = False
self.builders = []
self.constraints = []
self.counter = EntityDescriptor.global_counter
EntityDescriptor.global_counter += 1
# set default value for other options
for key, value in options.options_defaults.items():
if isinstance( value, dict ):
value = value.copy()
setattr( self, key, value )
def set_entity( self, entity ):
self.entity = entity
self.module = sys.modules.get( entity.__module__ )
self.tablename = entity.__tablename__
#
# verify if a primary key was set manually
#
for key, value in entity.__dict__.items():
if isinstance( value, schema.Column ):
if value.primary_key:
self.has_pk = True
if isinstance( value, EntityBuilder ):
self.builders.append( value )
if isinstance( value, Property ):
value.entity = entity
value.name = key
# execute the builders in the order they were created
self.builders.sort( key = lambda b:b.counter )
@property
def primary_keys( self ):
return self.entity.__table__.primary_key
@property
def table_fullname( self ):
return self.entity.__tablename__
@property
def metadata( self ):
return self.entity.__table__.metadata
def create_non_pk_cols(self):
self.call_builders( 'create_non_pk_cols' )
def create_pk_cols( self ):
"""
Create primary_key columns. That is, call the 'create_pk_cols'
builders then add a primary key to the table if it hasn't already got
one and needs one.
This method is "semi-recursive" in some cases: it calls the
create_keys method on ManyToOne relationships and those in turn call
create_pk_cols on their target. It shouldn't be possible to have an
infinite loop since a loop of primary_keys is not a valid situation.
"""
if self._pk_col_done:
return
self.call_builders( 'create_pk_cols' )
base_descriptor = getattr( self.entity_base, '_descriptor', None )
if not self.has_pk and base_descriptor == None:
colname = options.DEFAULT_AUTO_PRIMARYKEY_NAME
self.add_column(
colname,
schema.Column( colname, options.DEFAULT_AUTO_PRIMARYKEY_TYPE,
primary_key = True ) )
self._pk_col_done = True
def create_properties(self):
self.call_builders( 'create_properties' )
def create_tables(self):
self.call_builders( 'create_tables' )
def finalize(self):
self.call_builders( 'finalize' )
if self.order_by:
mapper = orm.class_mapper( self.entity )
mapper.order_by = self.translate_order_by( self.order_by )
def add_column( self, key, col ):
setattr( self.entity, key, col )
if hasattr( col, 'primary_key' ) and col.primary_key:
self.has_pk = True
def add_constraint( self, constraint ):
self.constraints.append( constraint )
def append_constraints( self ):
table = orm.class_mapper( self.entity ).local_table
for constraint in self.constraints:
table.append_constraint( constraint )
def get_inverse_relation( self, rel, check_reverse=True ):
'''
Return the inverse relation of rel, if any, None otherwise.
'''
matching_rel = None
for other_rel in self.relationships:
if rel.is_inverse( other_rel ):
if matching_rel is None:
matching_rel = other_rel
else:
raise Exception(
"Several relations match as inverse of the '%s' "
"relation in entity '%s'. You should specify "
"inverse relations manually by using the inverse "
"keyword."
% (rel.name, rel.entity.__name__))
# When a matching inverse is found, we check that it has only
# one relation matching as its own inverse. We don't need the result
# of the method though. But we do need to be careful not to start an
# infinite recursive loop.
if matching_rel and check_reverse:
rel.entity._descriptor.get_inverse_relation(matching_rel, False)
return matching_rel
def add_property( self, name, prop ):
mapper = orm.class_mapper( self.entity )
        mapper.add_property( name, prop )
def call_builders(self, what):
for builder in self.builders:
if hasattr(builder, what):
getattr(builder, what)()
def find_relationship(self, name):
for rel in self.relationships:
if rel.name == name:
return rel
if self.parent:
return self.parent._descriptor.find_relationship(name)
else:
return None
def translate_order_by( self, order_by ):
if isinstance( order_by, basestring ):
order_by = [order_by]
order = []
mapper = orm.class_mapper( self.entity )
for colname in order_by:
prop = mapper.columns[ colname.strip('-') ]
if colname.startswith('-'):
prop = sql.desc( prop )
order.append( prop )
return order
class EntityMeta( DeclarativeMeta ):
"""Subclass of :class:`sqlalchmey.ext.declarative.DeclarativeMeta`. This
metaclass processes the Property and ClassMutator objects.
"""
# new is called to create a new Entity class
def __new__( cls, classname, bases, dict_ ):
#
# don't modify the Entity class itself
#
if classname != 'Entity':
entity_base = None
for base in bases:
if hasattr(base, '_decl_class_registry'):
entity_base = base
break
dict_['_descriptor'] = EntityDescriptor( entity_base )
#
# process the mutators
#
for mutator, args, kwargs in dict_.get( MUTATORS, [] ):
mutator.process( dict_, *args, **kwargs )
#
# use default tablename if none set
#
if '__tablename__' not in dict_:
dict_['__tablename__'] = classname.lower()
if '__mapper_args__' not in dict_:
dict_['__mapper_args__'] = dict()
return super( EntityMeta, cls ).__new__( cls, classname, bases, dict_ )
# init is called after the creation of the new Entity class, and can be
# used to initialize it
def __init__( cls, classname, bases, dict_ ):
from . properties import Property
if '_descriptor' in dict_:
descriptor = dict_['_descriptor']
descriptor.set_entity( cls )
for key, value in dict_.items():
if isinstance( value, Property ):
value.attach( cls, key )
cls._descriptor.create_pk_cols()
#
# Calling DeclarativeMeta's __init__ creates the mapper and
# the table for this class
#
super( EntityMeta, cls ).__init__( classname, bases, dict_ )
if '__table__' in cls.__dict__:
setattr( cls, 'table', cls.__dict__['__table__'] )
#
# Keep these functions separated from EntityBase to be able
# to reuse them in parts unrelated to EntityBase
#
def update_or_create_entity( cls, data, surrogate = True ):
mapper = orm.class_mapper( cls )
pk_props = mapper.primary_key
# if all pk are present and not None
if not [1 for p in pk_props if data.get( p.key ) is None]:
pk_tuple = tuple( [data[prop.key] for prop in pk_props] )
record = cls.query.get(pk_tuple)
if record is None:
record = cls()
else:
if surrogate:
record = cls()
else:
raise Exception("cannot create non surrogate without pk")
dict_to_entity( record, data )
return record
def dict_to_entity( entity, data ):
"""Update a mapped object with data from a JSON-style nested dict/list
structure.
:param entity: the Entity object into which to store the data
:param data: a `dict` with data to store into the entity
"""
# surrogate can be guessed from autoincrement/sequence but I guess
# that's not 100% reliable, so we'll need an override
mapper = orm.object_mapper( entity )
for key, value in data.iteritems():
if isinstance( value, dict ):
dbvalue = getattr( entity, key )
rel_class = mapper.get_property(key).mapper.class_
pk_props = orm.class_mapper( rel_class ).primary_key
# If the data doesn't contain any pk, and the relationship
# already has a value, update that record.
if not [1 for p in pk_props if p.key in data] and \
dbvalue is not None:
dict_to_entity( dbvalue, value )
else:
record = update_or_create_entity( rel_class, value)
setattr(entity, key, record)
elif isinstance(value, list) and \
value and isinstance(value[0], dict):
rel_class = mapper.get_property(key).mapper.class_
new_attr_value = []
for row in value:
if not isinstance(row, dict):
raise Exception(
'Cannot send mixed (dict/non dict) data '
'to list relationships in from_dict data.')
record = update_or_create_entity( rel_class, row)
new_attr_value.append(record)
setattr(entity, key, new_attr_value)
else:
setattr(entity, key, value)
def entity_to_dict( entity, deep = {}, exclude = [] ):
"""Generate a JSON-style nested dict/list structure from an object."""
mapper = orm.object_mapper( entity )
col_prop_names = [p.key for p in mapper.iterate_properties \
if isinstance(p, orm.properties.ColumnProperty)]
data = dict([(name, getattr(entity, name))
for name in col_prop_names if name not in exclude])
for rname, rdeep in deep.iteritems():
dbdata = getattr(entity, rname)
prop = mapper.get_property( rname )
fks = prop.remote_side
#FIXME: use attribute names (ie coltoprop) instead of column names
remote_exclude = exclude + [ c.name for c in fks ]
if dbdata is None:
data[rname] = None
elif isinstance(dbdata, list):
data[rname] = [ entity_to_dict( o, rdeep, remote_exclude ) for o in dbdata ]
else:
data[rname] = entity_to_dict( dbdata, rdeep, remote_exclude )
return data
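# Illustrative sketch (the model and relation names are assumptions): for a Person
# entity with a one-to-many 'addresses' relation,
#   entity_to_dict(person, deep={'addresses': {}}, exclude=['id'])
# returns the person's column values plus a nested list of address dicts, with the
# 'id' columns and the foreign keys pointing back to the person stripped out.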
class EntityBase( object ):
"""A declarative base class that adds some methods that used to be
available in Elixir"""
def __init__( self, *args, **kwargs ):
_declarative_constructor( self, *args, **kwargs )
# due to cascading rules and a constructor argument, the object might
# allready be in a session
if orm.object_session( self ) == None:
Session().add( self )
#
# methods below were copied from camelot.core.orm to mimic the Elixir Entity
# behavior
#
def set( self, **kwargs ):
for key, value in kwargs.iteritems():
setattr( self, key, value )
@classmethod
def update_or_create( cls, data, surrogate = True ):
return update_or_create_entity( cls, data, surrogate )
def from_dict( self, data ):
"""
Update a mapped class with data from a JSON-style nested dict/list
structure.
"""
return dict_to_entity( self, data )
def to_dict( self, deep = {}, exclude = [] ):
"""Generate a JSON-style nested dict/list structure from an object."""
return entity_to_dict( self, deep, exclude )
# session methods
def flush(self, *args, **kwargs):
return orm.object_session(self).flush([self], *args, **kwargs)
def delete(self, *args, **kwargs):
return orm.object_session(self).delete(self, *args, **kwargs)
def expire(self, *args, **kwargs):
return orm.object_session(self).expire(self, *args, **kwargs)
def refresh(self, *args, **kwargs):
return orm.object_session(self).refresh(self, *args, **kwargs)
def expunge(self, *args, **kwargs):
return orm.object_session(self).expunge(self, *args, **kwargs)
@hybrid.hybrid_property
def query( self ):
return Session().query( self.__class__ )
@query.expression
def query_expression( cls ):
return Session().query( cls )
@classmethod
def get_by(cls, *args, **kwargs):
"""
Returns the first instance of this class matching the given criteria.
This is equivalent to:
session.query(MyClass).filter_by(...).first()
"""
return Session().query( cls ).filter_by(*args, **kwargs).first()
@classmethod
def get(cls, *args, **kwargs):
"""
Return the instance of this class based on the given identifier,
or None if not found. This is equivalent to:
session.query(MyClass).get(...)
"""
return Session().query( cls ).get(*args, **kwargs) | PypiClean |
/BlockSDK-0.2.5.tar.gz/BlockSDK-0.2.5/README.md | # PYTHON REST API SDK for BlockSDK
[](https://twitter.com/BlockSdk1)
[](https://www.facebook.com/blocksdk)
[](https://pypi.python.org/pypi/BlockSDK)
[](https://pypi.python.org/pypi/BlockSDK)
[](https://docs-v2.blocksdk.com/)
__Welcome to BlockSDK PYTHON__. This repository contains BlockSDK's Python SDK and samples for its REST API.
## SDK Documentation
[Our BlockSDK-PYTHON page](https://docs.blocksdk.com/) includes all the documentation related to the Python SDK, from sample code to releases. Here are a few quick links to get you there faster.
* [BlockSDK Developer Docs]
## Prerequisites
- [deasync](https://www.npmjs.com/package/deasync) & [request](https://www.npmjs.com/package/request) extensions must be enabled
### In PYTHON
The preferred way to install the BlockSDK for Python is to use [pip](https://pypi.org/), the
package installer for Python. Simply type the following into a terminal window:
```sh
pip install BlockSDK
```
## Quick Examples
### Create a Bitcoin client
```python
from BlockSDK.blocksdk import BlockSDK
blockSDK = BlockSDK("YOUR TOKEN")
btcClient = blockSDK.createBitcoin()
```
### Get Address info
```python
addressInfo = btcClient.getAddressInfo({
"address" : "18cBEMRxXHqzWWCxZNtU91F5sbUNKhL5PX",
"rawtx" : true,
"reverse" : true,
"offset" : 0,
"limit" : 10
})
print(addressInfo)
```
### Create a Bitcoin Wallet
```python
wallet = btcClient.createWallet({
"name" : "test"
})
```
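### Putting it together
A minimal end-to-end script combining the snippets above might look like this (the token and address values are placeholders, and only the calls already shown in this README are used):
```python
from BlockSDK.blocksdk import BlockSDK
# Create the client and a Bitcoin handle with your API token
blockSDK = BlockSDK("YOUR TOKEN")
btcClient = blockSDK.createBitcoin()
# Look up an address, then create a wallet named "test"
addressInfo = btcClient.getAddressInfo({
    "address" : "18cBEMRxXHqzWWCxZNtU91F5sbUNKhL5PX",
    "limit" : 10
})
print(addressInfo)
wallet = btcClient.createWallet({"name" : "test"})
print(wallet)
```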
[BlockSDK Developer Docs]: https://docs.blocksdk.com
| PypiClean |
/LUBEAT-0.13.1-cp38-cp38-macosx_10_9_x86_64.whl/econml/iv/dml/_dml.py | import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LinearRegression, LogisticRegressionCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer
from itertools import product
from ..._ortho_learner import _OrthoLearner
from ..._cate_estimator import LinearModelFinalCateEstimatorMixin, StatsModelsCateEstimatorMixin, LinearCateEstimator
from ...inference import StatsModelsInference, GenericSingleTreatmentModelFinalInference
from ...sklearn_extensions.linear_model import StatsModels2SLS, StatsModelsLinearRegression, WeightedLassoCVWrapper
from ...sklearn_extensions.model_selection import WeightedStratifiedKFold
from ...utilities import (_deprecate_positional, get_feature_names_or_default, filter_none_kwargs, add_intercept,
cross_product, broadcast_unit_treatments, reshape_treatmentwise_effects, shape,
parse_final_model_params, deprecated, Summary)
from ...dml.dml import _FirstStageWrapper, _FinalWrapper
from ...dml._rlearner import _ModelFinal
from ..._shap import _shap_explain_joint_linear_model_cate, _shap_explain_model_cate
class _OrthoIVModelNuisance:
def __init__(self, model_y_xw, model_t_xw, model_z, projection):
self._model_y_xw = model_y_xw
self._model_t_xw = model_t_xw
self._projection = projection
if self._projection:
self._model_t_xwz = model_z
else:
self._model_z_xw = model_z
def _combine(self, W, Z, n_samples):
if Z is not None:
Z = Z.reshape(n_samples, -1)
return Z if W is None else np.hstack([W, Z])
return None if W is None else W
def fit(self, Y, T, X=None, W=None, Z=None, sample_weight=None, groups=None):
self._model_y_xw.fit(X=X, W=W, Target=Y, sample_weight=sample_weight, groups=groups)
self._model_t_xw.fit(X=X, W=W, Target=T, sample_weight=sample_weight, groups=groups)
if self._projection:
# concat W and Z
WZ = self._combine(W, Z, Y.shape[0])
self._model_t_xwz.fit(X=X, W=WZ, Target=T, sample_weight=sample_weight, groups=groups)
else:
self._model_z_xw.fit(X=X, W=W, Target=Z, sample_weight=sample_weight, groups=groups)
return self
def score(self, Y, T, X=None, W=None, Z=None, sample_weight=None, group=None):
if hasattr(self._model_y_xw, 'score'):
Y_X_score = self._model_y_xw.score(X=X, W=W, Target=Y, sample_weight=sample_weight)
else:
Y_X_score = None
if hasattr(self._model_t_xw, 'score'):
T_X_score = self._model_t_xw.score(X=X, W=W, Target=T, sample_weight=sample_weight)
else:
T_X_score = None
if self._projection:
# concat W and Z
WZ = self._combine(W, Z, Y.shape[0])
if hasattr(self._model_t_xwz, 'score'):
T_XZ_score = self._model_t_xwz.score(X=X, W=WZ, Target=T, sample_weight=sample_weight)
else:
T_XZ_score = None
return Y_X_score, T_X_score, T_XZ_score
else:
if hasattr(self._model_z_xw, 'score'):
Z_X_score = self._model_z_xw.score(X=X, W=W, Target=Z, sample_weight=sample_weight)
else:
Z_X_score = None
return Y_X_score, T_X_score, Z_X_score
def predict(self, Y, T, X=None, W=None, Z=None, sample_weight=None, group=None):
Y_pred = self._model_y_xw.predict(X=X, W=W)
T_pred = self._model_t_xw.predict(X=X, W=W)
if self._projection:
# concat W and Z
WZ = self._combine(W, Z, Y.shape[0])
T_proj = self._model_t_xwz.predict(X, WZ)
else:
Z_pred = self._model_z_xw.predict(X=X, W=W)
if (X is None) and (W is None): # In this case predict above returns a single row
Y_pred = np.tile(Y_pred.reshape(1, -1), (Y.shape[0], 1))
T_pred = np.tile(T_pred.reshape(1, -1), (T.shape[0], 1))
if not self._projection:
Z_pred = np.tile(Z_pred.reshape(1, -1), (Z.shape[0], 1))
Y_res = Y - Y_pred.reshape(Y.shape)
T_res = T - T_pred.reshape(T.shape)
if self._projection:
Z_res = T_proj.reshape(T.shape) - T_pred.reshape(T.shape)
else:
Z_res = Z - Z_pred.reshape(Z.shape)
return Y_res, T_res, Z_res
class _OrthoIVModelFinal:
def __init__(self, model_final, featurizer, fit_cate_intercept):
self._model_final = clone(model_final, safe=False)
self._original_featurizer = clone(featurizer, safe=False)
self._fit_cate_intercept = fit_cate_intercept
if self._fit_cate_intercept:
add_intercept_trans = FunctionTransformer(add_intercept,
validate=True)
if featurizer:
self._featurizer = Pipeline([('featurize', self._original_featurizer),
('add_intercept', add_intercept_trans)])
else:
self._featurizer = add_intercept_trans
else:
self._featurizer = self._original_featurizer
def _combine(self, X, T, fitting=True):
if X is not None:
if self._featurizer is not None:
F = self._featurizer.fit_transform(X) if fitting else self._featurizer.transform(X)
else:
F = X
else:
if not self._fit_cate_intercept:
raise AttributeError("Cannot have X=None and also not allow for a CATE intercept!")
F = np.ones((T.shape[0], 1))
return cross_product(F, T)
def fit(self, Y, T, X=None, W=None, Z=None, nuisances=None,
sample_weight=None, freq_weight=None, sample_var=None, groups=None):
Y_res, T_res, Z_res = nuisances
# Track training dimensions to see if Y or T is a vector instead of a 2-dimensional array
self._d_t = shape(T_res)[1:]
self._d_y = shape(Y_res)[1:]
XT_res = self._combine(X, T_res)
XZ_res = self._combine(X, Z_res)
filtered_kwargs = filter_none_kwargs(sample_weight=sample_weight,
freq_weight=freq_weight, sample_var=sample_var)
self._model_final.fit(XZ_res, XT_res, Y_res, **filtered_kwargs)
return self
def predict(self, X=None):
X2, T = broadcast_unit_treatments(X if X is not None else np.empty((1, 0)),
self._d_t[0] if self._d_t else 1)
XT = self._combine(None if X is None else X2, T, fitting=False)
prediction = self._model_final.predict(XT)
return reshape_treatmentwise_effects(prediction,
self._d_t, self._d_y)
def score(self, Y, T, X=None, W=None, Z=None, nuisances=None, sample_weight=None, groups=None):
Y_res, T_res, Z_res = nuisances
if Y_res.ndim == 1:
Y_res = Y_res.reshape((-1, 1))
if T_res.ndim == 1:
T_res = T_res.reshape((-1, 1))
effects = self.predict(X).reshape((-1, Y_res.shape[1], T_res.shape[1]))
Y_res_pred = np.einsum('ijk,ik->ij', effects, T_res).reshape(Y_res.shape)
if sample_weight is not None:
return np.linalg.norm(np.average(cross_product(Z_res, Y_res - Y_res_pred), weights=sample_weight, axis=0),
ord=2)
else:
return np.linalg.norm(np.mean(cross_product(Z_res, Y_res - Y_res_pred), axis=0), ord=2)
class OrthoIV(LinearModelFinalCateEstimatorMixin, _OrthoLearner):
"""
Implementation of the orthogonal/double ml method for CATE estimation with
IV as described in section 4.2:
Double/Debiased Machine Learning for Treatment and Causal Parameters
Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney Newey, James Robins
https://arxiv.org/abs/1608.00060
Solve the following moment equation:
.. math::
\\E[(Y-\\E[Y|X]-\\theta(X) * (T-\\E[T|X]))(Z-\\E[Z|X])] = 0
Parameters
----------
model_y_xw : estimator or 'auto' (default is 'auto')
model to estimate :math:`\\E[Y | X, W]`. Must support `fit` and `predict` methods.
If 'auto' :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV` will be chosen.
model_t_xw : estimator or 'auto' (default is 'auto')
model to estimate :math:`\\E[T | X, W]`. Must support `fit` and `predict` methods.
If 'auto', :class:`~sklearn.linear_model.LogisticRegressionCV`
will be applied for discrete treatment,
and :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV`
will be applied for continuous treatment.
model_t_xwz : estimator or 'auto' (default is 'auto')
model to estimate :math:`\\E[T | X, W, Z]`. Must support `fit` and `predict` methods.
If 'auto', :class:`~sklearn.linear_model.LogisticRegressionCV`
will be applied for discrete treatment,
and :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV`
will be applied for continuous treatment.
model_z_xw : estimator or 'auto' (default is 'auto')
model to estimate :math:`\\E[Z | X, W]`. Must support `fit` and `predict` methods.
If 'auto', :class:`~sklearn.linear_model.LogisticRegressionCV`
will be applied for discrete instrument,
and :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV`
will be applied for continuous instrument.
projection: bool, optional, default False
If True, we fit a slight variant of OrthoIV where we use E[T|X, W, Z] as the instrument as opposed to Z,
model_z_xw will be disabled; If False, model_t_xwz will be disabled.
featurizer : :term:`transformer`, optional, default None
Must support fit_transform and transform. Used to create composite features in the final CATE regression.
It is ignored if X is None. The final CATE will be trained on the outcome of featurizer.fit_transform(X).
If featurizer=None, then CATE is trained on X.
fit_cate_intercept : bool, optional, default False
Whether the linear CATE model should have a constant term.
discrete_treatment: bool, optional, default False
Whether the treatment values should be treated as categorical, rather than continuous, quantities
discrete_instrument: bool, optional, default False
Whether the instrument values should be treated as categorical, rather than continuous, quantities
categories: 'auto' or list, default 'auto'
The categories to use when encoding discrete treatments (or 'auto' to use the unique sorted values).
The first category will be treated as the control treatment.
cv: int, cross-validation generator or an iterable, optional, default 2
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 3-fold cross-validation,
- integer, to specify the number of folds.
- :term:`CV splitter`
- An iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, if the treatment is discrete
:class:`~sklearn.model_selection.StratifiedKFold` is used, else,
:class:`~sklearn.model_selection.KFold` is used
(with a random shuffle in either case).
Unless an iterable is used, we call `split(concat[W, X], T)` to generate the splits. If all
W, X are None, then we call `split(ones((T.shape[0], 1)), T)`.
mc_iters: int, optional (default=None)
The number of times to rerun the first stage models to reduce the variance of the nuisances.
mc_agg: {'mean', 'median'}, optional (default='mean')
How to aggregate the nuisance value for each sample across the `mc_iters` monte carlo iterations of
cross-fitting.
random_state: int, :class:`~numpy.random.mtrand.RandomState` instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If :class:`~numpy.random.mtrand.RandomState` instance, random_state is the random number generator;
If None, the random number generator is the :class:`~numpy.random.mtrand.RandomState` instance used
by :mod:`np.random<numpy.random>`.
"""
def __init__(self, *,
model_y_xw="auto",
model_t_xw="auto",
model_t_xwz="auto",
model_z_xw="auto",
projection=False,
featurizer=None,
fit_cate_intercept=True,
discrete_treatment=False,
discrete_instrument=False,
categories='auto',
cv=2,
mc_iters=None,
mc_agg='mean',
random_state=None):
self.model_y_xw = clone(model_y_xw, safe=False)
self.model_t_xw = clone(model_t_xw, safe=False)
self.model_t_xwz = clone(model_t_xwz, safe=False)
self.model_z_xw = clone(model_z_xw, safe=False)
self.projection = projection
self.featurizer = clone(featurizer, safe=False)
self.fit_cate_intercept = fit_cate_intercept
super().__init__(discrete_instrument=discrete_instrument,
discrete_treatment=discrete_treatment,
categories=categories,
cv=cv,
mc_iters=mc_iters,
mc_agg=mc_agg,
random_state=random_state)
def _gen_featurizer(self):
return clone(self.featurizer, safe=False)
def _gen_model_final(self):
return StatsModels2SLS(cov_type="HC0")
def _gen_ortho_learner_model_final(self):
return _OrthoIVModelFinal(self._gen_model_final(), self._gen_featurizer(), self.fit_cate_intercept)
def _gen_ortho_learner_model_nuisance(self):
if self.model_y_xw == 'auto':
model_y_xw = WeightedLassoCVWrapper(random_state=self.random_state)
else:
model_y_xw = clone(self.model_y_xw, safe=False)
if self.model_t_xw == 'auto':
if self.discrete_treatment:
model_t_xw = LogisticRegressionCV(cv=WeightedStratifiedKFold(random_state=self.random_state),
random_state=self.random_state)
else:
model_t_xw = WeightedLassoCVWrapper(random_state=self.random_state)
else:
model_t_xw = clone(self.model_t_xw, safe=False)
if self.projection:
# train E[T|X,W,Z]
if self.model_t_xwz == 'auto':
if self.discrete_treatment:
model_t_xwz = LogisticRegressionCV(cv=WeightedStratifiedKFold(random_state=self.random_state),
random_state=self.random_state)
else:
model_t_xwz = WeightedLassoCVWrapper(random_state=self.random_state)
else:
model_t_xwz = clone(self.model_t_xwz, safe=False)
return _OrthoIVModelNuisance(_FirstStageWrapper(clone(model_y_xw, safe=False), True,
self._gen_featurizer(), False, False),
_FirstStageWrapper(clone(model_t_xw, safe=False), False,
self._gen_featurizer(), False, self.discrete_treatment),
_FirstStageWrapper(clone(model_t_xwz, safe=False), False,
self._gen_featurizer(), False, self.discrete_treatment),
self.projection)
else:
# train [Z|X,W]
if self.model_z_xw == "auto":
if self.discrete_instrument:
model_z_xw = LogisticRegressionCV(cv=WeightedStratifiedKFold(random_state=self.random_state),
random_state=self.random_state)
else:
model_z_xw = WeightedLassoCVWrapper(random_state=self.random_state)
else:
model_z_xw = clone(self.model_z_xw, safe=False)
return _OrthoIVModelNuisance(_FirstStageWrapper(clone(model_y_xw, safe=False), True,
self._gen_featurizer(), False, False),
_FirstStageWrapper(clone(model_t_xw, safe=False), False,
self._gen_featurizer(), False, self.discrete_treatment),
_FirstStageWrapper(clone(model_z_xw, safe=False), False,
self._gen_featurizer(), False, self.discrete_instrument),
self.projection)
def fit(self, Y, T, *, Z, X=None, W=None, sample_weight=None, freq_weight=None, sample_var=None, groups=None,
cache_values=False, inference="auto"):
"""
Estimate the counterfactual model from data, i.e. estimates function :math:`\\theta(\\cdot)`.
Parameters
----------
Y: (n, d_y) matrix or vector of length n
Outcomes for each sample
T: (n, d_t) matrix or vector of length n
Treatments for each sample
Z: (n, d_z) matrix
Instruments for each sample
X: optional(n, d_x) matrix or None (Default=None)
Features for each sample
W: optional(n, d_w) matrix or None (Default=None)
Controls for each sample
sample_weight : (n,) array like, default None
Individual weights for each sample. If None, it assumes equal weight.
freq_weight: (n,) array like of integers, default None
Weight for the observation. Observation i is treated as the mean
outcome of freq_weight[i] independent observations.
When ``sample_var`` is not None, this should be provided.
sample_var : {(n,), (n, d_y)} nd array like, default None
Variance of the outcome(s) of the original freq_weight[i] observations that were used to
compute the mean outcome represented by observation i.
groups: (n,) vector, optional
All rows corresponding to the same group will be kept together during splitting.
If groups is not None, the `cv` argument passed to this class's initializer
must support a 'groups' argument to its split method.
cache_values: bool, default False
Whether to cache inputs and first stage results, which will allow refitting a different final model
inference: string,:class:`.Inference` instance, or None
Method for performing inference. This estimator supports 'bootstrap'
(or an instance of:class:`.BootstrapInference`) and 'auto'
(or an instance of :class:`.LinearModelFinalInference`)
Returns
-------
self: OrthoIV instance
"""
if self.projection:
assert self.model_z_xw == "auto", ("In the case of projection=True, model_z_xw will not be fitted, "
"please leave it when initializing the estimator!")
else:
assert self.model_t_xwz == "auto", ("In the case of projection=False, model_t_xwz will not be fitted, "
"please leave it when initializing the estimator!")
# Replacing fit from _OrthoLearner, to reorder arguments and improve the docstring
return super().fit(Y, T, X=X, W=W, Z=Z,
sample_weight=sample_weight, freq_weight=freq_weight, sample_var=sample_var, groups=groups,
cache_values=cache_values, inference=inference)
def refit_final(self, *, inference='auto'):
return super().refit_final(inference=inference)
refit_final.__doc__ = _OrthoLearner.refit_final.__doc__
def score(self, Y, T, Z, X=None, W=None, sample_weight=None):
"""
Score the fitted CATE model on a new data set. Generates nuisance parameters
for the new data set based on the fitted residual nuisance models created at fit time.
It uses the mean prediction of the models fitted by the different crossfit folds.
Then calculates the MSE of the final residual Y on residual T regression.
If model_final does not have a score method, then it raises an :exc:`.AttributeError`
Parameters
----------
Y: (n, d_y) matrix or vector of length n
Outcomes for each sample
T: (n, d_t) matrix or vector of length n
Treatments for each sample
        Z: (n, d_z) matrix
Instruments for each sample
X: optional(n, d_x) matrix or None (Default=None)
Features for each sample
W: optional(n, d_w) matrix or None (Default=None)
Controls for each sample
sample_weight: optional(n,) vector or None (Default=None)
Weights for each samples
Returns
-------
score: float
The MSE of the final CATE model on the new data.
"""
# Replacing score from _OrthoLearner, to enforce Z to be required and improve the docstring
return super().score(Y, T, X=X, W=W, Z=Z, sample_weight=sample_weight)
@property
def featurizer_(self):
"""
Get the fitted featurizer.
Returns
-------
featurizer: object of type(`featurizer`)
An instance of the fitted featurizer that was used to preprocess X in the final CATE model training.
Available only when featurizer is not None and X is not None.
"""
return self.ortho_learner_model_final_._featurizer
@property
def original_featurizer(self):
# NOTE: important to use the ortho_learner_model_final_ attribute instead of the
# attribute so that the trained featurizer will be passed through
return self.ortho_learner_model_final_._original_featurizer
def cate_feature_names(self, feature_names=None):
"""
Get the output feature names.
Parameters
----------
feature_names: list of strings of length X.shape[1] or None
The names of the input features. If None and X is a dataframe, it defaults to the column names
from the dataframe.
Returns
-------
out_feature_names: list of strings or None
The names of the output features :math:`\\phi(X)`, i.e. the features with respect to which the
final CATE model for each treatment is linear. It is the names of the features that are associated
with each entry of the :meth:`coef_` parameter. Available only when the featurizer is not None and has
a method: `get_feature_names(feature_names)`. Otherwise None is returned.
"""
if self._d_x is None:
# Handles the corner case when X=None but featurizer might be not None
return None
if feature_names is None:
feature_names = self._input_names["feature_names"]
if self.original_featurizer is None:
return feature_names
return get_feature_names_or_default(self.original_featurizer, feature_names)
@property
def model_final_(self):
# NOTE This is used by the inference methods and is more for internal use to the library
return self.ortho_learner_model_final_._model_final
@property
def model_cate(self):
"""
Get the fitted final CATE model.
Returns
-------
model_cate: object of type(model_final)
An instance of the model_final object that was fitted after calling fit which corresponds
to the constant marginal CATE model.
"""
return self.ortho_learner_model_final_._model_final
@property
def models_y_xw(self):
"""
Get the fitted models for :math:`\\E[Y | X]`.
Returns
-------
models_y_xw: nested list of objects of type(`model_y_xw`)
A nested list of instances of the `model_y_xw` object. Number of sublist equals to number of monte carlo
iterations, each element in the sublist corresponds to a crossfitting
fold and is the model instance that was fitted for that training fold.
"""
return [[mdl._model_y_xw._model for mdl in mdls] for mdls in super().models_nuisance_]
@property
def models_t_xw(self):
"""
Get the fitted models for :math:`\\E[T | X]`.
Returns
-------
models_t_xw: nested list of objects of type(`model_t_xw`)
A nested list of instances of the `model_t_xw` object. Number of sublist equals to number of monte carlo
iterations, each element in the sublist corresponds to a crossfitting
fold and is the model instance that was fitted for that training fold.
"""
return [[mdl._model_t_xw._model for mdl in mdls] for mdls in super().models_nuisance_]
@property
def models_z_xw(self):
"""
Get the fitted models for :math:`\\E[Z | X]`.
Returns
-------
models_z_xw: nested list of objects of type(`model_z_xw`)
A nested list of instances of the `model_z_xw` object. Number of sublist equals to number of monte carlo
iterations, each element in the sublist corresponds to a crossfitting
fold and is the model instance that was fitted for that training fold.
"""
if self.projection:
raise AttributeError("Projection model is fitted for instrument! Use models_t_xwz.")
return [[mdl._model_z_xw._model for mdl in mdls] for mdls in super().models_nuisance_]
@property
def models_t_xwz(self):
"""
Get the fitted models for :math:`\\E[T | X, Z]`.
Returns
-------
models_t_xwz: nested list of objects of type(`model_t_xwz`)
A nested list of instances of the `model_t_xwz` object. Number of sublist equals to number of monte carlo
iterations, each element in the sublist corresponds to a crossfitting
fold and is the model instance that was fitted for that training fold.
"""
if not self.projection:
raise AttributeError("Direct model is fitted for instrument! Use models_z_xw.")
return [[mdl._model_t_xwz._model for mdl in mdls] for mdls in super().models_nuisance_]
@property
def nuisance_scores_y_xw(self):
"""
Get the scores for y_xw model on the out-of-sample training data
"""
return self.nuisance_scores_[0]
@property
def nuisance_scores_t_xw(self):
"""
Get the scores for t_xw model on the out-of-sample training data
"""
return self.nuisance_scores_[1]
@property
def nuisance_scores_z_xw(self):
"""
Get the scores for z_xw model on the out-of-sample training data
"""
if self.projection:
raise AttributeError("Projection model is fitted for instrument! Use nuisance_scores_t_xwz.")
return self.nuisance_scores_[2]
@property
def nuisance_scores_t_xwz(self):
"""
Get the scores for t_xwz model on the out-of-sample training data
"""
if not self.projection:
raise AttributeError("Direct model is fitted for instrument! Use nuisance_scores_z_xw.")
return self.nuisance_scores_[2]
@property
def fit_cate_intercept_(self):
return self.ortho_learner_model_final_._fit_cate_intercept
@property
def bias_part_of_coef(self):
return self.ortho_learner_model_final_._fit_cate_intercept
@property
def model_final(self):
return self._gen_model_final()
@model_final.setter
def model_final(self, model):
if model is not None:
raise ValueError("Parameter `model_final` cannot be altered for this estimator!")
@property
def residuals_(self):
"""
        A tuple (y_res, T_res, Z_res, X, W, Z) of the residuals from the first stage estimation
along with the associated X, W and Z. Samples are not guaranteed to be in the same
order as the input order.
"""
if not hasattr(self, '_cached_values'):
raise AttributeError("Estimator is not fitted yet!")
if self._cached_values is None:
raise AttributeError("`fit` was called with `cache_values=False`. "
"Set to `True` to enable residual storage.")
Y_res, T_res, Z_res = self._cached_values.nuisances
return Y_res, T_res, Z_res, self._cached_values.X, self._cached_values.W, self._cached_values.Z
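# Illustrative usage sketch (not part of the library source): fitting OrthoIV on
# synthetic data. The array names (y, t, z, x) are made up for this example; the
# estimator options and methods used (fit, const_marginal_effect) are the ones
# defined on the class above.
#
#     import numpy as np
#     n = 1000
#     x = np.random.normal(size=(n, 3))
#     z = np.random.binomial(1, 0.5, size=n)          # binary instrument
#     t = z * np.random.binomial(1, 0.7, size=n)      # binary treatment encouraged by z
#     y = 2.0 * t + x[:, 0] + np.random.normal(size=n)
#     est = OrthoIV(discrete_treatment=True, discrete_instrument=True)
#     est.fit(y, t, Z=z, X=x)
#     cate = est.const_marginal_effect(x)             # per-sample CATE estimates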
class _BaseDMLIVModelNuisance:
"""
Nuisance model fits the three models at fit time and at predict time
returns :math:`Y-\\E[Y|X]` and :math:`\\E[T|X,Z]-\\E[T|X]` as residuals.
"""
def __init__(self, model_y_xw, model_t_xw, model_t_xwz):
self._model_y_xw = clone(model_y_xw, safe=False)
self._model_t_xw = clone(model_t_xw, safe=False)
self._model_t_xwz = clone(model_t_xwz, safe=False)
def _combine(self, W, Z, n_samples):
if Z is not None:
Z = Z.reshape(n_samples, -1)
return Z if W is None else np.hstack([W, Z])
return None if W is None else W
def fit(self, Y, T, X=None, W=None, Z=None, sample_weight=None, groups=None):
self._model_y_xw.fit(X, W, Y, **filter_none_kwargs(sample_weight=sample_weight, groups=groups))
self._model_t_xw.fit(X, W, T, **filter_none_kwargs(sample_weight=sample_weight, groups=groups))
# concat W and Z
WZ = self._combine(W, Z, Y.shape[0])
self._model_t_xwz.fit(X, WZ, T, **filter_none_kwargs(sample_weight=sample_weight, groups=groups))
return self
def score(self, Y, T, X=None, W=None, Z=None, sample_weight=None, groups=None):
# note that groups are not passed to score because they are only used for fitting
if hasattr(self._model_y_xw, 'score'):
Y_X_score = self._model_y_xw.score(X, W, Y, **filter_none_kwargs(sample_weight=sample_weight))
else:
Y_X_score = None
if hasattr(self._model_t_xw, 'score'):
T_X_score = self._model_t_xw.score(X, W, T, **filter_none_kwargs(sample_weight=sample_weight))
else:
T_X_score = None
if hasattr(self._model_t_xwz, 'score'):
# concat W and Z
WZ = self._combine(W, Z, Y.shape[0])
T_XZ_score = self._model_t_xwz.score(X, WZ, T, **filter_none_kwargs(sample_weight=sample_weight))
else:
T_XZ_score = None
return Y_X_score, T_X_score, T_XZ_score
def predict(self, Y, T, X=None, W=None, Z=None, sample_weight=None, groups=None):
# note that sample_weight and groups are not passed to predict because they are only used for fitting
Y_pred = self._model_y_xw.predict(X, W)
# concat W and Z
WZ = self._combine(W, Z, Y.shape[0])
TXZ_pred = self._model_t_xwz.predict(X, WZ)
TX_pred = self._model_t_xw.predict(X, W)
if (X is None) and (W is None): # In this case predict above returns a single row
Y_pred = np.tile(Y_pred.reshape(1, -1), (Y.shape[0], 1))
TX_pred = np.tile(TX_pred.reshape(1, -1), (T.shape[0], 1))
Y_res = Y - Y_pred.reshape(Y.shape)
T_res = TXZ_pred.reshape(T.shape) - TX_pred.reshape(T.shape)
return Y_res, T_res
class _BaseDMLIVModelFinal(_ModelFinal):
"""
Final model at fit time, fits a residual on residual regression with a heterogeneous coefficient
that depends on X, i.e.
.. math ::
Y - \\E[Y | X] = \\theta(X) \\cdot (\\E[T | X, Z] - \\E[T | X]) + \\epsilon
and at predict time returns :math:`\\theta(X)`. The score method returns the MSE of this final
residual on residual regression.
"""
pass
class _BaseDMLIV(_OrthoLearner):
# A helper class that access all the internal fitted objects of a DMLIV Cate Estimator.
# Used by both Parametric and Non Parametric DMLIV.
# override only so that we can enforce Z to be required
def fit(self, Y, T, *, Z, X=None, W=None, sample_weight=None, freq_weight=None, sample_var=None, groups=None,
cache_values=False, inference=None):
"""
Estimate the counterfactual model from data, i.e. estimates function :math:`\\theta(\\cdot)`.
Parameters
----------
Y: (n, d_y) matrix or vector of length n
Outcomes for each sample
T: (n, d_t) matrix or vector of length n
Treatments for each sample
Z: (n, d_z) matrix
Instruments for each sample
X: optional(n, d_x) matrix or None (Default=None)
Features for each sample
W: optional (n, d_w) matrix or None (Default=None)
Controls for each sample
sample_weight : (n,) array like, default None
Individual weights for each sample. If None, it assumes equal weight.
freq_weight: (n,) array like of integers, default None
Weight for the observation. Observation i is treated as the mean
outcome of freq_weight[i] independent observations.
When ``sample_var`` is not None, this should be provided.
sample_var : {(n,), (n, d_y)} nd array like, default None
Variance of the outcome(s) of the original freq_weight[i] observations that were used to
compute the mean outcome represented by observation i.
groups: (n,) vector, optional
All rows corresponding to the same group will be kept together during splitting.
If groups is not None, the `cv` argument passed to this class's initializer
must support a 'groups' argument to its split method.
cache_values: bool, default False
Whether to cache inputs and first stage results, which will allow refitting a different final model
inference: string, :class:`.Inference` instance, or None
Method for performing inference. This estimator supports 'bootstrap'
(or an instance of :class:`.BootstrapInference`)
Returns
-------
self
"""
return super().fit(Y, T, X=X, W=W, Z=Z,
sample_weight=sample_weight, freq_weight=freq_weight, sample_var=sample_var, groups=groups,
cache_values=cache_values, inference=inference)
def score(self, Y, T, Z, X=None, W=None, sample_weight=None):
"""
Score the fitted CATE model on a new data set. Generates nuisance parameters
for the new data set based on the fitted residual nuisance models created at fit time.
It uses the mean prediction of the models fitted by the different crossfit folds.
Then calculates the MSE of the final residual Y on residual T regression.
If model_final does not have a score method, then it raises an :exc:`.AttributeError`
Parameters
----------
Y: (n, d_y) matrix or vector of length n
Outcomes for each sample
T: (n, d_t) matrix or vector of length n
Treatments for each sample
Z: (n, d_z) matrix
Instruments for each sample
X: optional(n, d_x) matrix or None (Default=None)
Features for each sample
W: optional(n, d_w) matrix or None (Default=None)
Controls for each sample
sample_weight: optional(n,) vector or None (Default=None)
Weights for each samples
Returns
-------
score: float
The MSE of the final CATE model on the new data.
"""
# Replacing score from _OrthoLearner, to enforce Z to be required and improve the docstring
return super().score(Y, T, X=X, W=W, Z=Z, sample_weight=sample_weight)
@property
def original_featurizer(self):
return self.ortho_learner_model_final_._model_final._original_featurizer
@property
def featurizer_(self):
# NOTE This is used by the inference methods and has to be the overall featurizer. intended
# for internal use by the library
return self.ortho_learner_model_final_._model_final._featurizer
@property
def model_final_(self):
# NOTE This is used by the inference methods and is more for internal use to the library
return self.ortho_learner_model_final_._model_final._model
@property
def model_cate(self):
"""
Get the fitted final CATE model.
Returns
-------
model_cate: object of type(model_final)
An instance of the model_final object that was fitted after calling fit which corresponds
to the constant marginal CATE model.
"""
return self.ortho_learner_model_final_._model_final._model
@property
def models_y_xw(self):
"""
Get the fitted models for :math:`\\E[Y | X]`.
Returns
-------
models_y_xw: nested list of objects of type(`model_y_xw`)
A nested list of instances of the `model_y_xw` object. Number of sublist equals to number of monte carlo
iterations, each element in the sublist corresponds to a crossfitting
fold and is the model instance that was fitted for that training fold.
"""
return [[mdl._model_y_xw._model for mdl in mdls] for mdls in super().models_nuisance_]
@property
def models_t_xw(self):
"""
Get the fitted models for :math:`\\E[T | X]`.
Returns
-------
models_t_xw: nested list of objects of type(`model_t_xw`)
A nested list of instances of the `model_t_xw` object. Number of sublist equals to number of monte carlo
iterations, each element in the sublist corresponds to a crossfitting
fold and is the model instance that was fitted for that training fold.
"""
return [[mdl._model_t_xw._model for mdl in mdls] for mdls in super().models_nuisance_]
@property
def models_t_xwz(self):
"""
Get the fitted models for :math:`\\E[T | X, Z]`.
Returns
-------
models_t_xwz: nested list of objects of type(`model_t_xwz`)
A nested list of instances of the `model_t_xwz` object. Number of sublist equals to number of monte carlo
iterations, each element in the sublist corresponds to a crossfitting
fold and is the model instance that was fitted for that training fold.
"""
return [[mdl._model_t_xwz._model for mdl in mdls] for mdls in super().models_nuisance_]
@property
def nuisance_scores_y_xw(self):
"""
Get the scores for y_xw model on the out-of-sample training data
"""
return self.nuisance_scores_[0]
@property
def nuisance_scores_t_xw(self):
"""
Get the scores for t_xw model on the out-of-sample training data
"""
return self.nuisance_scores_[1]
@property
def nuisance_scores_t_xwz(self):
"""
Get the scores for t_xwz model on the out-of-sample training data
"""
return self.nuisance_scores_[2]
@property
def residuals_(self):
"""
        A tuple (y_res, T_res, X, W, Z) of the residuals from the first stage estimation
along with the associated X, W and Z. Samples are not guaranteed to be in the same
order as the input order.
"""
if not hasattr(self, '_cached_values'):
raise AttributeError("Estimator is not fitted yet!")
if self._cached_values is None:
raise AttributeError("`fit` was called with `cache_values=False`. "
"Set to `True` to enable residual storage.")
Y_res, T_res = self._cached_values.nuisances
return Y_res, T_res, self._cached_values.X, self._cached_values.W, self._cached_values.Z
def cate_feature_names(self, feature_names=None):
"""
Get the output feature names.
Parameters
----------
feature_names: list of strings of length X.shape[1] or None
The names of the input features. If None and X is a dataframe, it defaults to the column names
from the dataframe.
Returns
-------
out_feature_names: list of strings or None
The names of the output features :math:`\\phi(X)`, i.e. the features with respect to which the
final constant marginal CATE model is linear. It is the names of the features that are associated
with each entry of the :meth:`coef_` parameter. Not available when the featurizer is not None and
does not have a method: `get_feature_names(feature_names)`. Otherwise None is returned.
"""
if self._d_x is None:
# Handles the corner case when X=None but featurizer might be not None
return None
if feature_names is None:
feature_names = self._input_names["feature_names"]
if self.original_featurizer is None:
return feature_names
return get_feature_names_or_default(self.original_featurizer, feature_names)
class DMLIV(_BaseDMLIV):
"""
The base class for parametric DMLIV estimators to estimate a CATE. It accepts three generic machine
learning models as nuisance functions:
1) model_y_xw that estimates :math:`\\E[Y | X]`
2) model_t_xw that estimates :math:`\\E[T | X]`
3) model_t_xwz that estimates :math:`\\E[T | X, Z]`
These are estimated in a cross-fitting manner for each sample in the training set.
Then it minimizes the square loss:
.. math::
\\sum_i (Y_i - \\E[Y|X_i] - \\theta(X) * (\\E[T|X_i, Z_i] - \\E[T|X_i]))^2
This loss is minimized by the model_final class, which is passed as an input.
Parameters
----------
model_y_xw : estimator or 'auto' (default is 'auto')
model to estimate :math:`\\E[Y | X, W]`. Must support `fit` and `predict` methods.
If 'auto' :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV` will be chosen.
model_t_xw : estimator or 'auto' (default is 'auto')
model to estimate :math:`\\E[T | X, W]`. Must support `fit` and `predict` methods.
If 'auto', :class:`~sklearn.linear_model.LogisticRegressionCV`
will be applied for discrete treatment,
and :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV`
will be applied for continuous treatment.
model_t_xwz : estimator or 'auto' (default is 'auto')
model to estimate :math:`\\E[T | X, W, Z]`. Must support `fit` and `predict` methods.
If 'auto', :class:`~sklearn.linear_model.LogisticRegressionCV`
will be applied for discrete treatment,
and :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV`
will be applied for continuous treatment.
model_final : estimator (default is :class:`.StatsModelsLinearRegression`)
final model that at fit time takes as input :math:`(Y-\\E[Y|X])`, :math:`(\\E[T|X,Z]-\\E[T|X])` and X
and supports method predict(X) that produces the CATE at X
featurizer: transformer
The transformer used to featurize the raw features when fitting the final model. Must implement
a `fit_transform` method.
fit_cate_intercept : bool, optional, default True
Whether the linear CATE model should have a constant term.
discrete_instrument: bool, optional, default False
Whether the instrument values should be treated as categorical, rather than continuous, quantities
discrete_treatment: bool, optional, default False
Whether the treatment values should be treated as categorical, rather than continuous, quantities
categories: 'auto' or list, default 'auto'
The categories to use when encoding discrete treatments (or 'auto' to use the unique sorted values).
The first category will be treated as the control treatment.
cv: int, cross-validation generator or an iterable, optional, default 2
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 3-fold cross-validation,
- integer, to specify the number of folds.
- :term:`CV splitter`
- An iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, if the treatment is discrete
:class:`~sklearn.model_selection.StratifiedKFold` is used, else,
:class:`~sklearn.model_selection.KFold` is used
(with a random shuffle in either case).
Unless an iterable is used, we call `split(concat[W, X], T)` to generate the splits. If all
W, X are None, then we call `split(ones((T.shape[0], 1)), T)`.
random_state: int, :class:`~numpy.random.mtrand.RandomState` instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If :class:`~numpy.random.mtrand.RandomState` instance, random_state is the random number generator;
If None, the random number generator is the :class:`~numpy.random.mtrand.RandomState` instance used
by :mod:`np.random<numpy.random>`.
mc_iters: int, optional (default=None)
The number of times to rerun the first stage models to reduce the variance of the nuisances.
mc_agg: {'mean', 'median'}, optional (default='mean')
How to aggregate the nuisance value for each sample across the `mc_iters` monte carlo iterations of
cross-fitting.
"""
def __init__(self, *,
model_y_xw="auto",
model_t_xw="auto",
model_t_xwz="auto",
model_final=StatsModelsLinearRegression(fit_intercept=False),
featurizer=None,
fit_cate_intercept=True,
discrete_treatment=False,
discrete_instrument=False,
categories='auto',
cv=2,
mc_iters=None,
mc_agg='mean',
random_state=None):
self.model_y_xw = clone(model_y_xw, safe=False)
self.model_t_xw = clone(model_t_xw, safe=False)
self.model_t_xwz = clone(model_t_xwz, safe=False)
self.model_final = clone(model_final, safe=False)
self.featurizer = clone(featurizer, safe=False)
self.fit_cate_intercept = fit_cate_intercept
super().__init__(discrete_treatment=discrete_treatment,
discrete_instrument=discrete_instrument,
categories=categories,
cv=cv,
mc_iters=mc_iters,
mc_agg=mc_agg,
random_state=random_state)
def _gen_featurizer(self):
return clone(self.featurizer, safe=False)
def _gen_model_y_xw(self):
if self.model_y_xw == 'auto':
model_y_xw = WeightedLassoCVWrapper(random_state=self.random_state)
else:
model_y_xw = clone(self.model_y_xw, safe=False)
return _FirstStageWrapper(model_y_xw, True, self._gen_featurizer(),
False, False)
def _gen_model_t_xw(self):
if self.model_t_xw == 'auto':
if self.discrete_treatment:
model_t_xw = LogisticRegressionCV(cv=WeightedStratifiedKFold(random_state=self.random_state),
random_state=self.random_state)
else:
model_t_xw = WeightedLassoCVWrapper(random_state=self.random_state)
else:
model_t_xw = clone(self.model_t_xw, safe=False)
return _FirstStageWrapper(model_t_xw, False, self._gen_featurizer(),
False, self.discrete_treatment)
def _gen_model_t_xwz(self):
if self.model_t_xwz == 'auto':
if self.discrete_treatment:
model_t_xwz = LogisticRegressionCV(cv=WeightedStratifiedKFold(random_state=self.random_state),
random_state=self.random_state)
else:
model_t_xwz = WeightedLassoCVWrapper(random_state=self.random_state)
else:
model_t_xwz = clone(self.model_t_xwz, safe=False)
return _FirstStageWrapper(model_t_xwz, False, self._gen_featurizer(),
False, self.discrete_treatment)
def _gen_model_final(self):
return clone(self.model_final, safe=False)
def _gen_ortho_learner_model_nuisance(self):
return _BaseDMLIVModelNuisance(self._gen_model_y_xw(), self._gen_model_t_xw(), self._gen_model_t_xwz())
def _gen_ortho_learner_model_final(self):
return _BaseDMLIVModelFinal(_FinalWrapper(self._gen_model_final(),
self.fit_cate_intercept,
self._gen_featurizer(),
False))
@property
def bias_part_of_coef(self):
return self.ortho_learner_model_final_._model_final._fit_cate_intercept
@property
def fit_cate_intercept_(self):
return self.ortho_learner_model_final_._model_final._fit_cate_intercept
def shap_values(self, X, *, feature_names=None, treatment_names=None, output_names=None, background_samples=100):
if hasattr(self, "featurizer_") and self.featurizer_ is not None:
X = self.featurizer_.transform(X)
feature_names = self.cate_feature_names(feature_names)
return _shap_explain_joint_linear_model_cate(self.model_final_, X, self._d_t, self._d_y,
self.bias_part_of_coef,
feature_names=feature_names, treatment_names=treatment_names,
output_names=output_names,
input_names=self._input_names,
background_samples=background_samples)
shap_values.__doc__ = LinearCateEstimator.shap_values.__doc__
@property
def coef_(self):
""" The coefficients in the linear model of the constant marginal treatment
effect.
Returns
-------
coef: (n_x,) or (n_t, n_x) or (n_y, n_t, n_x) array like
Where n_x is the number of features that enter the final model (either the
dimension of X or the dimension of featurizer.fit_transform(X) if the CATE
estimator has a featurizer.), n_t is the number of treatments, n_y is
the number of outcomes. Dimensions are omitted if the original input was
a vector and not a 2D array. For binary treatment the n_t dimension is
also omitted.
"""
return parse_final_model_params(self.model_final_.coef_, self.model_final_.intercept_,
self._d_y, self._d_t, self._d_t_in, self.bias_part_of_coef,
self.fit_cate_intercept_)[0]
@property
def intercept_(self):
""" The intercept in the linear model of the constant marginal treatment
effect.
Returns
-------
intercept: float or (n_y,) or (n_y, n_t) array like
Where n_t is the number of treatments, n_y is
the number of outcomes. Dimensions are omitted if the original input was
a vector and not a 2D array. For binary treatment the n_t dimension is
also omitted.
"""
if not self.fit_cate_intercept_:
raise AttributeError("No intercept was fitted!")
return parse_final_model_params(self.model_final_.coef_, self.model_final_.intercept_,
self._d_y, self._d_t, self._d_t_in, self.bias_part_of_coef,
self.fit_cate_intercept_)[1]
def summary(self, decimals=3, feature_names=None, treatment_names=None, output_names=None):
""" The summary of coefficient and intercept in the linear model of the constant marginal treatment
effect.
Parameters
----------
        decimals: optional int (default=3)
Number of decimal places to round each column to.
feature_names: optional list of strings or None (default is None)
The input of the feature names
treatment_names: optional list of strings or None (default is None)
The names of the treatments
output_names: optional list of strings or None (default is None)
The names of the outputs
Returns
-------
smry : Summary instance
this holds the summary tables and text, which can be printed or
converted to various output formats.
"""
# Get input names
treatment_names = self.cate_treatment_names(treatment_names)
output_names = self.cate_output_names(output_names)
feature_names = self.cate_feature_names(feature_names)
# Summary
smry = Summary()
smry.add_extra_txt(["<sub>A linear parametric conditional average treatment effect (CATE) model was fitted:",
"$Y = \\Theta(X)\\cdot T + g(X, W) + \\epsilon$",
"where for every outcome $i$ and treatment $j$ the CATE $\\Theta_{ij}(X)$ has the form:",
"$\\Theta_{ij}(X) = \\phi(X)' coef_{ij} + cate\\_intercept_{ij}$",
"where $\\phi(X)$ is the output of the `featurizer` or $X$ if `featurizer`=None. "
"Coefficient Results table portrays the $coef_{ij}$ parameter vector for "
"each outcome $i$ and treatment $j$. "
"Intercept Results table portrays the $cate\\_intercept_{ij}$ parameter.</sub>"])
d_t = self._d_t[0] if self._d_t else 1
d_y = self._d_y[0] if self._d_y else 1
def _reshape_array(arr, type):
if np.isscalar(arr):
arr = np.array([arr])
if type == 'coefficient':
arr = np.moveaxis(arr, -1, 0)
arr = arr.reshape(-1, 1)
return arr
# coefficient
try:
if self.coef_.size == 0: # X is None
raise AttributeError("X is None, please call intercept_inference to learn the constant!")
else:
coef_array = np.round(_reshape_array(self.coef_, "coefficient"), decimals)
coef_headers = ["point_estimate"]
if d_t > 1 and d_y > 1:
index = list(product(feature_names, output_names, treatment_names))
elif d_t > 1:
index = list(product(feature_names, treatment_names))
elif d_y > 1:
index = list(product(feature_names, output_names))
else:
index = list(product(feature_names))
coef_stubs = ["|".join(ind_value) for ind_value in index]
coef_title = 'Coefficient Results'
smry.add_table(coef_array, coef_headers, coef_stubs, coef_title)
except Exception as e:
print("Coefficient Results: ", str(e))
# intercept
try:
if not self.fit_cate_intercept:
raise AttributeError("No intercept was fitted!")
else:
intercept_array = np.round(_reshape_array(self.intercept_, "intercept"), decimals)
intercept_headers = ["point_estimate"]
if d_t > 1 and d_y > 1:
index = list(product(["cate_intercept"], output_names, treatment_names))
elif d_t > 1:
index = list(product(["cate_intercept"], treatment_names))
elif d_y > 1:
index = list(product(["cate_intercept"], output_names))
else:
index = list(product(["cate_intercept"]))
intercept_stubs = ["|".join(ind_value) for ind_value in index]
intercept_title = 'CATE Intercept Results'
smry.add_table(intercept_array, intercept_headers, intercept_stubs, intercept_title)
except Exception as e:
print("CATE Intercept Results: ", str(e))
if len(smry.tables) > 0:
return smry
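# Illustrative usage sketch (not part of the library source): DMLIV with its
# default linear final stage, reusing the synthetic y, t, z, x arrays from the
# OrthoIV sketch above. The methods used (fit, coef_, summary) are defined above.
#
#     est = DMLIV(discrete_treatment=True, discrete_instrument=True)
#     est.fit(y, t, Z=z, X=x)
#     print(est.coef_)       # coefficients of theta(X) over the (featurized) X
#     print(est.summary())   # coefficient / intercept tables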
class NonParamDMLIV(_BaseDMLIV):
"""
The base class for non-parametric DMLIV that allows for an arbitrary square loss based ML
method in the final stage of the DMLIV algorithm. The method has to support
sample weights and the fit method has to take as input sample_weights (e.g. random forests), i.e.
fit(X, y, sample_weight=None)
It achieves this by re-writing the final stage square loss of the DMLIV algorithm as:
.. math ::
\\sum_i (\\E[T|X_i, Z_i] - \\E[T|X_i])^2 * ((Y_i - \\E[Y|X_i])/(\\E[T|X_i, Z_i] - \\E[T|X_i]) - \\theta(X))^2
Then this can be viewed as a weighted square loss regression, where the target label is
.. math ::
\\tilde{Y}_i = (Y_i - \\E[Y|X_i])/(\\E[T|X_i, Z_i] - \\E[T|X_i])
and each sample has a weight of
.. math ::
V(X_i) = (\\E[T|X_i, Z_i] - \\E[T|X_i])^2
Thus we can call any regression model with inputs:
fit(X, :math:`\\tilde{Y}_i`, sample_weight= :math:`V(X_i)`)
Parameters
----------
model_y_xw : estimator or 'auto' (default is 'auto')
model to estimate :math:`\\E[Y | X, W]`. Must support `fit` and `predict` methods.
If 'auto' :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV` will be chosen.
model_t_xw : estimator or 'auto' (default is 'auto')
model to estimate :math:`\\E[T | X, W]`. Must support `fit` and either `predict` or `predict_proba` methods,
depending on whether the treatment is discrete.
If 'auto', :class:`~sklearn.linear_model.LogisticRegressionCV`
will be applied for discrete treatment,
and :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV`
will be applied for continuous treatment.
model_t_xwz : estimator or 'auto' (default is 'auto')
model to estimate :math:`\\E[T | X, W, Z]`. Must support `fit` and either `predict` or `predict_proba`
methods, depending on whether the treatment is discrete.
If 'auto', :class:`~sklearn.linear_model.LogisticRegressionCV`
will be applied for discrete treatment,
and :class:`.WeightedLassoCV`/:class:`.WeightedMultiTaskLassoCV`
will be applied for continuous treatment.
model_final : estimator
final model for predicting :math:`\\tilde{Y}` from X with sample weights V(X)
featurizer: transformer
The transformer used to featurize the raw features when fitting the final model. Must implement
a `fit_transform` method.
discrete_treatment: bool, optional, default False
Whether the treatment values should be treated as categorical, rather than continuous, quantities
discrete_instrument: bool, optional, default False
Whether the instrument values should be treated as categorical, rather than continuous, quantities
categories: 'auto' or list, default 'auto'
The categories to use when encoding discrete treatments (or 'auto' to use the unique sorted values).
The first category will be treated as the control treatment.
cv: int, cross-validation generator or an iterable, optional, default 2
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 3-fold cross-validation,
- integer, to specify the number of folds.
- :term:`CV splitter`
- An iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, if the treatment is discrete
:class:`~sklearn.model_selection.StratifiedKFold` is used, else,
:class:`~sklearn.model_selection.KFold` is used
(with a random shuffle in either case).
Unless an iterable is used, we call `split(concat[W, X], T)` to generate the splits. If all
W, X are None, then we call `split(ones((T.shape[0], 1)), T)`.
mc_iters: int, optional (default=None)
The number of times to rerun the first stage models to reduce the variance of the nuisances.
mc_agg: {'mean', 'median'}, optional (default='mean')
How to aggregate the nuisance value for each sample across the `mc_iters` monte carlo iterations of
cross-fitting.
random_state: int, :class:`~numpy.random.mtrand.RandomState` instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If :class:`~numpy.random.mtrand.RandomState` instance, random_state is the random number generator;
If None, the random number generator is the :class:`~numpy.random.mtrand.RandomState` instance used
by :mod:`np.random<numpy.random>`.
"""
def __init__(self, *,
model_y_xw="auto",
model_t_xw="auto",
model_t_xwz="auto",
model_final,
discrete_treatment=False,
discrete_instrument=False,
featurizer=None,
categories='auto',
cv=2,
mc_iters=None,
mc_agg='mean',
random_state=None):
self.model_y_xw = clone(model_y_xw, safe=False)
self.model_t_xw = clone(model_t_xw, safe=False)
self.model_t_xwz = clone(model_t_xwz, safe=False)
self.model_final = clone(model_final, safe=False)
self.featurizer = clone(featurizer, safe=False)
super().__init__(discrete_treatment=discrete_treatment,
discrete_instrument=discrete_instrument,
categories=categories,
cv=cv,
mc_iters=mc_iters,
mc_agg=mc_agg,
random_state=random_state)
def _gen_featurizer(self):
return clone(self.featurizer, safe=False)
def _gen_model_y_xw(self):
if self.model_y_xw == 'auto':
model_y_xw = WeightedLassoCVWrapper(random_state=self.random_state)
else:
model_y_xw = clone(self.model_y_xw, safe=False)
return _FirstStageWrapper(model_y_xw, True, self._gen_featurizer(),
False, False)
def _gen_model_t_xw(self):
if self.model_t_xw == 'auto':
if self.discrete_treatment:
model_t_xw = LogisticRegressionCV(cv=WeightedStratifiedKFold(random_state=self.random_state),
random_state=self.random_state)
else:
model_t_xw = WeightedLassoCVWrapper(random_state=self.random_state)
else:
model_t_xw = clone(self.model_t_xw, safe=False)
return _FirstStageWrapper(model_t_xw, False, self._gen_featurizer(),
False, self.discrete_treatment)
def _gen_model_t_xwz(self):
if self.model_t_xwz == 'auto':
if self.discrete_treatment:
model_t_xwz = LogisticRegressionCV(cv=WeightedStratifiedKFold(random_state=self.random_state),
random_state=self.random_state)
else:
model_t_xwz = WeightedLassoCVWrapper(random_state=self.random_state)
else:
model_t_xwz = clone(self.model_t_xwz, safe=False)
return _FirstStageWrapper(model_t_xwz, False, self._gen_featurizer(),
False, self.discrete_treatment)
def _gen_model_final(self):
return clone(self.model_final, safe=False)
def _gen_ortho_learner_model_nuisance(self):
return _BaseDMLIVModelNuisance(self._gen_model_y_xw(), self._gen_model_t_xw(), self._gen_model_t_xwz())
def _gen_ortho_learner_model_final(self):
return _BaseDMLIVModelFinal(_FinalWrapper(self._gen_model_final(),
False,
self._gen_featurizer(),
True))
def shap_values(self, X, *, feature_names=None, treatment_names=None, output_names=None, background_samples=100):
return _shap_explain_model_cate(self.const_marginal_effect, self.model_cate, X, self._d_t, self._d_y,
featurizer=self.featurizer_,
feature_names=feature_names,
treatment_names=treatment_names,
output_names=output_names,
input_names=self._input_names,
background_samples=background_samples)
shap_values.__doc__ = LinearCateEstimator.shap_values.__doc__
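# Illustrative usage sketch (not part of the library source): NonParamDMLIV needs
# an explicit final model whose fit method accepts sample_weight, e.g. a random
# forest. It reuses the synthetic y, t, z, x arrays from the OrthoIV sketch above.
#
#     from sklearn.ensemble import RandomForestRegressor
#     est = NonParamDMLIV(model_final=RandomForestRegressor(min_samples_leaf=20),
#                         discrete_treatment=True, discrete_instrument=True)
#     est.fit(y, t, Z=z, X=x)
#     cate = est.const_marginal_effect(x)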
@deprecated("The DMLATEIV class has been deprecated by OrthoIV class with parameter `projection=False`, "
"an upcoming release will remove support for the old name")
def DMLATEIV(model_Y_W,
model_T_W,
model_Z_W,
discrete_treatment=False,
discrete_instrument=False,
categories='auto',
cv=2,
mc_iters=None,
mc_agg='mean',
random_state=None):
return OrthoIV(model_y_xw=model_Y_W,
model_t_xw=model_T_W,
model_z_xw=model_Z_W,
projection=False,
featurizer=None,
fit_cate_intercept=True,
discrete_treatment=discrete_treatment,
discrete_instrument=discrete_instrument,
categories=categories,
cv=cv,
mc_iters=mc_iters,
mc_agg=mc_agg,
random_state=random_state)
@deprecated("The DMLATEIV class has been deprecated by OrthoIV class with parameter `projection=True`, "
"an upcoming release will remove support for the old name")
def ProjectedDMLATEIV(model_Y_W,
model_T_W,
model_T_WZ,
discrete_treatment=False,
discrete_instrument=False,
categories='auto',
cv=2,
mc_iters=None,
mc_agg='mean',
random_state=None):
return OrthoIV(model_y_xw=model_Y_W,
model_t_xw=model_T_W,
model_t_xwz=model_T_WZ,
projection=True,
featurizer=None,
fit_cate_intercept=True,
discrete_treatment=discrete_treatment,
discrete_instrument=discrete_instrument,
categories=categories,
cv=cv,
mc_iters=mc_iters,
mc_agg=mc_agg,
random_state=random_state) | PypiClean |
/Flask-Turbo-Boost-0.2.8.tar.gz/Flask-Turbo-Boost-0.2.8/README.rst | Flask-Turbo-Boost
=================
.. image:: http://img.shields.io/pypi/v/flask-turbo-boost.svg
:target: https://pypi.python.org/pypi/flask-turbo-boost
:alt: Latest Version
.. image:: http://img.shields.io/pypi/dm/flask-turbo-boost.svg
:target: https://pypi.python.org/pypi/flask-turbo-boost
:alt: Downloads Per Month
.. image:: http://img.shields.io/pypi/pyversions/flask-turbo-boost.svg
:target: https://pypi.python.org/pypi/flask-turbo-boost
:alt: Python Versions
.. image:: http://img.shields.io/badge/license-MIT-blue.svg
:target: https://github.com/hustlzp/Flask-Boost/blob/master/LICENSE
:alt: The MIT License
Flask application generator for boosting your development.
Features
--------
* **Well Defined Project Structure**
* Use factory pattern to generate Flask app.
* Use Blueprints to organize controllers.
* Split controllers, models, forms, utilities, assets, Jinja2 pages, Jinja2 macros into different directories.
  * Organize Jinja2 page assets (HTML, JavaScript, CSS) into the same directory.
  * Organize Jinja2 macro assets (HTML, JavaScript, CSS) into the same directory.
* API-Only Project Structure
* **Batteries Included**
* Use Flask-SQLAlchemy and Flask-Migrate as database tools.
* Use Searchable-Mixin for search every Models [Optional]
* Use ActiveRecord-Like-Mixin for search every Models [Optional]
* Use JWTAuth for authentication [Optional]
    * Use JsonSchema to validate incoming requests [Optional]
* Use Flask-WTF to validate forms.
* Use Flask-Security for user management.
* Use Flask-Dance for social user authentication (Sample with facebook and google).
* Use Bulma as frontend framework.
* Dockerfile – Sample dockerfile for development
* docker-compose.yml – Sample docker-compose for deployment
* Use Gunicorn to run Flask app and Supervisor to manage Gunicorn processes.
* Use Fabric as deployment tool.
* Use Sentry to log exceptions.
* Use Nginx to serve static files.
* Use sub script command to generate admin and form from a model.
* **Scaffold Commands**
* Generate project files: ``turbo new <project>``
* Generate API-only project files: ``turbo new --api <project>``
* Generate controller files: ``turbo new controller <controller>``
* Generate action files: ``turbo new action <controller> <action> [-t]``
* Generate form files: ``turbo new form <form>``
* Generate model files: ``turbo new model <model>``
    * Generate macro files: ``turbo new macro <category> <macro>`` or ``turbo new macro <macro>``
Installation
------------
::
pip install flask-turbo-boost
Development Guide
-----------------
Init project
~~~~~~~~~~~~
::
turbo new <your_project_name>
Setup backend requirements
~~~~~~~~~~~~~~~~~~~~~~~~~~
::
cd <your_project_dir>
virtualenv venv
. venv/bin/activate (venv\Scripts\activate in Windows)
pip install -r requirements.txt
**Note**: if ``pip install -r requirements.txt`` fails on Windows, try installing the package binaries directly:
* pycrypto: try to follow this article compiling-pycrypto-on-win7-64_, or get the compiled pycrypto library directly: archive_pycrpyto_library_.
.. _compiling-pycrypto-on-win7-64: https://yorickdowne.wordpress.com/2010/12/22/compiling-pycrypto-on-win7-64/
.. _archive_pycrpyto_library: http://archive.warshaft.com/pycrypto-2.3.1.win7x64-py2.7x64.7z
Init database
~~~~~~~~~~~~~
Create database with name ``your_project_name`` and encoding ``utf8``.
Update ``SQLALCHEMY_DATABASE_URI`` in ``config/development.py`` as needed.
Then init tables::
flask db upgrade
Run app
~~~~~~~
Run local server::
flask run
Scaffold commands
~~~~~~~~~~~~~~~~~
::
turbo new <project>
turbo new --api <project>
turbo new controller <controller>
turbo new action <controller> <action> [-t]
turbo new form <form>
turbo new model <model>
turbo new macro <category> <macro>
turbo new macro <macro>
turbo -v
turbo -h
First Production Deploy
-----------------------
Config server
~~~~~~~~~~~~~
Install mysql-server, python-virtualenv, git, supervisor, nginx, g++, python-dev, libmysqlclient-dev, libxml2-dev, libxslt-dev on your server.
Install requirements
~~~~~~~~~~~~~~~~~~~~
::
git clone **.git
cd <your_project_dir>
virtualenv venv
. venv/bin/activate
pip install -r requirements.txt
Config app
~~~~~~~~~~
Save ``config/production_sample.py`` as ``config/production.py``, update configs in ``config/production.py`` as needed and transfer it to server.
**Note**: remember to update ``SECRET_KEY`` in ``config/production.py``! You can generate random secret key as follows::
>>> import os
>>> os.urandom(24)
Init database
~~~~~~~~~~~~~
Create database with name ``your_project_name`` and encoding ``utf8``.
And run::
export MODE=PRODUCTION
flask db upgrade
Copy config files
~~~~~~~~~~~~~~~~~
Update project root path as needed in ``deploy/nginx.conf`` and ``deploy/supervisor.conf``.
::
cp deploy/flask_env.sh /etc/profile.d/
cp deploy/nginx.conf /etc/nginx/conf.d/<your_project_name>.conf
cp deploy/supervisor.conf /etc/supervisor/conf.d/<your_project_name>.conf
Start app
~~~~~~~~~
::
service nginx restart
service supervisor restart
Daily Production Deploy
-----------------------
Update ``HOST_STRING`` in config with the format ``user@ip``.
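
For example (the user name and server address below are illustrative, not taken from the project)::

    HOST_STRING = 'deploy@203.0.113.10'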
Commit your code and run::
git push && fab deploy
P.S. If you want to deploy Flask with Apache2, see this_ post.
.. _this: https://www.digitalocean.com/community/tutorials/how-to-use-apache-http-server-as-reverse-proxy-using-mod_proxy-extension
License
-------
MIT
| PypiClean |
/EmbyPy-0.6.6.4.tar.gz/EmbyPy-0.6.6.4/embypy/objects/object.py | from embypy.utils.asyncio import async_func
import arrow
import datetime
_EMPTY_OBJ = {
"Id": "",
"Name": "",
"OriginalTitle": "",
"ForcedSortName": "",
"CommunityRating": "",
"CriticRating": "",
"IndexNumber": "",
"AirsBeforeSeasonNumber": "",
"AirsAfterSeasonNumber": "",
"AirsBeforeEpisodeNumber": "",
"ParentIndexNumber": None,
"DisplayOrder": "",
"Album": "",
"AlbumArtists": [],
"ArtistItems": [],
"Overview": "",
"Status": "",
"AirDays": [],
"AirTime": "",
"Genres": [],
"Tags": [],
"Studios": [],
"PremiereDate": "",
"DateCreated": "",
"EndDate": None,
"ProductionYear": "",
"AspectRatio": "",
"Video3DFormat": "",
"OfficialRating": "",
"CustomRating": "",
"People": [],
"LockData": False,
"LockedFields": [],
"ProviderIds": {
"MusicBrainzReleaseGroup": "",
"MusicBrainzAlbumArtist": "",
"MusicBrainzAlbum": "",
"MusicBrainzArtist": "",
"MusicBrainzTrack": "",
"AudioDbAlbum": "",
"AudioDbArtist": ""
},
"PreferredMetadataLanguage": "",
"PreferredMetadataCountryCode": "",
"Taglines": []
}
class EmbyObject(object):
    '''Default Emby Object Template
Parameters
----------
object_dict : dict
dictionary with json info returned from emby
connector: embypy.utils.connector.Connector
connector object to make upstream api calls
save : bool
if true, append to list of existing objects
saves space/increases speed/reduces issues
only set to false if creating a temp object that will be thrown out
'''
known_objects = {}
def __init__(self, object_dict, connector, save=True):
self.connector = connector
self.object_dict = object_dict
self.extras = {}
if save:
EmbyObject.known_objects[object_dict.get('Id')] = self
def __eq__(self, other):
return isinstance(other, EmbyObject) and self.id == other.id
def __setattr__(self, name, value):
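        # Attribute names ending in ``_sync`` are aliased to the plain
        # attribute name; ``__getattr__`` below applies the same rule on reads.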
if name.endswith('_sync'):
return self.__setattr__(name[:-5], value)
super().__setattr__(name, value)
def __getattr__(self, name):
if name.endswith('_sync'):
return self.__getattr__(name[:-5])
return self.__getattribute__(name)
@property
def id(self):
        '''string with hexadecimal hash representing the id of this
object in emby
'''
return self.object_dict.get('Id') or self.object_dict.get('ItemId')
@property
def name(self):
'''name of the item
See Also
--------
post :
'''
return self.object_dict.get('Name', '')
@name.setter
def name(self, value):
self.object_dict['Name'] = value
@property
def title(self):
'''same as name
See Also
--------
post :
'''
return self.name
@title.setter
def title(self, value):
self.name = value
@property
def path(self):
'''get the filepath of the media file (not url)
See Also
--------
url :
'''
return self.object_dict.get('Path', '')
@property
def watched(self):
        '''returns True if the item has been watched'''
return self.object_dict.get('UserData', {}).get('Played')
@property
def played(self):
'''same as `watched`'''
return self.watched
@property
def percentage_played(self):
'''returns played percentage [0,1] of item'''
played = self.object_dict.get(
'UserData', {}
).get('PlaybackPositionTicks')
total = self.object_dict.get('RunTimeTicks') or played or 1
return (played or 0) / total
@property
def duration(self):
'''returns duration of item in seconds'''
return self.object_dict.get('RunTimeTicks', 0) / (10**7)
@property
def play_count(self):
'''returns users playcount for item'''
return self.object_dict.get('UserData', {}).get('PlayCount', 0)
@property
def favorite(self):
'''returns True if user favorited item'''
return self.object_dict.get('UserData', {}).get('IsFavorite', False)
@async_func
async def _mark(self, type, value):
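        # POST marks the item for the current user (e.g. favorite/played),
        # DELETE removes the mark.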
url = '/Users/{{UserId}}/{type}/{id}'.format(type=type, id=self.id)
if value:
(await self.connector.post(url)).close()
else:
(await self.connector.delete(url)).close()
@async_func
async def setFavorite(self, value=True):
await self._mark('FavoriteItems', value)
@async_func
async def setWatched(self, value=True):
await self._mark('PlayedItems', value)
@property
def type(self):
'''get the object type (general)
See Also
--------
media_type :
'''
return self.object_dict.get('Type', 'Object')
@property
def media_type(self):
'''get the object type (specific)
See Also
--------
type :
'''
return self.object_dict.get('MediaType', 'Object')
@property
def genres(self):
'''list of genres
See Also
--------
post :
tags :
'''
return self.object_dict.get('Genres', [])
@genres.setter
def genres(self, genres: list):
self.object_dict['Genres'] = genres
@property
def tags(self):
'''list of tags
See Also
--------
post :
genres :
'''
return self.object_dict.get('Tags', [])
@tags.setter
def tags(self, tags: list):
self.object_dict['Tags'] = tags
@property
def overview(self):
'''the description of the item
See Also
--------
post :
'''
return self.object_dict.get('Overview', '')
@overview.setter
def overview(self, value):
self.object_dict['Overview'] = value
@property
def community_rating(self):
'''int [0-10] with the rating of the item
See Also
--------
post :
'''
return self.object_dict.get('CommunityRating', 0)
@community_rating.setter
def community_rating(self, value):
self.object_dict['CommunityRating'] = value
@property
def primary_image_url(self):
'''url of the main poster image'''
path = '/Items/{}/Images/Primary'.format(self.id)
return self.connector.get_url(path, attach_api_key=False)
@property
def date(self):
"""alias of premier_date"""
return self.premier_date
@date.setter
def date(self, value):
self.premier_date = value
@property
def premier_date(self):
"""datetime of when the item premiered (aired/released) (or None)"""
ts = self.object_dict.get('PremiereDate')
if not ts:
return None
return arrow.get(ts).datetime
@premier_date.setter
def premier_date(self, value):
if isinstance(value, datetime.datetime):
value = value.strftime("%Y-%m-%dT%H:%M:%SZ")
elif not isinstance(value, str):
raise ValueError('value must be datetime or str')
self.object_dict['PremiereDate'] = value
@property
def date_created(self):
"""datetime of when the item was added to the server (or None)"""
ts = self.object_dict.get('DateCreated')
if not ts:
return None
return arrow.get(ts).datetime
@date_created.setter
def date_created(self, value):
if isinstance(value, datetime.datetime):
value = value.strftime("%Y-%m-%dT%H:%M:%SZ")
elif not isinstance(value, str):
raise ValueError('value must be datetime or str')
self.object_dict['DateCreated'] = value
@property
def parent_id(self):
'''id of the parent object
See Also
--------
parent :
'''
return self.object_dict.get('ParentId')
@property
@async_func
async def parent(self):
'''parent object as a subclass of EmbyObject
|coro|
'''
if self.parent_id:
return await self.process(self.parent_id)
else:
return None
@property
def download_url(self):
return self.connector.get_url('/Items/{}/Download'.format(self.id))
@property
@async_func
async def url(self):
'''url of the item
|coro|
Notes
-----
        if remote-address was given, then that is used as the base
'''
if await self.connector.is_jellyfin:
path = '/web/index.html#!/details?id={}'
else:
path = '/web/itemdetails.html?id={}'
path = path.format(self.id)
return self.connector.get_url(path, attach_api_key=False)
@async_func
async def update(self, fields=''):
'''reload object info from emby
|coro|
Parameters
----------
fields : str
additional fields to request when updating
See Also
--------
refresh : same thing
send :
post :
'''
path = 'Users/{{UserId}}/Items/{}'.format(self.id)
info = await self.connector.getJson(
path,
remote=False,
Fields='Path,Overview,PremiereDate'+(',' if fields else '')+fields
)
self.object_dict.update(info)
self.extras = {}
return self
@async_func
async def refresh(self, fields=''):
'''Same as update
|coro|
See Also
--------
update :
'''
return await self.update()
@async_func
async def send(self):
'''send data that was changed to emby
|coro|
This should be used after using any of the setter.
Not necessarily immediately, but soon after.
See Also
--------
post: same thing
update :
refresh :
'''
# Why does the whole dict need to be sent?
# because emby is dumb, and will break if I don't
data = {**_EMPTY_OBJ, **self.object_dict}
path = 'Items/{}'.format(self.id)
status, resp = await self.connector.post(
path,
data=data,
remote=False,
send_raw=True,
headers={'Content-Type': 'application/json'},
)
if status in (400, 415):
await EmbyObject(self.object_dict, self.connector).update()
status, resp = await self.connector.post(
path,
data=data,
remote=False,
send_raw=False,
headers={'Content-Type': 'application/json'},
)
return status, resp
@async_func
async def post(self):
'''Same as send
|coro|
See Also
--------
send :
'''
return await self.send()
@async_func
async def process(self, object_dict):
'''[for internal use] convert json/dict into python object
|coro|
Parameters
----------
object_dict : dict
json representation of object from emby
Notes
-----
        if a string is given, it is assumed to be an id and the corresponding object is returned.
if a list is given, this method is called for each item in list.
Returns
-------
EmbyObject
the object that is represented by the json dict
list
if input is a list, list is returned
'''
# if ID was given, create dummy object
# and update it to get full dict
try:
if type(object_dict) == str:
existing = EmbyObject.known_objects.get(object_dict)
if existing:
return existing
obj = EmbyObject(
{"Id": object_dict},
self.connector,
save=False
)
object_dict = (await obj.update()).object_dict
except:
return None
# if nothing was given, return it back
# if already created object was given, return it back too
if not object_dict or isinstance(object_dict, EmbyObject):
return object_dict
# if a json dict that's really just a list was given,
# convert to list
if type(object_dict) == dict and \
set(object_dict.keys()).issuperset({'Items', 'TotalRecordCount'}):
object_dict = object_dict['Items']
# if a list was given,
# process each item in list
if type(object_dict) == list:
items = []
for item in object_dict:
item = await self.process(item)
if item:
items.append(item)
return items
# otherwise we probably have an object dict
# so we should process that
# if dict has no id, it's a fake
if 'Id' not in object_dict and 'ItemId' not in object_dict:
return object_dict
# if object is already stored,
# update with existing info and return
itemId = object_dict.get('Id', object_dict.get('ItemId'))
existing = EmbyObject.known_objects.get(itemId)
if existing:
existing.object_dict.update(object_dict)
return existing
import embypy.objects.folders
import embypy.objects.videos
import embypy.objects.misc
# if object is not already stored,
# figure out its type (if unknown use this base class)
# create an object with subclass of that type
# return
if 'AppName' in object_dict:
object_dict['Type'] = 'Device'
elif 'HasPassword' in object_dict:
object_dict['Type'] = 'User'
objects = {
'Audio': embypy.objects.misc.Audio,
'Person': embypy.objects.misc.Person,
'Video': embypy.objects.videos.Video,
'Movie': embypy.objects.videos.Movie,
'Trailer': embypy.objects.videos.Trailer,
'AdultVideo': embypy.objects.videos.AdultVideo,
'MusicVideo': embypy.objects.videos.MusicVideo,
'Episode': embypy.objects.videos.Episode,
'Folder': embypy.objects.folders.Folder,
'Playlist': embypy.objects.folders.Playlist,
'BoxSet': embypy.objects.folders.BoxSet,
'MusicAlbum': embypy.objects.folders.MusicAlbum,
'MusicArtist': embypy.objects.folders.MusicArtist,
'Season': embypy.objects.folders.Season,
'Series': embypy.objects.folders.Series,
'Game': embypy.objects.misc.Game,
'GameSystem': embypy.objects.folders.GameSystem,
'Photo': embypy.objects.misc.Photo,
'Book': embypy.objects.misc.Book,
'Image': embypy.objects.misc.Image,
'Device': embypy.objects.misc.Device,
'User': embypy.objects.misc.User,
'Default': EmbyObject,
}
return objects.get(
object_dict.get('Type', 'Default'),
EmbyObject
)(object_dict, self.connector)
def __str__(self):
return self.name
def __repr__(self):
return '<{} {}>'.format(self.type, self.id)
@property
def provider_ids(self):
res = self.object_dict.get("ProviderIds", {})
return res
@property
def tmdbid(self):
return self.provider_ids.get("Tmdb")
@property
def imdbid(self):
return self.provider_ids.get("Imdb") | PypiClean |
/ExpectoCastellum-0.5.tar.gz/ExpectoCastellum-0.5/expectocastellum/commands.py | import help
import spells
import json
import rooms
import things
from quiz import sortingquiz
import people
class Commands(object):
def __init__(self, name=None):
self.name = name
def makemap(self):
rooms.make_rooms_from_json(self.name)
things.make_things_from_json(self.name)
people.make_people_from_json(self.name)
def go(self, direction, player):
return player.go(direction)
def fly(self, player):
return player.fly()
def where(self, player):
print "You are in " + player.location
def invent(self, player):
return player.look()
def look(self, player):
return rooms.phonebook[player.location].look(player)
def talk(self, person, player):
if person in rooms.phonebook[player.location].people:
return people.npclist[person].talk(player, rooms.phonebook[player.location])
else:
print "I don't see %s here." % person.capitalize()
def sort(self, player):
return sortingquiz.try_to_enter(player)
def help(self, args):
print help.helpstatement
def quit(self, player):
save_or_not = raw_input("Leave without saving? (y/n)" )
if save_or_not == 'y':
return 'break'
elif save_or_not == 'n':
self.save(player)
else:
pass
def save(self, player):
if player.name:
confirm = raw_input("Save as " + player.name + "? (y/n) ")
confirm = confirm.lower()
if confirm == 'y':
save_game = open(player.name.lower()+"_save.json", 'w')
else:
savename = raw_input("Save under what name? ")
save_game = open(savename.lower()+"_save.json", 'w')
            save_game.truncate()
player_states = {}
room_states = {}
thing_states = {}
npc_states = {}
player_states[player.name] = {k:v for k,v in player.__dict__.iteritems() if v}
for name, room in rooms.phonebook.iteritems():
room_states[name] = {k:v for k,v in room.__dict__.iteritems() if v}
for name, thing in things.objectlist.iteritems():
thing_states[name] = {k:v for k,v in thing.__dict__.iteritems() if v}
for name, person in people.npclist.iteritems():
npc_states[name] = {k:v for k,v in person.__dict__.iteritems() if v}
states = [player_states, room_states, thing_states, npc_states]
json.dump(states, save_game)
else:
player.name = raw_input("Save under what name? ")
save_game = open(player.name.lower()+"_save.json", 'w')
            save_game.truncate()
            # these state dicts were never created in this branch, which
            # raised a NameError; initialize them before use
            player_states = {}
            room_states = {}
            thing_states = {}
            npc_states = {}
            player_states[player.name] = {k:v for k,v in player.__dict__.iteritems() if k}
for name, room in rooms.phonebook.iteritems():
room_states[name] = {k:v for k,v in room.__dict__.iteritems() if k}
for name, thing in things.objectlist.iteritems():
thing_states[name] = {k:v for k,v in thing.__dict__.iteritems() if k}
for name, person in people.npclist.iteritems():
npc_states[name] = {k:v for k,v in person.__dict__.iteritems() if k}
states = [player_states, room_states, thing_states, npc_states]
json.dump(states, save_game)
def load(self, player):
namefile = raw_input("What name did you save under? ")
save_game = open(namefile.lower()+"_save.json")
states = json.load(save_game)
player_states = states[0]
room_states = states[1]
thing_states = states[2]
npc_states = states[3]
for att, player_data in player_states.iteritems():
player.__dict__.update(player_data)
for name, room_data in room_states.iteritems():
rooms.phonebook[name] = rooms.Room()
rooms.phonebook[name].__dict__.update(room_data)
for name, thing_data in thing_states.iteritems():
things.objectlist[name] = things.Thing()
things.objectlist[name].__dict__.update(thing_data)
for name, npc_data in npc_states.iteritems():
people.npclist[name] = people.Person()
            people.npclist[name].__dict__.update(npc_data)
def speak_parseltongue(self, player):
if player.location == "Myrtle's Bathroom":
print "The sinks creakily move upward and outward, and the floor tile swings up to reveal a dark chute."
rooms.phonebook["Myrtle's Bathroom"].description = rooms.phonebook["Myrtle's Bathroom"].description + "\nThe sink circle has opened to reveal a dark chute."
rooms.phonebook["Myrtle's Bathroom"].add_paths({'d': rooms.phonebook["Chute"]})
if player.location == "Slytherin Common Room":
print "The eyes on the many carved snake decorations glow green."
else:
print "Nothing happens."
def info(self, player):
player.info()
def drop(self, thing, player):
player.drop(thing)
def take(self, thing, player):
if thing not in things.objectlist:
print "You can't take %s!" % thing
return
player.take(thing)
def eat(self, thing, player):
if thing not in things.objectlist:
print "You can't eat %s!" % thing
return
player.eat(thing)
def cast(self, incantation, player):
if "wand" in player.invent:
spellbook = spells.Spells(player, rooms.phonebook[player.location])
spell = getattr(spellbook, incantation)
spell()
else:
print "You need your wand to cast spells!"
def find_distance(self, room, object, max_distance):
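        # Breadth-first expansion over the room graph: starting from `room`,
        # follow paths outward until `object` is found or `max_distance` is
        # exceeded; returns the distance travelled and the room holding the
        # object (or None if it was not found within range).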
dist = 0
location = None
linked = set([room])
if object in room.invent:
location = room
accessible_things = set(room.invent)
while object not in accessible_things and dist <= max_distance:
dist += 1
temp = set()
for chamber in linked:
tempstrings = chamber.paths.values()
tempobjects = [rooms.phonebook[string] for string in tempstrings]
temp.update(tempobjects)
linked = linked.union(temp)
for chamber in linked:
if object in chamber.invent:
location = chamber
accessible_things.update(chamber.invent)
return dist, location
def accio(self, thing, player):
if thing not in things.objectlist:
print "You can't accio %s!" % thing
return
if 'wand' in player.invent:
if things.objectlist[thing].grabbable == True:
if thing in player.invent:
print "You already have that!"
else:
dist, thing_location = self.find_distance(rooms.phonebook[player.location], thing, 4)
if dist <= 3:
print "The %s flies toward you alarmingly quickly." % thing
return thing_location.move(thing, player)
else:
print "You try and try, but are not strong enough to summon the %s." % thing
else:
print "You're not strong enough to move that."
else:
print "You can't cast spells without your wand!"
def x(self, thing, player):
if thing in player.invent or thing in rooms.phonebook[player.location].invent:
if things.objectlist[thing].hidden == True and things.objectlist[thing].home == player.location:
things.objectlist[thing].examine_special()
else:
things.objectlist[thing].examine()
if thing == 'hat':
dist, location = self.find_distance(rooms.phonebook[player.location], 'sword', 50)
            if location is None and 'sword' not in player.invent:
if player.house == 'Lion':
print "A silver sword falls out of the hat. Congratulations! You are a true Gryffindor!"
rooms.phonebook[player.location].add_invent('sword')
elif player.location == "Gryffindor":
print "A silver sword falls out of the hat. The Sorting Hat cannot tell the difference between someone in Gryffindor House and someone in Gryffindor Common Room."
rooms.phonebook[player.location].add_invent('sword')
else:
return
else:
print "I don't see that here." | PypiClean |
/GoldSaxEngineInitialize-1.033.zip/GoldSaxEngineInitialize-1.033/README.txt | To install the package, you should be using Python 3.3 and above (it also works with 2.7).
You should also have pip installed.
You can clone the source from https://github.com/VanceKingSaxbeA/GoldSaxPersist
with: git clone https://github.com/VanceKingSaxbeA/GoldSaxPersist.git
This is a dependency module to be used with GoldSaxEngine-******Markets, to auto-create tables based on source assets.
Install goldsaxPersist with pip before you can use it:
pip install goldsaxcreatetables
pip install goldsaxPersist
The main package applicable for you is listed on SourceForge at
http://sourceforge.net/users/vancekingsaxbe.
or
https://github.com/VanceKingSaxbeA?tab=repositories
or
http://www.vancekingsaxbe.powerdominionenterprise.com/MarketsEngine.
Pick the one from your country. It is a full package installer which auto-installs all dependencies and starts for you.
Modelled, Architected and designed by Vance King Saxbe. A. with the geeks from GoldSax Consulting and GoldSax Technologies email @[email protected], [email protected]. Development teams from Power Dominion Enterprise, Precieux Consulting. Project sponsored by GoldSax Foundation, GoldSax Group and executed by GoldSax Manager.
For live visual quotes, analytics, and dashboards, don't fail to register when the auto builder starts. As you register, it creates a domain for you to view the live quotes at http://foundation-vancekingsaxbe.rhcloud.com/{YOURNAME}
We are in the process of developing a Domain Specific Language for you to write queries, alerts and strategies. Donations are welcome; send your donations to [email protected].
To customize this for your specific needs, please contact [email protected]
Copyright (c) <2014> Author Vance King Saxbe. A http://www.vancekingsaxbe.powerdominionenterprise.com, and contributors Power Dominion Enterprise http://www.powerdominionenterprise.com, Precieux Consulting and other contributors.
Please refer to documentation for further use of this package.
For further support email Vance King Saxbe. A to [email protected], [email protected].
/EnergyCapSdk-8.2304.4743.tar.gz/EnergyCapSdk-8.2304.4743/energycap/sdk/models/bill_payment_details_item.py |
from msrest.serialization import Model
class BillPaymentDetailsItem(Model):
"""BillPaymentDetailsItem.
All required parameters must be populated in order to send to Azure.
:param bill_id: Required. Identifier for bill to update <span
class='property-internal'>Required</span> <span
class='property-internal'>Required</span>
:type bill_id: int
:param check_number: Required. Check number of payment <span
class='property-internal'>Required</span> <span
class='property-internal'>Must be between 0 and 32 characters</span> <span
class='property-internal'>Required</span> <span
class='property-internal'>Must be between 0 and 32 characters</span>
:type check_number: str
:param check_date: Required. Date of payment <span
class='property-internal'>Required</span> <span
class='property-internal'>Must be between 12/31/1899 and 1/1/3000</span>
<span class='property-internal'>Required</span> <span
class='property-internal'>Must be between 12/31/1899 and 1/1/3000</span>
:type check_date: datetime
:param pay_status: Required. Payment status indicator <span
class='property-internal'>Required</span> <span
class='property-internal'>Must be between 0 and 10 characters</span> <span
class='property-internal'>Required</span> <span
class='property-internal'>Must be between 0 and 10 characters</span>
:type pay_status: str
:param cleared_date: Date payment cleared <span
class='property-internal'>Must be between 12/31/1899 and 1/1/3000</span>
<span class='property-internal'>Required (defined)</span> <span
class='property-internal'>Must be between 12/31/1899 and 1/1/3000</span>
:type cleared_date: datetime
:param accounting_period: Accounting period in which payment was made
<span class='property-internal'>Must be between 190001 and 209913</span>
<span class='property-internal'>Required (defined)</span> <span
class='property-internal'>Must be between 190001 and 209913</span>
:type accounting_period: int
:param comment: Optional description for payment (ignored by importer)
<span class='property-internal'>Required (defined)</span>
:type comment: str
"""
_validation = {
'bill_id': {'required': True},
'check_number': {'required': True, 'max_length': 32, 'min_length': 0},
'check_date': {'required': True},
'pay_status': {'required': True, 'max_length': 10, 'min_length': 0},
'accounting_period': {'maximum': 209913, 'minimum': 190001},
}
_attribute_map = {
'bill_id': {'key': 'billId', 'type': 'int'},
'check_number': {'key': 'checkNumber', 'type': 'str'},
'check_date': {'key': 'checkDate', 'type': 'iso-8601'},
'pay_status': {'key': 'payStatus', 'type': 'str'},
'cleared_date': {'key': 'clearedDate', 'type': 'iso-8601'},
'accounting_period': {'key': 'accountingPeriod', 'type': 'int'},
'comment': {'key': 'comment', 'type': 'str'},
}
def __init__(self, **kwargs):
super(BillPaymentDetailsItem, self).__init__(**kwargs)
self.bill_id = kwargs.get('bill_id', None)
self.check_number = kwargs.get('check_number', None)
self.check_date = kwargs.get('check_date', None)
self.pay_status = kwargs.get('pay_status', None)
self.cleared_date = kwargs.get('cleared_date', None)
self.accounting_period = kwargs.get('accounting_period', None)
        self.comment = kwargs.get('comment', None)
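
# A minimal construction sketch (the field values below are illustrative and
# not taken from the SDK's documentation):
#
#   from datetime import datetime
#   item = BillPaymentDetailsItem(
#       bill_id=123,
#       check_number="1001",
#       check_date=datetime(2020, 1, 15),
#       pay_status="Paid",
#   )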
/3DCORE-1.1.4.tar.gz/3DCORE-1.1.4/py3dcore/models/ttncv2/__init__.py |
import json
import numpy as np
import os
import py3dcore
from py3dcore.model import Toroidal3DCOREModel
from py3dcore.params import Base3DCOREParameters
from py3dcore.models.ttncv2.coordinates import g, f
from py3dcore.models.ttncv2.magfield import h
from py3dcore.models.ttncv2.propagation import p
class TTNCv2(Toroidal3DCOREModel):
"""Implements the thin torus Nieves-Chinchilla (v2) 3DCORE model.
Extended Summary
================
For this specific model there are a total of 16 initial parameters which are as follows:
0: t_i time offset
1: lon longitude
2: lat latitude
3: inc inclination
4: dia cross section diameter at 1 AU
5: w cme width ratio
6: delta cross section aspect ratio
7: r0 initial cme radius
8: v0 initial cme velocity
9: tau magnetic field turns over entire flux rope
10: n_a expansion rate
11: n_b magnetic field decay rate
12: b magnetic field strength at center at 1AU
13: bg_d solar wind background drag coefficient
14: bg_v solar wind background speed
15: noise instrument noise
There are 5 state parameters which are as follows:
0: t_t current time
1: v_t current velocity
2: rho_0 torus major radius
3: rho_1 torus minor radius
4: b magnetic field strength at center
"""
def __init__(self, launch, runs, **kwargs):
"""Initialize the thin torus Nieves-Chinchilla (v2) 3DCORE model.
Parameters
----------
launch: datetime.datetime
Initial datetime.
runs : int
Number of parallel model runs.
Other Parameters
----------------
cuda_device: int
CUDA device, by default 0.
dtype: type
Data type, by default np.float32.
use_cuda : bool
CUDA flag, by default False.
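
        Examples
        --------
        A minimal sketch (the launch time and number of runs below are
        illustrative)::

            >>> import datetime
            >>> model = TTNCv2(datetime.datetime(2012, 6, 1), runs=512)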
"""
funcs = {
"g": g,
"f": f,
"h": h,
"p": p
}
dtype = kwargs.pop("dtype", np.float32)
parameters = kwargs.pop("parameters", self.default_parameters())
if isinstance(parameters, dict):
parameters = Base3DCOREParameters(parameters, dtype=dtype)
super(TTNCv2, self).__init__(
launch, funcs, parameters, sparams_count=5, runs=runs, **kwargs
)
@classmethod
def default_parameters(cls):
path = os.path.join(os.path.dirname(py3dcore.__file__), "models/ttncv2/parameters.json")
with open(path) as fh:
data = json.load(fh)
        return data
/Inbreeding-0.1.1.3.post2-py3-none-any.whl/inbreeding/MissingYOB.py |
import pandas as pd
import numpy as np
from MultiProcessDivision.divide import divide
import multiprocessing as mp
import functools
import gc
class MissingYOB():
def __init__(self):
pass
def missingYOB(self, animals:pd.DataFrame, data:pd.DataFrame) -> pd.DataFrame:
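        # Split the animal table across CPU cores, infer missing years of
        # birth from each animal's recorded offspring, then merge the results
        # back into `data`, keeping the last entry for duplicated indices.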
splitted = divide(animals, axis=0)
        # bind the full frame to the `data` keyword so that each worker
        # receives one chunk of `animals` as its positional argument
        to_execute = functools.partial(self.mapYOB, data=data)
with mp.Pool(processes=mp.cpu_count()) as pool:
_res = pd.concat(
[
x for x in pool.map(to_execute, splitted) if x is not None
], axis=0
)
data = pd.concat([data, _res], axis=0)
del _res
gc.collect()
data = data[~data.index.duplicated(keep="last")]
return data
def mapYOB(self, animals:pd.DataFrame, data:pd.DataFrame) -> pd.DataFrame:
_data = animals.groupby(animals.index).apply(
lambda x: self.__assignYOB(animal=x, data=data)
)
return _data
def __assignYOB(self, animal:pd.DataFrame, data:pd.DataFrame) -> pd.DataFrame:
if animal.loc[:,"Sex"].to_numpy()[0] == 1 and animal.loc[:,"Year of Birth"].to_numpy()[0] == 0:
sired_offspring_yob = data.loc[
(data["Ear tag sire"] == np.unique(animal.index)[0]) &
(data["Year of Birth"] != 0), "Year of Birth"
]
if sired_offspring_yob.size > 0:
data.loc[np.unique(animal.index)[0], "Year of Birth"] = np.min(sired_offspring_yob.to_numpy())[0] - 3
data.loc[
(data["Year of Birth sire"] == 0) &
(data["Ear tag sire"] == np.unique(animal.index)[0]),
"Year of Birth sire"] = np.min(sired_offspring_yob.to_numpy())[0] - 3
r = pd.DataFrame()
r = pd.concat([r, data.loc[np.unique(animal.index)[0],:]], axis=0)
r = pd.concat(
[
r,
data.loc[data["Ear tag sire"] == np.unique(animal.index)[0],:]
], axis=0
)
return r
if animal.loc[:,"Sex"].to_numpy()[0] == 2 and animal.loc[:,"Year of Birth"].to_numpy()[0] == 0:
damed_offspring_yob = data.loc[
(data["Ear tag dam"] == np.unique(animal.index)[0]) &
(data["Year of Birth"] != 0), "Year of Birth"
]
if damed_offspring_yob.size > 0:
data.loc[np.unique(animal.index)[0], "Year of Birth"] = np.min(damed_offspring_yob.to_numpy())[0] - 3
data.loc[
(data["Year of Birth dam"] == 0) &
(data["Ear tag dam"] == np.unique(animal.index)[0]),
"Year of Birth dam"] = np.min(damed_offspring_yob.to_numpy())[0] - 3
r = pd.DataFrame()
r = pd.concat([r, data.loc[np.unique(animal.index)[0],:]], axis=0)
r = pd.concat(
[
r,
data.loc[data["Ear tag dam"] == np.unique(animal.index)[0],:]
], axis=0
)
            return r
/Colbert-0.30.tar.gz/Colbert-0.30/src/scripts/colbert_solder_tva.py |
# Copyright (c) 2012 Stanislas Guerra <[email protected]>
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# 3. The name of the author may not be used to endorse or promote products
# derived from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
# IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
# OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
# NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
# THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import json
import sys
import datetime
from colbert.utils import DATE_FMT
from colbert.utils import json_encoder
from colbert.tva import solde_comptes_de_tva
from optparse import OptionParser
def main():
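    # Example invocation (the file name and dates below are illustrative):
    #   python colbert_solder_tva.py -d 01/01/2012 -f 31/03/2012 livre-journal.txt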
usage = "usage: %prog [options] livre-journal.txt"
version = "%prog 0.1"
parser = OptionParser(usage=usage, version=version, description=__doc__)
parser.add_option("-d", "--date-debut", dest="date_debut",
help="date de début de la période au format jj/mm/aaaa.")
parser.add_option("-f", "--date-fin", dest="date_fin",
help="date de fin de la période au format jj/mm/aaaa.")
(options, args) = parser.parse_args()
if len(args) != 1:
parser.error("Vous devez passer en argument le chemin d'un fichier "
"Livre Journal au format reStructuredText, la date de "
"début et la date de fin.")
else:
date_debut = datetime.datetime.strptime(options.date_debut, DATE_FMT).date()
date_fin = datetime.datetime.strptime(options.date_fin, DATE_FMT).date()
with open(args[0], mode="r") as livre_journal:
ecriture = solde_comptes_de_tva(livre_journal, date_debut, date_fin)
json.dump([ecriture], sys.stdout, default=json_encoder, indent=4)
if __name__ == "__main__":
    main()
/Argonaut-0.3.4.tar.gz/Argonaut-0.3.4/argonaut/public/ckeditor/_source/lang/de.js | /*
Copyright (c) 2003-2010, CKSource - Frederico Knabben. All rights reserved.
For licensing, see LICENSE.html or http://ckeditor.com/license
*/
/**
* @fileOverview Defines the {@link CKEDITOR.lang} object, for the
* German language.
*/
/**#@+
@type String
@example
*/
/**
* Constains the dictionary of language entries.
* @namespace
*/
CKEDITOR.lang['de'] =
{
/**
* The language reading direction. Possible values are "rtl" for
* Right-To-Left languages (like Arabic) and "ltr" for Left-To-Right
* languages (like English).
* @default 'ltr'
*/
dir : 'ltr',
/*
* Screenreader titles. Please note that screenreaders are not always capable
* of reading non-English words. So be careful while translating it.
*/
editorTitle : 'Rich text editor, %1, press ALT 0 for help.', // MISSING
// ARIA descriptions.
toolbar : 'Toolbar', // MISSING
editor : 'Rich Text Editor', // MISSING
// Toolbar buttons without dialogs.
source : 'Quellcode',
newPage : 'Neue Seite',
save : 'Speichern',
preview : 'Vorschau',
cut : 'Ausschneiden',
copy : 'Kopieren',
paste : 'Einfügen',
print : 'Drucken',
underline : 'Unterstrichen',
bold : 'Fett',
italic : 'Kursiv',
selectAll : 'Alles auswählen',
removeFormat : 'Formatierungen entfernen',
strike : 'Durchgestrichen',
subscript : 'Tiefgestellt',
superscript : 'Hochgestellt',
horizontalrule : 'Horizontale Linie einfügen',
pagebreak : 'Seitenumbruch einfügen',
unlink : 'Link entfernen',
undo : 'Rückgängig',
redo : 'Wiederherstellen',
// Common messages and labels.
common :
{
browseServer : 'Server durchsuchen',
url : 'URL',
protocol : 'Protokoll',
upload : 'Upload',
uploadSubmit : 'Zum Server senden',
image : 'Bild',
flash : 'Flash',
form : 'Formular',
checkbox : 'Checkbox',
radio : 'Radiobutton',
textField : 'Textfeld einzeilig',
textarea : 'Textfeld mehrzeilig',
hiddenField : 'verstecktes Feld',
button : 'Klickbutton',
select : 'Auswahlfeld',
imageButton : 'Bildbutton',
notSet : '<nichts>',
id : 'ID',
name : 'Name',
langDir : 'Schreibrichtung',
langDirLtr : 'Links nach Rechts (LTR)',
langDirRtl : 'Rechts nach Links (RTL)',
langCode : 'Sprachenkürzel',
longDescr : 'Langform URL',
cssClass : 'Stylesheet Klasse',
advisoryTitle : 'Titel Beschreibung',
cssStyle : 'Style',
ok : 'OK',
cancel : 'Abbrechen',
close : 'Schließen',
preview : 'Vorschau',
generalTab : 'Allgemein',
advancedTab : 'Erweitert',
validateNumberFailed : 'Dieser Wert ist keine Nummer.',
		confirmNewPage : 'Alle nicht gespeicherten Änderungen gehen verloren. Sind Sie sicher, die neue Seite zu laden?',
confirmCancel : 'Einige Optionen wurden geändert. Wollen Sie den Dialog dennoch schließen?',
options : 'Optionen',
target : 'Zielseite',
targetNew : 'Neues Fenster (_blank)',
targetTop : 'Oberstes Fenster (_top)',
targetSelf : 'Gleiches Fenster (_self)',
targetParent : 'Oberes Fenster (_parent)',
langDirLTR : 'Left to Right (LTR)', // MISSING
langDirRTL : 'Right to Left (RTL)', // MISSING
styles : 'Style', // MISSING
cssClasses : 'Stylesheet Classes', // MISSING
// Put the voice-only part of the label in the span.
unavailable : '%1<span class="cke_accessibility">, nicht verfügbar</span>'
},
contextmenu :
{
options : 'Context Menu Optionen'
},
// Special char dialog.
specialChar :
{
toolbar : 'Sonderzeichen einfügen/editieren',
title : 'Sonderzeichen auswählen',
options : 'Sonderzeichen Optionen'
},
// Link dialog.
link :
{
toolbar : 'Link einfügen/editieren',
other : '<andere>',
menu : 'Link editieren',
title : 'Link',
info : 'Link-Info',
target : 'Zielseite',
upload : 'Upload',
advanced : 'Erweitert',
type : 'Link-Typ',
toUrl : 'URL',
toAnchor : 'Anker in dieser Seite',
toEmail : 'E-Mail',
targetFrame : '<Frame>',
targetPopup : '<Pop-up Fenster>',
targetFrameName : 'Ziel-Fenster-Name',
targetPopupName : 'Pop-up Fenster-Name',
popupFeatures : 'Pop-up Fenster-Eigenschaften',
popupResizable : 'Größe änderbar',
popupStatusBar : 'Statusleiste',
popupLocationBar: 'Adress-Leiste',
popupToolbar : 'Werkzeugleiste',
popupMenuBar : 'Menü-Leiste',
popupFullScreen : 'Vollbild (IE)',
popupScrollBars : 'Rollbalken',
popupDependent : 'Abhängig (Netscape)',
popupWidth : 'Breite',
popupLeft : 'Linke Position',
popupHeight : 'Höhe',
popupTop : 'Obere Position',
id : 'Id',
langDir : 'Schreibrichtung',
langDirLTR : 'Links nach Rechts (LTR)',
langDirRTL : 'Rechts nach Links (RTL)',
acccessKey : 'Zugriffstaste',
name : 'Name',
langCode : 'Schreibrichtung',
tabIndex : 'Tab-Index',
advisoryTitle : 'Titel Beschreibung',
advisoryContentType : 'Inhaltstyp',
cssClasses : 'Stylesheet Klasse',
charset : 'Ziel-Zeichensatz',
styles : 'Style',
selectAnchor : 'Anker auswählen',
anchorName : 'nach Anker Name',
anchorId : 'nach Element Id',
emailAddress : 'E-Mail Addresse',
emailSubject : 'Betreffzeile',
emailBody : 'Nachrichtentext',
noAnchors : '(keine Anker im Dokument vorhanden)',
noUrl : 'Bitte geben Sie die Link-URL an',
noEmail : 'Bitte geben Sie e-Mail Adresse an'
},
// Anchor dialog
anchor :
{
toolbar : 'Anker einfügen/editieren',
menu : 'Anker-Eigenschaften',
title : 'Anker-Eigenschaften',
name : 'Anker Name',
errorName : 'Bitte geben Sie den Namen des Ankers ein'
},
// List style dialog
list:
{
numberedTitle : 'Nummerierte Listen-Eigenschaften',
bulletedTitle : 'Listen-Eigenschaften',
type : 'Typ',
start : 'Start',
validateStartNumber :'List Startnummer muss eine ganze Zahl sein.',
circle : 'Ring',
disc : 'Kreis',
square : 'Quadrat',
none : 'Keine',
notset : '<nicht gesetzt>',
armenian : 'Armenisch Nummerierung',
georgian : 'Georgisch Nummerierung (an, ban, gan, etc.)',
lowerRoman : 'Klein römisch (i, ii, iii, iv, v, etc.)',
upperRoman : 'Groß römisch (I, II, III, IV, V, etc.)',
lowerAlpha : 'Klein alpha (a, b, c, d, e, etc.)',
upperAlpha : 'Groß alpha (A, B, C, D, E, etc.)',
lowerGreek : 'Klein griechisch (alpha, beta, gamma, etc.)',
decimal : 'Dezimal (1, 2, 3, etc.)',
decimalLeadingZero : 'Dezimal mit führende Null (01, 02, 03, etc.)'
},
// Find And Replace Dialog
findAndReplace :
{
title : 'Suchen und Ersetzen',
find : 'Suchen',
replace : 'Ersetzen',
findWhat : 'Suche nach:',
replaceWith : 'Ersetze mit:',
notFoundMsg : 'Der gesuchte Text wurde nicht gefunden.',
matchCase : 'Groß-Kleinschreibung beachten',
matchWord : 'Nur ganze Worte suchen',
matchCyclic : 'zyklische suche',
replaceAll : 'Alle Ersetzen',
replaceSuccessMsg : '%1 vorkommen ersetzt.'
},
// Table Dialog
table :
{
toolbar : 'Tabelle',
title : 'Tabellen-Eigenschaften',
menu : 'Tabellen-Eigenschaften',
deleteTable : 'Tabelle löschen',
rows : 'Zeile',
columns : 'Spalte',
border : 'Rahmen',
align : 'Ausrichtung',
alignLeft : 'Links',
alignCenter : 'Zentriert',
alignRight : 'Rechts',
width : 'Breite',
widthPx : 'Pixel',
widthPc : '%',
widthUnit : 'Breite Einheit',
height : 'Höhe',
cellSpace : 'Zellenabstand außen',
cellPad : 'Zellenabstand innen',
caption : 'Überschrift',
summary : 'Inhaltsübersicht',
headers : 'Überschriften',
headersNone : 'keine',
headersColumn : 'Erste Spalte',
headersRow : 'Erste Zeile',
		headersBoth : 'Beide',
invalidRows : 'Die Anzahl der Zeilen muß größer als 0 sein.',
		invalidCols : 'Die Anzahl der Spalten muß größer als 0 sein.',
invalidBorder : 'Die Rahmenbreite muß eine Zahl sein.',
invalidWidth : 'Die Tabellenbreite muss eine Zahl sein.',
invalidHeight : 'Die Tabellenbreite muß eine Zahl sein.',
invalidCellSpacing : 'Der Zellenabstand außen muß eine Zahl sein.',
invalidCellPadding : 'Der Zellenabstand innen muß eine Zahl sein.',
cell :
{
menu : 'Zelle',
insertBefore : 'Zelle davor einfügen',
insertAfter : 'Zelle danach einfügen',
deleteCell : 'Zelle löschen',
merge : 'Zellen verbinden',
mergeRight : 'nach rechts verbinden',
mergeDown : 'nach unten verbinden',
splitHorizontal : 'Zelle horizontal teilen',
splitVertical : 'Zelle vertikal teilen',
title : 'Zellen Eigenschaften',
cellType : 'Zellart',
rowSpan : 'Anzahl Zeilen verbinden',
colSpan : 'Anzahl Spalten verbinden',
wordWrap : 'Zeilenumbruch',
hAlign : 'Horizontale Ausrichtung',
vAlign : 'Vertikale Ausrichtung',
alignTop : 'Oben',
alignMiddle : 'Mitte',
alignBottom : 'Unten',
alignBaseline : 'Grundlinie',
bgColor : 'Hintergrundfarbe',
borderColor : 'Rahmenfarbe',
data : 'Daten',
header : 'Überschrift',
yes : 'Ja',
no : 'Nein',
invalidWidth : 'Zellenbreite muß eine Zahl sein.',
invalidHeight : 'Zellenhöhe muß eine Zahl sein.',
invalidRowSpan : '"Anzahl Zeilen verbinden" muss eine Ganzzahl sein.',
invalidColSpan : '"Anzahl Spalten verbinden" muss eine Ganzzahl sein.',
chooseColor : 'Wählen'
},
row :
{
menu : 'Zeile',
insertBefore : 'Zeile oberhalb einfügen',
insertAfter : 'Zeile unterhalb einfügen',
deleteRow : 'Zeile entfernen'
},
column :
{
menu : 'Spalte',
insertBefore : 'Spalte links davor einfügen',
insertAfter : 'Spalte rechts danach einfügen',
deleteColumn : 'Spalte löschen'
}
},
// Button Dialog.
button :
{
title : 'Button-Eigenschaften',
text : 'Text (Wert)',
type : 'Typ',
typeBtn : 'Button',
typeSbm : 'Absenden',
typeRst : 'Zurücksetzen'
},
// Checkbox and Radio Button Dialogs.
checkboxAndRadio :
{
checkboxTitle : 'Checkbox-Eigenschaften',
radioTitle : 'Optionsfeld-Eigenschaften',
value : 'Wert',
selected : 'ausgewählt'
},
// Form Dialog.
form :
{
title : 'Formular-Eigenschaften',
menu : 'Formular-Eigenschaften',
action : 'Action',
method : 'Method',
encoding : 'Zeichenkodierung'
},
// Select Field Dialog.
select :
{
title : 'Auswahlfeld-Eigenschaften',
selectInfo : 'Info',
opAvail : 'Mögliche Optionen',
value : 'Wert',
size : 'Größe',
lines : 'Linien',
chkMulti : 'Erlaube Mehrfachauswahl',
opText : 'Text',
opValue : 'Wert',
btnAdd : 'Hinzufügen',
btnModify : 'Ändern',
btnUp : 'Hoch',
btnDown : 'Runter',
btnSetValue : 'Setze als Standardwert',
btnDelete : 'Entfernen'
},
// Textarea Dialog.
textarea :
{
title : 'Textfeld (mehrzeilig) Eigenschaften',
cols : 'Spalten',
rows : 'Reihen'
},
// Text Field Dialog.
textfield :
{
title : 'Textfeld (einzeilig) Eigenschaften',
name : 'Name',
value : 'Wert',
charWidth : 'Zeichenbreite',
maxChars : 'Max. Zeichen',
type : 'Typ',
typeText : 'Text',
typePass : 'Passwort'
},
// Hidden Field Dialog.
hidden :
{
title : 'Verstecktes Feld-Eigenschaften',
name : 'Name',
value : 'Wert'
},
// Image Dialog.
image :
{
title : 'Bild-Eigenschaften',
titleButton : 'Bildbutton-Eigenschaften',
menu : 'Bild-Eigenschaften',
infoTab : 'Bild-Info',
btnUpload : 'Zum Server senden',
upload : 'Hochladen',
alt : 'Alternativer Text',
width : 'Breite',
height : 'Höhe',
lockRatio : 'Größenverhältnis beibehalten',
unlockRatio : 'Ratio Freischalten',
resetSize : 'Größe zurücksetzen',
border : 'Rahmen',
hSpace : 'Horizontal-Abstand',
vSpace : 'Vertikal-Abstand',
align : 'Ausrichtung',
alignLeft : 'Links',
alignRight : 'Rechts',
alertUrl : 'Bitte geben Sie die Bild-URL an',
linkTab : 'Link',
button2Img : 'Möchten Sie den gewählten Bild-Button in ein einfaches Bild umwandeln?',
img2Button : 'Möchten Sie das gewählten Bild in einen Bild-Button umwandeln?',
urlMissing : 'Imagequelle URL fehlt.',
validateWidth : 'Breite muß eine ganze Zahl sein.',
validateHeight : 'Höhe muß eine ganze Zahl sein.',
validateBorder : 'Rahmen muß eine ganze Zahl sein.',
validateHSpace : 'Horizontal-Abstand muß eine ganze Zahl sein.',
validateVSpace : 'Vertikal-Abstand must be a whole number.'
},
// Flash Dialog
flash :
{
properties : 'Flash-Eigenschaften',
propertiesTab : 'Eigenschaften',
title : 'Flash-Eigenschaften',
chkPlay : 'autom. Abspielen',
chkLoop : 'Endlosschleife',
chkMenu : 'Flash-Menü aktivieren',
chkFull : 'Vollbildmodus erlauben',
scale : 'Skalierung',
scaleAll : 'Alles anzeigen',
scaleNoBorder : 'ohne Rand',
scaleFit : 'Passgenau',
access : 'Skript Zugang',
accessAlways : 'Immer',
accessSameDomain: 'Gleiche Domain',
accessNever : 'Nie',
align : 'Ausrichtung',
alignLeft : 'Links',
alignAbsBottom : 'Abs Unten',
alignAbsMiddle : 'Abs Mitte',
alignBaseline : 'Baseline',
alignBottom : 'Unten',
alignMiddle : 'Mitte',
alignRight : 'Rechts',
alignTextTop : 'Text Oben',
alignTop : 'Oben',
quality : 'Qualität',
qualityBest : 'Beste',
qualityHigh : 'Hoch',
qualityAutoHigh : 'Auto Hoch',
qualityMedium : 'Medium',
qualityAutoLow : 'Auto Niedrig',
qualityLow : 'Niedrig',
windowModeWindow: 'Fenster',
windowModeOpaque: 'Deckend',
windowModeTransparent : 'Transparent',
windowMode : 'Fenster Modus',
flashvars : 'Variablen für Flash',
bgcolor : 'Hintergrundfarbe',
width : 'Breite',
height : 'Höhe',
hSpace : 'Horizontal-Abstand',
vSpace : 'Vertikal-Abstand',
validateSrc : 'Bitte geben Sie die Link-URL an',
validateWidth : 'Breite muss eine Zahl sein.',
validateHeight : 'Höhe muss eine Zahl sein.',
validateHSpace : 'HSpace muss eine Zahl sein.',
validateVSpace : 'VSpace muss eine Zahl sein.'
},
// Speller Pages Dialog
spellCheck :
{
toolbar : 'Rechtschreibprüfung',
title : 'Rechtschreibprüfung',
		notAvailable : 'Entschuldigung, aber dieser Dienst steht im Moment nicht zur Verfügung.',
errorLoading : 'Fehler beim laden des Dienstanbieters: %s.',
notInDic : 'Nicht im Wörterbuch',
changeTo : 'Ändern in',
btnIgnore : 'Ignorieren',
btnIgnoreAll : 'Alle Ignorieren',
btnReplace : 'Ersetzen',
btnReplaceAll : 'Alle Ersetzen',
btnUndo : 'Rückgängig',
noSuggestions : ' - keine Vorschläge - ',
progress : 'Rechtschreibprüfung läuft...',
noMispell : 'Rechtschreibprüfung abgeschlossen - keine Fehler gefunden',
noChanges : 'Rechtschreibprüfung abgeschlossen - keine Worte geändert',
oneChange : 'Rechtschreibprüfung abgeschlossen - ein Wort geändert',
manyChanges : 'Rechtschreibprüfung abgeschlossen - %1 Wörter geändert',
ieSpellDownload : 'Rechtschreibprüfung nicht installiert. Möchten Sie sie jetzt herunterladen?'
},
smiley :
{
toolbar : 'Smiley',
title : 'Smiley auswählen',
options : 'Smiley Optionen'
},
elementsPath :
{
eleLabel : 'Elements Pfad',
eleTitle : '%1 Element'
},
numberedlist : 'Nummerierte Liste',
bulletedlist : 'Liste',
indent : 'Einzug erhöhen',
outdent : 'Einzug verringern',
justify :
{
left : 'Linksbündig',
center : 'Zentriert',
right : 'Rechtsbündig',
block : 'Blocksatz'
},
blockquote : 'Zitatblock',
clipboard :
{
title : 'Einfügen',
cutError : 'Die Sicherheitseinstellungen Ihres Browsers lassen es nicht zu, den Text automatisch auszuschneiden. Bitte benutzen Sie die System-Zwischenablage über STRG-X (ausschneiden) und STRG-V (einfügen).',
copyError : 'Die Sicherheitseinstellungen Ihres Browsers lassen es nicht zu, den Text automatisch kopieren. Bitte benutzen Sie die System-Zwischenablage über STRG-C (kopieren).',
pasteMsg : 'Bitte fügen Sie den Text in der folgenden Box über die Tastatur (mit <STRONG>Strg+V</STRONG>) ein und bestätigen Sie mit <STRONG>OK</STRONG>.',
securityMsg : 'Aufgrund von Sicherheitsbeschränkungen Ihres Browsers kann der Editor nicht direkt auf die Zwischenablage zugreifen. Bitte fügen Sie den Inhalt erneut in diesem Fenster ein.',
pasteArea : 'Einfügebereich'
},
pastefromword :
{
confirmCleanup : 'Der Text, den Sie einfügen möchten, scheint aus MS-Word kopiert zu sein. Möchten Sie ihn zuvor bereinigen lassen?',
toolbar : 'aus MS-Word einfügen',
title : 'aus MS-Word einfügen',
error : 'Aufgrund eines internen Fehlers war es nicht möglich die eingefügten Daten zu bereinigen'
},
pasteText :
{
button : 'Als Text einfügen',
title : 'Als Text einfügen'
},
templates :
{
button : 'Vorlagen',
title : 'Vorlagen',
options : 'Vorlagen Optionen',
insertOption : 'Aktuellen Inhalt ersetzen',
selectPromptMsg : 'Klicken Sie auf eine Vorlage, um sie im Editor zu öffnen (der aktuelle Inhalt wird dabei gelöscht!):',
emptyListMsg : '(keine Vorlagen definiert)'
},
showBlocks : 'Blöcke anzeigen',
stylesCombo :
{
label : 'Stil',
panelTitle : 'Formatierungenstil',
panelTitle1 : 'Block Stilart',
panelTitle2 : 'Inline Stilart',
panelTitle3 : 'Objekt Stilart'
},
format :
{
label : 'Format',
panelTitle : 'Format',
tag_p : 'Normal',
tag_pre : 'Formatiert',
		tag_address : 'Adresse',
tag_h1 : 'Überschrift 1',
tag_h2 : 'Überschrift 2',
tag_h3 : 'Überschrift 3',
tag_h4 : 'Überschrift 4',
tag_h5 : 'Überschrift 5',
tag_h6 : 'Überschrift 6',
tag_div : 'Normal (DIV)'
},
div :
{
title : 'Div Container erzeugen',
toolbar : 'Div Container erzeugen',
cssClassInputLabel : 'Stylesheet Klasse',
styleSelectLabel : 'Stil',
IdInputLabel : 'Id',
languageCodeInputLabel : ' Sprache Code',
inlineStyleInputLabel : 'Inline Style',
advisoryTitleInputLabel : 'Beratungs Titel',
langDirLabel : 'Sprache Richtung',
langDirLTRLabel : 'Links nach Rechs (LTR)',
langDirRTLLabel : 'Rechs nach Links (RTL)',
edit : 'Div Bearbeiten',
remove : 'Div Entfernen'
},
font :
{
label : 'Schriftart',
voiceLabel : 'Schriftart',
panelTitle : 'Schriftart'
},
fontSize :
{
label : 'Größe',
		voiceLabel : 'Schriftgröße',
panelTitle : 'Größe'
},
colorButton :
{
textColorTitle : 'Textfarbe',
bgColorTitle : 'Hintergrundfarbe',
panelTitle : 'Farben',
auto : 'Automatisch',
more : 'Weitere Farben...'
},
colors :
{
'000' : 'Schwarz',
'800000' : 'Kastanienbraun',
'8B4513' : 'Braun',
'2F4F4F' : 'Dunkles Schiefergrau',
'008080' : 'Blaugrün',
'000080' : 'Navy',
'4B0082' : 'Indigo',
'696969' : 'Dunkelgrau',
'B22222' : 'Ziegelrot',
'A52A2A' : 'Braun',
'DAA520' : 'Goldgelb',
'006400' : 'Dunkelgrün',
'40E0D0' : 'Türkis',
'0000CD' : 'Medium Blau',
'800080' : 'Lila',
'808080' : 'Grau',
'F00' : 'Rot',
'FF8C00' : 'Dunkelorange',
'FFD700' : 'Gold',
'008000' : 'Grün',
'0FF' : 'Cyan',
'00F' : 'Blau',
'EE82EE' : 'Hellviolett',
'A9A9A9' : 'Dunkelgrau',
'FFA07A' : 'Helles Lachsrosa',
'FFA500' : 'Orange',
'FFFF00' : 'Gelb',
'00FF00' : 'Lime',
'AFEEEE' : 'Blaß-Türkis',
'ADD8E6' : 'Hellblau',
'DDA0DD' : 'Pflaumenblau',
'D3D3D3' : 'Hellgrau',
'FFF0F5' : 'Lavendel',
'FAEBD7' : 'Antik Weiß',
'FFFFE0' : 'Hellgelb',
'F0FFF0' : 'Honigtau',
'F0FFFF' : 'Azurblau',
'F0F8FF' : 'Alice Blau',
'E6E6FA' : 'Lavendel',
'FFF' : 'Weiß'
},
scayt :
{
title : 'Rechtschreibprüfung während der Texteingabe',
opera_title : 'Nicht von Opera unterstützt',
enable : 'SCAYT einschalten',
disable : 'SCAYT ausschalten',
about : 'Über SCAYT',
toggle : 'SCAYT umschalten',
options : 'Optionen',
langs : 'Sprachen',
moreSuggestions : 'Mehr Vorschläge',
ignore : 'Ignorieren',
ignoreAll : 'Alle ignorieren',
addWord : 'Wort hinzufügen',
emptyDic : 'Wörterbuchname sollte leer sein.',
optionsTab : 'Optionen',
allCaps : 'Groß geschriebenen Wörter ignorieren',
ignoreDomainNames : 'Domain-Namen ignorieren',
mixedCase : 'Wörter mit gemischte Setzkasten ignorieren',
mixedWithDigits : 'Wörter mit Zahlen ignorieren',
languagesTab : 'Sprachen',
dictionariesTab : 'Wörterbücher',
dic_field_name : 'Wörterbuchname',
dic_create : 'Erzeugen',
dic_restore : 'Wiederherstellen',
dic_delete : 'Löschen',
dic_rename : 'Umbenennen',
dic_info : 'Anfangs wird das Benutzerwörterbuch in einem Cookie gespeichert. Allerdings sind Cookies in der Größe begrenzt. Wenn das Benutzerwörterbuch bis zu einem Punkt wächst, wo es nicht mehr in einem Cookie gespeichert werden kann, wird das Benutzerwörterbuch auf dem Server gespeichert. Um Ihr persönliches Wörterbuch auf dem Server zu speichern, müssen Sie einen Namen für das Wörterbuch angeben. Falls Sie schon ein gespeicherte Wörterbuch haben, geben Sie bitte dessen Namen ein und klicken Sie auf die Schaltfläche Wiederherstellen.',
aboutTab : 'Über'
},
about :
{
title : 'Über CKEditor',
dlgTitle : 'Über CKEditor',
		moreInfo : 'Für Informationen zu den Lizenzbestimmungen besuchen Sie bitte unsere Webseite:',
copy : 'Copyright © $1. Alle Rechte vorbehalten.'
},
maximize : 'Maximieren',
minimize : 'Minimieren',
fakeobjects :
{
anchor : 'Anker',
flash : 'Flash Animation',
div : 'Seitenumbruch',
unknown : 'Unbekanntes Objekt'
},
resize : 'Zum Vergrößern ziehen',
colordialog :
{
title : 'Farbe wählen',
options : 'Farbeoptionen',
highlight : 'Hervorheben',
selected : 'Ausgewählte Farbe',
clear : 'Entfernen'
},
toolbarCollapse : 'Symbolleiste einklappen',
toolbarExpand : 'Symbolleiste ausklappen',
bidi :
{
ltr : 'Text direction from left to right', // MISSING
rtl : 'Text direction from right to left' // MISSING
}
};
/EventSimpleGUI-0.3.1.tar.gz/EventSimpleGUI-0.3.1/README.md | # Events For SimpleGui
> Status of project: in progress...
<div align="center">


<a href="https://github.com/MikalROn/EventSimpleGUI">
<img alt="GitHub" src="https://img.shields.io/badge/Github-Open%20source-green?style=for-the-badge&logo=github"/>
</a>
<a href="https://smokeshow.helpmanual.io/474z2x1c0s2u3j101i26/">
<img alt="Conv100%" src="https://img.shields.io/badge/coverage-100%25-green?style=for-the-badge">
</a>
</div>
<em>This project aims to make events in PySimpleGUI easier, more scalable and more readable.</em>
## Download
<p>Download from PyPI:</p>
````shell
$pip install EventSimpleGUI
````
## Demonstration
<h3> Creating an event function </h3>
<p>Using the <code>event</code> decorator, you can pass the element key as an argument to the decorator; when the event
is triggered, the decorated function will be called too.</p>
````python
from pysimpleevent import EventSimpleGUI
import PySimpleGUI as sg
loop = EventSimpleGUI()
@loop.event('_click')
def when_btn_was_clicked(*args):
print('Just a normal event')
layout = [[sg.B('Just a button', key='_click')]]
window = sg.Window('Just a Window.', layout)
if __name__ == '__main__':
loop.run_window(window)
````
Events can also be passed as arguments to `run_window`, like in the example below:
````python
from pysimpleevent import EventSimpleGUI
import PySimpleGUI as sg
loop = EventSimpleGUI()
def when_btn_was_clicked(*args):
event, _, _ = args
if event == '_click':
print('Just a normal event')
layout = [[sg.B('Just a button', key='_click')]]
window = sg.Window('Just a Window.', layout)
if __name__ == '__main__':
loop.run_window(window, when_btn_was_clicked)
````
You can also register an event with `add_event`:
````python
from pysimpleevent import EventSimpleGUI
import PySimpleGUI as sg
loop = EventSimpleGUI()
def when_btn_was_clicked(*args):
event, _, _ = args
if event == '_click':
print('Just a normal event')
loop.add_event(when_btn_was_clicked)
layout = [[sg.B('Just a button', key='_click')]]
window = sg.Window('Just a Window.', layout)
if __name__ == '__main__':
loop.run_window(window)
````
## Events
<p>You can use a single key string or a list of keys to trigger your events.</p>
````python
from pysimpleevent import EventSimpleGUI
import PySimpleGUI as sg
loop = EventSimpleGUI()
keys = ['_click', '_click1']
@loop.event(keys)
def when_btn_was_clicked(*args):
print('Just a normal event')
layout = [
[sg.B(f'{"Just a button":54}', key='_click')],
[sg.B(f'{"Just another button":50}', key='_click1')]
]
window = sg.Window('Just a Window.', layout, scaling=1.5)
if __name__ == '__main__':
loop.run_window(window, window_log=True)
````
<div>
#### Change log 0.2.7
- Tests implemented (97% coverage)
- Close event moved to the end of the loop
#### Change log 0.2.5
- Events can now return values in the values dict
</div>
| PypiClean |
/Font-Awesome-Flask-0.1.1.tar.gz/Font-Awesome-Flask-0.1.1/docs/usage.md | # Usage
## Configuration
Font-Awesome-Flask can be configured via the [Flask configuration API](https://flask.palletsprojects.com/en/latest/config/), using the {attr}`config <flask.Flask.config>` attribute of the {class}`Flask <flask.Flask>` object. These are the available configuration values along with their description:
| Configuration value | Default | Description |
| -------------------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `FONT_AWESOME_SERVE_LOCAL` | `False` | Whether to serve Font Awesome's resources locally or from the CDN. When set to `True`, the appropriate resource(s) will be downloaded from the CDN once, after which they will be served locally. |
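
For example, a minimal sketch of enabling local serving could look like this (the setting itself comes from the table above; the rest is the standard Flask configuration pattern):

```python
from flask import Flask

app = Flask(__name__)
# Serve Font Awesome's resources from the app itself instead of the CDN;
# per the table above, they are fetched from the CDN once and served
# locally afterwards.
app.config["FONT_AWESOME_SERVE_LOCAL"] = True
```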
## Initialization
```{include} ../README.md
:start-after: <!-- start docs-include-initialization -->
:end-before: <!-- end docs-include-initialization -->
```
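
The block above pulls the initialization snippet from the project README. As a rough sketch (assuming the common Flask extension pattern; the actual README snippet may differ slightly), initialization looks something like this:

```python
from flask import Flask
from flask_font_awesome import FontAwesome

app = Flask(__name__)
# Assumed constructor-style initialization of the extension object.
font_awesome = FontAwesome(app)
```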
## Loading Resources
Font-Awesome-Flask provides three helper methods to load Font Awesome's resources: {func}`load() <flask_font_awesome.FontAwesome.load>`, {func}`load_js() <flask_font_awesome.FontAwesome.load_js>` and {func}`load_css() <flask_font_awesome.FontAwesome.load_css>`.
Font Awesome can be used either via [Web Fonts + CSS or via SVG + JS](https://fontawesome.com/docs/web/dig-deeper/webfont-vs-svg). Use the {func}`load_css() <flask_font_awesome.FontAwesome.load_css>` method for the former, and {func}`load_js() <flask_font_awesome.FontAwesome.load_js>` for the latter. You can also use the more general {func}`load() <flask_font_awesome.FontAwesome.load>`, which can load either but defaults to `SVG + JS`.
Whichever resource(s) you end up using, you can load them by including a call to any of the methods mentioned above in the head of your base template:
<!-- prettier-ignore -->
```html
<head>
...
{{ font_awesome.load_js() }}
...
</head>
<body>
...
</body>
```
By default, this will load **all** icon styles of the **latest** available version in **minified** form from the CDN. You can change this default behaviour by specifying options such as `version` or `style`. Please refer to the [API Reference](api) for a complete list of all available options.
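
As a sketch of overriding those defaults, the `version` and `style` option names come from the text above, but the specific values here are illustrative assumptions; consult the [API Reference](api) for the accepted values:

<!-- prettier-ignore -->
```html
<head>
    ...
    {{ font_awesome.load_css(version="6.4.2", style="solid") }}
    ...
</head>
```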
## Rendering Icons
Font-Awesome-Flask provides two methods to render icons: {func}`render_icon() <flask_font_awesome.FontAwesome.render_icon>` to render a single icon, and {func}`render_stacked_icon() <flask_font_awesome.FontAwesome.render_stacked_icon>` to render a stacked icon. You can simply include these in your [Jinja](https://jinja.palletsprojects.com/en/latest/) template like so:
```
{{ font_awesome.render_icon("fas fa-house") }}
{{ font_awesome.render_stacked_icon("fas fa-square", "fas fa-house") }}
```
Both methods offer an exhaustive set of options to customize their styling. See the [API Reference](api) for more details.
| PypiClean |
/observations-0.1.4.tar.gz/observations-0.1.4/observations/r/uk_driver_deaths.py | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import csv
import numpy as np
import os
import sys
from observations.util import maybe_download_and_extract
def uk_driver_deaths(path):
"""Road Casualties in Great Britain 1969–84
`UKDriverDeaths` is a time series giving the monthly totals of car
drivers in Great Britain killed or seriously injured Jan 1969 to Dec
1984. Compulsory wearing of seat belts was introduced on 31 Jan 1983.
`Seatbelts` is more information on the same problem.
`Seatbelts` is a multiple time series, with columns
`DriversKilled`
car drivers killed.
`drivers`
same as `UKDriverDeaths`.
`front`
front-seat passengers killed or seriously injured.
`rear`
rear-seat passengers killed or seriously injured.
`kms`
distance driven.
`PetrolPrice`
petrol price.
`VanKilled`
number of van (‘light goods vehicle’) drivers killed.
`law`
0/1: was the law in effect that month?
Harvey, A.C. (1989) *Forecasting, Structural Time Series Models and the
Kalman Filter.* Cambridge University Press, pp. 519–523.
Durbin, J. and Koopman, S. J. (2001) *Time Series Analysis by State
Space Methods.* Oxford University Press. http://www.ssfpack.com/dkbook/
Args:
path: str.
Path to directory which either stores file or otherwise file will
be downloaded and extracted there.
Filename is `uk_driver_deaths.csv`.
Returns:
Tuple of np.ndarray `x_train` with 192 rows and 2 columns and
dictionary `metadata` of column headers (feature names).
"""
import pandas as pd
path = os.path.expanduser(path)
filename = 'uk_driver_deaths.csv'
if not os.path.exists(os.path.join(path, filename)):
url = 'http://dustintran.com/data/r/datasets/UKDriverDeaths.csv'
maybe_download_and_extract(path, url,
save_file_name='uk_driver_deaths.csv',
resume=False)
data = pd.read_csv(os.path.join(path, filename), index_col=0,
parse_dates=True)
x_train = data.values
metadata = {'columns': data.columns}
return x_train, metadata | PypiClean |
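

# Usage sketch (not part of the original module): the call below is
# illustrative only; '~/data' is an arbitrary cache directory chosen for
# the example.
if __name__ == '__main__':
    x_train, metadata = uk_driver_deaths('~/data')
    print(x_train.shape)              # (192, 2), per the docstring above
    print(list(metadata['columns']))  # column headers of the Seatbelts data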