from data_output_stream import DataOutputStream
from column import Column
class InsertRow:
# Constants for special offsets
# The field with this offset is a primary key.
IS_PKEY = -1
# The field with this offset has a null value.
IS_NULL = -2
def __init__(self, table, values):
"""
Constructs an InsertRow object for a row containing the specified
values that is to be inserted in the specified table.
:param table: the table that this row is to be inserted into
:param values: the collection of field values for the row
"""
self._table = table
self._values = values
# These objects will be created by the marshall() method.
self._key = None
self._data = None
def marshall(self):
"""
Takes the collection of values for this InsertRow and marshalls
them into a key/data pair.
:return: None; the key and data are stored in self._key and self._data
"""
def get_key(self):
"""
Returns the key in the key/data pair for this row.
:return: the key
"""
return self._key
def get_data(self):
"""
Returns the data item in the key/data pair for this row.
:return: the data
"""
return self._data
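# Illustrative usage sketch (not part of the original module). The table and
# values below are hypothetical placeholders; marshall() is expected to fill
# in self._key and self._data before get_key()/get_data() are called.
#
#   row = InsertRow(some_table, ["Alice", 42, None])
#   row.marshall()
#   key = row.get_key()
#   data = row.get_data()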
|
We are your partners in creating enjoyable gastronomic activities that your team won’t soon forget.
Are you looking for a unique opportunity to bring your company together?
Is your company planning a team building event, a corporate gathering or organizing a conference and looking for social activities? Our team will create a memorable experience just for you. Get in touch with us!
We are passionate about local gastronomy and strive for each and every guest to fall in love with the city and its food. Our expert local guides will unveil the tastes of the real Porto. Our food tours are consistently rated 5 stars on TripAdvisor, Facebook and Google!
Whether for a small team or a large group, we can create a unique team building event just for you. We have crafted boutique experiences for teams of 10 as well as larger food events with nearly 250 people taking simultaneous food tours around the city. Whatever your group size, we are here to help you.
Hungry? Good. Loosen a belt notch for these superb half-day food tours, where you'll sample everything from Porto's best slow-roast pork sandwich to éclairs, fine wines, cheese and coffee. |
import urlparse
class History:
def __init__(self, history):
self.history = history
def push(self, url):
"""Adds an url to the history if the url is not already the most
recent entry. The scheme and network location (host, port,
username, password), if present, are removed from the URL before
storing it. If there are more than 5 entries in the list the
oldes entry will be removed.
"""
# normalize the URL by removing scheme and netloc. This avoids
# problems with the URLs when running ringo behind reverse
# proxies.
split = urlparse.urlsplit(url)
normalized_url = urlparse.urlunsplit(("", "") + split[2:])
if not self.history or normalized_url != self.history[-1]:
self.history.append(normalized_url)
if len(self.history) > 5:
del self.history[0]
def pop(self, num=1):
"""Returns a url form the history and deletes the item and all
decendants from the history. On default it will return the last
recent entry in the history. Optionally you can provide a number
to the pop method to get e.g the 2 most recent entry."""
url = None
for x in range(num):
if len(self.history) > 0:
url = self.history.pop()
return url
def last(self):
"""Returns the last element from the history stack without
removing it"""
if len(self.history) > 0:
return self.history[-1]
return None
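# Minimal usage sketch (not part of the original module), assuming Python 2
# because of the ``urlparse`` import above.
if __name__ == "__main__":
    history = History([])
    history.push("http://example.com/users/1")
    history.push("http://example.com/users/1/edit")
    print history.last()   # -> /users/1/edit (scheme and host are stripped)
    print history.pop()    # -> /users/1/edit, and it is removed
    print history.last()   # -> /users/1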
|
Design vector artwork that you can send to Illustrator CC or Photoshop CC easily. With Adobe Illustrator Draw, you know that your files, fonts, design assets, and settings are there for you whenever and wherever you need them. The app offers support for various stylus types, has zoom up to 64 times, and lets you work with multiple images and drawing layers.
If you enjoy the tools from Adobe and are ready to make your designs stand out, Adobe Illustrator Draw is the app for you.
The latest Adobe app for iPad is here. Comp CC is available for download on the App Store.
Adobe has turned Adobe Illustrator Draw into a universal app that's optimized for the iPhone, iPad, and iPod touch. |
from django.contrib import admin
from economy.models import Deposit, SociBankAccount, SociProduct, SociSession, ProductOrder
@admin.register(SociBankAccount)
class SociBankAccountAdmin(admin.ModelAdmin):
list_display = ['user', 'card_uuid', 'balance']
readonly_fields = ['balance']
@admin.register(SociProduct)
class SociProductAdmin(admin.ModelAdmin):
list_display = ['sku_number', 'icon', 'name', 'price', 'description', 'start']
@admin.register(Deposit)
class DepositAdmin(admin.ModelAdmin):
list_display = ['id', 'user', 'amount', 'has_receipt', 'is_valid']
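    # The entries below are computed columns referenced from ``list_display``:
    # ``user`` returns the owner of the deposit's account, while
    # ``has_receipt``/``is_valid`` set ``.boolean = True`` so the Django admin
    # renders them as true/false icons.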
@staticmethod
def user(deposit: Deposit):
return deposit.account.user
def has_receipt(self, deposit):
return bool(deposit.receipt)
has_receipt.boolean = True
def is_valid(self, deposit):
return deposit.is_valid
is_valid.boolean = True
@admin.register(SociSession)
class SociSessionAdmin(admin.ModelAdmin):
pass
@admin.register(ProductOrder)
class ProductOrderAdmin(admin.ModelAdmin):
list_display = ['id', 'product', 'order_size', 'source', 'cost']
@staticmethod
def cost(product_order: ProductOrder):
return product_order.cost
|
Austin Convention Center, Exhibit Hall 4!
Come meet the Angels at our Marketplace booth for a chance to win a free HelpJess T-Shirt and Cap.
One lucky winner will win a revolutionary new virtual shopping experience!
HelpJess is launching a world-changing shopping technology.
Stop by on Saturday 18th for a sneak peek and meet the Angels!
Everyone who attends the HelpJess booth will receive a private invitation to be our very first users in the world! |
import types
import six
import inspect
import re
from warnings import warn
__version__ = '0.2.5'
__author__ = 'Philipp Sommer'
try:
from matplotlib.cbook import dedent as dedents
except ImportError:
from textwrap import dedent as _dedents
def dedents(s):
return '\n'.join(_dedents(s or '').splitlines()[1:])
substitution_pattern = re.compile(
r"""(?s)(?<!%)(%%)*%(?!%) # uneven number of %
\((?P<key>.*?)\)# key enclosed in brackets""", re.VERBOSE)
summary_patt = re.compile(r'(?s).*?(?=(\n\s*\n)|$)')
class _StrWithIndentation(object):
"""A convenience class that indents the given string if requested through
the __str__ method"""
def __init__(self, s, indent=0, *args, **kwargs):
self._indent = '\n' + ' ' * indent
self._s = s
def __str__(self):
return self._indent.join(self._s.splitlines())
def __repr__(self):
return repr(self._indent.join(self._s.splitlines()))
def safe_modulo(s, meta, checked='', print_warning=True, stacklevel=2):
"""Safe version of the modulo operation (%) of strings
Parameters
----------
s: str
string to apply the modulo operation with
meta: dict or tuple
meta information to insert (usually via ``s % meta``)
checked: {'KEY', 'VALUE'}, optional
Security parameter for the recursive structure of this function. It can
be set to 'VALUE' if an error shall be raised when facing a TypeError
or ValueError or to 'KEY' if an error shall be raised when facing a
KeyError. This parameter is mainly for internal processes.
print_warning: bool
If True and a key in `s` does not exist in `meta`, a warning is raised
stacklevel: int
The stacklevel for the :func:`warnings.warn` function
Examples
--------
The effects are demonstrated by this example::
>>> from docrep import safe_modulo
>>> s = "That's %(one)s string %(with)s missing 'with' and %s key"
>>> s % {'one': 1} # raises KeyError because of missing 'with'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
KeyError: 'with'
>>> s % {'one': 1, 'with': 2} # raises TypeError because of '%s'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: not enough arguments for format string
>>> safe_modulo(s, {'one': 1})
"That's 1 string %(with)s missing 'with' and %s key"
"""
try:
return s % meta
except (ValueError, TypeError, KeyError):
# replace the missing fields by %%
keys = substitution_pattern.finditer(s)
for m in keys:
key = m.group('key')
if not isinstance(meta, dict) or key not in meta:
if print_warning:
warn("%r is not a valid key!" % key, SyntaxWarning,
stacklevel)
full = m.group()
s = s.replace(full, '%' + full)
if 'KEY' not in checked:
return safe_modulo(s, meta, checked=checked + 'KEY',
print_warning=print_warning,
stacklevel=stacklevel)
if not isinstance(meta, dict) or 'VALUE' in checked:
raise
s = re.sub(r"""(?<!%)(%%)*%(?!%) # uneven number of %
\s*(\w|$) # format strings""", '%\g<0>', s,
flags=re.VERBOSE)
return safe_modulo(s, meta, checked=checked + 'VALUE',
print_warning=print_warning, stacklevel=stacklevel)
class DocstringProcessor(object):
"""Class that is intended to process docstrings
It is, though only to a minor extent, inspired by the
:class:`matplotlib.docstring.Substitution` class.
Examples
--------
Create docstring processor via::
>>> from docrep import DocstringProcessor
>>> d = DocstringProcessor(doc_key='My doc string')
And then use it as a decorator to process the docstring::
>>> @d
... def doc_test():
... '''That's %(doc_key)s'''
... pass
>>> print(doc_test.__doc__)
That's My doc string
Use the :meth:`get_sectionsf` method to extract Parameter sections (or
others) from the docstring for later use (and make sure that the
docstring is dedented)::
>>> @d.get_sectionsf('docstring_example',
... sections=['Parameters', 'Examples'])
... @d.dedent
... def doc_test(a=1, b=2):
... '''
... That's %(doc_key)s
...
... Parameters
... ----------
... a: int, optional
... A dummy parameter description
... b: int, optional
... A second dummy parameter
...
... Examples
... --------
... Some dummy example doc'''
... print(a)
>>> @d.dedent
... def second_test(a=1, b=2):
... '''
... My second function where I want to use the docstring from
... above
...
... Parameters
... ----------
... %(docstring_example.parameters)s
...
... Examples
... --------
... %(docstring_example.examples)s'''
... pass
>>> print(second_test.__doc__)
My second function where I want to use the docstring from
above
<BLANKLINE>
Parameters
----------
a: int, optional
A dummy parameter description
b: int, optional
A second dummy parameter
<BLANKLINE>
Examples
--------
Some dummy example doc
Another example uses non-dedented docstrings::
>>> @d.get_sectionsf('not_dedented')
... def doc_test2(a=1):
... '''That's the summary
...
... Parameters
... ----------
... a: int, optional
... A dummy parameter description'''
... print(a)
These sections must then be used with the :meth:`with_indent` method to
indent the inserted parameters::
>>> @d.with_indent(4)
... def second_test2(a=1):
... '''
... My second function where I want to use the docstring from
... above
...
... Parameters
... ----------
... %(not_dedented.parameters)s'''
... pass
"""
#: :class:`dict`. Dictionary containing the compiled patterns to identify
#: the Parameters, Other Parameters, Warnings and Notes sections in a
#: docstring
patterns = {}
#: :class:`dict`. Dictionary containing the parameters that are used for
#: substitution.
params = {}
#: sections that behave the same as the `Parameter` section by defining a
#: list
param_like_sections = ['Parameters', 'Other Parameters', 'Returns',
'Raises']
#: sections that include (possibly not list-like) text
text_sections = ['Warnings', 'Notes', 'Examples', 'See Also',
'References']
#: The action on how to react to classes in python 2
#:
#: When calling::
#:
#: >>> @docstrings
#: ... class NewClass(object):
#: ... """%(replacement)s"""
#:
#: This normally raises an AttributeError, because the ``__doc__`` attribute
#: of a class in python 2 is not writable. This attribute may be one of
#: ``'ignore', 'raise' or 'warn'``
python2_classes = 'ignore'
def __init__(self, *args, **kwargs):
"""
Parameters
----------
``*args`` and ``**kwargs``
Parameters that shall be used for the substitution. Note that you can
only provide either ``*args`` or ``**kwargs``; furthermore, most of the
methods like `get_sectionsf` require ``**kwargs`` to be provided."""
if len(args) and len(kwargs):
raise ValueError("Only positional or keyword args are allowed")
self.params = args or kwargs
patterns = {}
all_sections = self.param_like_sections + self.text_sections
for section in self.param_like_sections:
patterns[section] = re.compile(
'(?s)(?<=%s\n%s\n)(.+?)(?=\n\n\S+|$)' % (
section, '-'*len(section)))
all_sections_patt = '|'.join(
'%s\n%s\n' % (s, '-'*len(s)) for s in all_sections)
# examples and see also
for section in self.text_sections:
patterns[section] = re.compile(
'(?s)(?<=%s\n%s\n)(.+?)(?=%s|$)' % (
section, '-'*len(section), all_sections_patt))
self._extended_summary_patt = re.compile(
'(?s)(.+?)(?=%s|$)' % all_sections_patt)
self._all_sections_patt = re.compile(all_sections_patt)
self.patterns = patterns
def __call__(self, func):
"""
Substitute in a docstring of a function with :attr:`params`
Parameters
----------
func: function
function with the documentation whose sections
shall be inserted from the :attr:`params` attribute
See Also
--------
dedent: also dedents the doc
with_indent: also indents the doc"""
doc = func.__doc__ and safe_modulo(func.__doc__, self.params,
stacklevel=3)
return self._set_object_doc(func, doc)
def get_sections(self, s, base,
sections=['Parameters', 'Other Parameters']):
"""
Method that extracts the specified sections out of the given string if
(and only if) the docstring follows the numpy documentation guidelines
[1]_. Note that the section either must appear in the
:attr:`param_like_sections` or the :attr:`text_sections` attribute.
Parameters
----------
s: str
Docstring to split
base: str
base to use in the :attr:`sections` attribute
sections: list of str
sections to look for. Each section must be followed by a newline
character ('\\n') and a bar of '-' (following the numpy (napoleon)
docstring conventions).
Returns
-------
str
The replaced string
References
----------
.. [1] https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt
See Also
--------
delete_params, keep_params, delete_types, keep_types, delete_kwargs:
For manipulating the docstring sections
save_docstring:
for saving an entire docstring
"""
params = self.params
# Remove the summary and dedent the rest
s = self._remove_summary(s)
for section in sections:
key = '%s.%s' % (base, section.lower().replace(' ', '_'))
params[key] = self._get_section(s, section)
return s
def _remove_summary(self, s):
# if the string does not start with one of the sections, we remove the
# summary
if not self._all_sections_patt.match(s.lstrip()):
# remove the summary
lines = summary_patt.sub('', s, 1).splitlines()
# look for the first line with content
first = next((i for i, l in enumerate(lines) if l.strip()), 0)
# dedent the lines
s = dedents('\n' + '\n'.join(lines[first:]))
return s
def _get_section(self, s, section):
try:
return self.patterns[section].search(s).group(0).rstrip()
except AttributeError:
return ''
def get_sectionsf(self, *args, **kwargs):
"""
Decorator method to extract sections from a function docstring
Parameters
----------
``*args`` and ``**kwargs``
See the :meth:`get_sections` method. Note that the first argument
will be the docstring of the specified function
Returns
-------
function
Wrapper that takes a function as input and registers its sections
via the :meth:`get_sections` method"""
def func(f):
doc = f.__doc__
self.get_sections(doc or '', *args, **kwargs)
return f
return func
def _set_object_doc(self, obj, doc, stacklevel=3):
"""Convenience method to set the __doc__ attribute of a python object
"""
if isinstance(obj, types.MethodType) and six.PY2:
obj = obj.im_func
try:
obj.__doc__ = doc
except AttributeError: # probably python2 class
if (self.python2_classes != 'raise' and
(inspect.isclass(obj) and six.PY2)):
if self.python2_classes == 'warn':
warn("Cannot modify docstring of classes in python2!",
stacklevel=stacklevel)
else:
raise
return obj
def dedent(self, func):
"""
Dedent the docstring of a function and substitute with :attr:`params`
Parameters
----------
func: function
function with the documentation to dedent and whose sections
shall be inserted from the :attr:`params` attribute"""
doc = func.__doc__ and self.dedents(func.__doc__, stacklevel=4)
return self._set_object_doc(func, doc)
def dedents(self, s, stacklevel=3):
"""
Dedent a string and substitute with the :attr:`params` attribute
Parameters
----------
s: str
string to dedent and insert the sections of the :attr:`params`
attribute
stacklevel: int
The stacklevel for the warning raised in :func:`safe_modulo` when
encountering an invalid key in the string"""
s = dedents(s)
return safe_modulo(s, self.params, stacklevel=stacklevel)
def with_indent(self, indent=0):
"""
Substitute in the docstring of a function with indented :attr:`params`
Parameters
----------
indent: int
The number of spaces that the substitution should be indented
Returns
-------
function
Wrapper that takes a function as input and substitutes its
``__doc__`` with the indented versions of :attr:`params`
See Also
--------
with_indents, dedent"""
def replace(func):
doc = func.__doc__ and self.with_indents(
func.__doc__, indent=indent, stacklevel=4)
return self._set_object_doc(func, doc)
return replace
def with_indents(self, s, indent=0, stacklevel=3):
"""
Substitute a string with the indented :attr:`params`
Parameters
----------
s: str
The string in which to substitute
indent: int
The number of spaces that the substitution should be indented
stacklevel: int
The stacklevel for the warning raised in :func:`safe_modulo` when
encountering an invalid key in the string
Returns
-------
str
The substituted string
See Also
--------
with_indent, dedents"""
# we make a new dictionary with objects that indent the original
# strings if necessary. Note that the first line is not indented
d = {key: _StrWithIndentation(val, indent)
for key, val in six.iteritems(self.params)}
return safe_modulo(s, d, stacklevel=stacklevel)
def delete_params(self, base_key, *params):
"""
Method to delete a parameter from a parameter documentation.
This method deletes the given `param` from the `base_key` item in the
:attr:`params` dictionary and creates a new item with the original
documentation without the description of the param. This method works
for the ``'Parameters'`` sections.
The new docstring without the selected parts will be accessible as
``base_key + '.no_' + '|'.join(params)``, e.g.
``'original_key.no_param1|param2'``.
See the :meth:`keep_params` method for an example.
Parameters
----------
base_key: str
key in the :attr:`params` dictionary
``*params``
str. Parameter identifier of which the documentations shall be
deleted
See Also
--------
delete_types, keep_params"""
self.params[
base_key + '.no_' + '|'.join(params)] = self.delete_params_s(
self.params[base_key], params)
@staticmethod
def delete_params_s(s, params):
"""
Delete the given parameters from a string
Same as :meth:`delete_params` but does not use the :attr:`params`
dictionary
Parameters
----------
s: str
The string of the parameters section
params: list of str
The names of the parameters to delete
Returns
-------
str
The modified string `s` without the descriptions of `params`
"""
patt = '(?s)' + '|'.join(
'(?<=\n)' + s + '\s*:.+?\n(?=\S+|$)' for s in params)
return re.sub(patt, '', '\n' + s.strip() + '\n').strip()
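    # Illustrative example (assumed behaviour, not from the original docs):
    # given a Parameters section such as
    #
    #     a: int
    #         The first parameter
    #     b: int
    #         The second parameter
    #
    # calling ``delete_params_s(section, ['b'])`` should return only the
    # ``a`` entry together with its indented description.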
def delete_kwargs(self, base_key, args=None, kwargs=None):
"""
Deletes the ``*args`` or ``**kwargs`` part from the parameters section
Either `args` or `kwargs` must not be None. The resulting key will be
stored in
``base_key + 'no_args'``
if `args` is not None and `kwargs` is None
``base_key + 'no_kwargs'``
if `args` is None and `kwargs` is not None
``base_key + 'no_args_kwargs'``
if `args` is not None and `kwargs` is not None
Parameters
----------
base_key: str
The key in the :attr:`params` attribute to use
args: None or str
The string for the args to delete
kwargs: None or str
The string for the kwargs to delete
Notes
-----
The type name of `args` in the base has to be like ````*<args>````
(i.e. the `args` argument preceded by a ``'*'`` and enclosed by double
``'`'``). Similarly, the type name of `kwargs` in `s` has to be like
````**<kwargs>````"""
if not args and not kwargs:
warn("Neither args nor kwargs are given. I do nothing for %s" % (
base_key))
return
ext = '.no' + ('_args' if args else '') + ('_kwargs' if kwargs else '')
self.params[base_key + ext] = self.delete_kwargs_s(
self.params[base_key], args, kwargs)
@classmethod
def delete_kwargs_s(cls, s, args=None, kwargs=None):
"""
Deletes the ``*args`` or ``**kwargs`` part from the parameters section
Either `args` or `kwargs` must not be None.
Parameters
----------
s: str
The string to delete the args and kwargs from
args: None or str
The string for the args to delete
kwargs: None or str
The string for the kwargs to delete
Notes
-----
The type name of `args` in `s` has to be like ````*<args>```` (i.e. the
`args` argument preceded by a ``'*'`` and enclosed by double ``'`'``).
Similarly, the type name of `kwargs` in `s` has to be like
````**<kwargs>````"""
if not args and not kwargs:
return s
types = []
if args is not None:
types.append('`?`?\*%s`?`?' % args)
if kwargs is not None:
types.append('`?`?\*\*%s`?`?' % kwargs)
return cls.delete_types_s(s, types)
def delete_types(self, base_key, out_key, *types):
"""
Method to delete a parameter from a parameter documentation.
This method deletes the given `param` from the `base_key` item in the
:attr:`params` dictionary and creates a new item with the original
documentation without the description of the param. This method works
for ``'Results'`` like sections.
See the :meth:`keep_types` method for an example.
Parameters
----------
base_key: str
key in the :attr:`params` dictionary
out_key: str
Extension for the base key (the final key will be like
``'%s.%s' % (base_key, out_key)``
``*types``
str. The type identifier of which the documentations shall be deleted
See Also
--------
delete_params"""
self.params['%s.%s' % (base_key, out_key)] = self.delete_types_s(
self.params[base_key], types)
@staticmethod
def delete_types_s(s, types):
"""
Delete the given types from a string
Same as :meth:`delete_types` but does not use the :attr:`params`
dictionary
Parameters
----------
s: str
The string of the returns like section
types: list of str
The type identifiers to delete
Returns
-------
str
The modified string `s` without the descriptions of `types`
"""
patt = '(?s)' + '|'.join(
'(?<=\n)' + s + '\n.+?\n(?=\S+|$)' for s in types)
return re.sub(patt, '', '\n' + s.strip() + '\n',).strip()
def keep_params(self, base_key, *params):
"""
Method to keep only specific parameters from a parameter documentation.
This method extracts the given `param` from the `base_key` item in the
:attr:`params` dictionary and creates a new item with the original
documentation with only the description of the param. This method works
for ``'Parameters'`` like sections.
The new docstring with the selected parts will be accessible as
``base_key + '.' + '|'.join(params)``, e.g.
``'original_key.param1|param2'``
Parameters
----------
base_key: str
key in the :attr:`params` dictionary
``*params``
str. Parameter identifier of which the documentations shall be
in the new section
See Also
--------
keep_types, delete_params
Examples
--------
To extract just two parameters from a function and reuse their
docstrings, you can type::
>>> from docrep import DocstringProcessor
>>> d = DocstringProcessor()
>>> @d.get_sectionsf('do_something')
... def do_something(a=1, b=2, c=3):
... '''
... That's %(doc_key)s
...
... Parameters
... ----------
... a: int, optional
... A dummy parameter description
... b: int, optional
... A second dummy parameter that will be excluded
... c: float, optional
... A third parameter'''
... print(a)
>>> d.keep_params('do_something.parameters', 'a', 'c')
>>> @d.dedent
... def do_less(a=1, c=4):
... '''
... My second function with only `a` and `c`
...
... Parameters
... ----------
... %(do_something.parameters.a|c)s'''
... pass
>>> print(do_less.__doc__)
My second function with only `a` and `c`
<BLANKLINE>
Parameters
----------
a: int, optional
A dummy parameter description
c: float, optional
A third parameter
Equivalently, you can use the :meth:`delete_params` method to remove
parameters::
>>> d.delete_params('do_something.parameters', 'b')
>>> @d.dedent
... def do_less(a=1, c=4):
... '''
... My second function with only `a` and `c`
...
... Parameters
... ----------
... %(do_something.parameters.no_b)s'''
... pass
"""
self.params[base_key + '.' + '|'.join(params)] = self.keep_params_s(
self.params[base_key], params)
@staticmethod
def keep_params_s(s, params):
"""
Keep the given parameters from a string
Same as :meth:`keep_params` but does not use the :attr:`params`
dictionary
Parameters
----------
s: str
The string of the parameters like section
params: list of str
The parameter names to keep
Returns
-------
str
The modified string `s` with only the descriptions of `params`
"""
patt = '(?s)' + '|'.join(
'(?<=\n)' + s + '\s*:.+?\n(?=\S+|$)' for s in params)
return ''.join(re.findall(patt, '\n' + s.strip() + '\n')).rstrip()
def keep_types(self, base_key, out_key, *types):
"""
Method to keep only specific parameters from a parameter documentation.
This method extracts the given `type` from the `base_key` item in the
:attr:`params` dictionary and creates a new item with the original
documentation with only the description of the type. This method works
for the ``'Results'`` sections.
Parameters
----------
base_key: str
key in the :attr:`params` dictionary
out_key: str
Extension for the base key (the final key will be like
``'%s.%s' % (base_key, out_key)``
``*types``
str. The type identifier of which the documentations shall be
in the new section
See Also
--------
delete_types, keep_params
Examples
--------
To extract just two return arguments from a function and reuse their
docstrings, you can type::
>>> from docrep import DocstringProcessor
>>> d = DocstringProcessor()
>>> @d.get_sectionsf('do_something', sections=['Returns'])
... def do_something():
... '''
... That's %(doc_key)s
...
... Returns
... -------
... float
... A random number
... int
... A random integer'''
... return 1.0, 4
>>> d.keep_types('do_something.returns', 'int_only', 'int')
>>> @d.dedent
... def do_less():
... '''
... My second function that only returns an integer
...
... Returns
... -------
... %(do_something.returns.int_only)s'''
... return do_something()[1]
>>> print(do_less.__doc__)
My second function that only returns an integer
<BLANKLINE>
Returns
-------
int
A random integer
Equivalently, you can use the :meth:`delete_types` method to remove
parameters::
>>> d.delete_types('do_something.returns', 'no_float', 'float')
>>> @d.dedent
... def do_less():
... '''
... My second function with only `a` and `c`
...
... Returns
... ----------
... %(do_something.returns.no_float)s'''
... return do_something()[1]
"""
self.params['%s.%s' % (base_key, out_key)] = self.keep_types_s(
self.params[base_key], types)
@staticmethod
def keep_types_s(s, types):
"""
Keep the given types from a string
Same as :meth:`keep_types` but does not use the :attr:`params`
dictionary
Parameters
----------
s: str
The string of the returns like section
types: list of str
The type identifiers to keep
Returns
-------
str
The modified string `s` with only the descriptions of `types`
"""
patt = '|'.join('(?<=\n)' + s + '\n(?s).+?\n(?=\S+|$)' for s in types)
return ''.join(re.findall(patt, '\n' + s.strip() + '\n')).rstrip()
def save_docstring(self, key):
"""
Descriptor method to save a docstring from a function
Like the :meth:`get_sectionsf` method this method serves as a
descriptor for functions but saves the entire docstring"""
def func(f):
self.params[key] = f.__doc__ or ''
return f
return func
def get_summary(self, s, base=None):
"""
Get the summary of the given docstring
This method extracts the summary from the given docstring `s` which is
basically the part until two newlines appear
Parameters
----------
s: str
The docstring to use
base: str or None
A key under which the summary shall be stored in the :attr:`params`
attribute. If not None, the summary will be stored in
``base + '.summary'``. Otherwise, it will not be stored at all
Returns
-------
str
The extracted summary"""
summary = summary_patt.search(s).group()
if base is not None:
self.params[base + '.summary'] = summary
return summary
def get_summaryf(self, *args, **kwargs):
"""
Extract the summary from a function docstring
Parameters
----------
``*args`` and ``**kwargs``
See the :meth:`get_summary` method. Note that the first argument
will be the docstring of the specified function
Returns
-------
function
Wrapper that takes a function as input and registers its summary
via the :meth:`get_summary` method"""
def func(f):
doc = f.__doc__
self.get_summary(doc or '', *args, **kwargs)
return f
return func
def get_extended_summary(self, s, base=None):
"""Get the extended summary from a docstring
This here is the extended summary
Parameters
----------
s: str
The docstring to use
base: str or None
A key under which the summary shall be stored in the :attr:`params`
attribute. If not None, the summary will be stored in
``base + '.summary_ext'``. Otherwise, it will not be stored at
all
Returns
-------
str
The extracted extended summary"""
# Remove the summary and dedent
s = self._remove_summary(s)
ret = ''
if not self._all_sections_patt.match(s):
m = self._extended_summary_patt.match(s)
if m is not None:
ret = m.group().strip()
if base is not None:
self.params[base + '.summary_ext'] = ret
return ret
def get_extended_summaryf(self, *args, **kwargs):
"""Extract the extended summary from a function docstring
This function can be used as a decorator to extract the extended
summary of a function docstring (similar to :meth:`get_sectionsf`).
Parameters
----------
``*args`` and ``**kwargs``
See the :meth:`get_extended_summary` method. Note that the first
argument will be the docstring of the specified function
Returns
-------
function
Wrapper that takes a function as input and registers its summary
via the :meth:`get_extended_summary` method"""
def func(f):
doc = f.__doc__
self.get_extended_summary(doc or '', *args, **kwargs)
return f
return func
def get_full_description(self, s, base=None):
"""Get the full description from a docstring
This here and the line above is the full description (i.e. the
combination of the :meth:`get_summary` and the
:meth:`get_extended_summary`) output
Parameters
----------
s: str
The docstring to use
base: str or None
A key under which the description shall be stored in the
:attr:`params` attribute. If not None, the summary will be stored
in ``base + '.full_desc'``. Otherwise, it will not be stored
at all
Returns
-------
str
The extracted full description"""
summary = self.get_summary(s)
extended_summary = self.get_extended_summary(s)
ret = (summary + '\n\n' + extended_summary).strip()
if base is not None:
self.params[base + '.full_desc'] = ret
return ret
def get_full_descriptionf(self, *args, **kwargs):
"""Extract the full description from a function docstring
This function can be used as a decorator to extract the full
descriptions of a function docstring (similar to
:meth:`get_sectionsf`).
Parameters
----------
``*args`` and ``**kwargs``
See the :meth:`get_full_description` method. Note that the first
argument will be the docstring of the specified function
Returns
-------
function
Wrapper that takes a function as input and registers its summary
via the :meth:`get_full_description` method"""
def func(f):
doc = f.__doc__
self.get_full_description(doc or '', *args, **kwargs)
return f
return func
|
Want to Opt Back Into Legrand?
At Legrand, we aim to provide our customers with the most up-to-date, detailed, and relevant information to help their business. When you recently opted out of our e-mail list, you may have been trying to simply put a stop to irrelevant messages, but were unfortunately removed from all Legrand e-mails and campaigns. We'd like to give you the opportunity to opt back into specific brands, product lines, and campaigns that still benefit your business while avoiding the ones that are no longer relevant. Fill out the form below to jump back in! |
# -*- coding:utf-8 -*-
from mako import runtime, filters, cache
UNDEFINED = runtime.UNDEFINED
STOP_RENDERING = runtime.STOP_RENDERING
__M_dict_builtin = dict
__M_locals_builtin = locals
_magic_number = 10
_modified_time = 1443802885.4031692
_enable_loop = True
_template_filename = '/usr/local/lib/python3.4/dist-packages/nikola/data/themes/base/templates/comments_helper_googleplus.tmpl'
_template_uri = 'comments_helper_googleplus.tmpl'
_source_encoding = 'utf-8'
_exports = ['comment_link_script', 'comment_form', 'comment_link']
def render_body(context,**pageargs):
__M_caller = context.caller_stack._push_frame()
try:
__M_locals = __M_dict_builtin(pageargs=pageargs)
__M_writer = context.writer()
__M_writer('\n\n')
__M_writer('\n\n')
__M_writer('\n')
return ''
finally:
context.caller_stack._pop_frame()
def render_comment_link_script(context):
__M_caller = context.caller_stack._push_frame()
try:
__M_writer = context.writer()
__M_writer('\n')
return ''
finally:
context.caller_stack._pop_frame()
def render_comment_form(context,url,title,identifier):
__M_caller = context.caller_stack._push_frame()
try:
__M_writer = context.writer()
__M_writer('\n<script src="https://apis.google.com/js/plusone.js"></script>\n<div class="g-comments"\n data-href="')
__M_writer(str(url))
__M_writer('"\n data-first_party_property="BLOGGER"\n data-view_type="FILTERED_POSTMOD">\n</div>\n')
return ''
finally:
context.caller_stack._pop_frame()
def render_comment_link(context,link,identifier):
__M_caller = context.caller_stack._push_frame()
try:
__M_writer = context.writer()
__M_writer('\n<div class="g-commentcount" data-href="')
__M_writer(str(link))
__M_writer('"></div>\n<script src="https://apis.google.com/js/plusone.js"></script>\n')
return ''
finally:
context.caller_stack._pop_frame()
"""
__M_BEGIN_METADATA
{"uri": "comments_helper_googleplus.tmpl", "source_encoding": "utf-8", "filename": "/usr/local/lib/python3.4/dist-packages/nikola/data/themes/base/templates/comments_helper_googleplus.tmpl", "line_map": {"33": 16, "39": 2, "57": 12, "43": 2, "44": 5, "45": 5, "16": 0, "51": 11, "21": 9, "22": 14, "23": 17, "56": 12, "55": 11, "29": 16, "63": 57}}
__M_END_METADATA
"""
|
Taxon: Dichapetalum cymosum (Hook.) Engl.
National Germplasm Resources Laboratory, Beltsville, Maryland. URL: https://npgsweb.ars-grin.gov/gringlobal/taxonomydetail.aspx?id=409900. Accessed 26 April 2019. |
#!/usr/bin/python
import getpass
import json
import os
from globalconfig import passwd, url, usr
from jinja2 import Environment, FileSystemLoader
from vraapiclient import reservation
#Get the current directory
currentDirectory = os.path.dirname(os.path.abspath(__file__))
client = reservation.ReservationClient(url, usr, passwd)
#Set up jinja2 environment
env = Environment(loader=FileSystemLoader(currentDirectory))
template = env.get_template('reservationTemplate.json')
#Get all business groups
businessGroups = client.getAllBusinessGroups(show="json")
#Loop through each group in the businessGroups object and pull out
#id and name, format the reservation name and inject both values
#in to the params dict.
for group in businessGroups:
#This is where we format the reservation name.
#[ComputeResource]-Res-BusinessGroupName(nospaces)
name = 'CLTEST01-Res-{groupname}'.format(groupname = group['name'].replace(" ",""))
#Set all configurable parameters here
params = {
'ReservationName': name,
'SubTenantId': group['id'],
}
#Create the JSON payload for the POST
#This is where params are added to the json payload
payload = json.loads(template.render(params=params))
#Attempt to create each reservation. Catch any errors and continue
try:
reservation = client.createReservation(payload)
print "Reservation created: {id}".format(id=reservation)
except Exception, e:
pass
|
Discover what Discount Tree Services of Preston can do for you!
The following services are available and we are happy to visit your premises with no obligation to discuss your requirements and offer expert advice.
Based in Preston we are perfectly located to serve the North West!
is the process in which a tree is removed down to ground level.
reducing the crown is the process of reducing the height and spread of the crown of the tree.
raising the crown is the process of removing the lower branches of the tree; this provides clearance and lets more light into the surrounding area.
is the process in which a climber will remove small pieces of the tree one at a time, sometimes lowering the branches down on ropes, if needed.
removing some or all of the tree to prevent personal injury or damage to property.
removing the top out of the tree to allow more sunlight.
All our work is carried out to BS3998 (British Standard). |
# Copyright 2014 Camptocamp SA (author: Guewen Baconnier)
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
from odoo.tests import common
class TestAutomaticWorkflowBase(common.TransactionCase):
def create_sale_order(self, workflow, override=None):
sale_obj = self.env['sale.order']
partner_values = {'name': 'Imperator Caius Julius Caesar Divus'}
partner = self.env['res.partner'].create(partner_values)
product_values = {'name': 'Bread',
'list_price': 5,
'type': 'product'}
product = self.env['product.product'].create(product_values)
self.product_uom_unit = self.env.ref('uom.product_uom_unit')
values = {
'partner_id': partner.id,
'order_line': [(0, 0, {
'name': product.name,
'product_id': product.id,
'product_uom': self.product_uom_unit.id,
'price_unit': product.list_price,
'product_uom_qty': 1})],
'workflow_process_id': workflow.id,
}
if override:
values.update(override)
order = sale_obj.create(values)
# Create an inventory to add stock qty to the lines
# With this commit https://goo.gl/fRTLM3 the moves that were
# force-assigned are not transferred in the picking
for line in order.order_line:
if line.product_id.type == 'product':
inventory = self.env['stock.inventory'].create({
'name': 'Inventory for move %s' % line.name,
'filter': 'product',
'product_id': line.product_id.id,
'line_ids': [(0, 0, {
'product_id': line.product_id.id,
'product_qty': line.product_uom_qty,
'location_id':
self.env.ref('stock.stock_location_stock').id
})]
})
inventory.post_inventory()
return order
def create_full_automatic(self, override=None):
workflow_obj = self.env['sale.workflow.process']
values = workflow_obj.create({
'name': 'Full Automatic',
'picking_policy': 'one',
'validate_order': True,
'validate_picking': True,
'create_invoice': True,
'validate_invoice': True,
'invoice_date_is_order_date': True,
})
if override:
values.update(override)
return values
def progress(self):
self.env['automatic.workflow.job'].run()
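# Illustrative sketch (not part of the original module): a hypothetical test
# case built on the helpers above, assuming the standard Odoo sale workflow.
#
#   class TestFullAutomatic(TestAutomaticWorkflowBase):
#       def test_full_automatic(self):
#           workflow = self.create_full_automatic()
#           order = self.create_sale_order(workflow)
#           self.progress()  # runs the automatic.workflow.job logic
#           self.assertEqual(order.state, 'sale')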
|
Loren Marshall Foundation has started CPR training program in Alaska with the help of a “CPR Anytime” training kit introduced by the Laerdal Corporation and the American Heart Association. This training kit is used for teaching life saving techniques to the learners. The highlight is the “hands only CPR” for community members. The “CPR Anytime” training is a simple course that can be completed in less than half an hour. |
# -*- encoding: utf-8 -*-
from types import FunctionType
import tornado.web
from tornado.util import unicode_type
try:
import urlparse # py2
except ImportError:
import urllib.parse as urlparse # py3
try:
from urllib import urlencode # py2
except ImportError:
from urllib.parse import urlencode # py3
def decorate_all(decorator_list):
def is_method_need_to_decorate(func_name, func_obj, check_param):
"""check if an object should be decorated"""
methods = ["get", "head", "post", "put", "delete", "patch"]
return (func_name in methods and
isinstance(func_obj, FunctionType) and
getattr(func_obj, check_param, True))
"""decorate all instance methods (unless excluded) with the same decorator"""
class DecorateAll(type):
def __new__(cls, name, bases, dct):
for func_name, func_obj in dct.items():
for item in decorator_list:
decorator, check_param = item
if is_method_need_to_decorate(func_name, func_obj, check_param):
dct[func_name] = decorator(dct[func_name])
return super(DecorateAll, cls).__new__(cls, name, bases, dct)
def __setattr__(self, func_name, func_obj):
for item in decorator_list:
decorator, check_param = item
if is_method_need_to_decorate(func_name, func_obj, check_param):
func_obj = decorator(func_obj)
super(DecorateAll, self).__setattr__(func_name, func_obj)
return DecorateAll
def make_list(val):
if isinstance(val, list):
return val
else:
return [val]
def real_ip(request):
# split is for X-Forwarded-For header that can consist of many IPs: X-Forwarded-For: client, proxy1, proxy2
return (request.headers.get('X-Real-Ip', None) or request.headers.get('X-Forwarded-For', None) or
request.remote_ip or '127.0.0.1').split(',')[0]
HTTPError = tornado.web.HTTPError
ITERABLE = (set, frozenset, list, tuple)
def update_url(url, update_args=None, remove_args=None):
scheme, sep, url_new = url.partition('://')
if len(scheme) == len(url):
scheme = ''
else:
url = '//' + url_new
url_split = urlparse.urlsplit(url)
query_dict = urlparse.parse_qs(url_split.query, keep_blank_values=True)
# add args
if update_args:
query_dict.update(update_args)
# remove args
if remove_args:
query_dict = dict([(k, query_dict.get(k)) for k in query_dict if k not in remove_args])
query = make_qs(query_dict)
return urlparse.urlunsplit((scheme, url_split.netloc, url_split.path, query, url_split.fragment))
def make_qs(query_args):
def _encode(s):
if isinstance(s, unicode_type):
return s.encode('utf-8')
else:
return s
kv_pairs = []
for key, val in query_args.items():
if val is not None:
encoded_key = _encode(key)
if isinstance(val, ITERABLE):
for v in val:
kv_pairs.append((encoded_key, _encode(v)))
else:
kv_pairs.append((encoded_key, _encode(val)))
qs = urlencode(kv_pairs, doseq=True)
return qs
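# Minimal usage sketch (not part of the original module); exact query-string
# ordering may differ depending on dict ordering.
if __name__ == '__main__':
    print(make_qs({'q': 'test', 'tags': ['a', 'b'], 'skip': None}))
    # -> e.g. q=test&tags=a&tags=b  (None values are dropped)
    print(update_url('http://example.com/path?a=1', update_args={'b': '2'},
                     remove_args=['a']))
    # -> http://example.com/path?b=2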
|
'Aumakua Hawai'i is a 501(c)(3) charitable non-profit providing support to the Native Hawaiian Community through revenue generated by its ownership in for-profit companies. 'Aumakua Hawai'i, through its subsidiaries, supports, either directly or indirectly through other non-profit organizations, Native Hawaiian families with children in need. Programs provide housing assistance, educational programs, job training, employment assistance and health improvement opportunities.
"""
This script updates the Russian {{Перевод недели}} template according to the
Translation of the Week project.
Usage:
python tow.py
"""
import re
import pywikibot
META_TEMPLATE = "Template:TOWThisweek"
LOCAL_TEMPLATE = "Шаблон:Перевод недели"
ORIGINAL_ID = "original"
LOCAL_ID = "russian"
ARCHIVE_PAGE = "Проект:Переводы/Невыполненные переводы недели"
ARCHIVE_ALL = False
ARCHIVE_LABEL = "<!-- NapalmBot: insert here -->"
ARCHIVE_DEFAULT = "???"
ARCHIVE_FORMAT = "|-\n| {local} || {original}\n"
DEFAULT_TEXT = "'''[[Шаблон:Перевод недели|Укажите название статьи]]'''"
UPDATE_COMMENT = "Обновление перевода недели."
ARCHIVE_COMMENT = "Архивация перевода недели."
def parse_meta_template():
"""Return (link, langcode, pagename) tuple."""
site = pywikibot.Site("meta", "meta")
template = pywikibot.Page(site, META_TEMPLATE)
match = re.search(r"\[\[:([A-Za-z\-]+):(.*?)\]\]", template.text)
return (match.group(0), match.group(1), match.group(2))
def get_sitelink(site, lang, name):
"""Return interwiki of [[:lang:name]] in current site."""
try:
page = pywikibot.Page(pywikibot.Site(lang), name)
result = pywikibot.ItemPage.fromPage(page).getSitelink(site)
except:
result = None
return result
def get_regexps():
"""
Return (original, local) re object tuple for matching links:
$1 — prefix,
$2 — link,
$3 — postfix.
"""
regexp = r"(<span id\s*=\s*\"{}\">)(.*?)(</span>)"
wrap = lambda x: re.compile(regexp.format(x))
return (wrap(ORIGINAL_ID), wrap(LOCAL_ID))
def archive(site, local, original):
"""Archive link if neccessary."""
if ARCHIVE_PAGE == "":
return
if local != DEFAULT_TEXT:
if not ARCHIVE_ALL:
match = re.match(r"\[\[(.*?)[\]|]", local)
if match is None:
return
try:
if pywikibot.Page(site, match.group(1)).exists():
return
except:
return
else:
local = ARCHIVE_DEFAULT
page = pywikibot.Page(site, ARCHIVE_PAGE)
text = page.text
pos = text.find(ARCHIVE_LABEL)
if pos == -1:
return
text = text[:pos] + ARCHIVE_FORMAT.format(local=local, original=original) + text[pos:]
page.text = text
page.save(ARCHIVE_COMMENT, minor=False)
def main():
"""Main script function."""
site = pywikibot.Site()
(interwiki, lang, name) = parse_meta_template()
local = get_sitelink(site, lang, name)
if local:
local = "[[{}]]".format(local)
else:
local = DEFAULT_TEXT
(interwiki_re, local_re) = get_regexps()
template = pywikibot.Page(site, LOCAL_TEMPLATE)
result = template.text
old_interwiki = interwiki_re.search(result).group(2)
old_local = local_re.search(result).group(2)
if interwiki == old_interwiki:
return
else:
archive(site, old_local, old_interwiki)
result = local_re.sub("\\1" + local + "\\3", result)
result = interwiki_re.sub("\\1" + interwiki + "\\3", result)
template.text = result
template.save(UPDATE_COMMENT, minor=False)
if __name__ == "__main__":
main()
|
Our luxurious four bedroom condominiums are Snowmass lodging at its finest. Their large size and convenient location make them the perfect Stonebridge Condominiums to enjoy with family and friends for group ski vacations, reunions, and other events. The first level contains two large bedrooms off the living room, each with its own private bath and separate vanity/dressing area. Follow the spiral staircase to the second level to discover two additional large bedrooms, which share an adjoining full bath and vanity/dressing area. High ceilings with exposed beams, built-in natural stone fireplaces, and large balconies deliver total ski chalet ambience. A buffet bar divides each unit’s fully equipped kitchen from the living room and dining area. Sip your morning coffee, après-ski refreshment, or refreshing summertime beverage on a spacious deck overlooking Snowmass’ ski slopes and relax into your stay at Stonebridge.
Interested in staying in one of our 4 Bedroom condominium units? Click Book Now or call (970) 923-4323 to reserve today. Be sure to keep an eye on Facebook, Twitter (@SBCondominiums), Instagram (@StonebridgeCondominiums) and StonebridgeCondominiums.com for booking specials, photos of Snowmass, and area news. Sign up for our Stonebridge Exclusive Offers to receive quarterly updates and deals. |
'''
Copyright 2012 Joe Harris
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
'''
import json
import time
from urlparse import urlsplit, urlunsplit
from zope.interface import implements
from twisted.python import failure
from twisted.internet import reactor, protocol
from twisted.web.client import Agent
from twisted.web.iweb import IBodyProducer
from twisted.web.http_headers import Headers
from twisted.internet.defer import succeed
from txhttprelay.parser import ParserError
# try and import the verifying SSL context from txverifyssl
try:
from txverifyssl.context import VerifyingSSLContext as SSLContextFactory
except ImportError:
# if txverifyssl is not installed, default to the built-in SSL context; this works but has no SSL verification
from twisted.internet.ssl import ClientContextFactory
class SSLContextFactory(ClientContextFactory):
def getContext(self, hostname, port):
return ClientContextFactory.getContext(self)
class RequestError(Exception):
pass
class HttpRequest(object):
METHODS = ('get', 'post', 'put', 'delete', 'head', 'options')
def __init__(self, id='', method='', url='', expected=200, parser=None):
method = method.lower().strip()
if method not in self.METHODS:
raise RequestError('invalid HTTP method: {}'.format(method))
self.method = method
self.url = urlsplit(url)
self.expected = expected
self.parser = parser
self.headers = {}
self.body = None
self.set_header('User-Agent', 'txhttprelay')
if self.method == 'post':
self.set_header('Content-Type', 'application/x-www-form-urlencoded')
self.id = id
self.start_time = 0
def __unicode__(self):
return u'<HttpRequest ({} {})>'.format(
self.method.upper(),
urlunsplit(self.url)
)
def __str__(self):
return self.__unicode__()
def start_timer(self):
self.start_time = time.time()
def set_header(self, name, value):
self.headers.setdefault(str(name), []).append(str(value))
def set_body(self, body):
if body:
self.body = self.parser.request(body)
class HttpResponse(object):
def __init__(self, request, code, headers, body):
self.request = request
self.code = int(code)
self.headers = list(headers)
self.body = str(body)
def ok(self):
return int(self.request.expected) == int(self.code)
def data(self):
if not self.request.parser:
return self.body
try:
return self.request.parser.response(self.body)
except ParserError:
return None
class TransportError(Exception):
pass
class StringProducer(object):
implements(IBodyProducer)
def __init__(self, data):
self.body = data
self.length = len(self.body)
def startProducing(self, consumer):
consumer.write(self.body)
return succeed(None)
def pauseProducing(self):
pass
def stopProducing(self):
pass
class StringReceiver(protocol.Protocol):
def __init__(self, response, callback):
self.response = response
self.callback = callback
def dataReceived(self, data):
self.response.body += data
def connectionLost(self, reason):
self.callback(self.response)
class HttpTransport(object):
def __init__(self, request):
self.request = request
def _request(self):
method = self.request.method.upper()
scheme = self.request.url.scheme.lower()
if scheme == 'https':
context = SSLContextFactory()
if hasattr(context, 'set_expected_host'):
context.set_expected_host(self.request.url.netloc)
agent = Agent(reactor, context)
elif scheme == 'http':
agent = Agent(reactor)
else:
raise TransportError('only HTTP and HTTPS schemes are supported')
producer = StringProducer(self.request.body) if self.request.body else None
self.request.start_timer()
return agent.request(
method,
urlunsplit(self.request.url),
Headers(self.request.headers),
producer
)
def go(self, callback=None):
if not callback:
raise TransportError('go() requires a callback as the only parameter')
def _got_response(raw_response):
if isinstance(raw_response, failure.Failure):
error_body = json.dumps({'error':raw_response.getErrorMessage()})
response = HttpResponse(request=self.request, code=0, headers={}, body=error_body)
callback(response)
else:
response = HttpResponse(
request=self.request,
code=raw_response.code,
headers=raw_response.headers.getAllRawHeaders(),
body=''
)
raw_response.deliverBody(StringReceiver(response, callback))
self._request().addBoth(_got_response)
'''
eof
'''
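# Illustrative usage sketch (not part of the original module). It assumes a
# running Twisted reactor; no parser is needed for a plain GET whose raw body
# is wanted via HttpResponse.data().
#
#   request = HttpRequest(id='example', method='get',
#                         url='https://api.example.com/status', expected=200)
#   def on_response(response):
#       print 'ok' if response.ok() else 'failed', response.code
#   HttpTransport(request).go(on_response)
#   reactor.run()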
|
# -*- coding: utf-8 -*-
# Generated by Django 1.10 on 2016-11-08 19:18
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
import image.models
import image.storage
import uuid
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Albumn',
fields=[
('id', models.CharField(default=uuid.uuid4, max_length=64, primary_key=True, serialize=False, verbose_name='Activation key')),
('name', models.CharField(db_index=True, max_length=60, unique=True)),
('weight', models.IntegerField(default=0)),
('slug', models.SlugField(max_length=150, unique=True)),
('created', models.DateTimeField(auto_now_add=True, db_index=True)),
('active', models.BooleanField(default=True)),
],
),
migrations.CreateModel(
name='Image',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(blank=True, max_length=60, null=True)),
('image_key', models.CharField(default=uuid.uuid4, max_length=64, verbose_name='Activation key')),
('image', models.ImageField(storage=image.storage.OverwriteStorage(), upload_to=image.models.image_upload_path)),
('created', models.DateTimeField(auto_now_add=True)),
('active', models.BooleanField(default=True)),
('weight', models.IntegerField(default=0)),
('albumn', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='image.Albumn')),
],
),
]
|
Racer Tech's Super Heavy Duty Radius Rod Kit will replace the weak factory radius rods with a substantially stronger design. Here's the fix for the weakest link on the RZR XP 1000 & RZR XP 900. These Radius Rods are a direct bolt-on replacement and work with the OEM trailing arm and knuckle/spindle. They are 1.25" dia. USA tubing and use 5/8" FK heim joints. The rods have a little room for adjustments for the inclined rider to tweak the handling a little bit if desired. The adjustment provided from this kit is the static camber settings. By adjusting the rear camber one can control how aggressive the bite of the rear tire is in a corner and find a preferred balance of understeer / oversteer which is how much the machine pushes or slides out in a corner. This setting while controlling handling going into a corner, can also fine tune traction coming out of a turn.
Put together the ultimate front suspension on your RZR XP 900 or RZR XP 4 900 with Racer Tech's proven Replacement Arm Kit, SHD Tie Rods and our long lasting Delrin Pivot Bushings and High Precision Sleeves. They come standard in matte black, or we can powder coat them to the colour of your choice.
##########################################################################
#
# Copyright (c) 2011-2012, John Haddon. All rights reserved.
# Copyright (c) 2011-2012, Image Engine Design Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above
# copyright notice, this list of conditions and the following
# disclaimer.
#
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided with
# the distribution.
#
# * Neither the name of John Haddon nor the names of
# any other contributors to this software may be used to endorse or
# promote products derived from this software without specific prior
# written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
# IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
# LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
##########################################################################
import unittest
import imath
import IECore
import Gaffer
import GafferTest
import GafferUI
import GafferUITest
class GadgetTest( GafferUITest.TestCase ) :
def testTransform( self ) :
g = GafferUI.TextGadget( "hello" )
self.assertEqual( g.getTransform(), imath.M44f() )
t = imath.M44f().scale( imath.V3f( 2 ) )
g.setTransform( t )
self.assertEqual( g.getTransform(), t )
c1 = GafferUI.LinearContainer()
c1.addChild( g )
c2 = GafferUI.LinearContainer()
c2.addChild( c1 )
t2 = imath.M44f().translate( imath.V3f( 1, 2, 3 ) )
c2.setTransform( t2 )
self.assertEqual( g.fullTransform(), t * t2 )
self.assertEqual( g.fullTransform( c1 ), t )
def testToolTip( self ) :
g = GafferUI.TextGadget( "hello" )
self.assertEqual( g.getToolTip( IECore.LineSegment3f() ), "" )
g.setToolTip( "hi" )
self.assertEqual( g.getToolTip( IECore.LineSegment3f() ), "hi" )
def testDerivationInPython( self ) :
class MyGadget( GafferUI.Gadget ) :
def __init__( self ) :
GafferUI.Gadget.__init__( self )
self.layersRendered = set()
def bound( self ) :
return imath.Box3f( imath.V3f( -20, 10, 2 ), imath.V3f( 10, 15, 5 ) )
def doRenderLayer( self, layer, style ) :
self.layersRendered.add( layer )
mg = MyGadget()
# we can't call the methods of the gadget directly in python to test the
# bindings, as that doesn't prove anything (we're not exercising the virtual
# method override code in the wrapper). instead cause c++ to call through
# for us by adding our gadget to a parent and making calls to the parent.
c = GafferUI.IndividualContainer()
c.addChild( mg )
self.assertEqual( c.bound().size(), mg.bound().size() )
with GafferUI.Window() as w :
GafferUI.GadgetWidget( c )
w.setVisible( True )
self.waitForIdle( 1000 )
self.assertEqual( mg.layersRendered, set( GafferUI.Gadget.Layer.values.values() ) )
def testStyle( self ) :
g = GafferUI.TextGadget( "test" )
l = GafferUI.LinearContainer()
l.addChild( g )
self.assertEqual( g.getStyle(), None )
self.assertEqual( l.getStyle(), None )
self.assertTrue( g.style().isSame( GafferUI.Style.getDefaultStyle() ) )
self.assertTrue( l.style().isSame( GafferUI.Style.getDefaultStyle() ) )
s = GafferUI.StandardStyle()
l.setStyle( s )
self.assertTrue( l.getStyle().isSame( s ) )
self.assertEqual( g.getStyle(), None )
self.assertTrue( g.style().isSame( s ) )
self.assertTrue( l.style().isSame( s ) )
def testTypeNamePrefixes( self ) :
self.assertTypeNamesArePrefixed( GafferUI )
self.assertTypeNamesArePrefixed( GafferUITest )
def testRenderRequestOnStyleChange( self ) :
g = GafferUI.Gadget()
cs = GafferTest.CapturingSlot( g.renderRequestSignal() )
self.assertEqual( len( cs ), 0 )
s = GafferUI.StandardStyle()
g.setStyle( s )
self.assertEqual( len( cs ), 1 )
self.assertTrue( cs[0][0].isSame( g ) )
s2 = GafferUI.StandardStyle()
g.setStyle( s2 )
self.assertEqual( len( cs ), 2 )
self.assertTrue( cs[1][0].isSame( g ) )
s2.setColor( GafferUI.StandardStyle.Color.BackgroundColor, imath.Color3f( 1 ) )
self.assertEqual( len( cs ), 3 )
self.assertTrue( cs[2][0].isSame( g ) )
def testHighlighting( self ) :
g = GafferUI.Gadget()
self.assertEqual( g.getHighlighted(), False )
g.setHighlighted( True )
self.assertEqual( g.getHighlighted(), True )
g.setHighlighted( False )
self.assertEqual( g.getHighlighted(), False )
cs = GafferTest.CapturingSlot( g.renderRequestSignal() )
g.setHighlighted( False )
self.assertEqual( len( cs ), 0 )
g.setHighlighted( True )
self.assertEqual( len( cs ), 1 )
self.assertTrue( cs[0][0].isSame( g ) )
def testVisibility( self ) :
g1 = GafferUI.Gadget()
self.assertEqual( g1.getVisible(), True )
self.assertEqual( g1.visible(), True )
g1.setVisible( False )
self.assertEqual( g1.getVisible(), False )
self.assertEqual( g1.visible(), False )
g2 = GafferUI.Gadget()
g1.addChild( g2 )
self.assertEqual( g2.getVisible(), True )
self.assertEqual( g2.visible(), False )
g1.setVisible( True )
self.assertEqual( g2.visible(), True )
g3 = GafferUI.Gadget()
g2.addChild( g3 )
self.assertEqual( g3.getVisible(), True )
self.assertEqual( g3.visible(), True )
g1.setVisible( False )
self.assertEqual( g3.getVisible(), True )
self.assertEqual( g3.visible(), False )
self.assertEqual( g3.visible( relativeTo = g2 ), True )
self.assertEqual( g3.visible( relativeTo = g1 ), True )
def testVisibilitySignals( self ) :
g = GafferUI.Gadget()
cs = GafferTest.CapturingSlot( g.renderRequestSignal() )
self.assertEqual( len( cs ), 0 )
g.setVisible( True )
self.assertEqual( len( cs ), 0 )
g.setVisible( False )
self.assertEqual( len( cs ), 1 )
self.assertEqual( cs[0][0], g )
g.setVisible( False )
self.assertEqual( len( cs ), 1 )
self.assertEqual( cs[0][0], g )
g.setVisible( True )
self.assertEqual( len( cs ), 2 )
self.assertEqual( cs[1][0], g )
def testBoundIgnoresHiddenChildren( self ) :
g = GafferUI.Gadget()
t = GafferUI.TextGadget( "text" )
g.addChild( t )
b = t.bound()
self.assertEqual( g.bound(), b )
t.setVisible( False )
# we still want to know what the bound would be for t,
# even when it's hidden.
self.assertEqual( t.bound(), b )
# but we don't want it taken into account when computing
# the parent bound.
self.assertEqual( g.bound(), imath.Box3f() )
def testVisibilityChangedSignal( self ) :
g = GafferUI.Gadget()
g["a"] = GafferUI.Gadget()
g["a"]["c"] = GafferUI.Gadget()
g["b"] = GafferUI.Gadget()
events = []
def visibilityChanged( gadget ) :
events.append( ( gadget, gadget.visible() ) )
connnections = [
g.visibilityChangedSignal().connect( visibilityChanged ),
g["a"].visibilityChangedSignal().connect( visibilityChanged ),
g["a"]["c"].visibilityChangedSignal().connect( visibilityChanged ),
g["b"].visibilityChangedSignal().connect( visibilityChanged ),
]
g["b"].setVisible( True )
self.assertEqual( len( events ), 0 )
g["b"].setVisible( False )
self.assertEqual( len( events ), 1 )
self.assertEqual( events[0], ( g["b"], False ) )
g["b"].setVisible( True )
self.assertEqual( len( events ), 2 )
self.assertEqual( events[1], ( g["b"], True ) )
g["a"].setVisible( True )
self.assertEqual( len( events ), 2 )
g["a"].setVisible( False )
self.assertEqual( len( events ), 4 )
self.assertEqual( events[-2], ( g["a"]["c"], False ) )
self.assertEqual( events[-1], ( g["a"], False ) )
g["a"].setVisible( True )
self.assertEqual( len( events ), 6 )
self.assertEqual( events[-2], ( g["a"]["c"], True ) )
self.assertEqual( events[-1], ( g["a"], True ) )
g["a"]["c"].setVisible( False )
self.assertEqual( len( events ), 7 )
self.assertEqual( events[-1], ( g["a"]["c"], False ) )
g.setVisible( False )
self.assertEqual( len( events ), 10 )
self.assertEqual( events[-3], ( g["a"], False ) )
self.assertEqual( events[-2], ( g["b"], False ) )
self.assertEqual( events[-1], ( g, False ) )
g["a"]["c"].setVisible( True )
self.assertEqual( len( events ), 10 )
def testEnabled( self ) :
g1 = GafferUI.Gadget()
self.assertEqual( g1.getEnabled(), True )
self.assertEqual( g1.enabled(), True )
g1.setEnabled( False )
self.assertEqual( g1.getEnabled(), False )
self.assertEqual( g1.enabled(), False )
g2 = GafferUI.Gadget()
g1.addChild( g2 )
self.assertEqual( g2.getEnabled(), True )
self.assertEqual( g2.enabled(), False )
g1.setEnabled( True )
self.assertEqual( g2.enabled(), True )
g3 = GafferUI.Gadget()
g2.addChild( g3 )
self.assertEqual( g3.getEnabled(), True )
self.assertEqual( g3.enabled(), True )
g1.setEnabled( False )
self.assertEqual( g3.getEnabled(), True )
self.assertEqual( g3.enabled(), False )
self.assertEqual( g3.enabled( relativeTo = g2 ), True )
self.assertEqual( g3.enabled( relativeTo = g1 ), True )
if __name__ == "__main__":
unittest.main()
U.S. Labor Secretary Thomas Perez is promoting a Rhode Island labor union’s apprenticeship program as a model for lifting the underemployed into better jobs.
Perez visited a plumber and pipefitter union in East Providence on Friday.
Perez says apprenticeships are as viable as college as pathways to prosperity, but without the debt.
He toured a federally-funded training program with officials including Gov. Gina Raimondo and Rhode Island’s two U.S. senators, Jack Reed and Sheldon Whitehouse. All are Democrats.
The United Association of Plumbers and Pipefitters, Local 51, recently launched its six-week, pre-apprenticeship program through the Real Jobs Rhode Island initiative. After learning the basics of welding and other skills, the trainees move into five-year paid apprenticeships.
The Obama administration last month announced $90 million in funding for similar programs nationwide.
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db.models import F
from django.db import models, migrations
"""
PROCEDURE
- Delete aux column
"""
class Migration(migrations.Migration):
dependencies = [
('AndroidRequests', '0022_auto_20170310_1525'),
]
operations = [
# remove aux columns
migrations.RemoveField(
model_name='servicesbybusstop',
name='busStop_id_aux',
),
migrations.RemoveField(
model_name='servicestopdistance',
name='busStop_id_aux',
),
migrations.RemoveField(
model_name='eventforbusstop',
name='busStop_id_aux',
),
migrations.RemoveField(
model_name='nearbybuseslog',
name='busStop_id_aux',
),
# Service model
migrations.RemoveField(
model_name='servicesbybusstop',
name='service_id_aux',
),
# Token model
migrations.RemoveField(
model_name='poseintrajectoryoftoken',
name='token_id_aux',
),
migrations.RemoveField(
model_name='activetoken',
name='token_id_aux',
),
]
What better place to keep coins than a counting sheep? This zippered, fleece-covered coin purse looks just like a sheep, right down to the tip of its tail. From the artisans of Mai Vietnamese Handicrafts, based in Ho Chi Minh City.
# Yum package checker and installer
__author__ = "Frederico Martins"
__license__ = "GPLv3"
__version__ = 1.2
from os import system
from output_handler import OutputWaiting
class Package(object):
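    """Checks which of the requested packages are missing (via rpm -q) and installs any missing ones with yum."""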
def __init__(self, packages):
if type(packages) is list or type(packages) is tuple or type(packages) is set:
pass
elif type(packages) is str:
packages = [packages]
else:
OutputWaiting.Info('Packages to be asserted must be in string or list format')
exit(-1)
if self.Check(packages):
self.Install('Installing {}'.format(' and '.join(self.packages)), True)
def Check(self, packages):
self.packages = []
for each in packages:
if system('rpm -q {} > /dev/null'.format(each)):
self.packages.append(each)
return self.packages
@OutputWaiting
def Install(self):
return system('yum -y install {} > /dev/null'.format(' '.join(self.packages)))
WOKINGHAM, England – Christie®, a leader in creating and delivering the world’s best visual and audio experiences, is pleased to announce that its latest Christie HS Series projectors will be used at the GLOW Eindhoven international light art festival from November 10 to 17 at Eindhoven city center.
A total of 12 Christie D20WU-HS projectors will be utilized in an impressive video mapping installation on the face of the 19th century Catharinakerk, a neo-gothic Roman Catholic church located in the center of Eindhoven. The projectors – which are being provided by Christie partner Sahara Benelux – are the lightest and brightest 1DLP® laser phosphor projectors available.
At Catharinakerk, visitors will see the Confluence video mapping installation powered by the D20WU-HS projectors, which can operate at full brightness on a single 15A, 110V circuit. The original project – created by Ocubo – is inspired by the confluence of rivers and streams in Holland. The kinetic art will also see virtual dancers guide the viewer through the film, with flourishes of color emphasizing the church's stunning architecture in an array of eye-catching geometric combinations. Christie partner, Sahara Benelux, is also providing 10 Christie UHD 551-L LCD panels for Glowie, an interactive art installation combining chatbot technology, voice commands and light art.
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Libvdwxc(AutotoolsPackage):
"""Portable C library of density functionals with van der Waals
interactions for density functional theory"""
homepage = "https://libvdwxc.gitlab.io/libvdwxc/"
url = "https://launchpad.net/libvdwxc/stable/0.4.0/+download/libvdwxc-0.4.0.tar.gz"
version("0.4.0", sha256="3524feb5bb2be86b4688f71653502146b181e66f3f75b8bdaf23dd1ae4a56b33")
variant("mpi", default=True, description="Enable MPI support")
variant("pfft", default=False, description="Enable support for PFFT")
depends_on("fftw-api@3")
depends_on("mpi@2:", when="+mpi")
depends_on("pfft", when="+pfft")
# pfft needs MPI
conflicts("~mpi", "+pfft")
conflicts("^fftw~mpi", "+mpi")
def configure_args(self):
spec = self.spec
args = [
"--{0}-pfft".format(
"with" if self.spec.satisfies("+pfft") else "without"
),
"MPICC=", # make sure both variables are always unset
"MPIFC=", # otherwise the configure scripts complains
]
if spec.satisfies("+mpi"):
# work around b0rken MPI detection: the MPI detection tests are
# run with CC instead of MPICC, triggering an error. So, setting
# CC/FC to the MPI compiler wrappers.
args += [
"--with-mpi",
"CC={0}".format(spec["mpi"].mpicc),
"FC={0}".format(spec["mpi"].mpifc),
]
else:
args += ["--without-mpi"]
return args
Advertiser Disclosure: The credit card offers that appear on this website are from credit card companies from which this site receives compensation. Fees might reduce earnings on the account. Fee if account is closed early: $25 if the account is closed within ninety days of opening. To get the most out of your checking account promotions, let's break down the linked accounts. All Citizens Bank and Charter One checking accounts now also include the bank's signature innovation, the $5 Overdraft Pass, protecting customers from overdraft fees on transactions of $5 or less.
How to get it: Open a Chase SavingsSM account online or in person using a coupon emailed to you through the promotion page. There are a number of other checking account bonus opportunities with SunTrust Bank. Open a new SunTrust Select Checking account and meet the requirements to receive $250. Customers must meet the monthly bonus requirements by maintaining a linked savings or money market account and posting $1,000 or more in direct deposits in a calendar month to get the $10 monthly bonus.
CIT Bank (FDIC #58978) received GoBankingRates' Best Banks Award in 2017, as their Premier High Yield Savings account offers a very generous 1.75% APY with a $100 minimum. 9 Total Relationship Balance: the sum of balances in the Select Checking account PLUS statement-linked SunTrust deposit accounts (savings, checking, money market, or CDs), IRA or Brokerage accounts introduced through SunTrust Investment Services, Inc.
Sign up today for free to get the latest banking performance strategies, tactics and insight delivered right to your inbox. Find out what thousands of banks, regulators and industry experts are doing to drive performance. Bottom Line: The $150 bonus isn't the highest around, but not having Direct Deposit as a requirement will make this one a winner for residents in an eligible service area. 1Non-TD fees reimbursed when the minimum daily balance is at least $2,500 in the checking account.
Pay Bills: Manage your account and avoid late fees by paying bills online.
Enjoy the freedom to bank anytime, anywhere, with features like Mobile Banking with Mobile Check Deposit, over 850 no-fee ATMs, and a CheckCard with ScoreCard® Rewards. Whether you use a debit card or checks or not, though, pay close attention to your account balance or you'll get hit with big overdraft fees. Check the fees, however: the Silver checking package has an $8.95 monthly fee for paper statements, a $6.95 monthly fee for e-statements, and no fee if you open a Package Money Market Savings account and have combined monthly direct deposits of $1,000 or more OR a combined average daily deposit balance of at least $1,500.
You should never spread your money too thin by trying to open too many accounts with a minimum deposit requirement at once. To qualify for a bonus, you must open your first new personal checking account and have at least one direct deposit of $500 or more post to your new checking account within 60 days of account opening. Direct deposits must total at least $500, $2,000 or $5,000 per month, depending on which account you've signed up for, within 60 days of opening.
Kids in kindergarten through fifth grade who read 10 books this summer and fill out the bank's summer reading form will receive $10 in their new or existing Young Saver accounts at TD Bank upon submitting their reading forms to the nearest branch. Several months back, I went ahead and applied for 3 separate offers for these accounts, and I simply wanted to let you know how my experience went.
Have monthly direct deposits totaling $500 or more. Discover Bank, which has $71.8 billion in assets, is subject to the price caps on debit swipe fees.
import tempfile
import subprocess
from PIL import Image, ImageEnhance
from docassemble.base.functions import word, get_config, get_language, ReturnValue
from docassemble.base.core import DAFile, DAFileList, DAFileCollection, DAStaticFile
import PyPDF2
from docassemble.base.logger import logmessage
from docassemble.base.error import DAError
import pycountry
import sys
import os
import shutil
import re
QPDF_PATH = 'qpdf'
def safe_pypdf_reader(filename):
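    """Opens a PDF with PyPDF2, first repairing it with qpdf into a temporary file if the initial read fails."""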
try:
return PyPDF2.PdfFileReader(open(filename, 'rb'))
except PyPDF2.utils.PdfReadError:
new_filename = tempfile.NamedTemporaryFile(prefix="datemp", mode="wb", suffix=".pdf", delete=False)
qpdf_subprocess_arguments = [QPDF_PATH, filename, new_filename.name]
try:
result = subprocess.run(qpdf_subprocess_arguments, timeout=60).returncode
except subprocess.TimeoutExpired:
result = 1
if result != 0:
raise Exception("Call to qpdf failed for template " + str(filename) + " where arguments were " + " ".join(qpdf_subprocess_arguments))
return PyPDF2.PdfFileReader(open(new_filename.name, 'rb'))
def ocr_finalize(*pargs, **kwargs):
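    """Collects the results of per-page OCR tasks: when pdf=True, concatenates the per-page PDFs into the target DAFile and returns it; otherwise joins the recognized text of all pages with form feeds."""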
#sys.stderr.write("ocr_finalize started")
if kwargs.get('pdf', False):
target = kwargs['target']
dafilelist = kwargs['dafilelist']
filename = kwargs['filename']
file_list = []
input_number = target.number
for parg in pargs:
if type(parg) is list:
for item in parg:
if type(item) is ReturnValue:
if isinstance(item.value, dict):
if 'page' in item.value:
file_list.append([item.value['indexno'], int(item.value['page']), item.value['doc']._pdf_page_path(int(item.value['page']))])
else:
file_list.append([item.value['indexno'], 0, item.value['doc'].path()])
else:
if type(parg) is ReturnValue:
                    if isinstance(parg.value, dict):
                        if 'page' in parg.value:
file_list.append([parg.value['indexno'], int(parg.value['page']), parg.value['doc']._pdf_page_path(int(parg.value['page']))])
else:
file_list.append([parg.value['indexno'], 0, parg.value['doc'].path()])
from docassemble.base.pandoc import concatenate_files
pdf_path = concatenate_files([y[2] for y in sorted(file_list, key=lambda x: x[0]*10000 + x[1])])
target.initialize(filename=filename, extension='pdf', mimetype='application/pdf', reinitialize=True)
shutil.copyfile(pdf_path, target.file_info['path'])
del target.file_info
target._make_pdf_thumbnail(1, both_formats=True)
target.commit()
target.retrieve()
return (target, dafilelist)
output = dict()
#index = 0
for parg in pargs:
#sys.stderr.write("ocr_finalize: index " + str(index) + " is a " + str(type(parg)) + "\n")
if type(parg) is list:
for item in parg:
#sys.stderr.write("ocr_finalize: sub item is a " + str(type(item)) + "\n")
if type(item) is ReturnValue and isinstance(item.value, dict):
output[int(item.value['page'])] = item.value['text']
else:
            if type(parg) is ReturnValue and isinstance(parg.value, dict):
output[int(parg.value['page'])] = parg.value['text']
#index += 1
#sys.stderr.write("ocr_finalize: assembling output\n")
final_output = "\f".join([output[x] for x in sorted(output.keys())])
#sys.stderr.write("ocr_finalize: final output has length " + str(len(final_output)) + "\n")
return final_output
def get_ocr_language(language):
langs = get_available_languages()
if language is None:
language = get_language()
ocr_langs = get_config("ocr languages")
if ocr_langs is None:
ocr_langs = dict()
if language in langs:
lang = language
else:
if language in ocr_langs and ocr_langs[language] in langs:
lang = ocr_langs[language]
else:
try:
pc_lang = pycountry.languages.get(alpha_2=language)
lang_three_letter = pc_lang.alpha_3
if lang_three_letter in langs:
lang = lang_three_letter
else:
if 'eng' in langs:
lang = 'eng'
else:
lang = langs[0]
raise Exception("could not get OCR language for language " + str(language) + "; using language " + str(lang))
except Exception as the_error:
if 'eng' in langs:
lang = 'eng'
else:
lang = langs[0]
raise Exception("could not get OCR language for language " + str(language) + "; using language " + str(lang) + "; error was " + str(the_error))
return lang
def get_available_languages():
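    """Returns the list of OCR language codes supported by the local tesseract installation."""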
try:
output = subprocess.check_output(['tesseract', '--list-langs'], stderr=subprocess.STDOUT).decode()
except subprocess.CalledProcessError as err:
raise Exception("get_available_languages: failed to list available languages: " + str(err))
else:
result = output.splitlines()
result.pop(0)
return result
def ocr_page_tasks(image_file, language=None, psm=6, x=None, y=None, W=None, H=None, user_code=None, user=None, pdf=False, preserve_color=False, **kwargs):
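    """Builds a list of per-page OCR task dictionaries for the given image, PDF, or word-processing file(s)."""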
#sys.stderr.write("ocr_page_tasks running\n")
if isinstance(image_file, set):
return []
if not (isinstance(image_file, DAFile) or isinstance(image_file, DAFileList)):
return word("(Not a DAFile or DAFileList object)")
pdf_to_ppm = get_config("pdftoppm")
if pdf_to_ppm is None:
pdf_to_ppm = 'pdftoppm'
ocr_resolution = get_config("ocr dpi")
if ocr_resolution is None:
ocr_resolution = '300'
langs = get_available_languages()
if language is None:
language = get_language()
if language in langs:
lang = language
else:
ocr_langs = get_config("ocr languages")
if ocr_langs is None:
ocr_langs = dict()
if language in ocr_langs and ocr_langs[language] in langs:
lang = ocr_langs[language]
else:
try:
pc_lang = pycountry.languages.get(alpha_2=language)
lang_three_letter = pc_lang.alpha_3
if lang_three_letter in langs:
lang = lang_three_letter
else:
if 'eng' in langs:
lang = 'eng'
else:
lang = langs[0]
sys.stderr.write("ocr_file: could not get OCR language for language " + str(language) + "; using language " + str(lang) + "\n")
except Exception as the_error:
if 'eng' in langs:
lang = 'eng'
else:
lang = langs[0]
sys.stderr.write("ocr_file: could not get OCR language for language " + str(language) + "; using language " + str(lang) + "; error was " + str(the_error) + "\n")
if isinstance(image_file, DAFile):
image_file = [image_file]
todo = list()
for doc in image_file:
if hasattr(doc, 'extension'):
if doc.extension not in ['pdf', 'png', 'jpg', 'gif', 'docx', 'doc', 'odt', 'rtf']:
raise Exception("document with extension " + doc.extension + " is not a readable image file")
if doc.extension == 'pdf':
#doc.page_path(1, 'page')
for i in range(safe_pypdf_reader(doc.path()).getNumPages()):
todo.append(dict(doc=doc, page=i+1, lang=lang, ocr_resolution=ocr_resolution, psm=psm, x=x, y=y, W=W, H=H, pdf_to_ppm=pdf_to_ppm, user_code=user_code, user=user, pdf=pdf, preserve_color=preserve_color))
elif doc.extension in ("docx", "doc", "odt", "rtf"):
import docassemble.base.util
doc_conv = docassemble.base.util.pdf_concatenate(doc)
for i in range(safe_pypdf_reader(doc_conv.path()).getNumPages()):
todo.append(dict(doc=doc_conv, page=i+1, lang=lang, ocr_resolution=ocr_resolution, psm=psm, x=x, y=y, W=W, H=H, pdf_to_ppm=pdf_to_ppm, user_code=user_code, user=user, pdf=pdf, preserve_color=preserve_color))
else:
todo.append(dict(doc=doc, page=None, lang=lang, ocr_resolution=ocr_resolution, psm=psm, x=x, y=y, W=W, H=H, pdf_to_ppm=pdf_to_ppm, user_code=user_code, user=user, pdf=pdf, preserve_color=preserve_color))
#sys.stderr.write("ocr_page_tasks finished\n")
return todo
def make_png_for_pdf(doc, prefix, resolution, pdf_to_ppm, page=None):
path = doc.path()
make_png_for_pdf_path(path, prefix, resolution, pdf_to_ppm, page=page)
doc.commit()
def make_png_for_pdf_path(path, prefix, resolution, pdf_to_ppm, page=None):
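    """Runs pdftoppm to render a PDF (or a single page of it) to PNG files, keeping an '-in-progress' marker file on disk while the conversion runs."""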
basefile = os.path.splitext(path)[0]
test_path = basefile + prefix + '-in-progress'
with open(test_path, 'a'):
os.utime(test_path, None)
if page is None:
try:
result = subprocess.run([str(pdf_to_ppm), '-r', str(resolution), '-png', str(path), str(basefile + prefix)], timeout=3600).returncode
except subprocess.TimeoutExpired:
result = 1
else:
try:
result = subprocess.run([str(pdf_to_ppm), '-f', str(page), '-l', str(page), '-r', str(resolution), '-png', str(path), str(basefile + prefix)], timeout=3600).returncode
except subprocess.TimeoutExpired:
result = 1
if os.path.isfile(test_path):
os.remove(test_path)
if result > 0:
raise Exception("Unable to extract images from PDF file")
def ocr_pdf(*pargs, target=None, filename=None, lang=None, psm=6, dafilelist=None, preserve_color=False):
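    """Runs tesseract (via an intermediate Ghostscript TIFF for PDF input) over one or more documents and writes the combined, searchable PDF to the target DAFile."""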
if preserve_color:
device = 'tiff48nc'
else:
device = 'tiffgray'
docs = []
if not isinstance(target, DAFile):
raise DAError("ocr_pdf: target must be a DAFile")
for other_file in pargs:
if isinstance(other_file, DAFileList):
for other_file_sub in other_file.elements:
docs.append(other_file_sub)
elif isinstance(other_file, DAFileCollection):
if hasattr(other_file, 'pdf'):
docs.append(other_file.pdf)
elif hasattr(other_file, 'docx'):
docs.append(other_file.docx)
else:
raise DAError('ocr_pdf: DAFileCollection object did not have pdf or docx attribute.')
elif isinstance(other_file, DAStaticFile):
docs.append(other_file)
elif isinstance(other_file, (str, DAFile)):
docs.append(other_file)
if len(docs) == 0:
docs.append(target)
if psm is None:
psm = 6
output = []
for doc in docs:
if not hasattr(doc, 'extension'):
continue
if doc._is_pdf() and hasattr(doc, 'has_ocr') and doc.has_ocr:
output.append(doc.path())
continue
if doc.extension in ['png', 'jpg', 'gif']:
import docassemble.base.util
doc = docassemble.base.util.pdf_concatenate(doc)
elif doc.extension in ['docx', 'doc', 'odt', 'rtf']:
import docassemble.base.util
output.append(docassemble.base.util.pdf_concatenate(doc).path())
continue
elif not doc._is_pdf():
logmessage("ocr_pdf: not a readable image file")
continue
path = doc.path()
pdf_file = tempfile.NamedTemporaryFile(prefix="datemp", mode="wb", delete=False)
pdf_file.close()
if doc.extension == 'pdf':
tiff_file = tempfile.NamedTemporaryFile(prefix="datemp", mode="wb", suffix=".tiff", delete=False)
params = ['gs', '-q', '-dNOPAUSE', '-sDEVICE=' + device, '-r600', '-sOutputFile=' + tiff_file.name, path, '-c', 'quit']
try:
result = subprocess.run(params, timeout=60*60).returncode
except subprocess.TimeoutExpired:
result = 1
logmessage("ocr_pdf: call to gs took too long")
if result != 0:
raise Exception("ocr_pdf: failed to run gs with command " + " ".join(params))
params = ['tesseract', tiff_file.name, pdf_file.name, '-l', str(lang), '--psm', str(psm), '--dpi', '600', 'pdf']
try:
result = subprocess.run(params, timeout=60*60).returncode
except subprocess.TimeoutExpired:
result = 1
logmessage("ocr_pdf: call to tesseract took too long")
if result != 0:
raise Exception("ocr_pdf: failed to run tesseract with command " + " ".join(params))
else:
params = ['tesseract', path, pdf_file.name, '-l', str(lang), '--psm', str(psm), '--dpi', '300', 'pdf']
try:
result = subprocess.run(params, timeout=60*60).returncode
except subprocess.TimeoutExpired:
result = 1
logmessage("ocr_pdf: call to tesseract took too long")
if result != 0:
raise Exception("ocr_pdf: failed to run tesseract with command " + " ".join(params))
output.append(pdf_file.name + '.pdf')
if len(output) == 0:
return None
if len(output) == 1:
the_file = tempfile.NamedTemporaryFile(prefix="datemp", mode="wb", delete=False)
the_file.close()
shutil.copyfile(output[0], the_file.name)
source_file = the_file.name
else:
import docassemble.base.pandoc
source_file = docassemble.base.pandoc.concatenate_files(output)
if filename is None:
filename = 'file.pdf'
target.initialize(filename=filename, extension='pdf', mimetype='application/pdf', reinitialize=True)
shutil.copyfile(source_file, target.file_info['path'])
del target.file_info
target._make_pdf_thumbnail(1, both_formats=True)
target.commit()
target.retrieve()
return target
def ocr_page(indexno, doc=None, lang=None, pdf_to_ppm='pdftoppm', ocr_resolution=300, psm=6, page=None, x=None, y=None, W=None, H=None, user_code=None, user=None, pdf=False, preserve_color=False):
"""Runs optical character recognition on an image or a page of a PDF file and returns the recognized text."""
if page is None:
page = 1
if psm is None:
psm = 6
sys.stderr.write("ocr_page running on page " + str(page) + "\n")
the_file = None
if not hasattr(doc, 'extension'):
return None
#sys.stderr.write("ocr_page running with extension " + str(doc.extension) + "\n")
if doc.extension not in ['pdf', 'png', 'jpg', 'gif']:
raise Exception("Not a readable image file")
#sys.stderr.write("ocr_page calling doc.path()\n")
path = doc.path()
if doc.extension == 'pdf':
the_file = None
if x is None and y is None and W is None and H is None:
the_file = doc.page_path(page, 'page', wait=False)
if the_file is None:
output_file = tempfile.NamedTemporaryFile()
args = [str(pdf_to_ppm), '-r', str(ocr_resolution), '-f', str(page), '-l', str(page)]
if x is not None:
args.extend(['-x', str(x)])
if y is not None:
args.extend(['-y', str(y)])
if W is not None:
args.extend(['-W', str(W)])
if H is not None:
args.extend(['-H', str(H)])
args.extend(['-singlefile', '-png', str(path), str(output_file.name)])
try:
result = subprocess.run(args, timeout=120).returncode
except subprocess.TimeoutExpired:
result = 1
if result > 0:
return word("(Unable to extract images from PDF file)")
the_file = output_file.name + '.png'
else:
the_file = path
file_to_read = tempfile.NamedTemporaryFile()
if pdf and preserve_color:
shutil.copyfile(the_file, file_to_read.name)
else:
image = Image.open(the_file)
color = ImageEnhance.Color(image)
bw = color.enhance(0.0)
bright = ImageEnhance.Brightness(bw)
brightened = bright.enhance(1.5)
contrast = ImageEnhance.Contrast(brightened)
final_image = contrast.enhance(2.0)
file_to_read = tempfile.TemporaryFile()
final_image.convert('RGBA').save(file_to_read, "PNG")
file_to_read.seek(0)
if pdf:
outfile = doc._pdf_page_path(page)
params = ['tesseract', 'stdin', re.sub(r'\.pdf$', '', outfile), '-l', str(lang), '--psm', str(psm), '--dpi', str(ocr_resolution), 'pdf']
sys.stderr.write("ocr_page: piping to command " + " ".join(params) + "\n")
try:
text = subprocess.check_output(params, stdin=file_to_read).decode()
except subprocess.CalledProcessError as err:
raise Exception("ocr_page: failed to run tesseract with command " + " ".join(params) + ": " + str(err) + " " + str(err.output.decode()))
sys.stderr.write("ocr_page finished with pdf page " + str(page) + "\n")
doc.commit()
return dict(indexno=indexno, page=page, doc=doc)
params = ['tesseract', 'stdin', 'stdout', '-l', str(lang), '--psm', str(psm), '--dpi', str(ocr_resolution)]
sys.stderr.write("ocr_page: piping to command " + " ".join(params) + "\n")
try:
text = subprocess.check_output(params, stdin=file_to_read).decode()
except subprocess.CalledProcessError as err:
raise Exception("ocr_page: failed to run tesseract with command " + " ".join(params) + ": " + str(err) + " " + str(err.output.decode()))
sys.stderr.write("ocr_page finished with page " + str(page) + "\n")
return dict(indexno=indexno, page=page, text=text)
Anyone else still waiting on a yellow 920?
Guys, just wanted to let you know... Its 6:40am eastern and its STILL in stock at the premiere site! If anyone is up and about. GO FOR IT NOW!
I know I haven't checked in yet, but I wanted to let you know that the phone came in just as indicated and it was definitely worth the wait. My wife fell in love with it as well and wanted me to order one. Out of nowhere I got a shipment confirmation for my original order (which is still listed as "Cancelled"). Now I have to figure out if AT&T can assign that phone to her when it gets here. Anyway, it looks every bit as amazing as I expected. I love it. I just have a favor to ask of anyone that has received theirs already: can someone upload the sample pictures folder online? I accidentally deleted it and I really liked those pics. I don't want to have to restore the phone because I deleted some pics. Anyway, good luck everyone, the yellows are coming around finally. I wish you guys in Europe some good luck as well. Don't give up on it!
I kind of am. Actually got mine about 2 weeks after it was first released, but I've been having quite a few freezing problems with it. Even doing a factory reset is a pain since it bricks the phone. AT&T said they'll replace the phone for me but they have no yellows in stock (only black, white and red). I love the yellow color, but to be honest I'm getting to the point I'm willing to try something else the freezing problems are so bad. I check with their chat people pretty much everyday (over the past week) to see if status has changed, but no dice.
Good luck to those of you who were able to get the phone through Premier.
Ordered a cyan and yellow from Walmart Wireless back on November 26. Still waiting for an update on the order status. Walmart's customer service doesn't know anything, and Nokia's just refers back to Walmart.
With the India launch, it's starting to feel like maybe Nokia doesn't know what the **** is going on.
I am waiting for a red 920 ordered from Phones4U as soon as it came back in stock. Still no dispatch notification. I am starting to worry that they did not really have them in stock. Anyone in the same situation?
Just got it 10 min ago! The yellow is much deeper than the pictures seem to show, I like it!
Nice, mine will be at my house when i get home this evening, but i bought mine used on ebay. The plus side is that it came with a yellow wireless charger as well. If for some reason i dont love it, ill probably swap it out for a note 2.
It's funny over here as most of the retailers don't even bother to put estimates anymore, just "date open". Hopefully Nokia gets it sorted.
Yep we have another yellow one on order since nov28th. Still waiting. Unacceptable.
My order with Walmart is being canceled tomorrow, I really didn't care what color it was and they seem unwilling to offer me any other color that they have in stock or any other incentive to stay with my order, the forums are great but I'm spending too much time on these boards looking for any news on my mystery phone. I don't think I will be getting a 920 even though i desperately wanted one, the allure is wearing off. if I have waited this long I might as well see what they bring out on the next release and I will not be ordering whatever I get, it is either in my hand or no deal.
a little update on the yellows from walmart, i went on chat with walmart and they were telling me that the phones were on hold from the manufacturer. i went on chat with nokia and they told me that walmart does not have an open order with nokia for yellow lumia 920's at this time. bottom line on this is if walmart cannot get a good deal from the manufacturer they will not order the phones period. if you really need a yellow it is probably best to go another route. you can change your order and settle for a black as i did at the original pricing. the idea of a phone in another color other than black grew on me, i guess now at least i can blend in with the crowd.
Wooohooo got my phone now .
Anyone else have this sound issue... ( I am on Lumia 920 ) ... think its a Windows Phone 8 bug.
Anyone else want Steam Mobile on WP7?
Does anyone else have an update waiting for them?
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
import datetime
from django.utils.timezone import utc
class Migration(migrations.Migration):
dependencies = [
('app', '0025_auto_20150923_0843'),
]
operations = [
migrations.CreateModel(
name='BlogEntry',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(max_length=100)),
('text', models.CharField(max_length=10000)),
('pub_date', models.DateField()),
('author', models.ForeignKey(to='app.Account')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='BlogEntryTag',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(max_length=100)),
('desc', models.CharField(max_length=500)),
],
options={
'abstract': False,
},
),
migrations.AddField(
model_name='mission',
name='finalize_date',
field=models.DateField(default=datetime.datetime(2012, 1, 6, 22, 12, 50, 101184, tzinfo=utc)),
preserve_default=False,
),
migrations.AlterField(
model_name='neurorequest',
name='closed_date',
field=models.DateField(),
),
migrations.AddField(
model_name='blogentry',
name='tags',
field=models.ManyToManyField(to='app.BlogEntryTag'),
),
]
Many Lending Club investors just use the filtering on LendingClub.com to make their investments. Other investors, like me, download all the in-funding notes into a CSV file and do their filtering in Excel. If you are part of the first group then this change will not apply to you. For those of us who download the CSV file from the Browse Notes screen then you will notice some big changes starting today.
The additional data includes loan details and status, third-party reported credit attributes, and information reported by the borrower. This detailed information can be downloaded by logged in users from any page of our Browse Notes section by clicking on the “Download All” link on the bottom right corner of the page.
If you don’t know where to find this download link Lending Club also provided this useful graphic to help you locate it.
There are some very useful additional fields in this file, some that I have been wanting for a long time such as loan listing URL and additional credit data. In reality, this change is mostly about additional credit data with a few small tweaks elsewhere. Unfortunately, this new data is not available in the big loan history file so we won’t be able to do much analysis on this other data just yet.
I have a bunch of macros that I run inside Excel that helps me with filtering so these will all have to be rewritten before I can use them again. While this is somewhat of a hassle I like that investors are getting more data on borrowers now. To check out the new layout right now here is a direct link to download the new CSV file.
Do you know if they have plans to add that info to the “big loan history file”?
@Lou, I chatted with LC about that today and while they said a project is “in the works” for additions to the download file, nothing is imminent.
Outstanding. I had asked for loan title, as there are a few keywords that can be added to my probability model now. Can’t wait for them to update the history, too, with all of the credit stuff, to do some more soul searching =).
Ahhhh! Loan listing links—one click and done to invest! Super sweet!
@Bryce, Indeed. I think the only thing they took out was job title which I know disappointed some investors. But I can understand that because if you know the employer and the job title it doesn’t take much detective work to find out the identity of the borrower.
hey peter–are you planning at all to update the excel macro on your p2p lending wealth system? i’ve found it invaluable for filtering.
@Hector, Yes, I am in the process of updating the Excel macro – it will be available within a few days. I will send out an email to all P2P Lending Wealth System owners when it is ready to go.
Are the loans in the CSV still hours or days out of date? I used to look at the CSV data but half the time the loans would already be 100% fully funded on the site.
@Em, No. I would say the CSV is completely up to date. Make sure you use the link on the Browse Notes screen to Download the CSV file there and not the In-Funding Loan Data CSV in the Statistics (that one is out of date). They really should remove that latter one or at least have it link to the same file.
Has anyone been able to get the file to download without manually logging in? By writing code to download it or by linking a database to it?
@John, I know Lendstats is able to access this file so I imagine it would not be that difficult to just write a script to log yourself in and grab the file. It is a bit out of my area of expertise but it doesn't sound very hard.
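For anyone who wants to try that route, here is a minimal sketch in Python of what such a script might look like. It assumes the requests library, and the login URL, form field names, and CSV URL below are placeholders rather than Lending Club's real endpoints, so you would need to pull the actual values from the site's login form and the Download All link.

import requests

# Placeholder endpoints -- replace with the real login page and CSV link taken from the site.
LOGIN_URL = "https://www.example.com/account/login.action"
CSV_URL = "https://www.example.com/browse/browseNotesAll.csv"

def download_notes_csv(email, password, out_path="notes.csv"):
    session = requests.Session()
    # Log in so the session carries the authentication cookies.
    resp = session.post(LOGIN_URL, data={"login_email": email, "login_password": password})
    resp.raise_for_status()
    # Fetch the in-funding notes CSV using the logged-in session.
    csv_resp = session.get(CSV_URL)
    csv_resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(csv_resp.content)
    return out_path

# Example use: download_notes_csv("you@example.com", "your-password")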
import __builtin__
from direct.showbase.DirectObject import DirectObject
from direct.task.Task import Task
from devsyn.entities import Entity
base = __builtin__.base
class FirstPersonCamera(Entity):
# TODO: Make speed configurable
# constants
speed = 50
def __init__(self, parent = base.render):
# basic properties
## keeps track of mouse movement
self.pos = [0.0, 0.0]
# our prime is the camera
self.prime = base.camera
# who are we attached to?
self.parent = parent
# initialize various velocities
self.rotation_velocity = 0.05
def activate(self, reparent = True):
print "Activated FirstPerson Camera"
# initialize camera
base.camLens.setFov(70) # field of view
if reparent == True:
self.reset_parent()
# initialize camera task
base.taskMgr.add(self.update, "update_camera_task")
def deactivate(self):
self.reset_parent(base.render)
base.taskMgr.remove("update_camera_task")
def reset_parent(self, parent = None):
if parent != None:
if isinstance(parent, Entity):
self.parent = parent.prime
else:
self.parent = parent
# attach to our parent
self.attachTo(self.parent)
# has to be a way to get the height of the model....
self.setZ(self.getZ() + 1.0)
self.parent.hide()
def update(self, task):
# rotate the camera
pointer = base.win.getPointer(0)
new_position = [pointer.getX(), pointer.getY()]
# new position - last position gives us difference in mouse movement
d = [new_position[0] - self.pos[0],
new_position[1] - self.pos[1]]
# interpolate mouse last position to new position
self.pos[0] += d[0] * 0.5
self.pos[1] += d[1] * 0.5
# rotate camera using x vector (left/right)
camright = base.camera.getNetTransform().getMat().getRow3(0)
camright.normalize()
base.camera.setH(base.camera.getH() -
(d[0] * self.rotation_velocity))
# rotate camera using z vector (up/down)
camup = base.camera.getNetTransform().getMat().getRow3(2)
camup.normalize()
base.camera.setP(base.camera.getP() -
(d[1] * self.rotation_velocity * 2.5))
# collisions are taken care of through our
# parent (usually a player etc)
# For smoother mouse movement on all platforms
# we don't immediately set the 'cursor' in the window
# back to the center. Instead we let it freely travel
# within a square region inside the actual window.
# In this case the region has a 1/4 margin around
# our game window.
# If the cursor travels outside of this region
# we set it back to the center of the region.
# We ONLY reset the axis that moves out of bounds.
## If the mouse escapes the region via the x-axis
## reset the x axis to half screen width (center of screen)
if (self.pos[0] < (base.win.getXSize() * 0.25)):
self.pos[0] = (base.win.getXSize() / 2)
base.win.movePointer(0, base.win.getXSize() / 2, int(self.pos[1]))
elif (self.pos[0] > (base.win.getXSize() * 0.75)):
self.pos[0] = (base.win.getXSize() / 2)
base.win.movePointer(0, base.win.getXSize() / 2, int(self.pos[1]))
## If the mouse escapes the region via the y-axis
## reset the y axis to half the screen height (center of screen)
if (self.pos[1] < (base.win.getYSize() * 0.25)):
self.pos[1] = (base.win.getYSize() / 2)
base.win.movePointer(0, int(self.pos[0]), base.win.getYSize() / 2)
elif (self.pos[1] > (base.win.getYSize() * 0.75)):
self.pos[1] = (base.win.getYSize() / 2)
base.win.movePointer(0, int(self.pos[0]), base.win.getYSize() / 2)
return Task.cont
Not a lot keeps Mark Zuckerberg up at night. When he slips between his — presumably luxury — sheets, rests his head and closes his weary eyes, it’s unlikely his sleep will be disturbed by worries over his cable bill or the cost of servicing his car. As his peers toss and turn, fretting over the problems they’ll have to face the following morning, the 28 year-old billionaire drifts into a deep, satisfied sleep.
However, if you snuck into the Zuckerberg household under the cover of night (assuming you manage to evade the inevitable security) you might find the pyjama’d Zuckerberg in the kitchen; pacing back and forth as he hyperventilates into a brown paper bag, his pan of warm milk bubbling-over on the hob.
What could possibly be troubling him? Well, it’s very simple: like every other social network, Zuckerberg’s staff have one giant headache, how do they monetize their service before investors start using phrases like “return on investment”? And like every other social network, Zuckerberg’s staff have fixed their sights on the one area that may deliver the kind of revenue they need to keep surviving: advertising.
Advertising is nothing new to Facebook, they’re currently attempting to settle a multi-million dollar lawsuit that alleges they made use of their users’ private data in their ‘sponsored stories’ advertising feature. Zuckerberg himself has acknowledged their interest in targeted advertising, calling ‘personal referrals’ the holy grail of advertising.
One has to wonder if the purchase of Instagram was made to limit the opt-out possibilities for Facebook’s one billion users, which would have been substantially greater had the terms only been applied to Facebook’s in-house app. Executives can’t have failed to anticipate the response the change would provoke, with many users going so far as to describe it as Instagram’s suicide note, and sites like Wired publishing instructions on how to delete your Instagram account.
It’s important to understand that Instagram isn’t claiming ownership of your intellectual property; they are asserting the right to make use of it, anywhere in the world, for the purposes of advertising third party products, without your permission and without paying you a dime.
As of January 16th 2013, expect to see photographs of the most popular kids at school, used to advertise clubs, bars and shops to the least popular. Expect to see the photographs of your girlfriend sunbathing, plastered over adverts for the local singles-scene. Expect to see photographs of your husband, advertising local bankruptcy services.
As you lay in bed, worrying over what your friends and family, co-workers and neighbors are being sold using your ‘endorsement’, spare a thought for Mark Zuckerberg; he’ll probably be fast asleep.
Have you deleted your Instagram account as a result of their new terms? Do you trust corporations with your personal data? Let us know in the comments below.
from pyroutelib3 import distHaversine
from contextlib import contextmanager
from typing import IO, Optional, List, Union, Tuple
from time import time
import signal
import math
import os
from ..const import DIR_SHAPE_CACHE, SHAPE_CACHE_TTL
from ..util import ensure_dir_exists
_Pt = Tuple[float, float]
@contextmanager
def time_limit(sec):
"Time limter based on https://gist.github.com/Rabbit52/7449101"
def handler(x, y):
raise TimeoutError
signal.signal(signal.SIGALRM, handler)
signal.alarm(sec)
try:
yield
finally:
signal.alarm(0)
def total_length(x: List[_Pt]) -> float:
dist = 0.0
for i in range(1, len(x)):
dist += distHaversine(x[i-1], x[i])
return dist
def dist_point_to_line(r: _Pt, p1: _Pt, p2: _Pt) -> float:
"""Defines distance from point r to line defined by point p1 and p2."""
# See https://en.wikipedia.org/wiki/Distance_from_a_point_to_a_line,
# algorithm "Line defined by two points"
# Unpack coordinates
x0, y0 = r
x1, y1 = p1
x2, y2 = p2
    # Differences between the p1 and p2 coordinates
dx = x2 - x1
dy = y2 - y1
return abs(dy*x0 - dx*y0 + x2*y1 - y2*x1) / math.sqrt(dy**2 + dx**2)
def simplify_line(x: List[_Pt], threshold: float) -> List[_Pt]:
"""Simplifies line x using the Ramer-Douglas-Peucker algorithm"""
# Unable to simplify 2-point lines any further
if len(x) <= 2:
return x
# Find point furthest away from line (x[0], x[-1])
furthest_pt_dist = 0
furthest_pt_index = -1
for pt_idx, pt in enumerate(x[1:-1], start=1):
pt_dist = dist_point_to_line(pt, x[0], x[-1])
if pt_dist > furthest_pt_dist:
furthest_pt_dist = pt_dist
furthest_pt_index = pt_idx
# If furthest point is further then given threshold, simplify recursively both parts
if furthest_pt_dist > threshold:
left_simplified = simplify_line(x[:furthest_pt_index + 1], threshold)
right_simplified = simplify_line(x[furthest_pt_index:], threshold)
# strip last point from `left_simplified` to avoid furthest point being included twice
return left_simplified[:-1] + right_simplified
# If furthest point is close then given threshold, the simplification is just the
# segment from start & end of x.
else:
return [x[0], x[-1]]
def cache_retr(file: str, ttl_minutes: int = SHAPE_CACHE_TTL) -> Optional[IO[bytes]]:
"""
    Tries to read the specified file from the cache.
    If the file is older than the specified time-to-live,
    or the cached file doesn't exist at all, returns None.
    Otherwise, returns a file-like object.
"""
file_path = os.path.join(DIR_SHAPE_CACHE, file)
    # Check if the cached file exists
if not os.path.exists(file_path):
return
# Try to get file's last-modified attribute
file_stat = os.stat(file_path)
file_timediff = (time() - file_stat.st_mtime) / 60
    # File was modified within the time-to-live; return an IO object for that file
if file_timediff < ttl_minutes:
return open(file_path, "rb")
def cache_save(file: str, reader: Union[IO[bytes], bytes]):
"""Caches contents of `reader` in DIR_SHAPE_CACHE/{file}."""
ensure_dir_exists(DIR_SHAPE_CACHE, clear=False)
file_path = os.path.join(DIR_SHAPE_CACHE, file)
    # Write the contents to the cache file
with open(file_path, "wb") as writer:
if isinstance(reader, bytes):
writer.write(reader)
else:
while (chunk := reader.read(1024 * 16)):
writer.write(chunk)
Today's design team project was created for Lemon Shortbread Digital Stamps using the Toadstool Princess digi stamp. Many of Vera's images are so whimsical and this little princess is no exception! I will be entering her @ Paper Nest Dolls for "May Challenge", @ Star Stampz for "Hand Colored Image", @ Craft Rocket Challenges for " Use Embellishments", @ Dragonfly Dreams for "Here Come the Girls" and @ Dream Valley Challenge for "Add Ribbon and/or Lace".
Since purple and green are my two favorite colors, I really enjoyed coloring her up with my colored pencils. I chose the background papers to complement her, along with my embellishments. Once again, my Scan N Cut machine came through and did the fussy cutting for me beautifully!
import os
import shutil
import vimswitch.six.moves.builtins as builtins
class FileSystemSandbox:
"""
    This class sets up a process-wide sandbox where all disk modification is
disabled except inside the sandbox directory. If an operation outside the
sandbox directory occurs, a FileSystemSandboxError is thrown.
Read-only disk operations will still be allowed outside the sandbox
directory, however.
"""
enabled = False
sandboxRoot = ''
def enable(self, sandboxRoot):
if self.enabled:
raise RuntimeError('Sandbox already enabled')
self.enabled = True
self.sandboxRoot = sandboxRoot
self._setUpSafeOperations()
def disable(self):
if not self.enabled:
raise RuntimeError('Sandbox already disabled')
self.enabled = False
self._tearDownSafeOperations()
def _setUpSafeOperations(self):
self._real_builtin_open = builtins.open
self._real_os_mkdir = os.mkdir
self._real_os_makedirs = os.makedirs
self._real_os_remove = os.remove
self._real_os_path_isfile = os.path.isfile
self._real_os_path_isdir = os.path.isdir
self._real_shutil_copy = shutil.copy
self._real_shutil_move = shutil.move
self._real_shutil_copytree = shutil.copytree
self._real_shutil_rmtree = shutil.rmtree
builtins.open = self._safe_builtin_open
os.mkdir = self._safe_os_mkdir
os.makedirs = self._safe_os_makedirs
os.remove = self._safe_os_remove
shutil.copy = self._safe_shutil_copy
shutil.move = self._safe_shutil_move
shutil.copytree = self._safe_shutil_copytree
shutil.rmtree = self._safe_shutil_rmtree
def _tearDownSafeOperations(self):
builtins.open = self._real_builtin_open
os.mkdir = self._real_os_mkdir
os.makedirs = self._real_os_makedirs
os.remove = self._real_os_remove
shutil.copy = self._real_shutil_copy
shutil.move = self._real_shutil_move
shutil.copytree = self._real_shutil_copytree
shutil.rmtree = self._real_shutil_rmtree
def _safe_builtin_open(self, path, mode='r', *args, **kwargs):
# We only verify if the file is being opened for writing or appending.
# Read only access should be allowed.
if mode.find('w') != -1 or mode.find('a') != -1:
self._verifyPath(path)
return self._real_builtin_open(path, mode, *args, **kwargs)
def _safe_os_mkdir(self, path, *args, **kwargs):
self._verifyPath(path)
self._real_os_mkdir(path, *args, **kwargs)
def _safe_os_makedirs(self, path, *args, **kwargs):
self._verifyPath(path)
self._real_os_makedirs(path, *args, **kwargs)
def _safe_os_remove(self, path):
self._verifyPath(path)
self._real_os_remove(path)
def _safe_shutil_copy(self, src, dst):
# Only need to verify destination path since src will not be modified
self._verifyPath(dst)
self._real_shutil_copy(src, dst)
def _safe_shutil_move(self, src, dst):
self._verifyPath(src)
self._verifyPath(dst)
self._real_shutil_move(src, dst)
def _safe_shutil_copytree(self, src, dst, *args, **kwargs):
# Only need to verify destination path since src will not be modified
self._verifyPath(dst)
self._real_shutil_copytree(src, dst, *args, **kwargs)
def _safe_shutil_rmtree(self, path, *args, **kwargs):
self._verifyPath(path)
self._real_shutil_rmtree(path, *args, **kwargs)
def _verifyPath(self, path):
"Checks that path is inside the sandbox"
absPath = os.path.abspath(path)
if not absPath.startswith(self.sandboxRoot):
raise FileSystemSandboxError(path)
class FileSystemSandboxError(Exception):
def __init__(self, path):
Exception.__init__(self, 'Tried to access path outside sandbox: %s' % path)
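
# A minimal usage sketch (not part of the original module): enable the sandbox around
# code under test, so that writes inside the sandbox directory succeed while writes
# outside it raise FileSystemSandboxError. The directory names here are examples only.
if __name__ == '__main__':
    import tempfile
    sandbox_dir = tempfile.mkdtemp()
    sandbox = FileSystemSandbox()
    sandbox.enable(sandbox_dir)
    try:
        # Allowed: the path is inside the sandbox root.
        with open(os.path.join(sandbox_dir, 'allowed.txt'), 'w') as f:
            f.write('ok')
        # Blocked: the path is outside the sandbox root.
        try:
            open(os.path.join(tempfile.gettempdir(), 'blocked.txt'), 'w')
        except FileSystemSandboxError as error:
            print(error)
    finally:
        sandbox.disable()
        shutil.rmtree(sandbox_dir)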
A job you hate is like a bad haircut. It’s hard to hide, it makes you unhappy, and it’s probably led to quite a few tears.
A new job that you hate is even worse.
Can you simply walk away from your new position? Well, sure—but then you’ll face the challenge of explaining that ever-expanding employment gap to future employers. Unless it’s truly unbearable, it makes far more sense to reap whatever benefits you can and continue to collect a paycheck until a new door opens.
To do that, however, you have to get in the right mindset. If your attitude is only bitterness, resentment, and anger when you walk through your office doors each morning, that’s going to show up in your work.
I know you can’t control everything at work. But if you’ve read some of my previous columns, you know I’m a big proponent of identifying when and where you can take control—and doing so. Your mind is one thing you can (mostly) control.
Instead of bitterness, here are a few healthier mindsets to try.
You deserve to build your network and your resume through your role at the company. However, unlike a paycheck, your employer can’t just hand you great experience and a rock-solid professional network. Taking advantage of those opportunities is largely up to you.
For example, maybe you get assigned a task you aren’t fond of—like developing a press release for a small event, which you feel is a waste of your time. You could put in minimal effort, write a crummy release, and turn it in at the last minute. Or, you could connect with key players in your organization to get some solid quotes for your release, perfect a catchy lead, and turn in the best product possible. Then, when a version of your document is published, you could email the people you quoted to say thank you and share a link to the story, then add the release to your portfolio.
Committing that extra effort—which can seriously build your relationships and portfolio—is entirely up to you.
When you get busy building your experience and relationships, you might be surprised by a couple of things. First, you might be able to bring your contributions (e.g., if you increased revenue, expanded company exposure, caught a mistake before it became a costly issue or PR nightmare, or pushed a difficult project through on time and on or under budget) to the bargaining table to improve your compensation.
Even if you aren’t able to negotiate a higher pay rate in your current role, your relationships and experience may eventually open doors for you to move on up to a better position.
Do you feel like you work with a bunch of jerks? That stinks, but in reality, it’s almost impossible that every single person at your job is a first-class a-hole. There has to be at least someone—probably several someones—who are decent people. But if you just sit at your desk stewing about your misfortune or grouse about your unhappiness every time you get an opportunity to talk with someone, you’ll never build relationships with these folks.
If you seek opportunities to connect when and where you can, however, you are likely to reap a range of benefits from your new relationships. For starters, those people will make your job more enjoyable. Plus, your new network may also be able to alert you to new opportunities and provide references when those opportunities arise.
When it comes to building connections, you aren’t limited to the people in the cube to your right and left. Wander around the place and meet people who work in different areas. Get creative in the ways you connect with people. Consider bringing in donuts in the morning and engaging in chit-chat away from your desk, joining a workout at the company gym, showing up to happy hour, or bringing up the marathon you ran last week to strike up conversation about hobbies and interests.
When you connect with people in ways other than standard work talk, you get a chance to know them differently and more holistically. Instead of seeing Hilde as the person who asks a lot of questions about every single project, for instance, you might see Hilde as the analytical thinker who built her own robot and likes the Science Channel—and so of course Hilde questions everything. Now the habit that was annoying is just a quirk that makes Hilde, Hilde.
Do the bare minimum? Not you. Remember, your company owes you good, resume-building experience, so you’re going to do your job and then some. Look for opportunities to be innovative, to push a project beyond just “getting it done,” to take something to the next level.
Let’s say you have to give a presentation about the status of a project to your organization’s board of directors. You could get up and give the standard PowerPoint complete with lengthy bullet points, charts, and graphs, with the dynamism of a sloth.
Or, you can take a few tips from Carmine Gallo and tell a compelling story about the project that reminds the listeners of why the project matters and include photos or a video that allows the audience to literally see what’s happening on the project.
Maybe you have to launch a new initiative, and there is likely to be some dissent in the ranks. You can communicate only through convoluted emails and ignore any questions or frustrations that the team expresses. Or, you can walk out of your office and make a point of meeting key players face-to-face, discussing the initiative, considering their feedback, and following up with them about their suggestions. You may not win everyone over, but you’ll definitely gain respect.
Sloth or dynamo. Cold or personable. Those distinctions largely lie in your choices and actions, and they will make a difference when it’s time to discuss a raise, promotion, or new opportunity.
When you swap out a defeating mindset for a mindset that allows you to thrive, you might inadvertently create new opportunities for yourself that you can’t see right now. You might even figure out the job isn’t that bad (and just maybe the biggest problem was in your own head). But if not, your amazing experience, solid reputation, and supportive network will help you on to your next opportunity.
import difflib
import inspect
import io
import logging
import traceback
import warnings
from collections import deque, defaultdict
from xml.etree import cElementTree as ET
from xml.etree.cElementTree import ParseError
from errbot import botcmd, PY2
from errbot.utils import get_sender_username, xhtml2txt, parse_jid, split_string_after, deprecated
from errbot.templating import tenv
from errbot.bundled.threadpool import ThreadPool, WorkRequest
class ACLViolation(Exception):
"""Exceptions raised when user is not allowed to execute given command due to ACLs"""
class RoomError(Exception):
"""General exception class for MUC-related errors"""
class RoomNotJoinedError(RoomError):
"""Exception raised when performing MUC operations
that require the bot to have joined the room"""
class RoomDoesNotExistError(RoomError):
"""Exception that is raised when performing an operation
on a room that doesn't exist"""
class Identifier(object):
"""
This class is the parent and the basic contract of all the ways the backends
are identifying a person on their system.
"""
def __init__(self, jid=None, node='', domain='', resource=''):
if jid:
self._node, self._domain, self._resource = parse_jid(jid)
else:
self._node = node
self._domain = domain
self._resource = resource
@property
def node(self):
return self._node
@property
def domain(self):
return self._domain
@property
def resource(self):
return self._resource
@property
def stripped(self):
if self._domain:
return self._node + '@' + self._domain
return self._node # if the backend has no domain notion
def bare_match(self, other):
""" checks if 2 identifiers are equal, ignoring the resource """
return other.stripped == self.stripped
def __str__(self):
answer = self.stripped
if self._resource:
answer += '/' + self._resource
return answer
def __unicode__(self):
return str(self.__str__())
# deprecated stuff ...
@deprecated(node)
def getNode(self):
""" will be removed on the next version """
@deprecated(domain)
def getDomain(self):
""" will be removed on the next version """
@deprecated(bare_match)
def bareMatch(self, other):
""" will be removed on the next version """
@deprecated(stripped)
def getStripped(self):
""" will be removed on the next version """
@deprecated(resource)
def getResource(self):
""" will be removed on the next version """
class Message(object):
"""
A chat message.
This class represents chat messages that are sent or received by
the bot. It is modeled after XMPP messages so not all methods
make sense in the context of other back-ends.
"""
fr = Identifier('unknown@localhost')
def __init__(self, body, type_='chat', html=None):
"""
:param body:
The plaintext body of the message.
:param type_:
The type of message (generally one of either 'chat' or 'groupchat').
:param html:
An optional HTML representation of the body.
"""
# it is either unicode or assume it is utf-8
if isinstance(body, str):
self._body = body
else:
self._body = body.decode('utf-8')
self._html = html
self._type = type_
self._from = None
self._to = None
self._delayed = False
self._nick = None
@property
def to(self):
"""
Get the recipient of the message.
:returns:
An :class:`~errbot.backends.base.Identifier` identifying
the recipient.
"""
return self._to
@to.setter
def to(self, to):
"""
Set the recipient of the message.
:param to:
An :class:`~errbot.backends.base.Identifier`, or string which may
be parsed as one, identifying the recipient.
"""
if isinstance(to, Identifier):
self._to = to
else:
self._to = Identifier(to) # assume a parseable string
@property
def type(self):
"""
Get the type of the message.
:returns:
The message type as a string (generally one of either
'chat' or 'groupchat')
"""
return self._type
@type.setter
def type(self, type_):
"""
Set the type of the message.
:param type_:
The message type (generally one of either 'chat'
or 'groupchat').
"""
self._type = type_
@property
def frm(self):
"""
Get the sender of the message.
:returns:
An :class:`~errbot.backends.base.Identifier` identifying
the sender.
"""
return self._from
@frm.setter
def frm(self, from_):
"""
Set the sender of the message.
:param from_:
An :class:`~errbot.backends.base.Identifier`, or string which may
be parsed as one, identifying the sender.
"""
if isinstance(from_, Identifier):
self._from = from_
else:
self._from = Identifier(from_) # assume a parseable string
@property
def body(self):
"""
Get the plaintext body of the message.
:returns:
The body as a string.
"""
return self._body
@property
def html(self):
"""
Get the HTML representation of the message.
:returns:
A string containing the HTML message or `None` when there
is none.
"""
return self._html
@html.setter
def html(self, html):
"""
Set the HTML representation of the message
:param html:
The HTML message.
"""
self._html = html
@property
def delayed(self):
return self._delayed
@delayed.setter
def delayed(self, delayed):
self._delayed = delayed
@property
def nick(self):
return self._nick
@nick.setter
def nick(self, nick):
self._nick = nick
def __str__(self):
return self._body
# deprecated stuff ...
@deprecated(to)
def getTo(self):
""" will be removed on the next version """
@deprecated(to.fset)
def setTo(self, to):
""" will be removed on the next version """
@deprecated(type)
def getType(self):
""" will be removed on the next version """
@deprecated(type.fset)
def setType(self, type_):
""" will be removed on the next version """
@deprecated(frm)
def getFrom(self):
""" will be removed on the next version """
@deprecated(frm.fset)
def setFrom(self, from_):
""" will be removed on the next version """
@deprecated(body)
def getBody(self):
""" will be removed on the next version """
@deprecated(html)
def getHTML(self):
""" will be removed on the next version """
@deprecated(html.fset)
def setHTML(self, html):
""" will be removed on the next version """
@deprecated(delayed)
def isDelayed(self):
""" will be removed on the next version """
@deprecated(delayed.fset)
def setDelayed(self, delayed):
""" will be removed on the next version """
@deprecated(nick)
def setMuckNick(self, nick):
""" will be removed on the next version """
@deprecated(nick.fset)
def getMuckNick(self):
""" will be removed on the next version """
ONLINE = 'online'
OFFLINE = 'offline'
AWAY = 'away'
DND = 'dnd'
class Presence(object):
"""
This class represents a presence change for a user or a user in a chatroom.
Instances of this class are passed to :meth:`~errbot.botplugin.BotPlugin.callback_presence`
when the presence of people changes.
"""
def __init__(self, nick=None, identifier=None, status=None, chatroom=None, message=None):
if nick is None and identifier is None:
raise ValueError('Presence: nick and identifiers are both None')
if nick is None and chatroom is not None:
raise ValueError('Presence: nick is None when chatroom is not')
if status is None and message is None:
            raise ValueError('Presence: at least a new status or a new status message must be present')
self._nick = nick
self._identifier = identifier
self._chatroom = chatroom
self._status = status
self._message = message
@property
def chatroom(self):
""" Returns the Identifier pointing the room in which the event occurred.
If it returns None, the event occurred outside of a chatroom.
"""
return self._chatroom
@property
def nick(self):
""" Returns a plain string of the presence nick.
(In some chatroom implementations, you cannot know the real identifier
of a person in it).
Can return None but then identifier won't be None.
"""
return self._nick
@property
def identifier(self):
""" Returns the identifier of the event.
Can be None *only* if chatroom is not None
"""
return self._identifier
@property
def status(self):
""" Returns the status of the presence change.
It can be one of the constants ONLINE, OFFLINE, AWAY, DND, but
can also be custom statuses depending on backends.
It can be None if it is just an update of the status message (see get_message)
"""
return self._status
@property
def message(self):
""" Returns a human readable message associated with the status if any.
like : "BRB, washing the dishes"
It can be None if it is only a general status update (see get_status)
"""
return self._message
def __str__(self):
response = ''
if self._nick:
response += 'Nick:%s ' % self._nick
if self._identifier:
response += 'Idd:%s ' % self._identifier
if self._status:
response += 'Status:%s ' % self._status
if self._chatroom:
response += 'Room:%s ' % self._chatroom
if self._message:
response += 'Msg:%s ' % self._message
return response
def __unicode__(self):
return str(self.__str__())
STREAM_WAITING_TO_START = 'pending'
STREAM_TRANSFER_IN_PROGRESS = 'in progress'
STREAM_SUCCESSFULLY_TRANSFERED = 'success'
STREAM_PAUSED = 'paused'
STREAM_ERROR = 'error'
STREAM_REJECTED = 'rejected'
DEFAULT_REASON = 'unknown'
class Stream(io.BufferedReader):
"""
This class represents a stream request.
Instances of this class are passed to :meth:`~errbot.botplugin.BotPlugin.callback_stream`
when an incoming stream is requested.
"""
def __init__(self, identifier, fsource, name=None, size=None, stream_type=None):
super(Stream, self).__init__(fsource)
self._identifier = identifier
self._name = name
self._size = size
self._stream_type = stream_type
self._status = STREAM_WAITING_TO_START
self._reason = DEFAULT_REASON
@property
def identifier(self):
"""
The identity the stream is coming from if it is an incoming request
or to if it is an outgoing request.
"""
return self._identifier
@property
def name(self):
"""
The name of the stream/file if it has one or None otherwise.
        !! Be careful of injections if you are using this name directly as a filename.
"""
return self._name
@property
def size(self):
"""
The expected size in bytes of the stream if it is known or None.
"""
return self._size
@property
def stream_type(self):
"""
The mimetype of the stream if it is known or None.
"""
return self._stream_type
@property
def status(self):
"""
The status for this stream.
"""
return self._status
def accept(self):
"""
Signal that the stream has been accepted.
"""
if self._status != STREAM_WAITING_TO_START:
raise ValueError("Invalid state, the stream is not pending.")
self._status = STREAM_TRANSFER_IN_PROGRESS
def reject(self):
"""
Signal that the stream has been rejected.
"""
if self._status != STREAM_WAITING_TO_START:
raise ValueError("Invalid state, the stream is not pending.")
self._status = STREAM_REJECTED
def error(self, reason=DEFAULT_REASON):
"""
An internal plugin error prevented the transfer.
"""
self._status = STREAM_ERROR
self._reason = reason
def success(self):
"""
The streaming finished normally.
"""
if self._status != STREAM_TRANSFER_IN_PROGRESS:
raise ValueError("Invalid state, the stream is not in progress.")
self._status = STREAM_SUCCESSFULLY_TRANSFERED
def clone(self, new_fsource):
"""
        Creates a clone with an alternative stream.
"""
return Stream(self._identifier, new_fsource, self._name, self._size, self._stream_type)
class MUCRoom(Identifier):
"""
This class represents a Multi-User Chatroom.
"""
def join(self, username=None, password=None):
"""
Join the room.
If the room does not exist yet, this will automatically call
:meth:`create` on it first.
"""
raise NotImplementedError("It should be implemented specifically for your backend")
def leave(self, reason=None):
"""
Leave the room.
:param reason:
An optional string explaining the reason for leaving the room.
"""
raise NotImplementedError("It should be implemented specifically for your backend")
def create(self):
"""
Create the room.
Calling this on an already existing room is a no-op.
"""
raise NotImplementedError("It should be implemented specifically for your backend")
def destroy(self):
"""
Destroy the room.
Calling this on a non-existing room is a no-op.
"""
raise NotImplementedError("It should be implemented specifically for your backend")
@property
def exists(self):
"""
Boolean indicating whether this room already exists or not.
:getter:
Returns `True` if the room exists, `False` otherwise.
"""
raise NotImplementedError("It should be implemented specifically for your backend")
@property
def joined(self):
"""
Boolean indicating whether this room has already been joined.
:getter:
Returns `True` if the room has been joined, `False` otherwise.
"""
raise NotImplementedError("It should be implemented specifically for your backend")
@property
def topic(self):
"""
The room topic.
:getter:
Returns the topic (a string) if one is set, `None` if no
topic has been set at all.
.. note::
Back-ends may return an empty string rather than `None`
when no topic has been set as a network may not
differentiate between no topic and an empty topic.
:raises:
            :class:`~RoomNotJoinedError` if the room has not yet been joined.
"""
raise NotImplementedError("It should be implemented specifically for your backend")
@topic.setter
def topic(self, topic):
"""
Set the room's topic.
:param topic:
The topic to set.
"""
raise NotImplementedError("It should be implemented specifically for your backend")
@property
def occupants(self):
"""
The room's occupants.
:getter:
Returns a list of :class:`~errbot.backends.base.MUCOccupant` instances.
:raises:
            :class:`~RoomNotJoinedError` if the room has not yet been joined.
"""
raise NotImplementedError("It should be implemented specifically for your backend")
def invite(self, *args):
"""
Invite one or more people into the room.
:*args:
One or more JID's to invite into the room.
"""
raise NotImplementedError("It should be implemented specifically for your backend")
class MUCOccupant(Identifier):
"""
This class represents a person inside a MUC.
This class exists to expose additional information about occupants
inside a MUC. For example, the XMPP back-end may expose backend-specific
information such as the real JID of the occupant and whether or not
that person is a moderator or owner of the room.
See the parent class for additional details.
"""
pass
def build_text_html_message_pair(source):
node = None
text_plain = None
try:
node = ET.XML(source)
text_plain = xhtml2txt(source)
except ParseError as ee:
if source.strip(): # avoids keep alive pollution
logging.debug('Could not parse [%s] as XHTML-IM, assume pure text Parsing error = [%s]' % (source, ee))
text_plain = source
except UnicodeEncodeError:
text_plain = source
return text_plain, node
def build_message(text, message_class, conversion_function=None):
"""Builds an xhtml message without attributes.
If input is not valid xhtml-im fallback to normal."""
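    # Flow: try to parse `text` as XML; if it parses, run the optional conversion_function,
    # derive a plaintext body plus an XHTML node from the result and attach both to the message.
    # If parsing fails, fall back to a plain-text message.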
message = None # keeps the compiler happy
try:
        text = text.replace('\x02', '*')  # filter out a control character IRC sends; the original literal was lost here, '\x02' (IRC bold) is assumed
if PY2:
ET.XML(text.encode('utf-8')) # test if is it xml
else:
ET.XML(text)
edulcorated_html = conversion_function(text) if conversion_function else text
try:
text_plain, node = build_text_html_message_pair(edulcorated_html)
message = message_class(body=text_plain)
message.html = node
except ET.ParseError as ee:
logging.error('Error translating to hipchat [%s] Parsing error = [%s]' % (edulcorated_html, ee))
except ET.ParseError as ee:
if text.strip(): # avoids keep alive pollution
logging.debug('Determined that [%s] is not XHTML-IM (%s)' % (text, ee))
message = message_class(body=text)
return message
class Backend(object):
"""
Implements the basic Bot logic (logic independent from the backend) and leaves
you to implement the missing parts
"""
cmd_history = defaultdict(lambda: deque(maxlen=10)) # this will be a per user history
MSG_ERROR_OCCURRED = 'Sorry for your inconvenience. ' \
'An unexpected error occurred.'
MSG_HELP_TAIL = 'Type help <command name> to get more info ' \
'about that specific command.'
MSG_HELP_UNDEFINED_COMMAND = 'That command is not defined.'
def __init__(self, config):
""" Those arguments will be directly those put in BOT_IDENTITY
"""
if config.BOT_ASYNC:
self.thread_pool = ThreadPool(3)
logging.debug('created the thread pool' + str(self.thread_pool))
self.commands = {} # the dynamically populated list of commands available on the bot
self.re_commands = {} # the dynamically populated list of regex-based commands available on the bot
self.MSG_UNKNOWN_COMMAND = 'Unknown command: "%(command)s". ' \
'Type "' + config.BOT_PREFIX + 'help" for available commands.'
if config.BOT_ALT_PREFIX_CASEINSENSITIVE:
self.bot_alt_prefixes = tuple(prefix.lower() for prefix in config.BOT_ALT_PREFIXES)
else:
self.bot_alt_prefixes = config.BOT_ALT_PREFIXES
def send_message(self, mess):
"""Should be overridden by backends"""
def send_simple_reply(self, mess, text, private=False):
"""Send a simple response to a message"""
self.send_message(self.build_reply(mess, text, private))
def build_reply(self, mess, text=None, private=False):
"""Build a message for responding to another message.
Message is NOT sent"""
msg_type = mess.type
response = self.build_message(text)
response.frm = self.jid
if msg_type == 'groupchat' and not private:
# stripped returns the full [email protected]/chat_username
# but in case of a groupchat, we should only try to send to the MUC address
# itself ([email protected])
response.to = mess.frm.stripped.split('/')[0]
elif str(mess.to) == self.bot_config.BOT_IDENTITY['username']:
# This is a direct private message, not initiated through a MUC. Use
# stripped to remove the resource so that the response goes to the
# client with the highest priority
response.to = mess.frm.stripped
else:
# This is a private message that was initiated through a MUC. Don't use
# stripped here to retain the resource, else the XMPP server doesn't
# know which user we're actually responding to.
response.to = mess.frm
response.type = 'chat' if private else msg_type
return response
def callback_presence(self, presence):
"""
Implemented by errBot.
"""
pass
def callback_room_joined(self, room):
"""
See :class:`~errbot.errBot.ErrBot`
"""
pass
def callback_room_left(self, room):
"""
See :class:`~errbot.errBot.ErrBot`
"""
pass
def callback_room_topic(self, room):
"""
See :class:`~errbot.errBot.ErrBot`
"""
pass
def callback_message(self, mess):
"""
Needs to return False if we want to stop further treatment
"""
# Prepare to handle either private chats or group chats
type_ = mess.type
jid = mess.frm
text = mess.body
username = get_sender_username(mess)
user_cmd_history = self.cmd_history[username]
if mess.delayed:
logging.debug("Message from history, ignore it")
return False
if type_ not in ("groupchat", "chat"):
logging.debug("unhandled message type %s" % mess)
return False
# Ignore messages from ourselves. Because it isn't always possible to get the
# real JID from a MUC participant (including ourself), matching the JID against
# ourselves isn't enough (see https://github.com/gbin/err/issues/90 for
# background discussion on this). Matching against CHATROOM_FN isn't technically
# correct in all cases because a MUC could give us another nickname, but it
# covers 99% of the MUC cases, so it should suffice for the time being.
if (jid.bare_match(self.jid) or
type_ == "groupchat" and mess.nick == self.bot_config.CHATROOM_FN): # noqa
logging.debug("Ignoring message from self")
return False
logging.debug("*** jid = %s" % jid)
logging.debug("*** username = %s" % username)
logging.debug("*** type = %s" % type_)
logging.debug("*** text = %s" % text)
# If a message format is not supported (eg. encrypted),
# txt will be None
if not text:
return False
surpress_cmd_not_found = False
prefixed = False # Keeps track whether text was prefixed with a bot prefix
        only_check_re_command = False  # Becomes true if text is determined to not be a regular command
tomatch = text.lower() if self.bot_config.BOT_ALT_PREFIX_CASEINSENSITIVE else text
if len(self.bot_config.BOT_ALT_PREFIXES) > 0 and tomatch.startswith(self.bot_alt_prefixes):
# Yay! We were called by one of our alternate prefixes. Now we just have to find out
# which one... (And find the longest matching, in case you have 'err' and 'errbot' and
# someone uses 'errbot', which also matches 'err' but would leave 'bot' to be taken as
# part of the called command in that case)
prefixed = True
longest = 0
for prefix in self.bot_alt_prefixes:
l = len(prefix)
if tomatch.startswith(prefix) and l > longest:
longest = l
logging.debug("Called with alternate prefix '{}'".format(text[:longest]))
text = text[longest:]
# Now also remove the separator from the text
for sep in self.bot_config.BOT_ALT_PREFIX_SEPARATORS:
# While unlikely, one may have separators consisting of
# more than one character
l = len(sep)
if text[:l] == sep:
text = text[l:]
elif type_ == "chat" and self.bot_config.BOT_PREFIX_OPTIONAL_ON_CHAT:
logging.debug("Assuming '%s' to be a command because BOT_PREFIX_OPTIONAL_ON_CHAT is True" % text)
# In order to keep noise down we surpress messages about the command
# not being found, because it's possible a plugin will trigger on what
# was said with trigger_message.
surpress_cmd_not_found = True
elif not text.startswith(self.bot_config.BOT_PREFIX):
only_check_re_command = True
if text.startswith(self.bot_config.BOT_PREFIX):
text = text[len(self.bot_config.BOT_PREFIX):]
prefixed = True
text = text.strip()
text_split = text.split(' ')
cmd = None
command = None
args = ''
if not only_check_re_command:
if len(text_split) > 1:
command = (text_split[0] + '_' + text_split[1]).lower()
if command in self.commands:
cmd = command
args = ' '.join(text_split[2:])
if not cmd:
command = text_split[0].lower()
args = ' '.join(text_split[1:])
if command in self.commands:
cmd = command
if len(text_split) > 1:
args = ' '.join(text_split[1:])
if command == self.bot_config.BOT_PREFIX: # we did "!!" so recall the last command
if len(user_cmd_history):
cmd, args = user_cmd_history[-1]
else:
return False # no command in history
elif command.isdigit(): # we did "!#" so we recall the specified command
index = int(command)
if len(user_cmd_history) >= index:
cmd, args = user_cmd_history[-index]
else:
return False # no command in history
# Try to match one of the regex commands if the regular commands produced no match
matched_on_re_command = False
if not cmd:
if prefixed:
commands = self.re_commands
else:
commands = {k: self.re_commands[k] for k in self.re_commands
if not self.re_commands[k]._err_command_prefix_required}
for name, func in commands.items():
if func._err_command_matchall:
match = list(func._err_command_re_pattern.finditer(text))
else:
match = func._err_command_re_pattern.search(text)
if match:
logging.debug("Matching '{}' against '{}' produced a match"
.format(text, func._err_command_re_pattern.pattern))
matched_on_re_command = True
self._process_command(mess, name, text, match)
else:
logging.debug("Matching '{}' against '{}' produced no match"
.format(text, func._err_command_re_pattern.pattern))
if matched_on_re_command:
return True
if cmd:
self._process_command(mess, cmd, args, match=None)
elif not only_check_re_command:
logging.debug("Command not found")
if surpress_cmd_not_found:
logging.debug("Surpressing command not found feedback")
else:
reply = self.unknown_command(mess, command, args)
if reply is None:
reply = self.MSG_UNKNOWN_COMMAND % {'command': command}
if reply:
self.send_simple_reply(mess, reply)
return True
def _process_command(self, mess, cmd, args, match):
"""Process and execute a bot command"""
jid = mess.frm
username = get_sender_username(mess)
user_cmd_history = self.cmd_history[username]
logging.info("Processing command '{}' with parameters '{}' from {}/{}".format(cmd, args, jid, mess.nick))
if (cmd, args) in user_cmd_history:
user_cmd_history.remove((cmd, args)) # Avoids duplicate history items
try:
self.check_command_access(mess, cmd)
except ACLViolation as e:
if not self.bot_config.HIDE_RESTRICTED_ACCESS:
self.send_simple_reply(mess, str(e))
return
f = self.re_commands[cmd] if match else self.commands[cmd]
if f._err_command_admin_only and self.bot_config.BOT_ASYNC:
# If it is an admin command, wait until the queue is completely depleted so
# we don't have strange concurrency issues on load/unload/updates etc...
self.thread_pool.wait()
if f._err_command_historize:
user_cmd_history.append((cmd, args)) # add it to the history only if it is authorized to be so
# Don't check for None here as None can be a valid argument to str.split.
# '' was chosen as default argument because this isn't a valid argument to str.split()
if not match and f._err_command_split_args_with != '':
try:
if hasattr(f._err_command_split_args_with, "parse_args"):
args = f._err_command_split_args_with.parse_args(args)
elif callable(f._err_command_split_args_with):
args = f._err_command_split_args_with(args)
else:
args = args.split(f._err_command_split_args_with)
except Exception as e:
self.send_simple_reply(
mess,
"Sorry, I couldn't parse your arguments. {}".format(e)
)
return
if self.bot_config.BOT_ASYNC:
wr = WorkRequest(
self._execute_and_send,
[],
{'cmd': cmd, 'args': args, 'match': match, 'mess': mess, 'jid': jid,
'template_name': f._err_command_template}
)
self.thread_pool.putRequest(wr)
if f._err_command_admin_only:
# Again, if it is an admin command, wait until the queue is completely
# depleted so we don't have strange concurrency issues.
self.thread_pool.wait()
else:
self._execute_and_send(cmd=cmd, args=args, match=match, mess=mess, jid=jid,
template_name=f._err_command_template)
def _execute_and_send(self, cmd, args, match, mess, jid, template_name=None):
"""Execute a bot command and send output back to the caller
cmd: The command that was given to the bot (after being expanded)
args: Arguments given along with cmd
match: A re.MatchObject if command is coming from a regex-based command, else None
mess: The message object
jid: The jid of the person executing the command
template_name: The name of the template which should be used to render
html-im output, if any
"""
def process_reply(reply_):
# integrated templating
if template_name:
reply_ = tenv().get_template(template_name + '.html').render(**reply_)
# Reply should be all text at this point (See https://github.com/gbin/err/issues/96)
return str(reply_)
def send_reply(reply_):
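            # Long replies are split into chunks of at most MESSAGE_SIZE_LIMIT characters and sent
            # one by one; commands listed in DIVERT_TO_PRIVATE get their replies sent privately.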
for part in split_string_after(reply_, self.bot_config.MESSAGE_SIZE_LIMIT):
self.send_simple_reply(mess, part, cmd in self.bot_config.DIVERT_TO_PRIVATE)
commands = self.re_commands if match else self.commands
try:
if inspect.isgeneratorfunction(commands[cmd]):
replies = commands[cmd](mess, match) if match else commands[cmd](mess, args)
for reply in replies:
if reply:
send_reply(process_reply(reply))
else:
reply = commands[cmd](mess, match) if match else commands[cmd](mess, args)
if reply:
send_reply(process_reply(reply))
except Exception as e:
tb = traceback.format_exc()
logging.exception('An error happened while processing '
'a message ("%s") from %s: %s"' %
(mess.body, jid, tb))
send_reply(self.MSG_ERROR_OCCURRED + ':\n %s' % e)
def is_admin(self, usr):
"""
an overridable check to see if a user is an administrator
"""
return usr in self.bot_config.BOT_ADMINS
def check_command_access(self, mess, cmd):
"""
Check command against ACL rules
Raises ACLViolation() if the command may not be executed in the given context
"""
usr = str(get_jid_from_message(mess))
typ = mess.type
if cmd not in self.bot_config.ACCESS_CONTROLS:
self.bot_config.ACCESS_CONTROLS[cmd] = self.bot_config.ACCESS_CONTROLS_DEFAULT
if ('allowusers' in self.bot_config.ACCESS_CONTROLS[cmd] and
usr not in self.bot_config.ACCESS_CONTROLS[cmd]['allowusers']):
raise ACLViolation("You're not allowed to access this command from this user")
if ('denyusers' in self.bot_config.ACCESS_CONTROLS[cmd] and
usr in self.bot_config.ACCESS_CONTROLS[cmd]['denyusers']):
raise ACLViolation("You're not allowed to access this command from this user")
if typ == 'groupchat':
stripped = mess.frm.stripped
if ('allowmuc' in self.bot_config.ACCESS_CONTROLS[cmd] and
self.bot_config.ACCESS_CONTROLS[cmd]['allowmuc'] is False):
raise ACLViolation("You're not allowed to access this command from a chatroom")
if ('allowrooms' in self.bot_config.ACCESS_CONTROLS[cmd] and
stripped not in self.bot_config.ACCESS_CONTROLS[cmd]['allowrooms']):
raise ACLViolation("You're not allowed to access this command from this room")
if ('denyrooms' in self.bot_config.ACCESS_CONTROLS[cmd] and
stripped in self.bot_config.ACCESS_CONTROLS[cmd]['denyrooms']):
raise ACLViolation("You're not allowed to access this command from this room")
else:
if ('allowprivate' in self.bot_config.ACCESS_CONTROLS[cmd] and
self.bot_config.ACCESS_CONTROLS[cmd]['allowprivate'] is False):
raise ACLViolation("You're not allowed to access this command via private message to me")
f = self.commands[cmd] if cmd in self.commands else self.re_commands[cmd]
if f._err_command_admin_only:
if typ == 'groupchat':
raise ACLViolation("You cannot administer the bot from a chatroom, message the bot directly")
if not self.is_admin(usr):
raise ACLViolation("This command requires bot-admin privileges")
def unknown_command(self, _, cmd, args):
""" Override the default unknown command behavior
"""
full_cmd = cmd + ' ' + args.split(' ')[0] if args else None
if full_cmd:
part1 = 'Command "%s" / "%s" not found.' % (cmd, full_cmd)
else:
part1 = 'Command "%s" not found.' % cmd
ununderscore_keys = [m.replace('_', ' ') for m in self.commands.keys()]
matches = difflib.get_close_matches(cmd, ununderscore_keys)
if full_cmd:
matches.extend(difflib.get_close_matches(full_cmd, ununderscore_keys))
matches = set(matches)
if matches:
return (part1 + '\n\nDid you mean "' + self.bot_config.BOT_PREFIX +
('" or "' + self.bot_config.BOT_PREFIX).join(matches) + '" ?')
else:
return part1
def inject_commands_from(self, instance_to_inject):
classname = instance_to_inject.__class__.__name__
for name, value in inspect.getmembers(instance_to_inject, inspect.ismethod):
if getattr(value, '_err_command', False):
commands = self.re_commands if getattr(value, '_err_re_command') else self.commands
name = getattr(value, '_err_command_name')
if name in commands:
f = commands[name]
new_name = (classname + '-' + name).lower()
self.warn_admins('%s.%s clashes with %s.%s so it has been renamed %s' % (
classname, name, type(f.__self__).__name__, f.__name__, new_name))
name = new_name
commands[name] = value
if getattr(value, '_err_re_command'):
logging.debug('Adding regex command : %s -> %s' % (name, value.__name__))
self.re_commands = commands
else:
logging.debug('Adding command : %s -> %s' % (name, value.__name__))
self.commands = commands
def remove_commands_from(self, instance_to_inject):
for name, value in inspect.getmembers(instance_to_inject, inspect.ismethod):
if getattr(value, '_err_command', False):
name = getattr(value, '_err_command_name')
if getattr(value, '_err_re_command') and name in self.re_commands:
del (self.re_commands[name])
elif not getattr(value, '_err_re_command') and name in self.commands:
del (self.commands[name])
def warn_admins(self, warning):
for admin in self.bot_config.BOT_ADMINS:
self.send(admin, warning)
def top_of_help_message(self):
"""Returns a string that forms the top of the help message
Override this method in derived class if you
want to add additional help text at the
beginning of the help message.
"""
return ""
def bottom_of_help_message(self):
"""Returns a string that forms the bottom of the help message
Override this method in derived class if you
want to add additional help text at the end
of the help message.
"""
return ""
@botcmd
def help(self, mess, args):
""" Returns a help string listing available options.
Automatically assigned to the "help" command."""
if not args:
if self.__doc__:
description = self.__doc__.strip()
else:
description = 'Available commands:'
usage = '\n'.join(sorted([
self.bot_config.BOT_PREFIX + '%s: %s' % (name, (command.__doc__ or
'(undocumented)').strip().split('\n', 1)[0])
for (name, command) in self.commands.items()
if name != 'help' and not command._err_command_hidden
]))
usage = '\n\n' + '\n\n'.join(filter(None, [usage, self.MSG_HELP_TAIL]))
else:
description = ''
if args in self.commands:
usage = (self.commands[args].__doc__ or
'undocumented').strip()
else:
usage = self.MSG_HELP_UNDEFINED_COMMAND
top = self.top_of_help_message()
bottom = self.bottom_of_help_message()
return ''.join(filter(None, [top, description, usage, bottom]))
def send(self, user, text, in_reply_to=None, message_type='chat', groupchat_nick_reply=False):
"""Sends a simple message to the specified user."""
nick_reply = self.bot_config.GROUPCHAT_NICK_PREFIXED
if (message_type == 'groupchat' and in_reply_to and nick_reply and groupchat_nick_reply):
reply_text = self.groupchat_reply_format().format(in_reply_to.nick, text)
else:
reply_text = text
mess = self.build_message(reply_text)
if hasattr(user, 'stripped'):
mess.to = user.stripped
else:
mess.to = user
if in_reply_to:
mess.type = in_reply_to.type
mess.frm = in_reply_to.to.stripped
else:
mess.type = message_type
mess.frm = self.jid
self.send_message(mess)
# ##### HERE ARE THE SPECIFICS TO IMPLEMENT PER BACKEND
def groupchat_reply_format(self):
raise NotImplementedError("It should be implemented specifically for your backend")
def build_message(self, text):
raise NotImplementedError("It should be implemented specifically for your backend")
def serve_forever(self):
raise NotImplementedError("It should be implemented specifically for your backend")
def connect(self):
"""Connects the bot to server or returns current connection
"""
raise NotImplementedError("It should be implemented specifically for your backend")
def join_room(self, room, username=None, password=None):
"""
Join a room (MUC).
:param room:
The JID/identifier of the room to join.
:param username:
An optional username to use.
:param password:
An optional password to use (for password-protected rooms).
.. deprecated:: 2.2.0
Use the methods on :class:`MUCRoom` instead.
"""
warnings.warn(
"Using join_room is deprecated, use query_room and the join "
"method on the resulting response instead.",
DeprecationWarning
)
self.query_room(room).join(username=username, password=password)
def query_room(self, room):
"""
Query a room for information.
:param room:
The JID/identifier of the room to query for.
:returns:
An instance of :class:`~MUCRoom`.
"""
raise NotImplementedError("It should be implemented specifically for your backend")
def shutdown(self):
pass
def connect_callback(self):
pass
def disconnect_callback(self):
pass
@property
def mode(self):
raise NotImplementedError("It should be implemented specifically for your backend")
def rooms(self):
"""
Return a list of rooms the bot is currently in.
:returns:
A list of :class:`~errbot.backends.base.MUCRoom` instances.
"""
raise NotImplementedError("It should be implemented specifically for your backend")
def get_jid_from_message(mess):
if mess.type == 'chat':
# strip the resource for direct chats
return mess.frm.stripped
fr = mess.frm
jid = Identifier(node=fr.node, domain=fr.domain, resource=fr.resource)
return jid
|
It's Luke Jermay's dangerous opener. Not sure which came first, but Luke has the same effect. He uses a slightly different presentation, but a brick and a father would work just as well.
Also has the very specific wording you mentioned.
kartoffelngeist wrote: It's Luke Jermay's dangerous opener. Not sure which came first, but Luke has the same effect. He uses a slightly different presentation, but a brick and a father would work just as well.
Well it's nice to know I'm thinking along the lines of the great Luke Jermay!! Do you know if his effect in still in print anywhere or on DVD etc? Just so I can see if my method can be honed. I'm definitely going to try it out in a couple of weeks and see how it plays!
Nevermind - just discovered it was in Coral Fang. I was lucky enough to find a copy as it's out of print. Expected delivery in 3 days. Hopefully it's as good as the TM reviews from way back in 2006!!!!!
kevmundo wrote: Nevermind - just discovered it was in Coral Fang. I was lucky enough to find a copy as it's out of print. Expected delivery in 3 days. Hopefully it's as good as the TM reviews from way back in 2006!!!!!
kevmundo wrote: Not sure I'll get any takers on this one but since this is TM I'll give it a shot!
I can't sleep. I mean I literally can't sleep because of this brick in a bag effect.
A spectator sits at a table and two bags are in front of him. The performer is very specific and states, 'pick a bag, and to be clear, whichever bag you choose will be the one I open, turn upside down and let the contents drop on your hand.' The spec picks the one on the right, it's turned upside down and a feather comes out. The other is turned upside down and a brick comes out.
I have woken up in the middle of the night twice now dreaming about this effect, or I should say, having nightmares about it. It can't be equivoque since the performer is so specific. It must (I have convinced myself) be a mxxxxxe oxt. But what possible oxt could it be? I have thought of several oxts but they are all total c*** (not the best) and for the life of me I cannot think of a natural ending that would look clean. I'm finding it difficult to focus on anything else at the minute.
I'm not asking for anyone to tip any commercial methods since the oxt is never revealed. I'm not aware of it being commercially available? (If it is, please tell me and I'll buy it!). So, if you have an active imagination and you genuinely want to prevent me going on a killing spree, would you be kind enough to PM me if you have any ideas for a decent mxxxxxxe oxt for this effect. I can tell you all my many ideas, but I can assure you, they aren't any good!
This is such a typical magician's problem. You don't need to switch things out and you don't need gimmicks; just use equivoque. Done right, the spectator has no idea, so what difference does it make? Hint: it doesn't. It only makes a difference to you.
Did you ever see him perform the effect again? Just because he was so specific does not rule it out.
I'm assuming you haven't seen Max perform it? Forgive me if I'm wrong. There is no way on earth that this effect is achieved using equivoque. Max states quite clearly "There are two bags in front of you. Select one of them and whichever one you select I will empty the contents of that bag onto your hand." He emphasises that the choice is free and the selected bag will be emptied on his hand. Eugene Burger picks a bag. It has a feather in it. The other has a brick.
As I've said previously, my opinion is that the effect isn't an effect at all and Max was just lucky. It's just a genuine choice. The trick to it is there isn't a trick.
Oh I do hope you're right. I got a little E-mail this morning from Ebay saying that my manuscript has been dispatched. From what I've read of the reviews it's a beautiful piece of work. Really looking forward to getting it. I'll let you know if I have a lightbulb moment once I've read it!
Just so you know I received my copy of Coral Fang. It's interesting (though slightly off topic) - the entire manuscript is what I call "Advanced Mentalism." That doesn't mean that it's difficult. It means that it's so advanced hardly anyone will perform it. In my view there's two types of mentalism: Straightforward and Advanced.
Straightforward mentalism is mentalism that's either self working or requires memory systems or just plain hard practice.
Advanced mentalism (IMHO) is the kind of mentalism that, when you read the method, makes you say to yourself "NO XXXXXXX way is that gonna work" and you throw it in your 'never to be performed in a million years' drawer. A good example would be 'Tervil' or 'A question and the answer' in PME. The swindles are so bold, so daring, so obvious once you know their secret, that no-one with a single brain cell would dare to perform them.
BUT...... There are some that realise they CAN be performed, and with great success. And it's those people that are the "advanced mentalists" in my view.
Coral Fang is just that, and I imagine 90% of people who bought it cried when they read some of the methods. I read, read, read and re-read Dangerous Opener. It annoyed me because it's a Dunninger subtlety I am aware of and it is explained in full in Psychological Subtleties, which I own, but Luke Jermay has simply employed the concept in a beautiful, elegant and effortless way. We all stand on the shoulders of giants, I suppose.
Well, I have made a decision. I'm giving an after dinner speech at a Rotary Club on 21st November. This is going to be my closer. I'm using Luke Jermay's method. There's going to be approximately 50 people present. Anything less and I'd be too scared to use it. If it works then I'll let you know. I have a masonic Ladies night gig in February (approx 100 people) and I'm desperate to find a powerful closer for it. Hoping this is it.......... but......... it's advanced mentalism, so I'm scared!!!
New to the forum but I too was kept awake by this brick in a bag video from Max. I know there are many people who have published a solution. I have a solution that is simple and elegant. I do not want it published here. If you P.M. me, I will let you know my solution as simple as I think it is. Cheers, Brad.
Welcome to TM Brad, FYI the PM function doesn't kick in until after your first five posts.
kevmundo wrote: I'm assuming you haven't seen Max perform it? Forgive me if I'm wrong. There is no way on earth that this effect is achieved using equivoque. Max states quite clearly "There are two bags in front of you. Select one of them and whichever one you select I will empty the contents of that bag onto your hand." He emphasises that the choice is free and the selected bag will be emptied on his hand. Eugene Burger picks a bag. It has a feather in it. The other has a brick.
What if both bags had a brick and feather in?
You're thinking "how does he make him choose the correct bag?" when I'm thinking "what if either bag was correct?"
""" Script that tries to parse a dataset to profile its variables"""
import os
import sys
from databasehelper import DatabaseHelper
from profiler import Profiler
def main():
"""
Main function of the program
Arguments should be:
1) path of the file to analyse
2) format of the file
3) id of the resource
"""
if len(sys.argv) < 4:
        # TODO: Print to log
print('Call should be: py <script.py> <path> <file_format> '
'<resource_id>')
return -1
path = sys.argv[1]
file_format = sys.argv[2]
resource_id = sys.argv[3]
if os.path.exists(path) and os.path.isfile(path):
with open(path, 'rb') as file:
profiler = Profiler(file, file_format.lower())
profiler.profile_dataset()
save_to_database(resource_id, profiler)
return 0
def save_to_database(resource_id: int, profiler: Profiler):
""" Save the structure of the dataset resource in the database """
db_hlpr = DatabaseHelper()
db_hlpr.open_connection()
types = db_hlpr.get_variable_types_id(profiler.final_types)
pfl_id = db_hlpr.create_profile(resource_id)
tbl_id = db_hlpr.add_resource_table(pfl_id, offset_y=profiler.offset)
length = len(profiler.final_types)
variables = [None]*length
for i in range(0, length):
typ = profiler.final_types[i]
if profiler.real_headers:
name = profiler.real_headers[i]
else:
name = "Col-" + str(i)
idt = db_hlpr.add_variable(tbl_id, i, types[typ], name)
variables[i] = idt
i = 0
for row in profiler.row_set:
if i < (profiler.offset+1):
i = i + 1
continue
db_hlpr.add_row(tbl_id, i, row, variables)
i = i + 1
db_hlpr.close_connection()
if __name__ == "__main__":
sys.exit(main())
|
Background and objective: The aim of this study is to define the level of organizational citizenship behavior and the perception of organizational justice, and their related factors, in employees of Cukurova University Hospital. Methodology: Our sample consisted of 159 employees. Data were collected with a sociodemographic questionnaire, the Organizational Citizenship Behavior Scale, and the Perception of Organizational Justice Scale. Results: All subdomain scores of organizational citizenship behavior were in the lowest quartile. Three out of four subdomain scores of perception of organizational justice were in the moderate-high group; only the informational justice score was in the low-moderate group. The significantly related factors for all subdomains of organizational citizenship behavior were number of children, occupation, and tenure. Age was significantly related to all subdomains except civic virtue. Gender was significantly related to courtesy and sportsmanship. Perception of organizational justice was significantly related to age, number of patients per day, and the department where the participant works. Organizational citizenship behavior was significantly correlated with perception of organizational justice. Conclusion: As the level of organizational citizenship behavior was found to be low and the level of perception of organizational justice was found to be moderate-high in employees of Cukurova University Hospital, interventions are required to improve the level of organizational citizenship behavior and informational justice.
# coding=utf-8
# Project: Pushetta API
# Service layer with the functionality for managing Subscribers
from rest_framework import generics, permissions
from rest_framework.response import Response
from rest_framework import status
from django.conf import settings
from core.models import Subscriber, Channel, ChannelSubscribeRequest
from core.models import ACCEPTED, PENDING
from api.serializers import SubscriberSerializer, ChannelSerializer, ChannelSubscribeRequestSerializer
from core.subscriber_manager import SubscriberManager
class SubscriberList(generics.GenericAPIView):
"""
Handle device subscription to Pushetta
"""
serializer_class = SubscriberSerializer
def post(self, request, format=None):
serializer = SubscriberSerializer(data=request.DATA)
if serializer.is_valid():
is_sandbox = (True if settings.ENVIRONMENT == "dev" else False)
subscriber_data = serializer.object
subscriber, created = Subscriber.objects.get_or_create(device_id=subscriber_data["device_id"],
defaults={'sub_type': subscriber_data["sub_type"],
'sandbox': is_sandbox, 'enabled': True,
'name': subscriber_data["name"],
'token': subscriber_data["token"]})
if not created:
subscriber.token = subscriber_data["token"]
subscriber.name = subscriber_data["name"]
subscriber.save()
            # Update the token in this device's existing channel subscriptions
subMamager = SubscriberManager()
channel_subscriptions = subMamager.get_device_subscriptions(subscriber_data["device_id"])
for channel_sub in channel_subscriptions:
subMamager.subscribe(channel_sub, subscriber_data["sub_type"], subscriber_data["device_id"],
subscriber_data["token"])
return Response(serializer.data, status=status.HTTP_201_CREATED)
else:
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
class SubcriptionsList(generics.GenericAPIView):
"""
Handle subscriptions to channels of a specific device
"""
permission_classes = [
permissions.AllowAny
]
serializer_class = ChannelSerializer
def get(self, request, format=None, deviceId=None):
channel_names = SubscriberManager().get_device_subscriptions(deviceId)
channels = Channel.objects.filter(name__in=channel_names)
serializer = ChannelSerializer(channels, many=True)
return Response(serializer.data)
class DeviceSubscriptionsRequests(generics.GenericAPIView):
"""
Handle list of device requests (subscribed and pending subscriptions)
"""
permission_classes = [
permissions.AllowAny
]
serializer_class = ChannelSubscribeRequestSerializer
def get(self, request, format=None, deviceId=None):
channel_names = SubscriberManager().get_device_subscriptions(deviceId)
        # Use ChannelSubscribeRequestSerializer; channels that are already subscribed are added as synthetic ACCEPTED requests
channels = Channel.objects.filter(name__in=channel_names)
subscribed = [ChannelSubscribeRequest(channel=ch, device_id=deviceId, status=ACCEPTED) for ch in channels]
        # The requests displayed client-side are only the pending ones
requests = ChannelSubscribeRequest.objects.filter(device_id=deviceId).filter(status=PENDING)
serializer = ChannelSubscribeRequestSerializer(subscribed + list(requests), many=True)
return Response(serializer.data)
|
April Parker, one of the stars of Marvel's Runaways, has teased her character's plot for season 2. Runaways is one of two Marvel TV shows that are aimed at young adults, and season 2 is due to stream on Hulu in December. Parker plays Catherine Wilder, one of the key members of the criminal group known as the Pride, whose plans have been severely disrupted by their teenage children.
Season 1 left the Wilder family divided, with Catherine and her husband heartbroken after Alex and his friends left home and took to the streets - finally becoming the titular Runaways. Although Catherine doesn't know it, her son Alex seems to have gone to extreme lengths to prepare to defend himself; in the last episode, he even managed to acquire a gun. Catherine isn't going to find it easy to pull her family back together again.
In an interview with Hulu Watcher, Parker teased that her character will remain dedicated to Alex over the course of season 2. Catherine and her husband, Geoffrey, have just one goal: to find Alex and bring him home. Although she wouldn't give any details, she confirmed that the Wilders will use "all the resources at our disposal," including the corrupt police force of Los Angeles. Where Catherine had been divided and conflicted in Runaways season 1, now she's laser-focused. "All of the regret and conflict is gone," Parker explained. "Now, she has one goal and that’s getting Alex back. And she’ll make that happen by any means necessary."
The final scenes of season 1 showed the group walking through Los Angeles at sunrise, one of the best moments in the series yet. It put their situation in frankly a far more realistic context than even the comics ever achieved; the teens' privileged upbringing means they're in over their heads. What's more, the Wilders have already ensured the police are looking for them.
It will be fascinating to see if Catherine Wilder's focus causes problems for the Pride. The TV version of the Pride is slightly different to the original comic book incarnation; rather than committed criminals, they're unwilling agents of a mysterious being named Jonah. Jonah's true motives remain unrevealed, but he's unlikely to take well to the Pride's failure to control their children. What's more, Jonah has no qualms about killing the Runaways - indeed, he already killed Nico Minoru's sister Amy, a fact that will no doubt leave Catherine and Geoffrey Wilder deeply concerned.
Interestingly, Parker insisted that she had to read through the comics in order to truly understand Catherine Wilder's character. Her goal is to honor Brian K. Vaughan's original Runaways comic book run, which is one of the most beloved in comic book history. Given that's the case, viewers should expect Catherine to be one of the most dangerous, and ruthless, members of the Pride. Now that her goal is simplified, she's going to be a force to be reckoned with.
Marvel's Runaways returns on Hulu on December 21. |
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import math
import operator
import time
def readratings(f):
ratings = {}
index=0
somme=0
for row in f:
line = row.split("\t")
userid, movieid, rating = int(line[0]), int(line[1]), int(line[2])
ratings.setdefault(userid, {})
ratings[userid][movieid] = rating
somme=somme+rating
index=index+1
return ratings,somme/index
def transpose(util):
transposed = {}
for id1 in util:
for id2 in util[id1]:
transposed.setdefault(id2, {})
transposed[id2][id1] = util[id1][id2]
return transposed
def normalize(util):
avgs = {}
for id1 in util:
avg = 0.0
for id2 in util[id1]:
avg += util[id1][id2]
avg = float(avg)/len(util[id1])
for id2 in util[id1]:
util[id1][id2] -= avg
avgs[id1] = avg
return avgs
def bais_item(movies,mean,lamda=5):
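    # Regularized item bias: b_i = sum_{u in R(i)} (r_ui - mu) / (|R(i)| + lambda),
    # where mu is the global mean rating and lambda damps the bias of rarely rated items.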
bais_items={}
for item in movies:
somme=0
index=0
for user in movies[item]:
somme=somme+(movies[item][user]-mean)
index=index+1
bais_items[item]=somme/(index+lamda)
return bais_items
def bais_user(users,mean,bais_items,lamda=5):
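    # Regularized user bias: b_u = sum_{i in R(u)} (r_ui - mu - b_i) / (|R(u)| + lambda),
    # computed after the item biases so each user's offset is measured relative to mu + b_i.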
bais_users={}
for user in users:
somme=0
index=0
for movie in users[user]:
somme=somme+(users[user][movie]-mean-bais_items[movie])
index=index+1
bais_users[user]=somme/(index+lamda)
return bais_users
def dcg_at_k(r, k, method=0):
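    # Discounted Cumulative Gain over the top-k relevance scores r (1-indexed positions i):
    # method 0 computes r_1 + sum_{i>=2} r_i / log2(i); method 1 computes sum_i r_i / log2(i + 1).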
r = np.asfarray(r)[:k]
if r.size:
if method == 0:
return r[0] + np.sum(r[1:] / np.log2(np.arange(2, r.size + 1)))
elif method == 1:
return np.sum(r / np.log2(np.arange(2, r.size + 2)))
else:
raise ValueError('method must be 0 or 1.')
return 0
def ndcg_at_k(r, k, method=0):
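    # Normalized DCG: the DCG of the given ranking divided by the DCG of the ideal
    # (descending-sorted) ranking, so a perfect ordering scores 1.0.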
dcg_max = dcg_at_k(sorted(r, reverse=True), k, method)
if not dcg_max:
return 0.
return dcg_at_k(r, k, method) / dcg_max
def histogram_plot(users):
list=[]
for u in users:
for j in users[u]:
list.append(users[u][j])
x=np.array(list)
print type(x)
print x
n = 4
bins=np.arange(1,7,1)
fig, ax = plt.subplots(1,1)
ax.hist(x, bins=bins, align='left')
ax.set_xticks(bins[:-1])
plt.xlabel("Rating value")
plt.ylabel("Frequency")
plt.show()
if __name__ == "__main__":
init = time.time()
# read in training data set
f1 = open("ua.base")
users,s = readratings(f1)
f1.close()
histogram_plot(users)
# read in test data set
f2 = open("ua.test")
rated,a = readratings(f2)
# normalize user ratings
movies = transpose(users)
b_items=bais_item(movies,s,lamda=5)
b_users=bais_user(users,s,b_items,lamda=5)
total = 0
totalrmse = 0.0
totalndcg=0
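    # Evaluation: predict each held-out rating with the baseline mu + b_i + b_u (global mean plus
    # item and user biases), accumulate squared error for RMSE, and rank each user's test items by
    # the negated prediction (so an ascending sort yields best-first order) to compute NDCG.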
for userid in rated:
list=[]
for movieid in rated[userid]:
if movieid in movies:
list.append((rated[userid][movieid],-(s+b_items[movieid]+b_users[userid])))
totalrmse += (s+b_items[movieid]+b_users[userid]-rated[userid][movieid])**2
total += 1
list.sort(key=operator.itemgetter(1))
totalndcg=totalndcg+ndcg_at_k([list[i][0] for i in xrange(len(list))],len(list))
print "RMSE= ", math.sqrt(totalrmse/total)
print "NDCG=", totalndcg/len(rated)
print "elapsed time=",time.time()-init
|
Have you ever been caught snoozing at work? Or has anyone pointed it out or woken you up while you dozed off at your desk? Getting caught in a guerrilla nap, especially by your manager, is quite embarrassing. And while napping is not a welcome act at the workplace, contemporary organisations are paying increasing attention to this concern to save on costs. A sleep-deprived and fatigued employee is less productive than an active one. Hence, allowing power naps is a saviour for the lackadaisical employee.
Researchers say that a nap carries the same magnitude of benefits as a full night's sleep. Renowned organisations such as Google, Zappos, PwC and Capital One Labs are among the ones who let their employees succumb to a nap at work.
Why is it OK to take a nap in the office?
Napping helps an employee regain lost concentration levels. It is quite rejuvenating and boosts productivity. It holds the potential to reduce the anxiety and depression that creep into an individual's mind due to pressing deadlines and commitments. Above all, companies have understood that the barometer of organisational success lies in the health of employees. So, they don’t leave any stone unturned to keep employees sane during working hours.
Best for getting straight back to work. A 10 to 20 minute nap is a powerful one for body and mind. It helps to release fatigue and lets you regain vigour. You can take a power nap during tea time if your manager doesn’t let you take a nap at your desk. It is best to move out of your workstation and find a place that is less crowded. Get a quick nap and keep your mobile on vibrate mode, so that it becomes easy for you to wake up after 15-20 minutes.
A 30 minute nap creates a sleep hangover after you wake up, so take some additional time of around 10 minutes to regain consciousness. It is advisable, when you need a siesta of 30 minutes, to seek permission from your manager, maybe in person. If your manager is a cooperative one, he/she won't stop you from it. You might experience listlessness after waking up, so have something to eat so that you are not famished when returning to work.
You can try this nap occasionally. It comprises 60 minutes (1 hour). When you have a presentation in the post-lunch hours and you are sleep deprived, or unwell, go for a slow-wave nap. It acts as the best medicine to help you remember concrete facts, figures and data during the presentation.
This nap is said to enhance cognitive memory processing. It will also keep you fresh for a big day.
Although this is not the kind of nap that should be taken at the workplace, if you are unwell and find it hard to sit at your desk, try a full sleep cycle nap. It is a nap wherein you can doze off for 90 minutes. A full-cycle nap boosts energy levels and will also enhance your productivity. When you are to work longer hours in the office, keep aside some time for napping!
Surprisingly, napping in the office is common and culturally accepted in Japan, because sleeping on duty is considered a subtle sign of diligence. While you might be rebuked for napping in the office, try out a power nap when you get the right time away from work. But don't make napping a habit in the office. Take proper sleep at night to kick-start a fresh new day!
'''
NMEA.py
Defines NMEA sentence and other useful classes
'''
import math
class Point:
'''
Point: Simple coordinate point
Attributes:
lat: Latutude (decimal)
lng: Longitude (decimal)
alt: Altitude (meters)
'''
def __init__(self, lat=0, lng=0, alt=0):
self.lat = lat
self.lng = lng
self.alt = alt
def __str__(self):
return '{0}, {1}, {2} meters'.format(self.lat, self.lng, self.alt)
def getDistance(self, toPoint):
'''
Gets the distance (in arbitrary units) to another point
'''
return math.sqrt(math.pow((self.lat - toPoint.lat),2) + math.pow((self.lng - toPoint.lng),2))
class GGA :
'''
NMEA GGA: fix data
Attributes:
time: String with UTC time
lat: Latitude (decimal value)
lng: Longitude (decimal value)
fix_quality:
0 = Error (no fix)
1 = GPS fix (SPS)
2 = DGPS fix
3 = PPS fix
4 = Real Time Kinematic
5 = Float RTK
6 = estimated (dead reckoning) (2.3 feature)
7 = Manual input mode
8 = Simulation mode
num_sats: number of satellites being tracked
hdp: Horizontal dilution of position
alt: Altitude, Meters, above mean sea level
geoid_height: Height of geoid (mean sea level) above WGS84 ellipsoid
        checksum: message checksum
valid: is this a valid or invalid message (based on complete data and checksum)
'''
def __init__(self, inputString=''):
s = inputString.split(',')
if not len(s) == 15 or not s[0] == '$GPGGA':
raise ValueError('Invalid input string for NMEA GGA object, given string was: ' + inputString)
else:
try:
self.time = s[1]
self.lat = float(s[2][:2]) + float(s[2][2:])/60
if(s[3] == 'S'):
self.lat = -1 * self.lat
self.lng = float(s[4][:3]) + float(s[4][3:])/60
if(s[5] == 'W'):
self.lng = -1 * self.lng
self.fix_quality = s[6]
self.num_sats = int(s[7])
self.hdp = float(s[8])
self.alt = float(s[9])
self.geoid_height = float(s[11])
self.checksum = s[14]
self.valid = _validateChecksum(inputString, self.checksum)
except ValueError:
if not len(self.time):
self.time = ''
if not hasattr(self, 'lat') or not self.lat:
self.lat = 0.0
if not hasattr(self, 'lng') or not self.lng:
self.lng = 0.0
if not hasattr(self, 'fix_quality') or not self.fix_quality:
self.fix_quality = 0
if not hasattr(self, 'num_sats') or not self.num_sats:
self.num_sats = 0
if not hasattr(self, 'hdp') or not self.hdp:
self.hdp = 0.0
if not hasattr(self, 'alt') or not self.alt:
self.alt = 0.0
if not hasattr(self, 'geoid_height') or not self.geoid_height:
self.geoid_height = 0.0
if not hasattr(self, 'checksum') or not self.checksum:
self.checksum = ''
self.valid = False
def getPoint(self):
'''
Returns a Point version of itself
'''
return Point(self.lat, self.lng, self.alt)
def getNMEA(line):
'''
Given a line of text, tries to make a NMEA object from it, or returns None.
Args:
line: NMEA sentence
Returns:
NMEA object if valid line (eg. GGA), None if not valid
'''
if not line:
return None
else:
s = line.split(',')
if len(s) == 15 and s[0] == '$GPGGA':
return GGA(line)
else:
return None
def _validateChecksum(line, checksum=None):
'''
Given a NMEA sentence line, validates the checksum
Args:
line: NMEA sentence
Returns:
True if valid, False otherwise
'''
try:
if line.index('$') == 0 and '*' in line:
check_against = line[1:line.index('*')]
checksum = int(line[line.index('*')+1:], 16)
result = 0
for char in check_against:
result = result ^ ord(char)
return checksum == result
except ValueError:
return False
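# --- Added usage sketch (not part of the original module) ---
# Minimal demonstration of getNMEA()/GGA, run only when this file is executed
# directly. The sentence below is the commonly cited illustrative GGA example;
# any 15-field GGA sentence with a checksum is parsed the same way.
if __name__ == '__main__':
    sample = '$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47'
    gga = getNMEA(sample)
    if gga is not None:
        print(gga.getPoint())
        print('fix quality: {0}, satellites: {1}, checksum ok: {2}'.format(
            gga.fix_quality, gga.num_sats, gga.valid))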
|
This list of Saab Dealers located in Cascade, Iowa (IA) is believed to have been correct at the time of posting. If your Cascade, Iowa Saab Dealership is not listed here or you see an error in your listing, you may tell us about it here. Standard listings are free, but they are neither guaranteed nor warrantied.
#!/usr/bin/python
#encoding: utf-8
'''
This file is part of the Panda3D user interface library, Beast.
See included "License.txt"
'''
class Color(object):
'''
Converts a full color eg, 255 color, (255, 255, 255) OR (255, 255, 255, 255) to a float color (1.0, 1.0, 1.0, 1.0)
Also accepts a float color, in which case it simply returns the float color after validation.
Good for ensuring a full 255 or float color is indeed a float color.
'''
@staticmethod
def fullToFloat(floatOrFull):
assert type(floatOrFull) == list or type(floatOrFull) == tuple
assert len(floatOrFull) == 3 or len(floatOrFull) == 4
# Float color must stay within 0-1, and (1, 1, 1) could be 255 full color!
# So within range 0-1, with floats is float color
# And within range 1 with int is full 255 color
isFloatColor = True
for c in floatOrFull:
if c > 1.0:
isFloatColor = False
break
elif (c == 1) and (type(c) == int or type(c) == long):
isFloatColor = False
break
if isFloatColor:
if len(floatOrFull) == 3:
floatOrFull = (floatOrFull[0], floatOrFull[1], floatOrFull[2], 1.0)
return floatOrFull
r = floatOrFull[0] / 255.0
g = floatOrFull[1] / 255.0
b = floatOrFull[2] / 255.0
if len(floatOrFull) == 4:
a = floatOrFull[3] / 255.0
else:
a = 1.0
return (r, g, b, a)
'''
Converts a hex color eg, #FFFFFF OR #FFFFFFFF, to a float color (1.0, 1.0, 1.0, 1.0)
'''
@staticmethod
def hexToFloat(hexColor):
assert type(hexColor) == str or type(hexColor) == unicode
if hexColor.startswith('#'):
hexColor = hexColor[1:]
assert len(hexColor) == 6 or len(hexColor) == 8, 'Hex color must be either #RRGGBB or #RRGGBBAA format!'
r, g, b = int(hexColor[:2], 16), int(hexColor[2:4], 16), int(hexColor[4:6], 16)
if len(hexColor) == 8:
a = int(hexColor[6:8], 16)
else:
a = 255
return Color.fullToFloat((r, g, b, a))
'''
Converts a float color eg, (1.0, 1.0, 1.0) OR (1.0, 1.0, 1.0, 1.0) to a full color, (255, 255, 255, 255)
'''
@staticmethod
def floatToFull(floatColor):
assert type(floatColor) == list or type(floatColor) == tuple
assert len(floatColor) == 3 or len(floatColor) == 4
r = int(round(floatColor[0] * 255.0, 0))
g = int(round(floatColor[1] * 255.0, 0))
b = int(round(floatColor[2] * 255.0, 0))
if len(floatColor) == 4:
a = int(round(floatColor[3] * 255.0, 0))
else:
a = 255
return (r, g, b, a)
'''
Converts a float color eg, (1.0, 1.0, 1.0) OR (1.0, 1.0, 1.0, 1.0) to a hex color, #FFFFFFFF
'''
@staticmethod
def floatToHex(floatColor, withPound = True):
fullColor = Color.floatToFull(floatColor)
assert type(fullColor) == list or type(fullColor) == tuple
assert len(fullColor) == 3 or len(fullColor) == 4
if len(fullColor) == 3:
hexColor = '%02x%02x%02x' % fullColor
elif len(fullColor) == 4:
hexColor = '%02x%02x%02x%02x' % fullColor
if len(hexColor) == 6:
hexColor = hexColor + 'FF'
hexColor = hexColor.upper()
if withPound:
return '#' + hexColor
else:
return hexColor
'''
Color storage class, takes a single color, in any compatible format, and can convert it to other formats
(1.0, 1.0, 1.0), or (1.0, 1.0, 1.0, 1.0)
(255, 255, 255), or (255, 255, 255, 255)
#RRGGBB, or #RRGGBBAA (with or without pound sign)
'''
def __init__(self, color = None):
self.__color = (0.0, 0.0, 0.0, 0.0)
if color:
self.setColor(color)
'''
Change the color stored by this instance to a different one, this is the same as the constructor optional argument
'''
def setColor(self, color):
if type(color) == str or type(color) == unicode:
self.__color = self.hexToFloat(color)
elif type(color) == tuple or type(color) == list:
self.__color = self.fullToFloat(color)
else:
raise AssertionError('Invalid color format, should be either string, unicode, tuple, or list')
'''
Convert the stored color into a tuple of floats, ranging from 0-1, eg (0.5, 0.5, 0.5, 0.5)
'''
def getAsFloat(self):
return tuple(self.__color)
'''
Convert the stored color into a full 255 color, ranging from 0-255, eg (128, 128, 128, 128)
'''
def getAsFull(self):
return self.floatToFull(self.__color)
'''
Convert the stored color into a hex color, optionally starting with a pound # sign, eg #80808080
    Note: The final two hex digits are Alpha/Transparency, which may simply be FF for "fully solid"
'''
def getAsHex(self, withPound = True):
return self.floatToHex(self.__color, withPound)
if __name__ == '__main__':
def log(col, c):
c.setColor(col)
print '-> %s' % (col,)
print '-> float -> %s' % (c.getAsFloat(),)
print '-> full -> %s' % (c.getAsFull(),)
print '-> hex -> %s' % (c.getAsHex(),)
print
c = Color()
log((0.5, 0.5, 0.5), c)
log((0.5, 0.5, 0.5, 0.5), c)
log((128, 128, 128), c)
log((128, 128, 128, 128), c)
log('#808080', c)
log('#80808080', c)
|
Heads up! This article regarding a phone add-on accessory contains affiliate links. If you buy through the links, we may get a commission on your purchases, which helps with our blog site's maintenance. You will not incur additional costs when you click the links. Thanks.
Hi readers! We're assuming that you want to level up your photos for social media or for any other purpose without having to buy expensive accessories; that's why you're reading this. By the end of this article, we hope you will have more information regarding this phone add-on accessory.
Don't get us wrong, buying the fine and correct equipment can definitely give you better-quality photos. But if you don't want to pay a huge amount of money, or are currently not able to pay for those accessories, this item might be for you.
Nowadays, almost everyone owns a smartphone with a decent camera that they use for capturing photos. It is easy and convenient to bring and can be taken out as quickly as possible. Plus it is multi-purpose and built in so you will definitely have it with you wherever you go.
Your phone, depending on its quality, will still limit your ability to capture your subject. It may dismay you when you're not able to capture the view or image you wanted to portray because you used your phone as your capturing device.
Your photos might look plain and boring, which would be fine if you're taking pictures at gatherings or parties, but if you want to take a photo of something wide or a picturesque view, your phone camera might not be enough.
It was the same case for us: we go to places and capture images and views but were sometimes dissatisfied with the outcome, as it was not what we had in mind. Luckily, we came across a certain accessory for our phone that enhances the photo quality and adds a little artistic touch to it.
These are the phone clip-on lenses. What exactly are clip-on lenses? How can you use them? Why would they be beneficial to you?
First, let's define a Clip-on lens.
Clip-on lenses are simply smaller lenses that you can attach to your phone by clipping them on and then adjusting the focus to align with your smartphone camera.
They give an artistic sense to your photos and enhance otherwise plain, boring images. They are portable, which makes them convenient for the user, and they are very easy to use.
Choose the lens that you want to use to capture a certain image. It can be a wide-angle, fish-eye or macro lens. We will talk about the differences between the three later, so continue reading.
Clip the lens to your front or back camera.
Align the lens with your smartphone camera.
We bought a lens bundle that includes these three different types of lenses.
They are the wide angle, fish-eye, and macro lenses.
A wide-angle lens fits more of the background, scene and subjects into the image. The best application of the wide-angle lens is when you're taking pictures of landscapes, as it gives a wider view of the subject. Your normal square or rectangular image will take in more of the scene, providing better photos.
You will have an easier time including more in your pictures without needing to use high-end equipment like DSLR cameras. We used it mostly to capture picturesque views and scenery that we usually cannot fit in with a smartphone alone.
For samples of wide-angle lenses, you can check our other travel blogs here. We used a wide-angle lens to capture the beauty of Dubai Creek in this sample.
A fish-eye lens is a type of wide-angle lens as well, but it gives its users a different view. It introduces visual distortion to create wide panoramic or hemispherical images, adding an artistic design to your already wide-angled image. As with the wide-angle lens, you can definitely fit in more subjects if you use the fish-eye lens.
It makes the image much more interesting than the normal rectangular wide angle. There are many uses for a fish-eye lens that you can discover and apply for better image composition.
You can check our photos on our recent Jordan trip for samples of fish-eye photographs.
A macro lens, or the term macro, has always been associated with close-up photography. It enables a detailed view of a certain subject, usually making smaller subjects clearer.
It gives a vivid view of things that we usually don't see with the naked eye. This type of lens captures smaller subjects and makes them appear big on your camera, like a magnifying glass.
We have tried using it a couple of times to test it but haven't been able to use it in our travel photography yet.
There's also the telephoto lens but since we haven't used it yet we cannot give you an honest review on the subject.
Now that we have covered the different kinds of lenses and how you can use them, next up is what to consider when buying this phone add-on accessory.
Consider your budget first - how much are you willing to spend on these phone lenses? Will you be using all of them? Will a bundle or a single lens be right for you? Taking your budget into consideration will go a long way in deciding whether you want to buy all the different kinds of lenses or just the one most useful for you. We have to admit we haven't been using the macro lens most of the time. This should be the first priority if you want to invest in these lenses.
The material and quality of the lens - taking into consideration which materials were used for the product can help you decide whether the amount you're paying is worth it. You should always go for quality rather than cheap products, as quality products will definitely work for a long time.
Convenience - you should also consider how easy the lens will be to use. Is the clip-on stable? Will it be easy to bring when you go outdoors? Will it fit inside your bag? Having easy-to-carry lenses can help you pack lightly; if they're small, you can possibly fit them in your pockets as well.
These are some things to consider when you're planning to buy your new lens. Make sure to research or ask around about which lens is good for your use.
You can check out our links below for quality lens.
I'll update these links once we get a link from Lazada for orders in the Philippines.
That's it for our content regarding clip-on lenses. We hope that you can use this very easy trick to level up your travel photos and add some spice to your next trip pictures. You can check this article related to buying a clip-on lens that we used for our research as well.
For more updates on our contents and product offerings, sign up for our newsletter down below. We promise you we won't spam your mailbox.
You can check out our previous posts for travel and leisure ideas below.
Don't miss out on our next content, subscribe now. |
import pygame
import pymunk
from pymunk.vec2d import Vec2d
from source import game
from source.constants import *
from source.utilities import *
class Projectile:
def __init__(self, position, velocity, impulse):
self.radius = BULLET_RADIUS
mass = BULLET_MASS
moment = pymunk.moment_for_circle(mass, 0, self.radius)
self.body = pymunk.Body(mass, moment)
self.body.position = position
self.shape = pymunk.Circle(self.body, self.radius)
self.shape.collision_type = CT_BULLET
game.space.add(self.body, self.shape)
self.strength = BULLET_STRENGTH
self.body.velocity = velocity
self.body.apply_impulse_at_world_point(impulse)
game.object_manager.register(self)
def update(self):
Utils.remove_if_outside_game_area(self.body, self, self.radius)
def hit(self, damage):
self.strength -= damage
if self.strength < 0:
game.object_manager.unregister(self)
def delete(self):
# print('Bullet removed')
game.space.remove(self.body, self.shape)
def draw(self):
draw_tuple = Utils.vec2d_to_draw_tuple(self.body.position)
pygame.draw.circle(game.screen, (255, 255, 255), draw_tuple, self.radius)
class Bomb:
def __init__(self, position, velocity, impulse):
self.radius = BOMB_RADIUS
mass = BOMB_MASS
moment = pymunk.moment_for_circle(mass, 0, self.radius)
self.body = pymunk.Body(mass, moment)
self.body.position = position
self.shape = pymunk.Circle(self.body, self.radius)
self.shape.collision_type = CT_BOMB
game.space.add(self.body, self.shape)
self.strength = BOMB_STRENGTH
self.body.velocity = velocity
self.body.apply_impulse_at_world_point(impulse)
game.object_manager.register(self)
self.birth = pygame.time.get_ticks()
self.lifetime = BOMB_TIMER
self.exploded = False
def explode(self):
print('BANG!')
        # Create a blast sensor shape with radius BOMB_BLAST_RADIUS
self.blast_shape = pymunk.Circle(self.body, BOMB_BLAST_RADIUS)
self.blast_shape.sensor = True
self.blast_shape.collision_type = CT_BLAST
game.space.add(self.blast_shape)
# game.object_manager.unregister(self)
self.exploded = True
def update(self):
if self.exploded:
game.object_manager.unregister(self)
age = pygame.time.get_ticks() - self.birth
if age > self.lifetime and not self.exploded:
self.explode()
Utils.remove_if_outside_game_area(self.body, self, BOMB_BLAST_RADIUS)
def hit(self, damage):
self.strength -= damage
if self.strength < 0:
self.explode()
game.object_manager.unregister(self)
def delete(self):
print('Bomb removed')
if hasattr(self, 'blast_shape'):
game.space.remove(self.blast_shape)
game.space.remove(self.body, self.shape)
def draw(self):
draw_tuple = Utils.vec2d_to_draw_tuple(self.body.position)
pygame.draw.circle(game.screen, (0, 255, 0), draw_tuple, self.radius)
class PrimaryCannon:
def __init__(self, parent):
self.parent = parent
game.object_manager.register(self)
self.cannon_power = CANNON_POWER
def activate(self):
position = self.pos
local_impulse = Vec2d(0, CANNON_POWER)
parent_angle = self.parent.body.angle
impulse = local_impulse.rotated(parent_angle)
velocity = self.parent.body.velocity
Projectile(position, velocity, impulse)
def update(self):
parent_pos = self.parent.body.position
parent_angle = self.parent.body.angle
local_offset = Vec2d(0, BOMBER_HEIGHT/2)
self.pos = parent_pos + (local_offset.rotated(parent_angle))
self.draw_pos = Utils.vec2d_to_draw_tuple(self.pos)
def delete(self):
pass
def draw(self):
pygame.draw.circle(game.screen, (255, 0, 0), self.draw_pos, 1)
class SecondaryBombLauncher:
def __init__(self, parent):
self.parent = parent
game.object_manager.register(self)
def activate(self):
position = self.pos
local_impulse = Vec2d(0, -BOMB_LAUNCHER_POWER)
parent_angle = self.parent.body.angle
impulse = local_impulse.rotated(parent_angle)
velocity = self.parent.body.velocity
Bomb(position, velocity, impulse)
def update(self):
parent_pos = self.parent.body.position
parent_angle = self.parent.body.angle
local_offset = Vec2d(0, -BOMBER_HEIGHT/2)
self.pos = parent_pos + (local_offset.rotated(parent_angle))
self.draw_pos = Utils.vec2d_to_draw_tuple(self.pos)
def delete(self):
pass
def draw(self):
pygame.draw.circle(game.screen, (0, 0, 255), self.draw_pos, 3)
|
Anna Richards, an eighth-grade student at Canton Charter Academy, was surprised during an assembly with a $5,000 college scholarship. Principal Kelie Fuller announced Anna as one of seven recipients of the 2018 National Heritage Academies (NHA) CollegeBoundTM Scholarship.
As part of the application process, each applicant was required to write a personal essay on one of four topics. Anna selected perseverance, a moral focus virtue, for her personal essay. |
# Copyright 2014 PressLabs SRL
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import pytest
from fuse import FuseOSError
from gitfs.views.read_only import ReadOnlyView
class TestReadOnly(object):
def test_cant_write(self):
view = ReadOnlyView()
for method in ["write", "create", "utimens",
"chmod", "mkdir"]:
with pytest.raises(FuseOSError):
getattr(view, method)("path", 1)
with pytest.raises(FuseOSError):
view.getxattr("path", "name", 1)
with pytest.raises(FuseOSError):
view.chown("path", 1, 1)
def test_always_return_0(self):
view = ReadOnlyView()
for method in ["flush", "releasedir", "release"]:
assert getattr(view, method)("path", 1) == 0
assert view.opendir("path") == 0
def test_open(self):
view = ReadOnlyView()
with pytest.raises(FuseOSError):
view.open("path", os.O_WRONLY)
assert view.open("path", os.O_RDONLY) == 0
def test_access(self):
view = ReadOnlyView()
with pytest.raises(FuseOSError):
view.access("path", os.W_OK)
assert view.access("path", os.R_OK) == 0
|
The Psychology Department offers two undergraduate degrees (B.A. and B.S.) and a minor in psychology. The B.A. program offers a broad background on the subject, while the B.S. program is designed for close study of specialized areas. We also offer a multidisciplinary minor in Aging.
Our department provides undergraduates with opportunities usually only offered to graduate students. Internships, student organizations, and advanced research projects help prepare our students for graduate school and rewarding career opportunities. |
import web
import psycopg2
import sys, os, traceback
from operator import itemgetter
import db.KLPDB
import db.Queries_dise
from utils.CommonUtil import CommonUtil
#connection = db.KLPDB.getConnection()
#cursor = connection.cursor()
cursor = db.KLPDB.getWebDbConnection1()
class Demographics:
def generateData(self,cons_type, constid):
data = {}
constype = "mp"
if cons_type == 1:
data["const_type"]='MP'
constype = "mp"
elif cons_type == 2:
data["const_type"]='MLA'
constype = "mla"
elif cons_type == 3:
data["const_type"]='Corporator'
constype = "corporator"
elif cons_type == 4:
data["const_type"]='District'
constype = "district"
elif cons_type == 5:
data["const_type"]='Block'
constype = "block"
elif cons_type == 6:
data["const_type"]='Cluster'
constype = "cluster"
data["const_name"]=str(constid[0])
queries = ['gend_sch']
#,'gend_presch']
data.update(self.genderGraphs(constype,constid,queries))
#queries = ['mt_sch']
#,'mt_presch']
#data.update(self.mtGraphs(constype,constid,queries))
queries = ['moi_sch','cat_sch','enrol_sch']
#,'enrol_presch']
data.update(self.pieGraphs(constype,constid,queries))
data.update(self.constituencyData(constype,constid))
return data
def genderGraphs(self,constype,constid,qkeys):
data = {}
for querykey in qkeys:
result = cursor.query(db.Queries_dise.getDictionary(constype)[constype + '_' + querykey],{'s':constid})
chartdata ={}
for row in result:
chartdata[str(row.sex.strip())] = int(row.sum)
if len(chartdata.keys()) > 0:
total = chartdata['Boy']+chartdata['Girl']
percBoys = round(float(chartdata['Boy'])/total*100,0)
percGirls = round(float(chartdata['Girl'])/total*100,0)
data[querykey+"_tb"]=chartdata
else:
data[querykey+"_hasdata"] = 0
return data
def mtGraphs(self,constype,constid,qkeys):
data = {}
for querykey in qkeys:
result = cursor.query(db.Queries_dise.getDictionary(constype)[constype + '_' + querykey],{'s':constid})
tabledata = {}
invertdata = {}
order_lst = []
for row in result:
invertdata[int(row.sum)] = str(row.mt.strip().title())
if len(invertdata.keys()) > 0:
checklist = sorted(invertdata)
others = 0
for i in checklist[0:len(checklist)-4]:
others = others + i
del invertdata[i]
invertdata[others] = 'Others'
tabledata = dict(zip(invertdata.values(),invertdata.keys()))
if 'Other' in tabledata.keys():
tabledata['Others'] = tabledata['Others'] + tabledata['Other']
del tabledata['Other']
for i in sorted(tabledata,key=tabledata.get,reverse=True):
order_lst.append(i)
if len(tabledata.keys()) > 0:
data[querykey + "_tb"] = tabledata
data[querykey + "_ord_lst"] = order_lst
else:
data[querykey + "_hasdata"] = 0
return data
def pieGraphs(self,constype,constid,qkeys):
data = {}
for querykey in qkeys:
result = cursor.query(db.Queries_dise.getDictionary(constype)[constype + '_' + querykey],{'s':constid})
tabledata = {}
for row in result:
tabledata[str(row.a1.strip().title())] = str(int(row.a2))
sorted_x = sorted(tabledata.items(), key=itemgetter(1))
tabledata = dict(sorted_x)
if len(tabledata.keys()) > 0:
data[querykey + "_tb"] = tabledata
else:
data[querykey + "_hasdata"] = 0
return data
def constituencyData(self,constype,constid):
data = {}
util = CommonUtil()
ret_data = util.constituencyData(constype,constid)
data.update(ret_data[0])
neighbors = self.neighboursData(ret_data[1],ret_data[2])
if neighbors:
data.update(neighbors)
return data
def neighboursData(self, neighbours, constype):
data = {}
constype_str = constype
try:
if len(neighbours) > 0:
neighbours_sch = {}
neighbours_presch = {}
result = cursor.query(db.Queries_dise.getDictionary(constype)[constype_str + '_neighbour_sch'], {'s':tuple(neighbours)})
for row in result:
neighbours_sch[row.const_ward_name.strip()]={'schcount':str(row.count)}
#result = cursor.query(db.Queries_dise.getDictionary(constype)[constype_str + '_neighbour_presch'], {'s':tuple(neighbours)})
#for row in result:
#neighbours_presch[row.const_ward_name.strip()] = 0 #{'preschcount':str(row.count)}
result = cursor.query(db.Queries_dise.getDictionary(constype)[constype_str + '_neighbour_gendsch'],{'s':tuple(neighbours)})
for row in result:
neighbours_sch[row.const_ward_name.strip()][row.sex.strip()] = str(row.sum)
#result = cursor.query(db.Queries_dise.getDictionary(constype)[constype_str + '_neighbour_gendpresch'],{'s':tuple(neighbours)})
#for row in result:
#neighbours_presch[row.const_ward_name.strip()][row.sex.strip()] = str(row.sum)
#neighbours_presch[row.const_ward_name.strip()]['Boys'] = 0 #str(row.sum)
#neighbours_presch[row.const_ward_name.strip()]['Girls'] = 0 #str(row.sum)
if len(neighbours_sch.keys()) > 0:
data["neighbours_sch"] = neighbours_sch
else:
data["neighbours_sch_hasdata"] = 0
if len(neighbours_presch.keys()) > 0:
data["neighbours_presch"] = neighbours_presch
else:
data["neighbours_presch_hasdata"] = 0
else:
data["neighbours_sch_hasdata"] = 0
data["neighbours_presch_hasdata"] = 0
return data
except:
print "Unexpected error:", sys.exc_info()
traceback.print_exc(file=sys.stdout)
return None
|
Notice that your Dean's signature on the Academic part of this proposal also commits the Dean to cover any possible budget shortfalls. This underlines the necessity for you to engage the budget process carefully and with some 'contingency funding' allotment to cover the unexpected (flight problems, currency exchange fluctuations, etc.).
# -*- coding: utf-8 -*-
from distutils.core import setup
from setuptools import find_packages
with open('docs/requirements.txt') as f:
required = f.read().splitlines()
setup(
name='tango-contact-manager',
version='0.10.0',
author=u'Tim Baxter',
author_email='[email protected]',
url='https://github.com/tBaxter/tango-contact-manager',
license='LICENSE',
description="""Provides contact forms and any other user submission form you might want.
Create user submission forms on the fly, straight from the Django admin.
""",
long_description=open('README.md').read(),
zip_safe=False,
packages=find_packages(),
include_package_data=True,
install_requires=required,
classifiers=[
"Development Status :: 4 - Beta",
"Environment :: Web Environment",
"Framework :: Django",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3.4',
],
)
|
Triple strand of spectacular clear Swarovski Crystals. Dazzling AB crystal center. A 2” extender chain is added for your pets’ comfort and this piece is finished with a secure lobster clasp and the signature Swarovski Crystal Briolette Teardrop. |
import contextlib
import json
from .common import assert_command, exec_command
from .marathon import watch_deployment
def remove_job(job_id):
""" Remove a job
:param job_id: id of job to remove
:type job_id: str
:rtype: None
"""
assert_command(['dcos', 'job', 'remove',
'--stop-current-job-runs', job_id])
def show_job(app_id):
"""Show details of a Metronome job.
:param app_id: The id for the application
:type app_id: str
:returns: The requested Metronome job.
:rtype: dict
"""
cmd = ['dcos', 'job', 'show', app_id]
returncode, stdout, stderr = exec_command(cmd)
assert returncode == 0
assert stderr == b''
result = json.loads(stdout.decode('utf-8'))
assert isinstance(result, dict)
assert result['id'] == app_id
return result
def show_job_schedule(app_id, schedule_id):
"""Show details of a Metronome schedule.
:param app_id: The id for the job
:type app_id: str
:param schedule_id: The id for the schedule
:type schedule_id: str
:returns: The requested Metronome job.
:rtype: dict
"""
cmd = ['dcos', 'job', 'schedule', 'show', app_id, '--json']
returncode, stdout, stderr = exec_command(cmd)
assert returncode == 0
assert stderr == b''
result = json.loads(stdout.decode('utf-8'))
assert isinstance(result[0], dict)
assert result[0]['id'] == schedule_id
return result[0]
@contextlib.contextmanager
def job(path, job_id):
"""Context manager that deploys a job on entrance, and removes it on
exit.
    :param path: path to the job's json definition
    :type path: str
    :param job_id: job id
    :type job_id: str
    :rtype: None
"""
add_job(path)
try:
yield
finally:
remove_job(job_id)
def watch_job_deployments(count=300):
"""Wait for all deployments to complete.
:param count: max number of seconds to wait
:type count: int
:rtype: None
"""
deps = list_job_deployments()
for dep in deps:
watch_deployment(dep['id'], count)
def add_job(job_path):
""" Add a job, and wait for it to deploy
:param job_path: path to job's json definition
:type job_path: str
:param wait: whether to wait for the deploy
:type wait: bool
:rtype: None
"""
assert_command(['dcos', 'job', 'add', job_path])
def list_job_deployments(expected_count=None, app_id=None):
"""Get all active deployments.
:param expected_count: assert that number of active deployments
equals `expected_count`
:type expected_count: int
:param app_id: only get deployments for this app
:type app_id: str
:returns: active deployments
:rtype: [dict]
"""
cmd = ['dcos', 'job', 'list', '--json']
if app_id is not None:
cmd.append(app_id)
returncode, stdout, stderr = exec_command(cmd)
result = json.loads(stdout.decode('utf-8'))
assert returncode == 0
if expected_count is not None:
assert len(result) == expected_count
assert stderr == b''
return result
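# --- Added usage sketch (not part of the original helpers) ---
# Shows how the job() context manager above is typically combined with
# show_job(); 'tests/data/metronome/pikachu.json' and 'pikachu' are
# hypothetical names used only for illustration.
#
# def test_show_job_example():
#     with job('tests/data/metronome/pikachu.json', 'pikachu'):
#         job_json = show_job('pikachu')
#         assert job_json['id'] == 'pikachu'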
|
Only a minimum discard is made from each ingot, but bars should be free from visible piping. Seams and other surface irregularities may be present. Chemistry is subject to ladle analysis only. Easily machined, welded, and formed but not suitable for applications involving forging, heat treating, cold drawing or other operations where internal soundness or freedom from detrimental surface imperfections is of prime importance. Conforms to ASTM A575.
Manufactured under additional controls in rolling and surface preparation. Sufficient discard is made from each ingot to avoid injurious piping. To minimize surface imperfections, blooms and billets are carefully inspected and prepared by chipping, scarfing, and grinding prior to rolling into bars. Chemistry is subject to ladle and check analyses. This quality is desirable when more exact requirements exist for machining and heat treatment. Conforms to ASTM A576.
In addition, The Metal Store further assures the quality of the steel bars by having them produced to explicit specifications, designed to promote superior and more uniform characteristics.
These are general and special purpose steels widely used for a variety of job requirements. Choose from low cost, versatile 1018… up to 1117 and 1141, all produced to explicit Metal Store specifications for controlled chemistry, to assure greater internal soundness, cleanliness, improved hardening qualities and better machinability.
For maximum machinability, choose from our screw stock – including 12L14, 12L14 + Te, 1214Bi and 1215, or if greater strength is required, select from our medium carbon, direct hardening steels. And if extreme size accuracy, straightness and concentricity are required, see our listings of ground and polished shafting stock.
A36 (HR) – Bars designed for use in structural applications. Minimum physical properties of 36 KSI yield strength and 58 KSI tensile strength. Identification color: brown & pink. Conforms to ASTM A36.
1018, 1020 (HR, CF) – A low carbon steel with a medium manganese content. Has tool hardening properties, fair machinability. Readily brazed and welded. Suitable for shafting and for applications that do not require the greater strength of high carbon and alloy steels. Identification color: green. Conforms to ASTM A-108 and AMS 5069. (Not applicable to bars under 2 15/16” and sizes lighter than 29.34 lbs. per foot).
1117 (HR, CF) - A low carbon, high manganese steel. Machinability is greatly improved over 1018, and case hardening is deep and uniform, supported by a tough ductile core. Withstands bending, broaching, and most deforming, without cracking. Identification color: gold. Conforms to ASTM A-108, AMS 5022 and QQ-S-637.
11L17 (HR, CF) - The addition of .15 to .35% lead to the 1117 analysis provides for even faster machining without changing the excellent case hardening characteristics. Identification color: gold with purple dot. Conforms to ASTM A-108 and QQ-S-637.
M1020 (HR) – A low carbon, general purpose, merchant quality steel, produced to wider carbon and manganese ranges than standard steels. Suitable for forming and welding. No color identification.
1035 (HR) – An intermediate carbon, special quality machinery steel, higher in strength and hardness than low carbon steel. Used for studs, bolts, etc. Identification color: blue.
1040, 1045 (HR, CF) – Medium carbon steels used when greater strength and hardness is desired in the as-rolled condition. Can be hammer forged and responds to heat treatment. Suitable for flame and induction hardening. Uses include gears, shafts, axles, bolts and studs, machine parts, etc. Identification color: yellow.
M1044 (HR) – Merchant quality steel similar to AISI 1045, but less costly. Contains lesser amounts of manganese (.25/.60); strength and responsiveness to heat treatment are approximately the same. Identification color: yellow.
1050 (CF) – Strain hardened, stress relieved material which offers 100 KSI yield strength. Improved strength over 1045. Identification color: yellow/pink dot. Conforms to ASTM A311.
1141 (HR, CF) – A medium carbon, special quality, manganese steel with improved machinability and better heat treatment response (surface hardness is deeper and more uniform) than plain carbon steels. Good as-rolled strength and toughness. Uses include gears, shafts, axles, bolts, studs, pins, etc. Identification color: brown.
11L41 (HR, CF) - Has all the desirable characteristics of Rytense plus greatly superior machinability – due to 0.15%/0.35% lead addition. Identification color: brown with purple dot.
1144 (HR, CF) - Similar to 1141 with slightly higher carbon and sulphur content resulting in superior machinability and improved response to heat treating. Often used for induction hardened parts requiring 55 RC surface. Identification color: brown & orange. Conforms to ASTM A311.
1144 A311 CL.B (CF) - Steel bars with 100 KSI minimum yield. Capable of flame hardening to 56-60 RC for such applications as piston rods. Identification color: brown & orange. Conforms to ASTM A311.
Stressproof (CF) – High strength without heat treatment. Stress relieved. Readily machinable with minimum distortion. Identification color: brown and yellow. Conforms to ASTM A311.
Fatigue-proof (CF) – Higher strength than Stressproof achieved by mechanical working and thermal treatment. Eliminates need for heat treating and secondary operations (cleaning, straightening, etc.). Readily machinable with low residual stresses. Identification color: brown/white dot.
These are the fastest machining steels for highest production rates on screw machines. All are in the low carbon range and can be case hardened. When superior case hardening qualities are required, selection can be made from the low carbon case hardening steels.
1215 (HR, CF) – Fast cutting steel is the standard screw stock. A resulphurized and rephosphurized steel for typical production runs. Cutting speeds and machining characteristics approach Ledloy 300. Machined finish is smooth and bright. Identification color: orange. Conforms to ASTM A108 and QQ-S-637.
1214 Bi – A lead-free alternative to 12L14 and 12L14Te or Se. Bismuth in steel acts as an internal lubricant, thus reducing cutting forces and minimizing tool wear at the same high rates as leaded products. Identification color: black/purple dot. Conforms to ASTM A108.
12L14 (CF) – A lead bearing steel with average machining rates 30%-40% faster than 1215. Here’s a steel that offers inherent ductility combined with finer surface quality. Since 12L14 is an extraordinarily fast machining steel, it has become the favorite for automatic screw machine work. Identification color: purple. Conforms to ASTM A108 and QQ-S-637.
12L14 + Te and 12L14 + Se (CF) – A leaded tellurium or selenium bearing material which is among our fastest machining steel bars. Increases parts production a minimum of 25% over conventional leaded steel. Finish is excellent and savings in tool life are substantial. Identification color: 12L14 + Te, purple/gold dot; 12L14 + Se, purple/green dot. Conforms to ASTM A108 and QQ-S-637.
Extreme size accuracy, straightness and concentricity to minimize wear in high speed applications. Turned, ground & polished bars can be machined unsymmetrically, as in key-seating, with minimum distortion because cold drawing strains are not developed. Drawn, ground & polished bars combine the strength advantages of cold drawn stock with extra accuracy and lustrous finish. Conforms to ASTM A-108 and QQ-S-637.
Special finish drawn, ground & polished bars are used where extreme precision and size accuracy, straightness and finish are necessary. Conforms to ASTM A-108 and QQ-S-637.
Identification color: 1045 DGP or TGP - red & yellow; 1141 – red & brown; Stressproof - brown & yellow/red dot; 1215 – red; 1144 SRA 100 – brown & yellow/white dot.
Our stock includes chrome plated bars for cylinder applications in most any stock size or in the custom size you need. All bars are precision ground and plated to 68/70 Rc surface hardness with 6/12 RMS finish. Plating thickness .0005”. To protect surfaces, we cardboard-tube bars during all phases of handling and storage.
# -*- coding: utf-8 -*-
"""Tests for forms and WTForms extensions."""
# noqa: D103
import itertools
from unittest.mock import MagicMock
import pytest
from wtforms import Form, StringField
from orcid_hub.forms import validate_orcid_id_field # noqa: E128
from orcid_hub.forms import BitmapMultipleValueField, CountrySelectField, PartialDate, PartialDateField
from orcid_hub.models import PartialDate as PartialDateDbField
def test_partial_date_widget(): # noqa
assert '<option selected value="1995">1995</option>' in PartialDate()(
MagicMock(data=PartialDateDbField(1995)))
field = MagicMock(label="LABEL", id="ID", data=PartialDateDbField(2017, 5, 13))
field.name = "NAME"
pd = PartialDate()(field)
assert '<option selected value="2017">2017</option>' in pd
assert '<option selected value="5">05</option>' in pd
assert '<option selected value="13">13</option><option value="14">14</option>' in pd
assert '"NAME:year"' in pd
assert '"NAME:month"' in pd
assert '"NAME:day"' in pd
@pytest.fixture
def test_form(): # noqa
class F(Form):
pdf1 = PartialDateField("f1", default=PartialDateDbField(1995), id="test-id-1")
pdf2 = PartialDateField("f2", default=PartialDateDbField(2017, 5, 13), id="test-id-2")
pdf3 = PartialDateField("f3")
csf1 = CountrySelectField()
csf2 = CountrySelectField(label="Select Country")
bmvf1 = BitmapMultipleValueField(choices=[
(
1,
"one",
),
(
2,
"two",
),
(
4,
"four",
),
])
bmvf2 = BitmapMultipleValueField(
choices=[
(
1,
"one",
),
(
2,
"two",
),
(
4,
"four",
),
], )
bmvf2.is_bitmap_value = False
return F
def test_partial_date_field_defaults(test_form): # noqa
tf = test_form()
assert tf.pdf1.data == PartialDateDbField(1995)
assert tf.pdf2.data == PartialDateDbField(2017, 5, 13)
assert tf.pdf1.label.text == "f1"
class DummyPostData(dict): # noqa
def __init__(self, data): # noqa
super().__init__()
self.update(data)
def getlist(self, key): # noqa
v = self[key]
if not isinstance(v, (list, tuple)):
v = [v]
return v
def test_partial_date_field_with_data(test_form): # noqa
tf = test_form(DummyPostData({"pdf1:year": "2000", "pdf1:month": "1", "pdf1:day": "31"}))
pdf1 = tf.pdf1()
assert '<option selected value="31">' in pdf1
assert '<option value="2001">2001</option><option selected value="2000">2000</option>' in pdf1
assert '<option value="">Month</option><option selected value="1">01</option><option value="2">' in pdf1
def test_partial_date_field_errors(test_form): # noqa
tf = test_form(
DummyPostData({
"pdf1:year": "ERROR",
"pdf1:month": "ERROR",
"pdf1:day": "ERROR"
}))
tf.validate()
assert len(tf.pdf1.process_errors) > 0
tf = test_form(DummyPostData({"pdf1:year": "2001", "pdf1:month": "", "pdf1:day": "31"}))
tf.validate()
assert len(tf.pdf1.errors) > 0
tf = test_form(DummyPostData({"pdf1:year": "", "pdf1:month": "12", "pdf1:day": "31"}))
tf.validate()
assert len(tf.pdf1.errors) > 0
tf = test_form(DummyPostData({"pdf1:year": "2001", "pdf1:month": "13", "pdf1:day": ""}))
tf.validate()
assert len(tf.pdf1.errors) > 0
tf = test_form(DummyPostData({"pdf1:year": "2001", "pdf1:month": "-1", "pdf1:day": ""}))
tf.validate()
assert len(tf.pdf1.errors) > 0
tf = test_form(DummyPostData({"pdf1:year": "1995", "pdf1:month": "2", "pdf1:day": "29"}))
tf.validate()
assert len(tf.pdf1.errors) > 0
tf = test_form(DummyPostData({"pdf1:year": "1996", "pdf1:month": "2", "pdf1:day": "29"}))
tf.validate()
assert not tf.pdf1.errors
tf = test_form(DummyPostData({"pdf1:year": "1994", "pdf1:month": "2", "pdf1:day": "30"}))
tf.validate()
assert len(tf.pdf1.errors) > 0
tf = test_form(DummyPostData({"pdf1:year": "1994", "pdf1:month": "4", "pdf1:day": "31"}))
tf.validate()
assert len(tf.pdf1.errors) > 0
for m in itertools.chain(range(9, 13, 2), range(2, 8, 2)):
tf = test_form(DummyPostData({"pdf1:year": "1994", "pdf1:month": f"{m}", "pdf1:day": "31"}))
tf.validate()
assert len(tf.pdf1.errors) > 0
def test_partial_date_field_with_filter(test_form): # noqa
test_form.pdf = PartialDateField(
"f", filters=[lambda pd: PartialDateDbField(pd.year + 1, pd.month + 1, pd.day + 1)])
tf = test_form(DummyPostData({"pdf:year": "2012", "pdf:month": "4", "pdf:day": "12"}))
pdf = tf.pdf()
assert '<option selected value="13">' in pdf
assert '<option selected value="2013">' in pdf
assert '<option selected value="5">' in pdf
assert len(tf.pdf1.process_errors) == 0
def failing_filter(*args, **kwargs):
raise ValueError("ERROR!!!")
test_form.pdf = PartialDateField("f", filters=[failing_filter])
tf = test_form(DummyPostData({"pdf:year": "2012", "pdf:month": "4", "pdf:day": "12"}))
assert len(tf.pdf.process_errors) > 0
assert "ERROR!!!" in tf.pdf.process_errors
def test_partial_date_field_with_obj(test_form): # noqa
tf = test_form(None, obj=MagicMock(pdf1=PartialDateDbField(2017, 1, 13)))
pdf1 = tf.pdf1()
assert '<option selected value="13">' in pdf1
assert '<option selected value="2017">2017</option>' in pdf1
assert '<option value="">Month</option><option selected value="1">01</option><option value="2">' in pdf1
tf = test_form(None, obj=MagicMock(pdf3=PartialDateDbField(2017)))
pdf3 = tf.pdf3()
assert '<option selected value="">' in pdf3
assert '<option selected value="2017">2017</option>' in pdf3
assert '<option selected value="">Month</option><option value="1">01</option><option value="2">' in pdf3
def test_partial_date_field_with_data_and_obj(test_form): # noqa
tf = test_form(
DummyPostData({
"pdf1:year": "2000"
}), MagicMock(pdf1=PartialDateDbField(2017, 1, 13)))
pdf1 = tf.pdf1()
assert '<option selected value="13">' in pdf1
assert '<option value="2001">2001</option><option selected value="2000">2000</option>' in pdf1
assert '<option value="">Month</option><option selected value="1">01</option><option value="2">' in pdf1
def test_orcid_validation(test_form): # noqa
orcid_id = StringField("ORCID iD", [
validate_orcid_id_field,
])
orcid_id.data = "0000-0001-8228-7153"
validate_orcid_id_field(test_form, orcid_id)
orcid_id.data = "INVALID FORMAT"
with pytest.raises(ValueError) as excinfo:
validate_orcid_id_field(test_form, orcid_id)
assert f"Invalid ORCID iD {orcid_id.data}. It should be in the form of 'xxxx-xxxx-xxxx-xxxx' where x is a digit." \
in str(excinfo.value)
orcid_id.data = "0000-0001-8228-7154"
with pytest.raises(ValueError) as excinfo:
validate_orcid_id_field(test_form, orcid_id)
assert f"Invalid ORCID iD {orcid_id.data} checksum. Make sure you have entered correct ORCID iD." in str(
excinfo.value)
def test_country_select_field(test_form): # noqa
tf = test_form()
assert tf.csf1.label.text == "Country"
assert tf.csf2.label.text == "Select Country"
def test_bitmap_multiple_value_field(test_form): # noqa
tf = test_form()
tf.bmvf1.data = 3
tf.bmvf2.data = (
1,
4,
)
tf.validate()
tf.bmvf1.process_data(5)
tf.bmvf1.process_data([1, 4])
|
Clark Feltman, an apprentice lineman in the Sneads district, graduated from Southeast Lineman Training Center (SLTC) in Trenton, Georgia in April.
"I learned a lot at SLTC - it was fun to meet new people and learn about the job. Learning in a classroom setting was great - overall, it was an invaluable experience. But, to me, the biggest thing I learned was how and why to do things safely. I feel like they did a good job of teaching us how to be safe before we did the work," said Feltman.
In 1934, William Wister Haines, a 24-year-old lineman, ceased his employment for a while, holed up and wrote a book about the adventures of a young lineman breaking into the trade. That book, SLIM, became a national best seller, enjoyed by thousands of readers who knew nothing about linework. It was just a great story and could only have been written by someone who had lived linework. Warner Brothers purchased the rights to the movie, hired Haines to write the screenplay and help with technical direction, and arranged for Henry Fonda to play the title character.
William Wister Haines experienced the life of a “boomer” from 1926-1930, then did catenary work on the Pennsylvania Railroad for three years. Only a man who started as a grunt, dreamed of putting on the gaffs, climbed poles and towers, worked in ice, hail and snow, and survived linework’s everyday dangers could describe the life of a lineman so well that his thoughts and words would ring true nearly a century later.
In 2009, Bill Haines, Jr., son of the author, republished 250 copies of the book and they sold out within four days! Once the book became available, several organizations began using copies of the novel as awards - for topping out (achieving the highest level of expertise a lineman can attain), prizes at rodeos and Southeast Lineman Training Center’s “Slim the Lineman - William Wister Haines” award.
“Slim sets an example-he’s the lineman who is totally dedicated to the trade, to the process of continuous learning, being the best possible lineman and above all, to safety. While the book was written about line work in the 20s, these principles remain paramount. The illustrations by Robert Lawson, of work in those early days, are fantastic,” said Haines.
The book had been out of print for over 50 years before its second publication, and old grainy copies of the movie had become treasures. Generations had been skipped, and linemen had no way to know that the movie came from an exciting, powerful novel.
“A personal copy of the book is one that a young lineman can treasure forever-a symbol that he succeeded with years of rigorous training and is headed for a meaningful, challenging career,” said Haines.
When asked how he felt when he was selected to receive the SLIM Award, Feltman said, "It was probably one of the best feelings I've ever had. It's a great accomplishment for me and I was at a loss for words." |
#!/usr/bin/env python
##############################################
# The MIT License (MIT)
# Copyright (c) 2016 Kevin Walchko
# see LICENSE for full details
##############################################
from pyxl320 import Packet
# from pyxl320 import DummySerial
from pyxl320 import ServoSerial
import argparse
DESCRIPTION = """
Set the angle of a servo in degrees.
Example: set servo 3 to angle 45
./set_angle /dev/serial0 45 -i 3
"""
def handleArgs():
parser = argparse.ArgumentParser(description=DESCRIPTION, formatter_class=argparse.RawTextHelpFormatter)
parser.add_argument('-i', '--id', help='servo id', type=int, default=1)
parser.add_argument('port', help='serial port or \'dummy\' for testing', type=str)
parser.add_argument('angle', help='servo angle in degrees: 0.0 - 300.0', type=float)
args = vars(parser.parse_args())
return args
def main():
args = handleArgs()
ID = args['id']
port = args['port'] # '/dev/tty.usbserial-A5004Flb'
angle = args['angle']
print('Setting servo[{}] to {:.2f} on port {}'.format(ID, angle, port))
if port.lower() == 'dummy':
serial = ServoSerial(port=port, fake=True)
else:
serial = ServoSerial(port=port)
serial.open()
pkt = Packet.makeServoPacket(ID, angle) # move servo 1 to 158.6 degrees
ans = serial.sendPkt(pkt) # send packet to servo
if ans:
print('status: {}'.format(ans))
if __name__ == '__main__':
main()
|
Summary: Change is inevitable--so roll with it!
Change! What comes to mind when you hear the word change? Do you think of the "jingle,jangle" in your pocket? Do you think of the many times your mom would change the furniture in the house around, making you stub your toe in the darkness? What about the many times you have to change outfits because you couldn’t figure out what to wear? Change! This is even hard for church members--especially in the pastoral leadership. And in the words of a friend of mine, "You have made everybody glad." "How", I asked. "Some were glad to see you come, others were glad while you were there and now some are glad to see you go." No matter what perception you may have on change, we must all realize that change is a necessary part of life. Some of us welcome changes, while others of us despise change.
One show that usually catches my attention is "Family Matters," which features the Winslow family and their pesky neighbor, Steve Urkel. Steve is a nerdy, clumsy type of fellow. When he knocks something over or breaks something, he usually asks in that irritating voice, "Did I do that?" But Steve is a smart fellow, for he invents things--sometimes they work and most of the time they don't. He built a machine in which he could turn himself into another person. He goes from Steve Urkel to Stephan Urkel. He goes from bumbling idiot to smooth, cool and popular with the ladies. Even Laura falls head over heels for Stephan....All because he has a radical change from Steve the nerd to Stephan the cool and smooth. Just as Steve's change is a radical one, we, too, must experience a change.
Learning to pray can change our lives. We live in a time when lives need to be changed for the good of mankind and the glory of God. Sinners need to become saints, and saints need to become more like the Lord Jesus Christ with each passing day. I believe the most powerful thing anyone can do for someone else is not to loan them a couple of dollars or to run their errands, but to pray for them. Because through prayer you and I can touch the very heart of God, who in turn can touch anyone, anywhere, regardless of the circumstances. The power is not in the prayer itself, but in the power that God releases in response to the prayer. Is there someone in your life you are praying for every day? Do you believe God is going to change someone’s life because you are praying for them?
From this prayer, Paul says we can pray the kind of prayers which release God’s power into our lives and the lives of those we are praying for. There are reasons for unanswered prayers in our lives, and one of them is that we get fired up for a few days. We shake the heavens with our prayers and about a week later, we forget that prayer because we have failed to see the results as we would want to see them. We have the idea that God is not going to act because He does not meet our schedule as we would desire Him to. We stop praying! Our prayer lives are just like our Bible-reading lives--inconsistent. Yet Paul says we have not stopped praying for you. I believe that if mothers and fathers would pray specific, consistent prayers for their children, they would see dramatic changes in their children. If husbands and wives began to pray for each other, they would see something happen in their marriages. If you prayed for your church and really prayed for God’s anointing, you would see a change in the church. But you should not expect a change, nor expect the anointing of God, by simply coming in on Sunday morning and politely bowing your head for a dose of "cold prayer". Whatever happened to getting up early by your bedside and praying to God--whatever happened to prayer meetings at church--I believe they have been substituted with "fellowship hour", or "coffee hour". A songwriter once penned these words, "we don’t pray like we used to..." As a young girl, I remember vividly the deacons would pray and I knew something was going on in my little bitty soul that would say "thank you Jesus."
In verse 9, we are admonished to ask God to fill us with the knowledge of His will. Paul did not pray that the Colossians would wonder about the will of God. Too many of us are wondering what the will of God is for our own lives. Listen, if God has a will for our lives, and He does....His desire for our lives will be clear. You won’t have to wonder, hem and haw, about the will of the Lord for your life. We pray, Lord, save me, yet we continue doing the same old stuff...looking for the will of the Lord through the Ouija Board or calling 1-900-ARE-YOUSTUPID!...looking for the will of God through the tea leaves...We are looking for a change! Praying can change things...praying can change people....praying can change situations...praying can change circumstances.
Thanks for the Steve Urkel illustration!
Change is inevitable--so roll with it! |
"""Contains form class defintions for form for the questions app.
Classes: [
SurveyQuestionForm
]
"""
from django import forms
from django.core.urlresolvers import reverse
from crispy_forms.bootstrap import FormActions
from crispy_forms.helper import FormHelper
from crispy_forms.layout import Submit, Layout
from . import config
from .models import ChoiceSelectionAnswerFormat, ListInputAnswerFormat
class SurveyQuestionForm(forms.Form):
"""Models a form for how a question should be posed to the user."""
answer_input = forms.CharField(
label = "",
required=False,
widget=forms.Textarea()
)
class Meta:
fields = ("answer_input")
def __init__(self, *args, **kwargs):
survey_question = kwargs.pop('survey_question', None)
super().__init__(*args, **kwargs)
# NOTE: It is assumed that all answer_formats belonging to a question
        # are of the same type.
# TODO: ENHANCEMENT: Enforce same answer format instances or throw an
# error
answer_formats = survey_question.question.answer_formats.all()
answer_format = answer_formats[0]
# Determine what the type of 'answer_input' form field is and its
# associated widget type.
if isinstance(answer_format, ChoiceSelectionAnswerFormat):
choices = ((a.id, a.choice_name) for a in answer_formats)
label = ""
required = False
if (answer_format.answer_format_type == config.CHOICE_SELECTION_MANY_FORMAT):
self.fields['answer_input'] = forms.MultipleChoiceField(
label=label,
required=required,
choices=choices
)
self.fields['answer_input'].widget = forms.CheckboxSelectMultiple()
else:
self.fields['answer_input'] = forms.ChoiceField(
label=label,
required=required,
choices=choices
)
self.fields['answer_input'].widget = forms.RadioSelect()
elif isinstance(answer_format, ListInputAnswerFormat):
self.fields['answer_input'].help_text = "Please enter each new item on a new line."
self.helper = FormHelper()
self.helper.form_class = 'form-horizontal'
self.helper.form_method = 'post'
self.helper.form_action = reverse("questions:answer-survey-question")
self.helper.layout = Layout(
'answer_input',
)
self.helper.layout.append(
FormActions(
Submit('skip', 'Skip'),
Submit('submit', 'Submit', css_class='btn-primary'),
)
)
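# Example usage (a minimal, hypothetical sketch of how a view might drive this
# form; 'survey_question' is assumed to be an object whose related question has
# at least one answer format):
#
#   form = SurveyQuestionForm(request.POST, survey_question=survey_question)
#   if form.is_valid():
#       answer = form.cleaned_data['answer_input']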
|
Our extensive network enables us to perform operations near you.
Our goal is to keep the downtime of our clients’ equipment to a minimum. Delivery vans run regularly and frequently between your site and our accredited laboratories. Autocal transports your equipment in padded crates to ensure optimal and secure transport conditions.
########################################################################
#
# OpenOffice.org - a multi-platform office productivity suite
#
# Author:
# Kohei Yoshida <[email protected]>
#
# The Contents of this file are made available subject to the terms
# of GNU Lesser General Public License Version 2.1 and any later
# version.
#
########################################################################
import sys
import globals
from globals import getSignedInt
# ----------------------------------------------------------------------------
# Reference: The Microsoft Compound Document File Format by Daniel Rentz
# http://sc.openoffice.org/compdocfileformat.pdf
# ----------------------------------------------------------------------------
from globals import output
class NoRootStorage(Exception): pass
class ByteOrder:
LittleEndian = 0
BigEndian = 1
Unknown = 2
class BlockType:
MSAT = 0
SAT = 1
SSAT = 2
Directory = 3
class StreamLocation:
SAT = 0
SSAT = 1
class Header(object):
@staticmethod
def byteOrder (chars):
b1, b2 = ord(chars[0]), ord(chars[1])
if b1 == 0xFE and b2 == 0xFF:
return ByteOrder.LittleEndian
elif b1 == 0xFF and b2 == 0xFE:
return ByteOrder.BigEndian
else:
return ByteOrder.Unknown
def __init__ (self, bytes, params):
self.bytes = bytes
self.MSAT = None
self.docId = None
self.uId = None
self.revision = 0
self.version = 0
self.byteOrder = ByteOrder.Unknown
self.minStreamSize = 0
self.numSecMSAT = 0
self.numSecSSAT = 0
self.numSecSAT = 0
self.__secIDFirstMSAT = -2
self.__secIDFirstDirStrm = -2
self.__secIDFirstSSAT = -2
self.secSize = 512
self.secSizeShort = 64
self.params = params
def getSectorSize (self):
return 2**self.secSize
def getShortSectorSize (self):
return 2**self.secSizeShort
def getFirstSectorID (self, blockType):
if blockType == BlockType.MSAT:
return self.__secIDFirstMSAT
elif blockType == BlockType.SSAT:
return self.__secIDFirstSSAT
elif blockType == BlockType.Directory:
return self.__secIDFirstDirStrm
return -2
def output (self):
def printRawBytes (bytes):
for b in bytes:
output("%2.2X "%ord(b))
output("\n")
def printSep (c='-', w=68, prefix=''):
print(prefix + c*w)
printSep('=', 68)
print("Compound Document Header")
printSep('-', 68)
if self.params.debug:
globals.dumpBytes(self.bytes[0:512])
printSep('-', 68)
# document ID and unique ID
output("Document ID: ")
printRawBytes(self.docId)
output("Unique ID: ")
printRawBytes(self.uId)
# revision and version
print("Revision: %d Version: %d"%(self.revision, self.version))
# byte order
output("Byte order: ")
if self.byteOrder == ByteOrder.LittleEndian:
print("little endian")
elif self.byteOrder == ByteOrder.BigEndian:
print("big endian")
else:
print("unknown")
# sector size (usually 512 bytes)
print("Sector size: %d (%d)"%(2**self.secSize, self.secSize))
# short sector size (usually 64 bytes)
print("Short sector size: %d (%d)"%(2**self.secSizeShort, self.secSizeShort))
# total number of sectors in SAT (equals the number of sector IDs
# stored in the MSAT).
print("Total number of sectors used in SAT: %d"%self.numSecSAT)
print("Sector ID of the first sector of the directory stream: %d"%
self.__secIDFirstDirStrm)
print("Minimum stream size: %d"%self.minStreamSize)
if self.__secIDFirstSSAT == -2:
print("Sector ID of the first SSAT sector: [none]")
else:
print("Sector ID of the first SSAT sector: %d"%self.__secIDFirstSSAT)
print("Total number of sectors used in SSAT: %d"%self.numSecSSAT)
if self.__secIDFirstMSAT == -2:
            # There are no more sector IDs stored outside the header.
print("Sector ID of the first MSAT sector: [end of chain]")
else:
            # There are more sector IDs than the 109 stored in the header.
print("Sector ID of the first MSAT sector: %d"%(self.__secIDFirstMSAT))
print("Total number of sectors used to store additional MSAT: %d"%self.numSecMSAT)
def parse (self):
# document ID and unique ID
self.docId = self.bytes[0:8]
self.uId = self.bytes[8:24]
# revision and version
self.revision = getSignedInt(self.bytes[24:26])
self.version = getSignedInt(self.bytes[26:28])
# byte order
self.byteOrder = Header.byteOrder(self.bytes[28:30])
# sector size (usually 512 bytes)
self.secSize = getSignedInt(self.bytes[30:32])
# short sector size (usually 64 bytes)
self.secSizeShort = getSignedInt(self.bytes[32:34])
# total number of sectors in SAT (equals the number of sector IDs
# stored in the MSAT).
self.numSecSAT = getSignedInt(self.bytes[44:48])
self.__secIDFirstDirStrm = getSignedInt(self.bytes[48:52])
self.minStreamSize = getSignedInt(self.bytes[56:60])
self.__secIDFirstSSAT = getSignedInt(self.bytes[60:64])
self.numSecSSAT = getSignedInt(self.bytes[64:68])
self.__secIDFirstMSAT = getSignedInt(self.bytes[68:72])
self.numSecMSAT = getSignedInt(self.bytes[72:76])
# master sector allocation table
self.MSAT = MSAT(2**self.secSize, self.bytes, self.params)
# First part of MSAT consisting of an array of up to 109 sector IDs.
# Each sector ID is 4 bytes in length.
for i in xrange(0, 109):
pos = 76 + i*4
id = getSignedInt(self.bytes[pos:pos+4])
if id == -1:
break
self.MSAT.appendSectorID(id)
if self.__secIDFirstMSAT != -2:
# additional sectors are used to store more SAT sector IDs.
secID = self.__secIDFirstMSAT
size = self.getSectorSize()
inLoop = True
while inLoop:
pos = 512 + secID*size
bytes = self.bytes[pos:pos+size]
n = int(size/4)
for i in xrange(0, n):
pos = i*4
id = getSignedInt(bytes[pos:pos+4])
if id < 0:
inLoop = False
break
elif i == n-1:
# last sector ID - points to the next MSAT sector.
secID = id
break
else:
self.MSAT.appendSectorID(id)
return 512
def getMSAT (self):
return self.MSAT
def getSAT (self):
return self.MSAT.getSAT()
def getSSAT (self):
ssatID = self.getFirstSectorID(BlockType.SSAT)
if ssatID < 0:
return None
chain = self.getSAT().getSectorIDChain(ssatID)
if len(chain) == 0:
return None
obj = SSAT(2**self.secSize, self.bytes, self.params)
for secID in chain:
obj.addSector(secID)
obj.buildArray()
return obj
def getDirectory (self):
dirID = self.getFirstSectorID(BlockType.Directory)
if dirID < 0:
return None
chain = self.getSAT().getSectorIDChain(dirID)
if len(chain) == 0:
return None
obj = Directory(self, self.params)
for secID in chain:
obj.addSector(secID)
return obj
def dummy ():
pass
class MSAT(object):
"""Master Sector Allocation Table (MSAT)
This class represents the master sector allocation table (MSAT) that stores
sector IDs that point to all the sectors that are used by the sector
    allocation table (SAT). The actual SAT is to be constructed by combining
    all the sectors pointed to by the sector IDs in order of occurrence.
"""
def __init__ (self, sectorSize, bytes, params):
self.sectorSize = sectorSize
self.secIDs = []
self.bytes = bytes
self.__SAT = None
self.params = params
def appendSectorID (self, id):
self.secIDs.append(id)
def output (self):
print('')
print("="*68)
print("Master Sector Allocation Table (MSAT)")
print("-"*68)
for id in self.secIDs:
print("sector ID: %5d (pos: %7d)"%(id, 512+id*self.sectorSize))
def getSATSectorPosList (self):
list = []
for id in self.secIDs:
pos = 512 + id*self.sectorSize
list.append([id, pos])
return list
def getSAT (self):
if self.__SAT != None:
return self.__SAT
obj = SAT(self.sectorSize, self.bytes, self.params)
for id in self.secIDs:
obj.addSector(id)
obj.buildArray()
self.__SAT = obj
return self.__SAT
class SAT(object):
"""Sector Allocation Table (SAT)
"""
def __init__ (self, sectorSize, bytes, params):
self.sectorSize = sectorSize
self.sectorIDs = []
self.bytes = bytes
self.array = []
self.params = params
def getSectorSize (self):
return self.sectorSize
def addSector (self, id):
self.sectorIDs.append(id)
def buildArray (self):
if len(self.array) > 0:
# array already built.
return
numItems = int(self.sectorSize/4)
self.array = []
for secID in self.sectorIDs:
pos = 512 + secID*self.sectorSize
for i in xrange(0, numItems):
beginPos = pos + i*4
id = getSignedInt(self.bytes[beginPos:beginPos+4])
self.array.append(id)
def outputRawBytes (self):
bytes = ""
for secID in self.sectorIDs:
pos = 512 + secID*self.sectorSize
bytes += self.bytes[pos:pos+self.sectorSize]
globals.dumpBytes(bytes, 512)
def outputArrayStats (self):
sectorTotal = len(self.array)
sectorP = 0 # >= 0
sectorM1 = 0 # -1
sectorM2 = 0 # -2
sectorM3 = 0 # -3
sectorM4 = 0 # -4
sectorMElse = 0 # < -4
sectorLiveTotal = 0
for i in xrange(0, len(self.array)):
item = self.array[i]
if item >= 0:
sectorP += 1
elif item == -1:
sectorM1 += 1
elif item == -2:
sectorM2 += 1
elif item == -3:
sectorM3 += 1
elif item == -4:
sectorM4 += 1
elif item < -4:
sectorMElse += 1
else:
sectorLiveTotal += 1
print("total sector count: %4d"%sectorTotal)
print("* live sector count: %4d"%sectorP)
print("* end-of-chain sector count: %4d"%sectorM2) # end-of-chain is also live
print("* free sector count: %4d"%sectorM1)
print("* SAT sector count: %4d"%sectorM3)
print("* MSAT sector count: %4d"%sectorM4)
print("* other sector count: %4d"%sectorMElse)
def output (self):
print('')
print("="*68)
print("Sector Allocation Table (SAT)")
print("-"*68)
if self.params.debug:
self.outputRawBytes()
print("-"*68)
for i in xrange(0, len(self.array)):
print("%5d: %5d"%(i, self.array[i]))
print("-"*68)
self.outputArrayStats()
def getSectorIDChain (self, initID):
if initID < 0:
return []
chain = [initID]
nextID = self.array[initID]
while nextID != -2:
chain.append(nextID)
nextID = self.array[nextID]
return chain
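    # For example (illustrative only): if array[5] == 8, array[8] == 9 and
    # array[9] == -2, then getSectorIDChain(5) returns [5, 8, 9].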
class SSAT(SAT):
"""Short Sector Allocation Table (SSAT)
    SSAT contains an array of sector ID chains of all short streams, as opposed
to SAT which contains an array of sector ID chains of all standard streams.
The sector IDs included in the SSAT point to the short sectors in the short
stream container stream.
The first sector ID of SSAT is in the header, and the IDs of the remaining
sectors are contained in the SAT as a sector ID chain.
"""
def output (self):
print('')
print("="*68)
print("Short Sector Allocation Table (SSAT)")
print("-"*68)
if self.params.debug:
self.outputRawBytes()
print("-"*68)
for i in xrange(0, len(self.array)):
item = self.array[i]
output("%3d : %3d\n"%(i, item))
self.outputArrayStats()
class Directory(object):
"""Directory Entries
This stream contains a list of directory entries that are stored within the
entire file stream.
"""
class Type:
Empty = 0
UserStorage = 1
UserStream = 2
LockBytes = 3
Property = 4
RootStorage = 5
class NodeColor:
Red = 0
Black = 1
Unknown = 99
class Entry:
def __init__ (self):
self.Name = ''
self.CharBufferSize = 0
self.Type = Directory.Type.Empty
self.NodeColor = Directory.NodeColor.Unknown
self.DirIDLeft = -1
self.DirIDRight = -1
self.DirIDRoot = -1
self.UniqueID = None
self.UserFlags = None
self.TimeCreated = None
self.TimeModified = None
self.StreamSectorID = -2
self.StreamSize = 0
self.bytes = []
def __init__ (self, header, params):
self.sectorSize = header.getSectorSize()
self.bytes = header.bytes
self.minStreamSize = header.minStreamSize
self.sectorIDs = []
self.entries = []
self.SAT = header.getSAT()
self.SSAT = header.getSSAT()
self.header = header
self.RootStorage = None
self.RootStorageBytes = ""
self.params = params
def __buildRootStorageBytes (self):
if self.RootStorage == None:
# no root storage exists.
return
firstSecID = self.RootStorage.StreamSectorID
chain = self.header.getSAT().getSectorIDChain(firstSecID)
for secID in chain:
pos = 512 + secID*self.sectorSize
self.RootStorageBytes += self.header.bytes[pos:pos+self.sectorSize]
def __getRawStream (self, entry):
chain = []
if entry.StreamLocation == StreamLocation.SAT:
chain = self.header.getSAT().getSectorIDChain(entry.StreamSectorID)
elif entry.StreamLocation == StreamLocation.SSAT:
chain = self.header.getSSAT().getSectorIDChain(entry.StreamSectorID)
if entry.StreamLocation == StreamLocation.SSAT:
# Get the root storage stream.
if self.RootStorage == None:
raise NoRootStorage
bytes = ""
self.__buildRootStorageBytes()
size = self.header.getShortSectorSize()
for id in chain:
pos = id*size
bytes += self.RootStorageBytes[pos:pos+size]
return bytes
offset = 512
size = self.header.getSectorSize()
bytes = ""
for id in chain:
pos = offset + id*size
bytes += self.header.bytes[pos:pos+size]
return bytes
def getRawStreamByName (self, name):
bytes = []
for entry in self.entries:
if entry.Name == name:
bytes = self.__getRawStream(entry)
break
return bytes
def addSector (self, id):
self.sectorIDs.append(id)
def output (self, debug=False):
print('')
print("="*68)
print("Directory")
if debug:
print("-"*68)
print("sector(s) used:")
for secID in self.sectorIDs:
print(" sector %d"%secID)
print("")
for secID in self.sectorIDs:
print("-"*68)
print(" Raw Hex Dump (sector %d)"%secID)
print("-"*68)
pos = globals.getSectorPos(secID, self.sectorSize)
globals.dumpBytes(self.bytes[pos:pos+self.sectorSize], 128)
for entry in self.entries:
self.__outputEntry(entry, debug)
def __outputEntry (self, entry, debug):
print("-"*68)
if len(entry.Name) > 0:
name = entry.Name
if ord(name[0]) <= 5:
name = "<%2.2Xh>%s"%(ord(name[0]), name[1:])
print("name: %s (name buffer size: %d bytes)"%(name, entry.CharBufferSize))
else:
print("name: [empty] (name buffer size: %d bytes)"%entry.CharBufferSize)
if self.params.debug:
print("-"*68)
globals.dumpBytes(entry.bytes)
print("-"*68)
output("type: ")
if entry.Type == Directory.Type.Empty:
print("empty")
elif entry.Type == Directory.Type.LockBytes:
print("lock bytes")
elif entry.Type == Directory.Type.Property:
print("property")
elif entry.Type == Directory.Type.RootStorage:
print("root storage")
elif entry.Type == Directory.Type.UserStorage:
print("user storage")
elif entry.Type == Directory.Type.UserStream:
print("user stream")
else:
print("[unknown type]")
output("node color: ")
if entry.NodeColor == Directory.NodeColor.Red:
print("red")
elif entry.NodeColor == Directory.NodeColor.Black:
print("black")
elif entry.NodeColor == Directory.NodeColor.Unknown:
print("[unknown color]")
print("linked dir entries: left: %d; right: %d; root: %d"%
(entry.DirIDLeft, entry.DirIDRight, entry.DirIDRoot))
self.__outputRaw("unique ID", entry.UniqueID)
self.__outputRaw("user flags", entry.UserFlags)
self.__outputRaw("time created", entry.TimeCreated)
self.__outputRaw("time last modified", entry.TimeModified)
output("stream info: ")
if entry.StreamSectorID < 0:
print("[empty stream]")
else:
strmLoc = "SAT"
if entry.StreamLocation == StreamLocation.SSAT:
strmLoc = "SSAT"
print("(first sector ID: %d; size: %d; location: %s)"%
(entry.StreamSectorID, entry.StreamSize, strmLoc))
satObj = None
secSize = 0
if entry.StreamLocation == StreamLocation.SAT:
satObj = self.SAT
secSize = self.header.getSectorSize()
elif entry.StreamLocation == StreamLocation.SSAT:
satObj = self.SSAT
secSize = self.header.getShortSectorSize()
if satObj != None:
chain = satObj.getSectorIDChain(entry.StreamSectorID)
print("sector count: %d"%len(chain))
print("total sector size: %d"%(len(chain)*secSize))
if self.params.showSectorChain:
self.__outputSectorChain(chain)
def __outputSectorChain (self, chain):
line = "sector chain: "
lineLen = len(line)
for id in chain:
frag = "%d, "%id
fragLen = len(frag)
if lineLen + fragLen > 68:
print(line)
line = frag
lineLen = fragLen
else:
line += frag
lineLen += fragLen
if line[-2:] == ", ":
line = line[:-2]
lineLen -= 2
if lineLen > 0:
print(line)
def __outputRaw (self, name, bytes):
if bytes == None:
return
output("%s: "%name)
for byte in bytes:
output("%2.2X "%ord(byte))
print("")
def getDirectoryNames (self):
names = []
for entry in self.entries:
names.append(entry.Name)
return names
def parseDirEntries (self):
if len(self.entries):
# directory entries already built
return
# combine all sectors first.
bytes = ""
for secID in self.sectorIDs:
pos = globals.getSectorPos(secID, self.sectorSize)
bytes += self.bytes[pos:pos+self.sectorSize]
self.entries = []
# each directory entry is exactly 128 bytes.
numEntries = int(len(bytes)/128)
if numEntries == 0:
return
for i in xrange(0, numEntries):
pos = i*128
self.entries.append(self.parseDirEntry(bytes[pos:pos+128]))
def parseDirEntry (self, bytes):
entry = Directory.Entry()
entry.bytes = bytes
name = globals.getUTF8FromUTF16(bytes[0:64])
entry.Name = name
entry.CharBufferSize = getSignedInt(bytes[64:66])
entry.Type = getSignedInt(bytes[66:67])
entry.NodeColor = getSignedInt(bytes[67:68])
entry.DirIDLeft = getSignedInt(bytes[68:72])
entry.DirIDRight = getSignedInt(bytes[72:76])
entry.DirIDRoot = getSignedInt(bytes[76:80])
entry.UniqueID = bytes[80:96]
entry.UserFlags = bytes[96:100]
entry.TimeCreated = bytes[100:108]
entry.TimeModified = bytes[108:116]
entry.StreamSectorID = getSignedInt(bytes[116:120])
entry.StreamSize = getSignedInt(bytes[120:124])
entry.StreamLocation = StreamLocation.SAT
if entry.Type != Directory.Type.RootStorage and \
entry.StreamSize < self.header.minStreamSize:
entry.StreamLocation = StreamLocation.SSAT
if entry.Type == Directory.Type.RootStorage and entry.StreamSectorID >= 0:
# This is an existing root storage.
self.RootStorage = entry
return entry
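# Example usage (a rough sketch, not part of the original module; 'params' is
# assumed to be an options object providing the attributes referenced above,
# such as 'debug' and 'showSectorChain', and the file is read as a byte string
# under Python 2, which the rest of this module assumes):
#
#   chars = open('document.xls', 'rb').read()
#   header = Header(chars, params)
#   header.parse()
#   header.output()
#   directory = header.getDirectory()
#   directory.parseDirEntries()
#   directory.output()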
|
The best way to communicate with Pelagios Commons is through our Mailing List. If you need to get in touch with the team directly you can do so at [email protected].
# -*- coding: utf-8 -*-
"""Primary wepy simulation database driver and access API using the
HDF5 format.
The HDF5 Format Specification
=============================
As part of the wepy framework this module provides a fully-featured
API for creating and accessing data generated in weighted ensemble
simulations run with wepy.
The need for a special purpose format is many-fold but primarily it is
the nonlinear branching structure of walker trajectories coupled with
weights.
That is, for standard simulations, data is organized as independent
linear trajectories of frames, each related linearly to the one before
it and after it.
In weighted ensemble due to the resampling (i.e. cloning and merging)
of walkers, a single frame may have multiple 'child' frames.
This is the primary motivation for this format.
However, in practice it solves several other issues and itself is a
more general and flexible format than for just weighted ensemble
simulations.
Concretely the WepyHDF5 format is simply an informally described
schema that is commensurable with the HDF5 constructs of hierarchical
groups (similar to unix filesystem directories) arranged as a tree
with datasets as the leaves.
The hierarchy is fairly deep and so we will progress downwards from
the top and describe each broad section in turn breaking it down when
necessary.
Header
------
The items right under the root of the tree are:
- runs
- topology
- _settings
The first item 'runs' is itself a group that contains all of the
primary data from simulations. In WepyHDF5 the run is the unit
dataset. All data internal to a run is self contained. That is for
multiple dependent trajectories (e.g. from cloning and merging) all
exist within a single run.
This excludes metadata-like things that may be needed for interpreting
this data, such as the molecular topology that imposes structure over
a frame of atom positions. This information is placed in the
'topology' item.
The topology field has no specified internal structure at this
time. However, with the current implementation of the WepyHDF5Reporter
(which is the principal implementation of generating a WepyHDF5
object/file from simulations) this is simply a string dataset. This
string dataset should be a JSON compliant string. The format of which
is specified elsewhere and was borrowed from the mdtraj library.
Warning! this format and specification for the topology is subject to
change in the future and will likely be kept unspecified indefinitely.
For most intents and purposes (which we assume to be for molecular or
molecular-like simulations) the 'topology' item (and perhaps any other
item at the top level other than those preceded by an underscore,
such as in the '_settings' item) is merely useful metadata that
applies to ALL runs and is not dynamical.
In the language of the orchestration module all data in 'runs' uses
the same 'apparatus' which is the function that takes in the initial
conditions for walkers and produces new walkers. The apparatus may
differ in the specific values of parameters but not in kind. This is
to facilitate runs that are continuations of other runs. For these
kinds of simulations the state of the resampler, boundary conditions,
etc. will not be as they were initially but are the same in kind or
type.
All of the necessary type information of data in runs is kept in the
'_settings' group. This is used to serialize information about the
data types, shapes, run to run continuations etc. This allows for the
initialization of an empty (no runs) WepyHDF5 database at one time and
filling of data at another time. Otherwise types of datasets would
have to be inferred from the data itself, which may not exist yet.
As a convention, items which are preceded by an underscore (following
the python convention) are to be considered hidden and mechanical to
the proper functioning of various WepyHDF5 API features, such as
sparse trajectory fields.
The '_settings' is specified as a simple key-value structure, however
values may be arbitrarily complex.
Runs
----
The meat of the format is contained within the runs group:
- runs
- 0
- 1
- 2
- ...
Under the runs group are a series of groups for each run. Runs are
named according to the order in which they were added to the database.
Within a run (say '0' from above) we have a number of items:
- 0
- init_walkers
- trajectories
- decision
- resampling
- resampler
- warping
- progress
- boundary_conditions
Trajectories
^^^^^^^^^^^^
The 'trajectories' group is where the data for the frames of the
walker trajectories is stored.
Even though the tree-like trajectories of weighted ensemble data may
be well suited to having a tree-like storage topology we have opted to
use something more familiar to the field, and have used a collection
of linear "trajectories".
This way of breaking up the trajectory data coupled with proper
records of resampling (see below) allows for the imposition of a tree
structure without committing to that as the data storage topology.
This allows the WepyHDF5 format to be easily used as a container
format for collections of linear trajectories. While this is not
supported in any real capacity it is one small step to convergence. We
feel that a format that contains multiple trajectories is important
for situations like weighted ensemble where trajectories are
interdependent. The transition to a storage format like HDF5 however
opens up many possibilities for new features for trajectories that
have not occurred despite several attempts to forge new formats based
on HDF5 (TODO: get references right; see work in mdtraj and MDHDF5).
Perhaps these formats have not caught on because the existing formats
(e.g. XTC, DCD) for simple linear trajectories are good enough and
there is little motivation to migrate.
However, the WepyHDF5 format (and the related sub-formats described
below, e.g. record groups and the trajectory format) covers both a new
use case which can't be achieved with the old formats and the old use
cases with ease.
Once users see the power of using a format like HDF5 from using wepy
they may continue to use it for simpler simulations.
In any case the 'trajectories' in the group for weighted ensemble
simulations should be thought of only as containers and not literally
as trajectories. That is frame 4 does not necessarily follow from
frame 3. So one may think of them more as "lanes" or "slots" for
trajectory data that needs to be stitched together with the
appropriate resampling records.
The routines and methods for generating contiguous trajectories from
the data in WepyHDF5 are given through the 'analysis' module, which
generates "traces" through the dataset.
With this in mind we will describe the sub-format of a trajectory now.
The 'trajectories' group is similar to the 'runs' group in that it has
sub-groups whose names are numbers. These numbers, however, are not the
order in which the trajectories were created but an index for each
trajectory; the trajectories are typically laid out all at once.
For a wepy simulation with a constant number of walkers you will only
ever need as many trajectories/slots as there are walkers. So if you
have 8 walkers then you will have trajectories 0 through 7. Concretely:
- runs
- 0
- trajectories
- 0
- 1
- 2
- 3
- 4
- 5
- 6
- 7
If we look at trajectory 0 we might see the following groups within:
- positions
- box_vectors
- velocities
- weights
Which is what you would expect for a constant pressure molecular
dynamics simulation where you have the positions of the atoms, the box
size, and velocities of the atoms.
The particulars for what "fields" a trajectory in general has are not
important but this important use-case is directly supported in the
WepyHDF5 format.
In any such simulation, however, the 'weights' field will appear since
this is the weight of the walker of this frame and is a value
important to weighted ensemble and not the underlying dynamics.
The naive approach to these fields is that each is a dataset of
dimension (n_frames, feature_vector_shape[0], ...) where the first dimension
is the cycle_idx and the rest of the dimensions are determined by the
atomic feature vector for each field for a single frame.
For example, the positions for a molecular simulation with 100 atoms
with x, y, and z coordinates that ran for 1000 cycles would be a
dataset of the shape (1000, 100, 3). Similarly the box vectors would
be (1000, 3, 3) and the weights would be (1000, 1).
This uniformity vastly simplifies accessing and adding new variables
and requires that individual state values in walkers always be arrays
with shapes, even when they are single values (e.g. energy). The
exception being the weight which is handled separately.
However, this situation is actually more complex to allow for special
features.
First of all is the presence of compound fields which allow nesting of
multiple groups.
The above "trajectory fields" would have identifiers such as the
literal strings 'positions' and 'box_vectors', while a compound field
would have an identifier 'observables/rmsd' or 'alt_reps/binding_site'.
Use of trajectory field names using the '/' path separator will
automatically make a field a group and the last element of the field
name the dataset. So for the observables example we might have:
- 0
- observables
- rmsd
- sasa
Where the rmsd would be accessed as a trajectory field of trajectory 0
as 'observables/rmsd' and the solvent accessible surface area as
'observables/sasa'.
This example introduces how the WepyHDF5 format is not only useful for
storing data produced by simulation but also in the analysis of that
data and computation of by-frame quantities.
The 'observables' compound group key prefix is special and will be
used in the 'compute_observables' method.
The other special compound group key prefix is 'alt_reps' which is
used for particle simulations to store "alternate representation" of
the positions. This is useful in cooperation with the next feature of
wepy trajectory fields to allow for more economical storage of data.
The next feature (and complication of the format) is the allowance for
sparse fields. As the fields were introduced we said that they should
have as many feature vectors as there are frames for the
simulation. In the example however, you will notice that storing both
the full atomic positions and velocities for a long simulation
requires a heavy storage burden.
So perhaps you only want to store the velocities (or forces) every 100
frames so that you are able to restart a simulation from midway
through. This is achieved through sparse fields.
A sparse field is no longer a dataset but a group with two items:
- _sparse_idxs
- data
The '_sparse_idxs' are simply a dataset of integers that assign each
element of the 'data' dataset to a frame index. Using the above
example, if we run a simulation for 1000 frames with 100 atoms and save
the velocities every 100 frames, we would have a 'velocities/data'
dataset of shape (10, 100, 3), which is 100 times less data than if it
were saved every frame.
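To make the indexing concrete (a purely illustrative case), the
'_sparse_idxs' dataset in that scheme would hold the 10 saved frame
indices, for instance (0, 100, 200, ..., 900), so that element i of
'velocities/data' belongs to the frame with index '_sparse_idxs[i]'.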
While this complicates the storage format, use of the proper API
methods should be transparent whether you are returning a sparse field
or not.
As alluded to above the use of sparse fields can be used for more than
just accessory fields. In many simulations, such as those with full
atomistic simulations of proteins in solvent we often don't care about
the dynamics of most of the atoms in the simulation and so would like
to not have to save them.
The 'alt_reps' compound field is meant to solve this. For example, the
WepyHDF5Reporter supports a special option to save only a subset of
the atoms in the main 'positions' field but also to save the full
atomic system as an alternate representation, which is the field name
'alt_reps/all_atoms'. So that you can still save the full system every
once in a while but be economical in what positions you save every
single frame.
Note that there really isn't a way to achieve this with other
formats. You either make a completely new trajectory with only the
atoms of interest and now you are duplicating those in two places, or
you duplicate and then filter your full systems trajectory file and
rely on some sort of index to always live with it in the filesystem,
which is a very precarious scenario. The situation is particularly
hopeless for weighted ensemble trajectories.
Init Walkers
^^^^^^^^^^^^
The data stored in the 'trajectories' section is the data that is
returned after running dynamics in a cycle. Since we view the WepyHDF5
as a completely self-contained format for simulations it seems
negligent to rely on outside sources (such as the filesystem) for the
initial structures that seeded the simulations. These states (and
weights) can be stored in this group.
The format of this group is identical to the one for trajectories
except that there is only one frame for each slot and so the shape of
the datasets for each field is just the shape of the feature vector.
Record Groups
^^^^^^^^^^^^^
TODO: add reference to reference groups
The last five items are what are called 'record groups' and all follow
the same format.
Each record group contains itself a number of datasets, where the
names of the datasets correspond to the 'field names' from the record
group specification. So each record groups is simply a key-value store
where the values must be datasets.
For instance the fields in the 'resampling' (which is particularly
important as it encodes the branching structure) record group for a
WExplore resampler simulation are:
- step_idx
- walker_idx
- decision_id
- target_idxs
- region_assignment
Where the 'step_idx' is an integer specifying which step of resampling
within the cycle the resampling action took place (the cycle index is
metadata for the group). The 'walker_idx' is the index of the walker
that this action was assigned to. The 'decision_id' is an integer that
is related to an enumeration of decision types that encodes which
discrete action is to be taken for this resampling event (the
enumeration is in the 'decision' item of the run groups). The
'target_idxs' is a variable length 1-D array of integers which assigns
the results of the action to specific target 'slots' (which was
discussed for the 'trajectories' run group). And the
'region_assignment' is specific to WExplore which reports on which
region the walker was in at that time, and is a variable length 1-D
array of integers.
Additionally, record groups are broken into two types:
- continual
- sporadic
Continual records occur once per cycle and so there is no extra
indexing necessary.
Sporadic records can happen multiple or zero times per cycle and so
require a special index for them which is contained in the extra
dataset '_cycle_idxs'.
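For illustration only (the exact field names depend on the resampler
and boundary conditions in use), a sporadic record group such as
'warping' holds its field datasets alongside that index dataset:
- warping
  - _cycle_idxs
  - walker_idx
  - ...
Here '_cycle_idxs[i]' gives the cycle in which the i-th element of each
field dataset was recorded.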
It is worth noting that the underlying methods for each record group
are general. So while these are the official wepy record groups that
are supported if there is a use-case that demands a new record group
it is a fairly straightforward task from a developers perspective.
"""
import os.path as osp
from collections import Sequence, namedtuple, defaultdict, Counter
import itertools as it
import json
from warnings import warn
from copy import copy
import logging
import gc
import numpy as np
import h5py
import networkx as nx
from wepy.analysis.parents import resampling_panel
from wepy.util.mdtraj import mdtraj_to_json_topology, json_to_mdtraj_topology, \
traj_fields_to_mdtraj
from wepy.util.util import traj_box_vectors_to_lengths_angles
from wepy.util.json_top import json_top_subset, json_top_atom_count
# optional dependencies
try:
import mdtraj as mdj
except ModuleNotFoundError:
warn("mdtraj is not installed and that functionality will not work", RuntimeWarning)
try:
import pandas as pd
except ModuleNotFoundError:
warn("pandas is not installed and that functionality will not work", RuntimeWarning)
## h5py settings
# we set the libver to always be the latest (which should be 1.10) so
# that we know we can always use SWMR and the newest features. We
# don't care about backwards compatibility with HDF5 1.8. Just update
# in a new virtualenv if this is a problem for you
H5PY_LIBVER = 'latest'
## Header and settings keywords
TOPOLOGY = 'topology'
"""Default header apparatus dataset. The molecular topology dataset."""
SETTINGS = '_settings'
"""Name of the settings group in the header group."""
RUNS = 'runs'
"""The group name for runs."""
## metadata fields
RUN_IDX = 'run_idx'
"""Metadata field for run groups for the run index within this file."""
RUN_START_SNAPSHOT_HASH = 'start_snapshot_hash'
"""Metadata field for a run that corresponds to the hash of the
starting simulation snapshot in orchestration."""
RUN_END_SNAPSHOT_HASH = 'end_snapshot_hash'
"""Metadata field for a run that corresponds to the hash of the
ending simulation snapshot in orchestration."""
TRAJ_IDX = 'traj_idx'
"""Metadata field for trajectory groups for the trajectory index in that run."""
## Misc. Names
CYCLE_IDX = 'cycle_idx'
"""String for setting the names of cycle indices in records and
miscellaneous situations."""
## Settings field names
SPARSE_FIELDS = 'sparse_fields'
"""Settings field name for sparse field trajectory field flags."""
N_ATOMS = 'n_atoms'
"""Settings field name group for the number of atoms in the default positions field."""
N_DIMS_STR = 'n_dims'
"""Settings field name for positions field spatial dimensions."""
MAIN_REP_IDXS = 'main_rep_idxs'
"""Settings field name for the indices of the full apparatus topology in
the default positions trajectory field."""
ALT_REPS_IDXS = 'alt_reps_idxs'
"""Settings field name for the different 'alt_reps'. The indices of
the atoms from the full apparatus topology for each."""
FIELD_FEATURE_SHAPES_STR = 'field_feature_shapes'
"""Settings field name for the trajectory field shapes."""
FIELD_FEATURE_DTYPES_STR = 'field_feature_dtypes'
"""Settings field name for the trajectory field data types."""
UNITS = 'units'
"""Settings field name for the units of the trajectory fields."""
RECORD_FIELDS = 'record_fields'
"""Settings field name for the record fields that are to be included
in the truncated listing of record group fields."""
CONTINUATIONS = 'continuations'
"""Settings field name for the continuations relationships between runs."""
## Run Fields Names
TRAJECTORIES = 'trajectories'
"""Run field name for the trajectories group."""
INIT_WALKERS = 'init_walkers'
"""Run field name for the initial walkers group."""
DECISION = 'decision'
"""Run field name for the decision enumeration group."""
## Record Groups Names
RESAMPLING = 'resampling'
"""Record group run field name for the resampling records """
RESAMPLER = 'resampler'
"""Record group run field name for the resampler records """
WARPING = 'warping'
"""Record group run field name for the warping records """
PROGRESS = 'progress'
"""Record group run field name for the progress records """
BC = 'boundary_conditions'
"""Record group run field name for the boundary conditions records """
## Record groups constants
# special datatypes strings
NONE_STR = 'None'
"""String signifying a field of unspecified shape. Used for
serializing the None python object."""
CYCLE_IDXS = '_cycle_idxs'
"""Group name for the cycle indices of sporadic records."""
# records can be sporadic or continual. Continual records are
# generated every cycle and are saved every cycle and are for all
# walkers. Sporadic records are generated conditional on specific
# events taking place and thus may or may not be produced each
# cycle. There also is not a single record for each (cycle, step) like
# there would be for continual ones because they can occur for single
# walkers, boundary conditions, or resamplers.
SPORADIC_RECORDS = (RESAMPLER, WARPING, RESAMPLING, BC)
"""Enumeration of the record groups that are sporadic."""
## Trajectories Group
# Default Trajectory Constants
N_DIMS = 3
"""Number of dimensions for the default positions."""
# Required Trajectory Fields
WEIGHTS = 'weights'
"""The field name for the frame weights."""
# default fields for trajectories
POSITIONS = 'positions'
"""The field name for the default positions."""
BOX_VECTORS = 'box_vectors'
"""The field name for the default box vectors."""
VELOCITIES = 'velocities'
"""The field name for the default velocities."""
FORCES = 'forces'
"""The field name for the default forces."""
TIME = 'time'
"""The field name for the default time."""
KINETIC_ENERGY = 'kinetic_energy'
"""The field name for the default kinetic energy."""
POTENTIAL_ENERGY = 'potential_energy'
"""The field name for the default potential energy."""
BOX_VOLUME = 'box_volume'
"""The field name for the default box volume."""
PARAMETERS = 'parameters'
"""The field name for the default parameters."""
PARAMETER_DERIVATIVES = 'parameter_derivatives'
"""The field name for the default parameter derivatives."""
ALT_REPS = 'alt_reps'
"""The field name for the default compound field observables."""
OBSERVABLES = 'observables'
"""The field name for the default compound field observables."""
## Trajectory Field Constants
WEIGHT_SHAPE = (1,)
"""Weights feature vector shape."""
WEIGHT_DTYPE = np.float
"""Weights feature vector data type."""
# Default Trajectory Field Constants
FIELD_FEATURE_SHAPES = ((TIME, (1,)),
(BOX_VECTORS, (3,3)),
(BOX_VOLUME, (1,)),
(KINETIC_ENERGY, (1,)),
(POTENTIAL_ENERGY, (1,)),
)
"""Default shapes for the default fields."""
FIELD_FEATURE_DTYPES = ((POSITIONS, np.float),
(VELOCITIES, np.float),
(FORCES, np.float),
(TIME, np.float),
(BOX_VECTORS, np.float),
(BOX_VOLUME, np.float),
(KINETIC_ENERGY, np.float),
(POTENTIAL_ENERGY, np.float),
)
"""Default data types for the default fields."""
# Positions (and thus velocities and forces) are determined by the
# N_DIMS (which can be customized) and more importantly the number of
# particles which is always different. All the others are always wacky
# and different.
POSITIONS_LIKE_FIELDS = (VELOCITIES, FORCES)
"""Default trajectory fields which are the same shape as the main positions field."""
## Trajectory field features keys
# sparse trajectory fields
DATA = 'data'
"""Name of the dataset in sparse trajectory fields."""
SPARSE_IDXS = '_sparse_idxs'
"""Name of the dataset that indexes sparse trajectory fields."""
# utility for paths
def _iter_field_paths(grp):
"""Return all subgroup field name paths from a group.
Useful for compound fields. For example if you have the group
observables with multiple subfields:
- observables
- rmsd
- sasa
Passing the h5py group 'observables' will return the full field
names for each subfield:
- 'observables/rmsd'
- 'observables/sasa'
Parameters
----------
grp : h5py.Group
The group to enumerate subfield names for.
Returns
-------
subfield_names : list of str
The full names for the subfields of the group.
"""
field_paths = []
for field_name in grp:
if isinstance(grp[field_name], h5py.Group):
for subfield in grp[field_name]:
# if it is a sparse field don't do the subfields since
# they will be _sparse_idxs and data which are not
# what we want here
if field_name not in grp.file['_settings/sparse_fields']:
field_paths.append(field_name + '/' + subfield)
else:
field_paths.append(field_name)
return field_paths
class WepyHDF5(object):
"""Wrapper for h5py interface to an HDF5 file object for creation and
access of WepyHDF5 data.
This is the primary implementation of the API for creating,
accessing, and modifying data in an HDF5 file that conforms to the
WepyHDF5 specification.
"""
MODES = ('r', 'r+', 'w', 'w-', 'x', 'a')
"""The recognized modes for opening the WepyHDF5 file."""
WRITE_MODES = ('r+', 'w', 'w-', 'x', 'a')
#### dunder methods
def __init__(self, filename, mode='x',
topology=None,
units=None,
sparse_fields=None,
feature_shapes=None, feature_dtypes=None,
n_dims=None,
alt_reps=None, main_rep_idxs=None,
swmr_mode=False,
expert_mode=False
):
"""Constructor for the WepyHDF5 class.
Initialize a new Wepy HDF5 file. This will create an h5py.File
object.
The File will be closed after construction by default.
mode:
r Readonly, file must exist
r+ Read/write, file must exist
w Create file, truncate if exists
x or w- Create file, fail if exists
a Read/write if exists, create otherwise
Parameters
----------
filename : str
File path
mode : str
Mode specification for opening the HDF5 file.
topology : str
JSON string representing topology of system being simulated.
units : dict of str : str, optional
Mapping of trajectory field names to string specs
for units.
sparse_fields : list of str, optional
List of trajectory fields that should be initialized as sparse.
feature_shapes : dict of str : shape_spec, optional
Mapping of trajectory fields to their shape spec for initialization.
feature_dtypes : dict of str : dtype_spec, optional
            Mapping of trajectory fields to their dtype spec for initialization.
n_dims : int, default: 3
Set the number of spatial dimensions for the default
positions trajectory field.
alt_reps : dict of str : list of int, optional
Specifies that there will be 'alt_reps' of positions each
named by the keys of this mapping and containing the
indices in each value list.
main_rep_idxs : list of int, optional
The indices of atom positions to save as the main 'positions'
trajectory field. Defaults to all atoms.
expert_mode : bool
If True no initialization is performed other than the
setting of the filename. Useful mainly for debugging.
Raises
------
AssertionError
If the mode is not one of the supported mode specs.
AssertionError
If a topology is not given for a creation mode.
Warns
-----
If initialization data was given but the file was opened in a read mode.
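        Examples
        --------
        A minimal sketch of creating a new file ('top_json' stands in for a
        JSON topology string and is not defined here)::

            wepy_h5 = WepyHDF5('results.wepy.h5', mode='w', topology=top_json)

            # the file is closed after construction; reopen it, for example as
            # a context manager, to read or write data
            with wepy_h5:
                pass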
"""
self._filename = filename
self._swmr_mode = swmr_mode
if expert_mode is True:
self._h5 = None
self._wepy_mode = None
self._h5py_mode = None
self.closed = None
# terminate the constructor here
return None
assert mode in self.MODES, \
"mode must be either one of: {}".format(', '.join(self.MODES))
# the top level mode enforced by wepy.hdf5
self._wepy_mode = mode
        # the lower level h5py mode. This was originally different to
        # accommodate different modes at the wepy level for
# concatenation. I will leave these separate because this is
# used elsewhere and could be a feature in the future.
self._h5py_mode = mode
# Temporary metadata: used to initialize the object but not
# used after that
self._topology = topology
self._units = units
self._n_dims = n_dims
self._n_coords = None
# set hidden feature shapes and dtype, which are only
# referenced if needed when trajectories are created. These
# will be saved in the settings section in the actual HDF5
# file
self._field_feature_shapes_kwarg = feature_shapes
self._field_feature_dtypes_kwarg = feature_dtypes
self._field_feature_dtypes = None
self._field_feature_shapes = None
# save the sparse fields as a private variable for use in the
# create constructor
if sparse_fields is None:
self._sparse_fields = []
else:
self._sparse_fields = sparse_fields
# if we specify an atom subset of the main POSITIONS field
# we must save them
self._main_rep_idxs = main_rep_idxs
# a dictionary specifying other alt_reps to be saved
if alt_reps is not None:
self._alt_reps = alt_reps
# all alt_reps are sparse
alt_rep_keys = ['{}/{}'.format(ALT_REPS, key) for key in self._alt_reps.keys()]
self._sparse_fields.extend(alt_rep_keys)
else:
self._alt_reps = {}
# open the file and then run the different constructors based
# on the mode
with h5py.File(filename, mode=self._h5py_mode,
libver=H5PY_LIBVER, swmr=self._swmr_mode) as h5:
self._h5 = h5
# set SWMR mode if asked for if we are in write mode also
if self._swmr_mode is True and mode in self.WRITE_MODES:
self._h5.swmr_mode = swmr_mode
# create file mode: 'w' will create a new file or overwrite,
# 'w-' and 'x' will not overwrite but will create a new file
if self._wepy_mode in ['w', 'w-', 'x']:
self._create_init()
# read/write mode: in this mode we do not completely overwrite
# the old file and start again but rather write over top of
# values if requested
elif self._wepy_mode in ['r+']:
self._read_write_init()
# add mode: read/write create if doesn't exist
elif self._wepy_mode in ['a']:
if osp.exists(self._filename):
self._read_write_init()
else:
self._create_init()
# read only mode
elif self._wepy_mode == 'r':
# if any data was given, warn the user
if any([kwarg is not None for kwarg in
[topology, units, sparse_fields,
feature_shapes, feature_dtypes,
n_dims, alt_reps, main_rep_idxs]]):
warn("Data was given but opening in read-only mode", RuntimeWarning)
# then run the initialization process
self._read_init()
# flush the buffers
self._h5.flush()
# set the h5py mode to the value in the actual h5py.File
# object after creation
self._h5py_mode = self._h5.mode
# get rid of the temporary variables
del self._topology
del self._units
del self._n_dims
del self._n_coords
del self._field_feature_shapes_kwarg
del self._field_feature_dtypes_kwarg
del self._field_feature_shapes
del self._field_feature_dtypes
del self._sparse_fields
del self._main_rep_idxs
del self._alt_reps
# variable to reflect if it is closed or not, should be closed
# after initialization
self.closed = True
# end of the constructor
return None
# TODO is this right? shouldn't we actually delete the data then close
def __del__(self):
self.close()
# context manager methods
def __enter__(self):
self.open()
# self._h5 = h5py.File(self._filename,
# libver=H5PY_LIBVER, swmr=self._swmr_mode)
# self.closed = False
return self
def __exit__(self, exc_type, exc_value, exc_tb):
self.close()
@property
def swmr_mode(self):
return self._swmr_mode
@swmr_mode.setter
def swmr_mode(self, val):
self._swmr_mode = val
# TODO custom deepcopy to avoid copying the actual HDF5 object
#### hidden methods (_method_name)
### constructors
def _create_init(self):
"""Creation mode constructor.
Completely overwrite the data in the file. Reinitialize the values
and set with the new ones if given.
"""
assert self._topology is not None, \
"Topology must be given for a creation constructor"
# initialize the runs group
runs_grp = self._h5.create_group(RUNS)
# initialize the settings group
settings_grp = self._h5.create_group(SETTINGS)
# create the topology dataset
self._h5.create_dataset(TOPOLOGY, data=self._topology)
# sparse fields
if self._sparse_fields is not None:
# make a dataset for the sparse fields allowed. this requires
# a 'special' datatype for variable length strings. This is
# supported by HDF5 but not numpy.
vlen_str_dt = h5py.special_dtype(vlen=str)
# create the dataset with empty values for the length of the
# sparse fields given
sparse_fields_ds = settings_grp.create_dataset(SPARSE_FIELDS,
(len(self._sparse_fields),),
dtype=vlen_str_dt,
maxshape=(None,))
# set the flags
for i, sparse_field in enumerate(self._sparse_fields):
sparse_fields_ds[i] = sparse_field
# field feature shapes and dtypes
# initialize to the defaults, this gives values to
# self._n_coords, and self.field_feature_dtypes, and
# self.field_feature_shapes
self._set_default_init_field_attributes(n_dims=self._n_dims)
# save the number of dimensions and number of atoms in settings
settings_grp.create_dataset(N_DIMS_STR, data=np.array(self._n_dims))
settings_grp.create_dataset(N_ATOMS, data=np.array(self._n_coords))
# the main rep atom idxs
settings_grp.create_dataset(MAIN_REP_IDXS, data=self._main_rep_idxs, dtype=np.int)
# alt_reps settings
alt_reps_idxs_grp = settings_grp.create_group(ALT_REPS_IDXS)
for alt_rep_name, idxs in self._alt_reps.items():
alt_reps_idxs_grp.create_dataset(alt_rep_name, data=idxs, dtype=np.int)
# if both feature shapes and dtypes were specified overwrite
# (or initialize if not set by defaults) the defaults
if (self._field_feature_shapes_kwarg is not None) and\
(self._field_feature_dtypes_kwarg is not None):
self._field_feature_shapes.update(self._field_feature_shapes_kwarg)
self._field_feature_dtypes.update(self._field_feature_dtypes_kwarg)
# any sparse field with unspecified shape and dtype must be
# set to None so that it will be set at runtime
for sparse_field in self.sparse_fields:
if (not sparse_field in self._field_feature_shapes) or \
(not sparse_field in self._field_feature_dtypes):
self._field_feature_shapes[sparse_field] = None
self._field_feature_dtypes[sparse_field] = None
# save the field feature shapes and dtypes in the settings group
shapes_grp = settings_grp.create_group(FIELD_FEATURE_SHAPES_STR)
for field_path, field_shape in self._field_feature_shapes.items():
if field_shape is None:
# set it as a dimensionless array of NaN
field_shape = np.array(np.nan)
shapes_grp.create_dataset(field_path, data=field_shape)
dtypes_grp = settings_grp.create_group(FIELD_FEATURE_DTYPES_STR)
for field_path, field_dtype in self._field_feature_dtypes.items():
if field_dtype is None:
dt_str = NONE_STR
else:
# make a json string of the datatype that can be read
# in again, we call np.dtype again because there is no
# np.float.descr attribute
dt_str = json.dumps(np.dtype(field_dtype).descr)
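                # For example (illustrative, on a little-endian platform):
                # np.dtype(np.float).descr is [('', '<f8')], which is stored
                # here as the JSON string '[["", "<f8"]]'.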
dtypes_grp.create_dataset(field_path, data=dt_str)
# initialize the units group
unit_grp = self._h5.create_group(UNITS)
# if units were not given set them all to None
if self._units is None:
self._units = {}
for field_path in self._field_feature_shapes.keys():
self._units[field_path] = None
# set the units
for field_path, unit_value in self._units.items():
# ignore the field if not given
if unit_value is None:
continue
unit_path = '{}/{}'.format(UNITS, field_path)
unit_grp.create_dataset(unit_path, data=unit_value)
# create the group for the run data records
records_grp = settings_grp.create_group(RECORD_FIELDS)
# create a dataset for the continuation run tuples
# (continuation_run, base_run), where the first element
# of the new run that is continuing the run in the second
# position
self._init_continuations()
def _read_write_init(self):
"""Read-write mode constructor."""
self._read_init()
def _add_init(self):
"""The addition mode constructor.
Create the dataset if it doesn't exist and put it in r+ mode,
otherwise, just open in r+ mode.
"""
if not any(self._exist_flags):
self._create_init()
else:
self._read_write_init()
def _read_init(self):
"""Read mode constructor."""
pass
def _set_default_init_field_attributes(self, n_dims=None):
"""Sets the feature_shapes and feature_dtypes to be the default for
this module. These will be used to initialize field datasets when no
given during construction (i.e. for sparse values)
Parameters
----------
n_dims : int
"""
# we use the module defaults for the datasets to initialize them
field_feature_shapes = dict(FIELD_FEATURE_SHAPES)
field_feature_dtypes = dict(FIELD_FEATURE_DTYPES)
# get the number of coordinates of positions. If there is a
# main_reps then we have to set the number of atoms to that,
# if not we count the number of atoms in the topology
if self._main_rep_idxs is None:
self._n_coords = json_top_atom_count(self.topology)
self._main_rep_idxs = list(range(self._n_coords))
else:
self._n_coords = len(self._main_rep_idxs)
# get the number of dimensions as a default
if n_dims is None:
self._n_dims = N_DIMS
# feature shapes for positions and positions-like fields are
# not known at the module level due to different number of
# coordinates (number of atoms) and number of dimensions
# (default 3 spatial). We set them now that we know this
# information.
        # add the positions shape
field_feature_shapes[POSITIONS] = (self._n_coords, self._n_dims)
# add the positions-like field shapes (velocities and forces) as the same
for poslike_field in POSITIONS_LIKE_FIELDS:
field_feature_shapes[poslike_field] = (self._n_coords, self._n_dims)
# set the attributes
self._field_feature_shapes = field_feature_shapes
self._field_feature_dtypes = field_feature_dtypes
def _get_field_path_grp(self, run_idx, traj_idx, field_path):
"""Given a field path for the trajectory returns the group the field's
dataset goes in and the key for the field name in that group.
The field path for a simple field is just the name of the
field and for a compound field it is the compound field group
name with the subfield separated by a '/' like
'observables/observable1' where 'observables' is the compound
field group and 'observable1' is the subfield name.
Parameters
----------
run_idx : int
traj_idx : int
field_path : str
Returns
-------
group : h5py.Group
field_name : str
"""
# check if it is compound
if '/' in field_path:
# split it
grp_name, field_name = field_path.split('/')
# get the hdf5 group
grp = self.h5['{}/{}/{}/{}/{}'.format(RUNS, run_idx, TRAJECTORIES, traj_idx, grp_name)]
        # it's simple, so just return the root group and the original path
else:
grp = self.h5
field_name = field_path
return grp, field_name
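    # For example (illustrative only): the field path 'observables/rmsd' for
    # run 0 and trajectory 3 yields the h5py group at
    # 'runs/0/trajectories/3/observables' together with the name 'rmsd',
    # whereas a simple field path such as 'positions' yields the file root
    # group and the name unchanged.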
def _init_continuations(self):
"""This will either create a dataset in the settings for the
continuations or if continuations already exist it will reinitialize
them and delete the data that exists there.
Returns
-------
continuation_dset : h5py.Dataset
"""
# if the continuations dset already exists we reinitialize the
# data
if CONTINUATIONS in self.settings_grp:
cont_dset = self.settings_grp[CONTINUATIONS]
cont_dset.resize( (0,2) )
# otherwise we just create the data
else:
cont_dset = self.settings_grp.create_dataset(CONTINUATIONS, shape=(0,2), dtype=np.int,
maxshape=(None, 2))
return cont_dset
def _add_run_init(self, run_idx, continue_run=None):
"""Routines for creating a run includes updating and setting object
global variables, increasing the counter for the number of runs.
Parameters
----------
run_idx : int
continue_run : int
Index of the run to continue.
"""
# add the run idx as metadata in the run group
self._h5['{}/{}'.format(RUNS, run_idx)].attrs[RUN_IDX] = run_idx
# if this is continuing another run add the tuple (this_run,
        # continues_run) to the continuations settings
if continue_run is not None:
self.add_continuation(run_idx, continue_run)
def _add_init_walkers(self, init_walkers_grp, init_walkers):
"""Adds the run field group for the initial walkers.
Parameters
----------
init_walkers_grp : h5py.Group
The group to add the walker data to.
init_walkers : list of objects implementing the Walker interface
The walkers to save in the group
"""
# add the initial walkers to the group by essentially making
# new trajectories here that will only have one frame
for walker_idx, walker in enumerate(init_walkers):
walker_grp = init_walkers_grp.create_group(str(walker_idx))
# weights
# get the weight from the walker and make a feature array of it
weights = np.array([[walker.weight]])
# then create the dataset and set it
walker_grp.create_dataset(WEIGHTS, data=weights)
# state fields data
for field_key, field_value in walker.state.dict().items():
# values may be None, just ignore them
if field_value is not None:
# just create the dataset by making it a feature array
# (wrapping it in another list)
walker_grp.create_dataset(field_key, data=np.array([field_value]))
def _init_run_sporadic_record_grp(self, run_idx, run_record_key, fields):
"""Initialize a sporadic record group for a run.
Parameters
----------
run_idx : int
run_record_key : str
The record group name.
fields : list of field specs
Each field spec is a 3-tuple of
(field_name : str, field_shape : shape_spec, field_dtype : dtype_spec)
Returns
-------
record_group : h5py.Group
"""
# create the group
run_grp = self.run(run_idx)
record_grp = run_grp.create_group(run_record_key)
# initialize the cycles dataset that maps when the records
# were recorded
record_grp.create_dataset(CYCLE_IDXS, (0,), dtype=np.int_,
maxshape=(None,))
# for each field simply create the dataset
for field_name, field_shape, field_dtype in fields:
# initialize this field
self._init_run_records_field(run_idx, run_record_key,
field_name, field_shape, field_dtype)
return record_grp
def _init_run_continual_record_grp(self, run_idx, run_record_key, fields):
"""Initialize a continual record group for a run.
Parameters
----------
run_idx : int
run_record_key : str
The record group name.
fields : list of field specs
Each field spec is a 3-tuple of
(field_name : str, field_shape : shape_spec, field_dtype : dtype_spec)
Returns
-------
record_group : h5py.Group
"""
# create the group
run_grp = self.run(run_idx)
record_grp = run_grp.create_group(run_record_key)
# for each field simply create the dataset
for field_name, field_shape, field_dtype in fields:
self._init_run_records_field(run_idx, run_record_key,
field_name, field_shape, field_dtype)
return record_grp
def _init_run_records_field(self, run_idx, run_record_key,
field_name, field_shape, field_dtype):
"""Initialize a single field for a run record group.
Parameters
----------
run_idx : int
run_record_key : str
The name of the record group.
field_name : str
The name of the field in the record group.
field_shape : tuple of int
The shape of the dataset for the field.
field_dtype : dtype_spec
An h5py recognized data type.
Returns
-------
dataset : h5py.Dataset
"""
record_grp = self.run(run_idx)[run_record_key]
# check if it is variable length
if field_shape is Ellipsis:
# make a special dtype that allows it to be
# variable length
vlen_dt = h5py.special_dtype(vlen=field_dtype)
# this is only allowed to be a single dimension
# since no real shape was given
dset = record_grp.create_dataset(field_name, (0,), dtype=vlen_dt,
maxshape=(None,))
# it's not variable length, so make it normally
else:
# create the group
dset = record_grp.create_dataset(field_name, (0, *field_shape), dtype=field_dtype,
maxshape=(None, *field_shape))
return dset
def _is_sporadic_records(self, run_record_key):
"""Tests whether a record group is sporadic or not.
Parameters
----------
run_record_key : str
Record group name.
Returns
-------
is_sporadic : bool
True if the record group is sporadic False if not.
"""
# assume it is continual and check if it is in the sporadic groups
if run_record_key in SPORADIC_RECORDS:
return True
else:
return False
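# Note (sketch, based on the SPORADIC_RECORDS constant): sporadic
# groups (e.g. warping or resampling records) are only written on some
# cycles and so store their cycle indices explicitly, while continual
# groups (e.g. progress records) are written every cycle and derive
# their cycle indices from their row position.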
def _init_traj_field(self, run_idx, traj_idx, field_path, feature_shape, dtype):
"""Initialize a trajectory field.
Initialize a data field in the trajectory to be empty but
resizeable.
Parameters
----------
run_idx : int
traj_idx : int
field_path : str
Field name specification.
feature_shape : shape_spec
Specification of shape of a feature vector of the field.
dtype : dtype_spec
Specification of the feature vector datatype.
"""
# check whether this is a sparse field and create it
# appropriately
if field_path in self.sparse_fields:
# it is a sparse field
self._init_sparse_traj_field(run_idx, traj_idx, field_path, feature_shape, dtype)
else:
# it is not a sparse field (AKA simple)
self._init_contiguous_traj_field(run_idx, traj_idx, field_path, feature_shape, dtype)
def _init_contiguous_traj_field(self, run_idx, traj_idx, field_path, shape, dtype):
"""Initialize a contiguous (non-sparse) trajectory field.
Parameters
----------
run_idx : int
traj_idx : int
field_path : str
Field name specification.
feature_shape : tuple of int
Shape of the feature vector of the field.
dtype : dtype_spec
H5py recognized datatype
"""
traj_grp = self._h5['{}/{}/{}/{}'.format(RUNS, run_idx, TRAJECTORIES, traj_idx)]
# create the empty dataset in the correct group, setting
# maxshape so it can be resized for new feature vectors to be added
traj_grp.create_dataset(field_path, (0, *[0 for i in shape]), dtype=dtype,
maxshape=(None, *shape))
def _init_sparse_traj_field(self, run_idx, traj_idx, field_path, shape, dtype):
"""
Parameters
----------
run_idx : int
traj_idx : int
field_path : str
Field name specification.
feature_shape : shape_spec
Specification for the shape of the feature.
dtype : dtype_spec
Specification for the dtype of the feature.
"""
traj_grp = self._h5['{}/{}/{}/{}'.format(RUNS, run_idx, TRAJECTORIES, traj_idx)]
# check whether either the shape or the dtype is None, which
# indicates it is a runtime defined value and
# should be ignored here
if (shape is None) or (dtype is None):
# do nothing
pass
else:
# only create the group if you are going to add the
# datasets, so the extend function can more easily know whether
# it has been properly initialized
sparse_grp = traj_grp.create_group(field_path)
# create the dataset for the feature data
sparse_grp.create_dataset(DATA, (0, *[0 for i in shape]), dtype=dtype,
maxshape=(None, *shape))
# create the dataset for the sparse indices
sparse_grp.create_dataset(SPARSE_IDXS, (0,), dtype=np.int_, maxshape=(None,))
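# Layout sketch (using the DATA and SPARSE_IDXS constants as the
# sub-dataset names): a sparse trajectory field is stored as a group
#   runs/<run>/trajectories/<traj>/<field>/<DATA>        -> feature frames
#   runs/<run>/trajectories/<traj>/<field>/<SPARSE_IDXS> -> cycle indices
# so that frame i of <DATA> belongs to cycle <SPARSE_IDXS>[i] of the run.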
def _init_traj_fields(self, run_idx, traj_idx,
field_paths, field_feature_shapes, field_feature_dtypes):
"""Initialize a number of fields for a trajectory.
Parameters
----------
run_idx : int
traj_idx : int
field_paths : list of str
List of field names.
field_feature_shapes : list of shape_specs
field_feature_dtypes : list of dtype_specs
"""
for i, field_path in enumerate(field_paths):
self._init_traj_field(run_idx, traj_idx,
field_path, field_feature_shapes[i], field_feature_dtypes[i])
def _add_traj_field_data(self,
run_idx,
traj_idx,
field_path,
field_data,
sparse_idxs=None,
):
"""Add a trajectory field to a trajectory.
If the sparse indices are given the field will be created as a
sparse field otherwise a normal one.
Parameters
----------
run_idx : int
traj_idx : int
field_path : str
Field name.
field_data : numpy.array
The data array to set for the field.
sparse_idxs : arraylike of int of shape (1,)
List of cycle indices that the data corresponds to.
"""
# get the traj group
traj_grp = self._h5['{}/{}/{}/{}'.format(RUNS, run_idx, TRAJECTORIES, traj_idx)]
# if it is a sparse dataset we need to add the data and add
# the idxs in a group
if sparse_idxs is None:
# first require that the dataset exist and is exactly the
# same as the one that already exists (if indeed it
# does). If it doesn't raise a specific error letting the
# user know that they will have to delete the dataset if
# they want to change it to something else
try:
dset = traj_grp.require_dataset(field_path, shape=field_data.shape, dtype=field_data.dtype,
exact=True,
maxshape=(None, *field_data.shape[1:]))
except TypeError:
raise TypeError("For changing the contents of a trajectory field it must be the same shape and dtype.")
# if that succeeds then go ahead and set the data to the
# dataset (overwriting if it is still there)
dset[...] = field_data
else:
sparse_grp = traj_grp.create_group(field_path)
# add the data to this group
sparse_grp.create_dataset(DATA, data=field_data,
maxshape=(None, *field_data.shape[1:]))
# add the sparse idxs
sparse_grp.create_dataset(SPARSE_IDXS, data=sparse_idxs,
maxshape=(None,))
def _extend_contiguous_traj_field(self, run_idx, traj_idx, field_path, field_data):
"""Add multiple new frames worth of data to the end of an existing
contiguous (non-sparse) trajectory field.
Parameters
----------
run_idx : int
traj_idx : int
field_path : str
Field name
field_data : numpy.array
The frames of data to add.
"""
traj_grp = self.h5['{}/{}/{}/{}'.format(RUNS, run_idx, TRAJECTORIES, traj_idx)]
field = traj_grp[field_path]
# make sure this is a feature vector
assert len(field_data.shape) > 1, \
"field_data must be a feature vector with the same number of dimensions as the dataset"
# number of new frames
n_new_frames = field_data.shape[0]
# check the field to make sure it is not empty
if all([i == 0 for i in field.shape]):
# check the feature shape against the maxshape which gives
# the feature dimensions for an empty dataset
assert field_data.shape[1:] == field.maxshape[1:], \
"field feature dimensions must be the same, i.e. all but the first dimension"
# if it is empty resize it to make an array the size of
# the new field_data with the maxshape for the feature
# dimensions
feature_dims = field.maxshape[1:]
field.resize( (n_new_frames, *feature_dims) )
# set the new data to this
field[0:, ...] = field_data
else:
# make sure the new data has the right dimensions against
# the shape it already has
assert field_data.shape[1:] == field.shape[1:], \
"field feature dimensions must be the same, i.e. all but the first dimension"
# append to the dataset on the first dimension, keeping the
# others the same, these must be feature vectors and therefore
# must exist
field.resize( (field.shape[0] + n_new_frames, *field.shape[1:]) )
# add the new data
field[-n_new_frames:, ...] = field_data
def _extend_sparse_traj_field(self, run_idx, traj_idx, field_path, values, sparse_idxs):
"""Add multiple new frames worth of data to the end of an existing
sparse trajectory field.
Parameters
----------
run_idx : int
traj_idx : int
field_path : str
Field name
values : numpy.array
The frames of data to add.
sparse_idxs : list of int
The cycle indices the values correspond to.
"""
field = self.h5['{}/{}/{}/{}/{}'.format(RUNS, run_idx, TRAJECTORIES, traj_idx, field_path)]
field_data = field[DATA]
field_sparse_idxs = field[SPARSE_IDXS]
# number of new frames
n_new_frames = values.shape[0]
# if this sparse_field has been initialized empty we need to resize
if all([i == 0 for i in field_data.shape]):
# check the feature shape against the maxshape which gives
# the feature dimensions for an empty dataset
assert values.shape[1:] == field_data.maxshape[1:], \
"input value features have shape {}, expected {}".format(
values.shape[1:], field_data.maxshape[1:])
# if it is empty resize it to make an array the size of
# the new values with the maxshape for the feature
# dimensions
feature_dims = field_data.maxshape[1:]
field_data.resize( (n_new_frames, *feature_dims) )
# set the new data to this
field_data[0:, ...] = values
else:
# make sure the new data has the right dimensions
assert values.shape[1:] == field_data.shape[1:], \
"field feature dimensions must be the same, i.e. all but the first dimension"
# append to the dataset on the first dimension, keeping the
# others the same, these must be feature vectors and therefore
# must exist
field_data.resize( (field_data.shape[0] + n_new_frames, *field_data.shape[1:]) )
# add the new data
field_data[-n_new_frames:, ...] = values
# add the sparse idxs in the same way
field_sparse_idxs.resize( (field_sparse_idxs.shape[0] + n_new_frames,
*field_sparse_idxs.shape[1:]) )
# add the new data
field_sparse_idxs[-n_new_frames:, ...] = sparse_idxs
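# Example (sketch): if the field currently holds 3 frames at cycles
# [0, 4, 7] and is extended with 2 new frames at sparse_idxs [9, 12],
# both the <DATA> and <SPARSE_IDXS> datasets grow from length 3 to 5,
# staying aligned row-for-row.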
def _add_sparse_field_flag(self, field_path):
"""Register a trajectory field as sparse in the header settings.
Parameters
----------
field_path : str
Name of the trajectory field you want to flag as sparse
"""
sparse_fields_ds = self._h5['{}/{}'.format(SETTINGS, SPARSE_FIELDS)]
# make sure it isn't already in the sparse_fields
if field_path in sparse_fields_ds[:]:
warn("sparse field {} already a sparse field, ignoring".format(field_path))
return
sparse_fields_ds.resize( (sparse_fields_ds.shape[0] + 1,) )
sparse_fields_ds[sparse_fields_ds.shape[0] - 1] = field_path
def _add_field_feature_shape(self, field_path, field_feature_shape):
"""Add the shape to the header settings for a trajectory field.
Parameters
----------
field_path : str
The name of the trajectory field you want to set for.
field_feature_shape : shape_spec
The shape spec to serialize as a dataset.
"""
shapes_grp = self._h5['{}/{}'.format(SETTINGS, FIELD_FEATURE_SHAPES_STR)]
shapes_grp.create_dataset(field_path, data=np.array(field_feature_shape))
def _add_field_feature_dtype(self, field_path, field_feature_dtype):
"""Add the data type to the header settings for a trajectory field.
Parameters
----------
field_path : str
The name of the trajectory field you want to set for.
field_feature_dtype : dtype_spec
The dtype spec to serialize as a dataset.
"""
feature_dtype_str = json.dumps(field_feature_dtype.descr)
dtypes_grp = self._h5['{}/{}'.format(SETTINGS, FIELD_FEATURE_DTYPES_STR)]
dtypes_grp.create_dataset(field_path, data=feature_dtype_str)
def _set_field_feature_shape(self, field_path, field_feature_shape):
"""Add the trajectory field shape to header settings or set the value.
Parameters
----------
field_path : str
The name of the trajectory field you want to set for.
field_feature_shape : shape_spec
The shape spec to serialize as a dataset.
"""
# check if the field_feature_shape is already set
if field_path in self.field_feature_shapes:
# check that the shape was previously saved as "None" as we
# won't overwrite anything else
if self.field_feature_shapes[field_path] is None:
full_path = '{}/{}/{}'.format(SETTINGS, FIELD_FEATURE_SHAPES_STR, field_path)
# we have to delete the old data and set new data
del self.h5[full_path]
self.h5.create_dataset(full_path, data=field_feature_shape)
else:
raise AttributeError(
"Cannot overwrite feature shape for {} with {} because it is {} not {}".format(
field_path, field_feature_shape, self.field_feature_shapes[field_path],
NONE_STR))
# it was not previously set so we must create then save it
else:
self._add_field_feature_shape(field_path, field_feature_shape)
def _set_field_feature_dtype(self, field_path, field_feature_dtype):
"""Add the trajectory field dtype to header settings or set the value.
Parameters
----------
field_path : str
The name of the trajectory field you want to set for.
field_feature_dtype : dtype_spec
The dtype spec to serialize as a dataset.
"""
feature_dtype_str = json.dumps(field_feature_dtype.descr)
# check if the field_feature_dtype is already set
if field_path in self.field_feature_dtypes:
# check that the dtype was previously saved as "None" as we
# won't overwrite anything else
if self.field_feature_dtypes[field_path] is None:
full_path = '{}/{}/{}'.format(SETTINGS, FIELD_FEATURE_DTYPES_STR, field_path)
# we have to delete the old data and set new data
del self.h5[full_path]
self.h5.create_dataset(full_path, data=feature_dtype_str)
else:
raise AttributeError(
"Cannot overwrite feature dtype for {} with {} because it is {} not {}".format(
field_path, field_feature_dtype, self.field_feature_dtypes[field_path],
NONE_STR))
# it was not previously set so we must create then save it
else:
self._add_field_feature_dtype(field_path, field_feature_dtype)
def _extend_run_record_data_field(self, run_idx, run_record_key,
field_name, field_data):
"""Primitive record append method.
Adds data for a single field dataset in a run records group. This
is done without paying attention to whether it is sporadic or
continual and is supposed to be only the data write method.
Parameters
----------
run_idx : int
run_record_key : str
Name of the record group.
field_name : str
Name of the field in the record group to add to.
field_data : arraylike
The data to add to the field.
"""
records_grp = self.h5['{}/{}/{}'.format(RUNS, run_idx, run_record_key)]
field = records_grp[field_name]
# make sure this is a feature vector
assert len(field_data.shape) > 1, \
"field_data must be a feature vector with the same number of dimensions as the dataset"
# number of new frames
n_new_frames = field_data.shape[0]
# check whether it is a variable length record, by getting the
# record dataset dtype and using the checker to see if it is
# the vlen special type in h5py
if h5py.check_dtype(vlen=field.dtype) is not None:
# if it is we have to treat it differently, since it
# cannot be multidimensional
# if the dataset has no data in it we need to reshape it
if all([i == 0 for i in field.shape]):
# initialize this array
# if it is empty resize it to make an array the size of
# the new field_data with the maxshape for the feature
# dimensions
field.resize( (n_new_frames,) )
# set the new data to this
for i, row in enumerate(field_data):
field[i] = row
# otherwise just add the data
else:
# resize the array, but it is only rank 1 because
# it holds variable-length data
field.resize( (field.shape[0] + n_new_frames, ) )
# add each row to the newly made space
for i, row in enumerate(field_data):
field[(field.shape[0] - n_new_frames) + i] = row
# if it is not variable length we don't have to treat it
# differently
else:
# if this is empty we need to reshape the dataset to accommodate data
if all([i == 0 for i in field.shape]):
# check the feature shape against the maxshape which gives
# the feature dimensions for an empty dataset
assert field_data.shape[1:] == field.maxshape[1:], \
"field feature dimensions must be the same, i.e. all but the first dimension"
# if it is empty resize it to make an array the size of
# the new field_data with the maxshape for the feature
# dimensions
feature_dims = field.maxshape[1:]
field.resize( (n_new_frames, *feature_dims) )
# set the new data to this
field[0:, ...] = field_data
# otherwise just add the data
else:
# append to the dataset on the first dimension, keeping the
# others the same, these must be feature vectors and therefore
# must exist
field.resize( (field.shape[0] + n_new_frames, *field.shape[1:]) )
# add the new data
field[-n_new_frames:, ...] = field_data
def _run_record_namedtuple(self, run_record_key):
"""Generate a namedtuple record type for a record group.
The class name will be formatted like '{}_Record' where the {}
will be replaced with the name of the record group.
Parameters
----------
run_record_key : str
Name of the record group
Returns
-------
RecordType : namedtuple
The record type to generate records for this record group.
"""
Record = namedtuple('{}_Record'.format(run_record_key),
[CYCLE_IDX] + self.record_fields[run_record_key])
return Record
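# Example (sketch, record field names hypothetical): for a 'warping'
# record group with record fields ['walker_idx', 'target_idx'] this
# produces a type equivalent to
#   namedtuple('warping_Record', ['cycle_idx', 'walker_idx', 'target_idx'])
# where the first field name comes from the CYCLE_IDX constant.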
def _convert_record_field_to_table_column(self, run_idx, run_record_key, record_field):
"""Converts a dataset of feature vectors to more palatable values for
use in external datasets.
For single value feature vectors it unwraps them into single
values.
For 1-D feature vectors it casts them as tuples.
Anything of higher rank will raise an error.
Parameters
----------
run_idx : int
run_record_key : str
Name of the record group
record_field : str
Name of the field of the record group
Returns
-------
record_dset : list
Table-ified values
Raises
------
TypeError
If the field feature vector shape rank is greater than 1.
"""
# get the field dataset
rec_grp = self.records_grp(run_idx, run_record_key)
dset = rec_grp[record_field]
# if it is variable length or if it has more than one element
# cast all elements to tuples
if h5py.check_dtype(vlen=dset.dtype) is not None:
rec_dset = [tuple(value) for value in dset[:]]
# if it is not variable length make sure it is not more than a
# 1D feature vector
elif len(dset.shape) > 2:
raise TypeError(
"cannot convert fields with feature vectors more than 1 dimension,"
" was given {} for {}/{}".format(
dset.shape[1:], run_record_key, record_field))
# if it is only a rank 1 feature vector and it has more than
# one element make a tuple out of it
elif dset.shape[1] > 1:
rec_dset = [tuple(value) for value in dset[:]]
# otherwise just get the single value instead of keeping it as
# a single valued feature vector
else:
rec_dset = [value[0] for value in dset[:]]
return rec_dset
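# Example (sketch): a dataset of shape (n_records, 1) such as
# [[0.1], [0.2]] becomes the flat list [0.1, 0.2], while a dataset of
# shape (n_records, 3) becomes a list of 3-tuples, which is the form
# expected by downstream table constructors (e.g. pandas).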
def _convert_record_fields_to_table_columns(self, run_idx, run_record_key):
"""Convert record group data to truncated namedtuple records.
This uses the specified record fields from the header settings
to choose which record group fields to apply this to.
Does no checking to make sure the fields are
"table-ifiable". If a field is not it will raise a TypeError.
Parameters
----------
run_idx : int
run_record_key : str
The name of the record group
Returns
-------
table_fields : dict of str : list
Mapping of the record group field to the table-ified values.
"""
fields = {}
for record_field in self.record_fields[run_record_key]:
fields[record_field] = self._convert_record_field_to_table_column(
run_idx, run_record_key, record_field)
return fields
def _make_records(self, run_record_key, cycle_idxs, fields):
"""Generate a list of proper (nametuple) records for a record group.
Parameters
----------
run_record_key : str
Name of the record group
cycle_idxs : list of int
The cycle indices you want to get records for.
fields : list of str
The fields to make record entries for.
Returns
-------
records : list of namedtuple objects
"""
Record = self._run_record_namedtuple(run_record_key)
# for each record we make a tuple and yield it
records = []
for record_idx in range(len(cycle_idxs)):
# make a record for this cycle
record_d = {CYCLE_IDX : cycle_idxs[record_idx]}
for record_field, column in fields.items():
datum = column[record_idx]
record_d[record_field] = datum
record = Record(*(record_d[key] for key in Record._fields))
records.append(record)
return records
def _run_records_sporadic(self, run_idxs, run_record_key):
"""Generate records for a sporadic record group for a multi-run
contig.
If multiple run indices are given assumes that these are a
contig (e.g. the second run index is a continuation of the
first and so on). This method is considered low-level and does
no checking to make sure this is true.
The cycle indices of records from "continuation" runs will be
offset so that the records are indexed as if they belonged to a
single run.
Uses the record fields settings to decide which fields to use.
Parameters
----------
run_idxs : list of int
The indices of the runs in the order they are in the contig
run_record_key : str
Name of the record group
Returns
-------
records : list of namedtuple objects
"""
# we loop over the run_idxs in the contig and get the fields
# and cycle idxs for the whole contig
fields = None
cycle_idxs = np.array([], dtype=int)
# keep a cumulative total of the runs cycle idxs
prev_run_cycle_total = 0
for run_idx in run_idxs:
# get all the value columns from the datasets, and convert
# them to something amenable to a table
run_fields = self._convert_record_fields_to_table_columns(run_idx, run_record_key)
# we need to concatenate each field to the end of the
# field in the master dictionary, first we need to
# initialize it if it isn't already made
if fields is None:
# if it isn't initialized we just set it as this first
# run fields dictionary
fields = run_fields
else:
# if it is already initialized we need to go through
# each field and concatenate
for field_name, field_data in run_fields.items():
# just add it to the list of fields that will be concatenated later
fields[field_name].extend(field_data)
# get the cycle idxs for this run
rec_grp = self.records_grp(run_idx, run_record_key)
run_cycle_idxs = rec_grp[CYCLE_IDXS][:]
# add the total number of cycles that came before this run
# to each of the cycle idxs to get the cycle_idxs in terms
# of the full contig
run_contig_cycle_idxs = run_cycle_idxs + prev_run_cycle_total
# add these cycle indices to the records for the whole contig
cycle_idxs = np.hstack( (cycle_idxs, run_contig_cycle_idxs) )
# add the total number of cycle_idxs from this run to the
# running total
prev_run_cycle_total += self.num_run_cycles(run_idx)
# then make the records from the fields
records = self._make_records(run_record_key, cycle_idxs, fields)
return records
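# Example (sketch): for a contig of runs [0, 1] where run 0 has 100
# cycles, a sporadic record written at cycle 10 of run 1 is reported
# with cycle_idx 110, i.e. offset by the cumulative cycle count of the
# preceding runs in the contig.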
def _run_records_continual(self, run_idxs, run_record_key):
"""Generate records for a continual record group for a multi-run
contig.
If multiple run indices are given assumes that these are a
contig (e.g. the second run index is a continuation of the
first and so on). This method is considered low-level and does
no checking to make sure this is true.
The cycle indices of records from "continuation" runs will be
offset so that the records are indexed as if they belonged to a
single run.
Uses the record fields settings to decide which fields to use.
Parameters
----------
run_idxs : list of int
The indices of the runs in the order they are in the contig
run_record_key : str
Name of the record group
Returns
-------
records : list of namedtuple objects
"""
cycle_idxs = np.array([], dtype=int)
fields = None
prev_run_cycle_total = 0
for run_idx in run_idxs:
# get all the value columns from the datasets, and convert
# them to something amenable to a table
run_fields = self._convert_record_fields_to_table_columns(run_idx, run_record_key)
# we need to concatenate each field to the end of the
# field in the master dictionary, first we need to
# initialize it if it isn't already made
if fields is None:
# if it isn't initialized we just set it as this first
# run fields dictionary
fields = run_fields
else:
# if it is already initialized we need to go through
# each field and concatenate
for field_name, field_data in run_fields.items():
# just add it to the list of fields that will be concatenated later
fields[field_name].extend(field_data)
# get one of the fields (if any to iterate over)
record_fields = self.record_fields[run_record_key]
main_record_field = record_fields[0]
# make the cycle idxs from that
run_rec_grp = self.records_grp(run_idx, run_record_key)
run_cycle_idxs = np.array(range(run_rec_grp[main_record_field].shape[0]))
# add the total number of cycles that came before this run
# to each of the cycle idxs to get the cycle_idxs in terms
# of the full contig
run_contig_cycle_idxs = run_cycle_idxs + prev_run_cycle_total
# add these cycle indices to the records for the whole contig
cycle_idxs = np.hstack( (cycle_idxs, run_contig_cycle_idxs) )
# add the total number of cycle_idxs from this run to the
# running total
prev_run_cycle_total += self.num_run_cycles(run_idx)
# then make the records from the fields
records = self._make_records(run_record_key, cycle_idxs, fields)
return records
def _get_contiguous_traj_field(self, run_idx, traj_idx, field_path, frames=None):
"""Access actual data for a trajectory field.
Parameters
----------
run_idx : int
traj_idx : int
field_path : str
Trajectory field name to access
frames : list of int, optional
The indices of the frames to return if you don't want all of them.
Returns
-------
field_data : arraylike
The data requested for the field.
"""
full_path = '{}/{}/{}/{}/{}'.format(RUNS, run_idx, TRAJECTORIES, traj_idx, field_path)
if frames is None:
field = self._h5[full_path][:]
else:
field = self._h5[full_path][list(frames)]
return field
def _get_sparse_traj_field(self, run_idx, traj_idx, field_path, frames=None, masked=True):
"""Access actual data for a trajectory field.
Parameters
----------
run_idx : int
traj_idx : int
field_path : str
Trajectory field name to access
frames : list of int, optional
The indices of the frames to return if you don't want all of them.
masked : bool
If True returns the array data as numpy masked array, and
only the available values if False.
Returns
-------
field_data : arraylike
The data requested for the field.
"""
traj_path = '{}/{}/{}/{}'.format(RUNS, run_idx, TRAJECTORIES, traj_idx)
traj_grp = self.h5[traj_path]
field = traj_grp[field_path]
n_frames = traj_grp[POSITIONS].shape[0]
if frames is None:
data = field[DATA][:]
# if it is to be masked make the masked array
if masked:
sparse_idxs = field[SPARSE_IDXS][:]
filled_data = np.full( (n_frames, *data.shape[1:]), np.nan)
filled_data[sparse_idxs] = data
mask = np.full( (n_frames, *data.shape[1:]), True)
mask[sparse_idxs] = False
data = np.ma.masked_array(filled_data, mask=mask)
else:
# get the sparse idxs and the frames to slice from the
# data
sparse_idxs = field[SPARSE_IDXS][:]
# we get a boolean array of the rows of the data table
# that we are to slice from
sparse_frame_idxs = np.argwhere(np.isin(sparse_idxs, frames))
data = field[DATA][list(sparse_frame_idxs)]
# if it is to be masked make the masked array
if masked:
# the empty arrays the size of the number of requested frames
filled_data = np.full( (len(frames), *field[DATA].shape[1:]), np.nan)
mask = np.full( (len(frames), *field[DATA].shape[1:]), True )
# take the data which exists and is part of the frames
# selection, and put it into the filled data where it is
# supposed to be
filled_data[np.isin(frames, sparse_idxs)] = data
# unmask the present values
mask[np.isin(frames, sparse_idxs)] = False
data = np.ma.masked_array(filled_data, mask=mask)
return data
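# Usage sketch ('observables/my_obs' is a hypothetical sparse field):
#   arr = self._get_sparse_traj_field(0, 0, 'observables/my_obs')
#   arr.compressed()  # only the values actually stored
# With masked=True the result has one entry per requested frame, with
# cycles that have no stored value masked out.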
def _add_run_field(self,
run_idx,
field_path,
data,
sparse_idxs=None,
force=False):
"""Add a trajectory field to all trajectories in a run.
By enforcing adding it to all trajectories at one time we
promote in-run consistency.
Parameters
----------
run_idx : int
field_path : str
Name to set the trajectory field as. Can be compound.
data : arraylike of shape (n_trajectories, n_cycles, feature_vector_shape[0],...)
The data for all trajectories to be added.
sparse_idxs : list of int
If the data you are adding is sparse, specify which cycles to apply them to.
force : bool
If True, no checking of the constraints on trajectory and frame counts is done.
"""
# TODO, SNIPPET: check that we have the right permissions
# if field_exists:
# # if we are in a permissive write mode we delete the
# # old dataset and add the new one, overwriting old data
# if self.mode in ['w', 'w-', 'x', 'r+']:
# logging.info("Dataset already present. Overwriting.")
# del obs_grp[field_name]
# obs_grp.create_dataset(field_name, data=results)
# # this will happen in 'c' and 'c-' modes
# else:
# raise RuntimeError(
# "Dataset already exists and file is in concatenate mode ('c' or 'c-')")
# check that the data has the correct number of trajectories
if not force:
assert len(data) == self.num_run_trajs(run_idx),\
"The number of trajectories in data, {}, is different than the number "\
"of trajectories in the run, {}.".format(len(data), self.num_run_trajs(run_idx))
# for each trajectory check that the data is compliant
for traj_idx, traj_data in enumerate(data):
if not force:
# check that the number of frames is not larger than that for the run
if traj_data.shape[0] > self.num_run_cycles(run_idx):
raise ValueError("The number of frames in data for traj {}, {}, "
"is larger than the number of frames "
"for this run, {}.".format(
traj_idx, traj_data.shape[0], self.num_run_cycles(run_idx)))
# if the number of frames given is the same or less than
# the number of frames in the run
elif (traj_data.shape[0] <= self.num_run_cycles(run_idx)):
# if sparse idxs were given we check that there are the
# right number of them and that they match the number of
# frames given
if sparse_idxs is not None and traj_data.shape[0] != len(sparse_idxs[traj_idx]):
raise ValueError("The number of frames provided for traj {}, {}, "
"was less than the total number of frames, {}, "
"but an incorrect number of sparse idxs were supplied, {}."\
.format(traj_idx, traj_data.shape[0],
self.num_run_cycles(run_idx), len(sparse_idxs[traj_idx])))
# if there were strictly fewer frames given and the
# sparse idxs were not given we need to raise an error
elif sparse_idxs is None and (traj_data.shape[0] < self.num_run_cycles(run_idx)):
raise ValueError("The number of frames provided for traj {}, {}, "
"was less than the total number of frames, {}, "
"but sparse_idxs were not supplied.".format(
traj_idx, traj_data.shape[0],
self.num_run_cycles(run_idx)))
# add it to each traj
for i, idx_tup in enumerate(self.run_traj_idx_tuples([run_idx])):
if sparse_idxs is None:
self._add_traj_field_data(*idx_tup, field_path, data[i])
else:
self._add_traj_field_data(*idx_tup, field_path, data[i],
sparse_idxs=sparse_idxs[i])
def _add_field(self, field_path, data, sparse_idxs=None,
force=False):
"""Add a trajectory field to all runs in a file.
Parameters
----------
field_path : str
Name of trajectory field
data : list of arraylike
Each element of this list corresponds to a single run. The
elements of which are arraylikes of shape (n_trajectories,
n_cycles, feature_vector_shape[0],...) for each run.
sparse_idxs : list of list of int
The list of cycle indices to set for the sparse fields. If
None, no trajectories are set as sparse.
"""
for i, run_idx in enumerate(self.run_idxs):
if sparse_idxs is not None:
self._add_run_field(run_idx, field_path, data[i], sparse_idxs=sparse_idxs[i],
force=force)
else:
self._add_run_field(run_idx, field_path, data[i],
force=force)
#### Public Methods
### File Utilities
@property
def filename(self):
"""The path to the underlying HDF5 file."""
return self._filename
def open(self, mode=None):
"""Open the underlying HDF5 file for access.
Parameters
----------
mode : str
Valid mode spec. Opens the HDF5 file in this mode if given
otherwise uses the existing mode.
"""
if mode is None:
mode = self.mode
if self.closed:
self.set_mode(mode)
self._h5 = h5py.File(self._filename, mode,
libver=H5PY_LIBVER, swmr=self.swmr_mode)
self.closed = False
else:
raise IOError("This file is already open")
def close(self):
"""Close the underlying HDF5 file. """
if not self.closed:
self._h5.flush()
self._h5.close()
self.closed = True
@property
def mode(self):
"""The WepyHDF5 mode this object was created with."""
return self._wepy_mode
@mode.setter
def mode(self, mode):
"""Set the mode for opening the file with."""
self.set_mode(mode)
def set_mode(self, mode):
"""Set the mode for opening the file with."""
if not self.closed:
raise AttributeError("Cannot set the mode while the file is open.")
self._set_h5_mode(mode)
self._wepy_mode = mode
@property
def h5_mode(self):
"""The h5py.File mode the HDF5 file currently has."""
return self._h5.mode
def _set_h5_mode(self, h5_mode):
"""Set the mode to open the HDF5 file with.
This really shouldn't be set without using the main wepy mode
as they need to be aligned.
"""
if not self.closed:
raise AttributeError("Cannot set the mode while the file is open.")
self._h5py_mode = h5_mode
@property
def h5(self):
"""The underlying h5py.File object."""
return self._h5
### h5py object access
def run(self, run_idx):
"""Get the h5py.Group for a run.
Parameters
----------
run_idx : int
Returns
-------
run_group : h5py.Group
"""
return self._h5['{}/{}'.format(RUNS, int(run_idx))]
def traj(self, run_idx, traj_idx):
"""Get an h5py.Group trajectory group.
Parameters
----------
run_idx : int
traj_idx : int
Returns
-------
traj_group : h5py.Group
"""
return self._h5['{}/{}/{}/{}'.format(RUNS, run_idx, TRAJECTORIES, traj_idx)]
def run_trajs(self, run_idx):
"""Get the trajectories group for a run.
Parameters
----------
run_idx : int
Returns
-------
trajectories_grp : h5py.Group
"""
return self._h5['{}/{}/{}'.format(RUNS, run_idx, TRAJECTORIES)]
@property
def runs(self):
"""The runs group."""
return self.h5[RUNS]
def run_grp(self, run_idx):
"""A group for a single run."""
return self.runs["{}".format(run_idx)]
def run_start_snapshot_hash(self, run_idx):
"""Hash identifier for the starting snapshot of a run from
orchestration.
"""
return self.run_grp(run_idx).attrs[RUN_START_SNAPSHOT_HASH]
def run_end_snapshot_hash(self, run_idx):
"""Hash identifier for the ending snapshot of a run from
orchestration.
"""
return self.run_grp(run_idx).attrs[RUN_END_SNAPSHOT_HASH]
def set_run_start_snapshot_hash(self, run_idx, snaphash):
"""Set the starting snapshot hash identifier for a run from
orchestration.
"""
if RUN_START_SNAPSHOT_HASH not in self.run_grp(run_idx).attrs:
self.run_grp(run_idx).attrs[RUN_START_SNAPSHOT_HASH] = snaphash
else:
raise AttributeError("The snapshot has already been set.")
def set_run_end_snapshot_hash(self, run_idx, snaphash):
"""Set the ending snapshot hash identifier for a run from
orchestration.
"""
if RUN_END_SNAPSHOT_HASH not in self.run_grp(run_idx).attrs:
self.run_grp(run_idx).attrs[RUN_END_SNAPSHOT_HASH] = snaphash
else:
raise AttributeError("The snapshot has already been set.")
@property
def settings_grp(self):
"""The header settings group."""
settings_grp = self.h5[SETTINGS]
return settings_grp
def decision_grp(self, run_idx):
"""Get the decision enumeration group for a run.
Parameters
----------
run_idx : int
Returns
-------
decision_grp : h5py.Group
"""
return self.run(run_idx)[DECISION]
def init_walkers_grp(self, run_idx):
"""Get the group for the initial walkers for a run.
Parameters
----------
run_idx : int
Returns
-------
init_walkers_grp : h5py.Group
"""
return self.run(run_idx)[INIT_WALKERS]
def records_grp(self, run_idx, run_record_key):
"""Get a record group h5py.Group for a run.
Parameters
----------
run_idx : int
run_record_key : str
Name of the record group
Returns
-------
run_record_group : h5py.Group
"""
path = '{}/{}/{}'.format(RUNS, run_idx, run_record_key)
return self.h5[path]
def resampling_grp(self, run_idx):
"""Get this record group for a run.
Parameters
----------
run_idx : int
Returns
-------
run_record_group : h5py.Group
"""
return self.records_grp(run_idx, RESAMPLING)
def resampler_grp(self, run_idx):
"""Get this record group for a run.
Parameters
----------
run_idx : int
Returns
-------
run_record_group : h5py.Group
"""
return self.records_grp(run_idx, RESAMPLER)
def warping_grp(self, run_idx):
"""Get this record group for a run.
Parameters
----------
run_idx : int
Returns
-------
run_record_group : h5py.Group
"""
return self.records_grp(run_idx, WARPING)
def bc_grp(self, run_idx):
"""Get this record group for a run.
Parameters
----------
run_idx : int
Returns
-------
run_record_group : h5py.Group
"""
return self.records_grp(run_idx, BC)
def progress_grp(self, run_idx):
"""Get this record group for a run.
Parameters
----------
run_idx : int
Returns
-------
run_record_group : h5py.Group
"""
return self.records_grp(run_idx, PROGRESS)
def iter_runs(self, idxs=False, run_sel=None):
"""Generator for iterating through the runs of a file.
Parameters
----------
idxs : bool
If True yields the run index in addition to the group.
run_sel : list of int, optional
If not None should be a list of the runs you want to iterate over.
Yields
------
run_idx : int, if idxs is True
run_group : h5py.Group
"""
if run_sel is None:
run_sel = self.run_idxs
for run_idx in self.run_idxs:
if run_idx in run_sel:
run = self.run(run_idx)
if idxs:
yield run_idx, run
else:
yield run
def iter_trajs(self, idxs=False, traj_sel=None):
"""Generator for iterating over trajectories in a file.
Parameters
----------
idxs : bool
If True returns a tuple of the run index and trajectory
index in addition to the trajectory group.
traj_sel : list of int, optional
If not None is a list of tuples of (run_idx, traj_idx)
selecting which trajectories to iterate over.
Yields
------
traj_id : tuple of int, if idxs is True
A tuple of (run_idx, traj_idx) for the group
trajectory : h5py.Group
"""
# set the selection of trajectories to iterate over
if traj_sel is None:
idx_tups = self.run_traj_idx_tuples()
else:
idx_tups = traj_sel
# get each traj for each idx_tup and yield them for the generator
for run_idx, traj_idx in idx_tups:
traj = self.traj(run_idx, traj_idx)
if idxs:
yield (run_idx, traj_idx), traj
else:
yield traj
def iter_run_trajs(self, run_idx, idxs=False):
"""Iterate over the trajectories of a run.
Parameters
----------
run_idx : int
idxs : bool
If True returns a tuple of the run index and trajectory
index in addition to the trajectory group.
Returns
-------
iter_trajs_generator : generator for the iter_trajs method
"""
run_sel = self.run_traj_idx_tuples([run_idx])
return self.iter_trajs(idxs=idxs, traj_sel=run_sel)
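# Usage sketch (assuming an open WepyHDF5 object `wepy_h5`):
#   for (run_idx, traj_idx), traj_grp in wepy_h5.iter_trajs(idxs=True):
#       n_frames = traj_grp[POSITIONS].shape[0]
# iterates every trajectory group in the file; iter_run_trajs(run_idx)
# restricts the same iteration to a single run.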
### Settings
@property
def defined_traj_field_names(self):
"""A list of the settings defined field names all trajectories have in the file."""
return list(self.field_feature_shapes.keys())
@property
def observable_field_names(self):
"""Returns a list of the names of the observables that all trajectories have.
If this encounters observable fields that don't occur in all
trajectories (inconsistency) raises an inconsistency error.
"""
n_trajs = self.num_trajs
field_names = Counter()
for traj in self.iter_trajs():
for name in list(traj['observables']):
field_names[name] += 1
# if any of the field names has not occurred for every
# trajectory we raise an error
for field_name, count in field_names.items():
if count != n_trajs:
raise TypeError("observable field names are inconsistent")
# otherwise return the field names for the observables
return list(field_names.keys())
def _check_traj_field_consistency(self, field_names):
"""Checks that every trajectory has the given fields across
the entire dataset.
Parameters
----------
field_names : list of str
The field names to check for.
Returns
-------
consistent : bool
True if all trajs have the fields, False otherwise
"""
n_trajs = self.num_trajs
# count how many trajectories contain each requested field; use a
# separate counter so we don't shadow the `field_names` argument
field_counts = Counter()
for traj in self.iter_trajs():
for name in field_names:
if name in traj:
field_counts[name] += 1
# if any of the field names has not occurred for every
# trajectory the fields are inconsistent
for field_name, count in field_counts.items():
if count != n_trajs:
return False
return True
@property
def record_fields(self):
"""The record fields for each record group which are selected for inclusion in the truncated records.
These are the fields which are considered to be table-ified.
Returns
-------
record_fields : dict of str : list of str
Mapping of record group name to a list of the record group fields.
"""
record_fields_grp = self.settings_grp[RECORD_FIELDS]
record_fields_dict = {}
for group_name, dset in record_fields_grp.items():
record_fields_dict[group_name] = list(dset)
return record_fields_dict
@property
def sparse_fields(self):
"""The trajectory fields that are sparse."""
return self.h5['{}/{}'.format(SETTINGS, SPARSE_FIELDS)][:]
@property
def main_rep_idxs(self):
"""The indices of the atoms included from the full topology in the default 'positions' trajectory """
if '{}/{}'.format(SETTINGS, MAIN_REP_IDXS) in self.h5:
return self.h5['{}/{}'.format(SETTINGS, MAIN_REP_IDXS)][:]
else:
return None
@property
def alt_reps_idxs(self):
"""Mapping of the names of the alt reps to the indices of the atoms
from the topology that they include in their datasets."""
idxs_grp = self.h5['{}/{}'.format(SETTINGS, ALT_REPS_IDXS)]
return {name : ds[:] for name, ds in idxs_grp.items()}
@property
def alt_reps(self):
"""Names of the alt reps."""
idxs_grp = self.h5['{}/{}'.format(SETTINGS, ALT_REPS_IDXS)]
return {name for name in idxs_grp.keys()}
@property
def field_feature_shapes(self):
"""Mapping of the names of the trajectory fields to their feature
vector shapes."""
shapes_grp = self.h5['{}/{}'.format(SETTINGS, FIELD_FEATURE_SHAPES_STR)]
field_paths = _iter_field_paths(shapes_grp)
shapes = {}
for field_path in field_paths:
shape = shapes_grp[field_path][()]
if np.isnan(shape).all():
shapes[field_path] = None
else:
shapes[field_path] = shape
return shapes
@property
def field_feature_dtypes(self):
"""Mapping of the names of the trajectory fields to their feature
vector numpy dtypes."""
dtypes_grp = self.h5['{}/{}'.format(SETTINGS, FIELD_FEATURE_DTYPES_STR)]
field_paths = _iter_field_paths(dtypes_grp)
dtypes = {}
for field_path in field_paths:
dtype_str = dtypes_grp[field_path][()]
# if there is 'None' flag for the dtype then return None
if dtype_str == NONE_STR:
dtypes[field_path] = None
else:
dtype_obj = json.loads(dtype_str)
dtype_obj = [tuple(d) for d in dtype_obj]
dtype = np.dtype(dtype_obj)
dtypes[field_path] = dtype
return dtypes
@property
def continuations(self):
"""The continuation relationships in this file."""
return self.settings_grp[CONTINUATIONS][:]
@property
def metadata(self):
"""File metadata (h5py.attrs)."""
return dict(self._h5.attrs)
def decision_enum(self, run_idx):
"""Mapping of decision enumerated names to their integer representations.
Parameters
----------
run_idx : int
Returns
-------
decision_enum : dict of str : int
Mapping of the decision ID string to the integer representation.
See Also
--------
WepyHDF5.decision_value_names : for the reverse mapping
"""
enum_grp = self.decision_grp(run_idx)
enum = {}
for decision_name, dset in enum_grp.items():
enum[decision_name] = dset[()]
return enum
def decision_value_names(self, run_idx):
"""Mapping of the integer values for decisions to the decision ID strings.
Parameters
----------
run_idx : int
Returns
-------
decision_enum : dict of int : str
Mapping of the decision integer to the decision ID string representation.
See Also
--------
WepyHDF5.decision_enum : for the reverse mapping
"""
enum_grp = self.decision_grp(run_idx)
rev_enum = {}
for decision_name, dset in enum_grp.items():
value = dset[()]
rev_enum[value] = decision_name
return rev_enum
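# Example (sketch, decision names hypothetical): for a cloning/merging
# resampler the mapping might look like
#   {1: 'NOTHING', 2: 'CLONE', 3: 'SQUASH', 4: 'KEEP_MERGE'}
# i.e. the inverse of the mapping returned by decision_enum().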
### Topology
def get_topology(self, alt_rep=POSITIONS):
"""Get the JSON topology for a particular representation of the positions.
By default gives the topology for the main 'positions' field
(when alt_rep is 'positions'). To get the full topology the file
was initialized with, set `alt_rep` to `None`. Topologies for
alternative representations (subfields of 'alt_reps') can be
obtained by passing in the key for that alt_rep. For example,
'all_atoms' for the field in alt_reps called 'all_atoms'.
Parameters
----------
alt_rep : str
The base name of the alternate representation, or 'positions', or None.
Returns
-------
topology : str
The JSON topology string for the representation.
"""
top = self.topology
# if no alternative representation is given we just return the
# full topology
if alt_rep is None:
pass
# otherwise we either give the main representation topology
# subset
elif alt_rep == POSITIONS:
top = json_top_subset(top, self.main_rep_idxs)
# or choose one of the alternative representations
elif alt_rep in self.alt_reps_idxs:
top = json_top_subset(top, self.alt_reps_idxs[alt_rep])
# and raise an error if the given alternative representation
# is not given
else:
raise ValueError("alt_rep {} not found".format(alt_rep))
return top
@property
def topology(self):
"""The topology for the full simulated system.
May not be the main representation in the POSITIONS field; for
that use the `get_topology` method.
Returns
-------
topology : str
The JSON topology string for the full representation.
"""
return self._h5[TOPOLOGY][()]
def get_mdtraj_topology(self, alt_rep=POSITIONS):
"""Get an mdtraj.Topology object for a system representation.
By default gives the topology for the main 'positions' field
(when alt_rep is 'positions'). To get the full topology the file
was initialized with, set `alt_rep` to `None`. Topologies for
alternative representations (subfields of 'alt_reps') can be
obtained by passing in the key for that alt_rep. For example,
'all_atoms' for the field in alt_reps called 'all_atoms'.
Parameters
----------
alt_rep : str
The base name of the alternate representation, or 'positions', or None.
Returns
-------
topology : mdtraj.Topology
The mdtraj topology object for the chosen representation.
"""
json_top = self.get_topology(alt_rep=alt_rep)
return json_to_mdtraj_topology(json_top)
## Initial walkers
def initial_walker_fields(self, run_idx, fields, walker_idxs=None):
"""Get fields from the initial walkers of the simulation.
Parameters
----------
run_idx : int
Run to get initial walkers for.
fields : list of str
Names of the fields you want to retrieve.
walker_idxs : None or list of int
If None returns all of the walkers fields, otherwise a
list of ints that are a selection from those walkers.
Returns
-------
walker_fields : dict of str : array of shape
Dictionary mapping fields to the values for all
walkers. Frames will be either in counting order if no
indices were requested or the order of the walker indices
as given.
"""
# set the walker indices if not specified
if walker_idxs is None:
walker_idxs = range(self.num_init_walkers(run_idx))
init_walker_fields = {field : [] for field in fields}
# for each walker go through and add the selected fields
for walker_idx in walker_idxs:
init_walker_grp = self.init_walkers_grp(run_idx)[str(walker_idx)]
for field in fields:
# we remove the first dimension because we just want
# them as a single frame
init_walker_fields[field].append(init_walker_grp[field][:][0])
# convert the field values to arrays
init_walker_fields = {field : np.array(val) for field, val in init_walker_fields.items()}
return init_walker_fields
def initial_walkers_to_mdtraj(self, run_idx, walker_idxs=None, alt_rep=POSITIONS):
"""Generate an mdtraj Trajectory from the initial walkers of a run.
Uses the default fields for positions (unless an alternate
representation is specified) and box vectors which are assumed
to be present in the trajectory fields.
The time value for the mdtraj trajectory is set to the cycle
indices for each trace frame.
This is useful for converting WepyHDF5 data to common
molecular dynamics data formats accessible through the mdtraj
library.
Parameters
----------
run_idx : int
Run to get initial walkers for.
walker_idxs : None or list of int
If None returns all of the walkers fields, otherwise a
list of ints that are a selection from those walkers.
alt_rep : None or str
If None uses default 'positions' representation otherwise
chooses the representation from the 'alt_reps' compound field.
Returns
-------
traj : mdtraj.Trajectory
"""
rep_path = self._choose_rep_path(alt_rep)
init_walker_fields = self.initial_walker_fields(run_idx, [rep_path, BOX_VECTORS],
walker_idxs=walker_idxs)
return self.traj_fields_to_mdtraj(init_walker_fields, alt_rep=alt_rep)
### Counts and Indexing
@property
def num_atoms(self):
"""The number of atoms in the full topology representation."""
return self.h5['{}/{}'.format(SETTINGS, N_ATOMS)][()]
@property
def num_dims(self):
"""The number of spatial dimensions in the positions and alt_reps trajectory fields."""
return self.h5['{}/{}'.format(SETTINGS, N_DIMS_STR)][()]
@property
def num_runs(self):
"""The number of runs in the file."""
return len(self._h5[RUNS])
@property
def num_trajs(self):
"""The total number of trajectories in the entire file."""
return len(list(self.run_traj_idx_tuples()))
def num_init_walkers(self, run_idx):
"""The number of initial walkers for a run.
Parameters
----------
run_idx : int
Returns
-------
n_walkers : int
"""
return len(self.init_walkers_grp(run_idx))
def num_walkers(self, run_idx, cycle_idx):
"""Get the number of walkers at a given cycle in a run.
Parameters
----------
run_idx : int
cycle_idx : int
Returns
-------
n_walkers : int
"""
if cycle_idx >= self.num_run_cycles(run_idx):
raise ValueError(
f"Run {run_idx} has {self.num_run_cycles(run_idx)} cycles, {cycle_idx} requested")
# TODO: currently we do not have a well-defined mechanism for
# actually storing variable number of walkers in the
# trajectory data so just return the number of trajectories
return self.num_run_trajs(run_idx)
def num_run_trajs(self, run_idx):
"""The number of trajectories in a run.
Parameters
----------
run_idx : int
Returns
-------
n_trajs : int
"""
return len(self._h5['{}/{}/{}'.format(RUNS, run_idx, TRAJECTORIES)])
def num_run_cycles(self, run_idx):
"""The number of cycles in a run.
Parameters
----------
run_idx : int
Returns
-------
n_cycles : int
"""
return self.num_traj_frames(run_idx, 0)
def num_traj_frames(self, run_idx, traj_idx):
"""The number of frames in a given trajectory.
Parameters
----------
run_idx : int
traj_idx : int
Returns
-------
n_frames : int
"""
return self.traj(run_idx, traj_idx)[POSITIONS].shape[0]
@property
def run_idxs(self):
"""The indices of the runs in the file."""
return list(range(len(self._h5[RUNS])))
def run_traj_idxs(self, run_idx):
"""The indices of trajectories in a run.
Parameters
----------
run_idx : int
Returns
-------
traj_idxs : list of int
"""
return list(range(len(self._h5['{}/{}/{}'.format(RUNS, run_idx, TRAJECTORIES)])))
def run_traj_idx_tuples(self, runs=None):
"""Get identifier tuples (run_idx, traj_idx) for all trajectories in
all runs.
Parameters
----------
runs : list of int, optional
If not None, a list of run indices to restrict to.
Returns
-------
run_traj_tuples : list of tuple of int
A listing of all trajectories by their identifying tuple
of (run_idx, traj_idx).
"""
tups = []
if runs is None:
run_idxs = self.run_idxs
else:
run_idxs = runs
for run_idx in run_idxs:
for traj_idx in self.run_traj_idxs(run_idx):
tups.append((run_idx, traj_idx))
return tups
def get_traj_field_cycle_idxs(self, run_idx, traj_idx, field_path):
"""Returns the cycle indices for a sparse trajectory field.
Parameters
----------
run_idx : int
traj_idx : int
field_path : str
Name of the trajectory field
Returns
-------
cycle_idxs : arraylike of int
"""
traj_path = '{}/{}/{}/{}'.format(RUNS, run_idx, TRAJECTORIES, traj_idx)
if field_path not in self._h5[traj_path]:
raise KeyError("key for field {} not found".format(field_path))
# if the field is not sparse just return the cycle indices for
# that run
if field_path not in self.sparse_fields:
cycle_idxs = np.array(range(self.num_run_cycles(run_idx)))
else:
cycle_idxs = self._h5[traj_path][field_path][SPARSE_IDXS][:]
return cycle_idxs
def next_run_idx(self):
"""The index of the next run if it were to be added.
Because runs are named as the integer value of the order they
were added this gives the index of the next run that would be
added.
Returns
-------
next_run_idx : int
"""
return self.num_runs
def next_run_traj_idx(self, run_idx):
"""The index of the next trajectory for this run.
Parameters
----------
run_idx : int
Returns
-------
next_traj_idx : int
"""
return self.num_run_trajs(run_idx)
### Aggregation
def is_run_contig(self, run_idxs):
"""This method checks whether a given list of run indices forms a valid
contig or not.
Parameters
----------
run_idxs : list of int
The run indices that would make up the contig in order.
Returns
-------
is_contig : bool
"""
run_idx_continuations = [np.array([run_idxs[idx+1], run_idxs[idx]])
for idx in range(len(run_idxs)-1)]
#gets the contigs array
continuations = self.settings_grp[CONTINUATIONS][:]
# checks if sub contigs are in contigs list or not.
for run_continuous in run_idx_continuations:
contig = False
for continuous in continuations:
if np.array_equal(run_continuous, continuous):
contig = True
if not contig:
return False
return True
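# Example (sketch): if the continuations dataset holds the rows [1, 0]
# and [2, 1], then is_run_contig([0, 1, 2]) is True, while
# is_run_contig([0, 2]) is False because run 2 does not directly
# continue run 0.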
def clone(self, path, mode='x'):
"""Clone the header information of this file into another file.
Clones this WepyHDF5 file without any of the actual runs and run
data. This includes the topology, units, sparse_fields,
feature shapes and dtypes, alt_reps, and main representation
information.
This method will flush the buffers for this file.
Does not preserve metadata pertaining to inter-run
relationships like continuations.
Parameters
----------
path : str
File path to save the new file.
mode : str
The mode to open the new file with.
Returns
-------
new_file : WepyHDF5
The handle to the new WepyHDF5 file. It will be closed.
"""
assert mode in ['w', 'w-', 'x'], "must be opened in a file creation mode"
# we manually construct an HDF5 and copy the groups over
new_h5 = h5py.File(path, mode=mode, libver=H5PY_LIBVER)
new_h5.require_group(RUNS)
# flush the datasets buffers
self.h5.flush()
new_h5.flush()
# copy the existing datasets to the new one
h5py.h5o.copy(self._h5.id, TOPOLOGY.encode(), new_h5.id, TOPOLOGY.encode())
h5py.h5o.copy(self._h5.id, UNITS.encode(), new_h5.id, UNITS.encode())
h5py.h5o.copy(self._h5.id, SETTINGS.encode(), new_h5.id, SETTINGS.encode())
# now make a WepyHDF5 object in "expert_mode" which means it
# is just empty and we construct it manually, "surgically" as I
# like to call it
new_wepy_h5 = WepyHDF5(path, expert_mode=True)
# perform the surgery:
# attach the h5py.File
new_wepy_h5._h5 = new_h5
# set the wepy mode to read-write since the creation flags
# were already used in construction of the h5py.File object
new_wepy_h5._wepy_mode = 'r+'
new_wepy_h5._h5py_mode = 'r+'
# for the settings we need to get rid of the data for inter-run
# relationships like the continuations, so we reinitialize the
# continuations for the new file
new_wepy_h5._init_continuations()
# close the h5py.File and set the attribute to closed
new_wepy_h5._h5.close()
new_wepy_h5.closed = True
# return the runless WepyHDF5 object
return new_wepy_h5
def link_run(self, filepath, run_idx, continue_run=None, **kwargs):
"""Add a run from another file to this one as an HDF5 external
link.
Parameters
----------
filepath : str
File path to the HDF5 file that the run is on.
run_idx : int
The run index from the target file you want to link.
continue_run : int, optional
The run from the linking WepyHDF5 file you want the target
linked run to continue.
kwargs : dict
Adds metadata (h5py.attrs) to the linked run.
Returns
-------
linked_run_idx : int
The index of the linked run in the linking file.
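Examples
--------
A sketch linking run 0 from another (hypothetical) file into this one:
>>> with wepy_h5:
...     new_idx = wepy_h5.link_run('other_results.wepy.h5', 0)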
"""
# link to the external run
ext_run_link = h5py.ExternalLink(filepath, '{}/{}'.format(RUNS, run_idx))
# the run index in this file, as determined by the counter
here_run_idx = self.next_run_idx()
# set the local run as the external link to the other run
self._h5['{}/{}'.format(RUNS, here_run_idx)] = ext_run_link
# run the initialization routines for adding a run
self._add_run_init(here_run_idx, continue_run=continue_run)
run_grp = self._h5['{}/{}'.format(RUNS, here_run_idx)]
# add metadata if given
for key, val in kwargs.items():
if key != RUN_IDX:
run_grp.attrs[key] = val
else:
warn('run_idx metadata is set by wepy and cannot be used', RuntimeWarning)
return here_run_idx
def link_file_runs(self, wepy_h5_path):
"""Link all runs from another WepyHDF5 file.
This preserves continuations within that file. This will open
the file if not already opened.
Parameters
----------
wepy_h5_path : str
Filepath to the file you want to link runs from.
Returns
-------
new_run_idxs : list of int
The new run idxs from the linking file.
"""
wepy_h5 = WepyHDF5(wepy_h5_path, mode='r')
with wepy_h5:
ext_run_idxs = wepy_h5.run_idxs
continuations = wepy_h5.continuations
# add the runs
new_run_idxs = []
for ext_run_idx in ext_run_idxs:
# link the next run, and get its new run index
new_run_idx = self.link_run(wepy_h5_path, ext_run_idx)
# save that run idx
new_run_idxs.append(new_run_idx)
# copy the continuations over translating the run idxs,
# for each continuation in the other files continuations
for continuation in continuations:
# translate each run index from the external file
# continuations to the run idxs they were just assigned in
# this file
self.add_continuation(new_run_idxs[continuation[0]],
new_run_idxs[continuation[1]])
return new_run_idxs
def extract_run(self, filepath, run_idx,
continue_run=None,
run_slice=None,
**kwargs):
"""Add a run from another file to this one by copying it and
truncating it if necessary.
Parameters
----------
filepath : str
File path to the HDF5 file that the run is on.
run_idx : int
The run index from the source file that you want to extract.
continue_run : int, optional
The run in this WepyHDF5 file that you want the extracted
run to continue.
run_slice : tuple of int, optional
A (start, stop) pair of cycle indices; if given, only this
slice of frames from the source run is copied.
kwargs : dict
Adds metadata (h5py.attrs) to the extracted run.
Returns
-------
new_run_idx : int
The index of the extracted run in this file.
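Examples
--------
A sketch copying only the first 100 cycles of run 0 from a
hypothetical source file:
>>> new_idx = wepy_h5.extract_run('other_results.wepy.h5', 0,
...                               run_slice=(0, 100))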
"""
# close ourselves if not already done, so we can write using
# the lower level API
was_open = False
if not self.closed:
self.close()
was_open = True
# do the copying
# open the other file so the requested run can be copied over
wepy_h5 = WepyHDF5(filepath, mode='r')
with self:
# normalize our HDF5s path
self_path = osp.realpath(self.filename)
# the run index in this file, as determined by the counter
here_run_idx = self.next_run_idx()
# get the group name for the new run in this HDF5
target_grp_path = "/runs/{}".format(here_run_idx)
with wepy_h5:
# link the next run, and get its new run index
new_h5 = wepy_h5.copy_run_slice(run_idx, self_path,
target_grp_path,
run_slice=run_slice,
mode='r+')
# close it since we are done
new_h5.close()
with self:
# run the initialization routines for adding a run, just
# sets some metadata
self._add_run_init(here_run_idx, continue_run=continue_run)
run_grp = self._h5['{}/{}'.format(RUNS, here_run_idx)]
# add metadata if given
for key, val in kwargs.items():
if key != RUN_IDX:
run_grp.attrs[key] = val
else:
warn('run_idx metadata is set by wepy and cannot be used', RuntimeWarning)
if was_open:
self.open()
return here_run_idx
def extract_file_runs(self, wepy_h5_path,
run_slices=None):
"""Extract (copying and truncating appropriately) all runs from
another WepyHDF5 file.
This preserves continuations within that file. This will open
the file if not already opened.
Parameters
----------
wepy_h5_path : str
Filepath to the file you want to extract runs from.
run_slices : dict of int : tuple of int, optional
Mapping of run indices in the source file to (start, stop)
cycle slices used to truncate those runs when copying.
Returns
-------
new_run_idxs : list of int
The indices of the extracted runs in this file.
"""
if run_slices is None:
run_slices = {}
# open the other file and get the runs in it and the
# continuations it has
wepy_h5 = WepyHDF5(wepy_h5_path, mode='r')
with wepy_h5:
# the run idx in the external file
ext_run_idxs = wepy_h5.run_idxs
continuations = wepy_h5.continuations
# then for each run in it copy them to this file
new_run_idxs = []
for ext_run_idx in ext_run_idxs:
# get the run_slice spec for the run in the other file, if any
run_slice = run_slices.get(ext_run_idx, None)
# get the index this run should be when it is added
new_run_idx = self.extract_run(wepy_h5_path, ext_run_idx,
run_slice=run_slice)
# save that run idx
new_run_idxs.append(new_run_idx)
was_closed = False
if self.closed:
self.open()
was_closed = True
# copy the continuations over translating the run idxs,
# for each continuation in the other files continuations
for continuation in continuations:
# translate each run index from the external file
# continuations to the run idxs they were just assigned in
# this file
self.add_continuation(new_run_idxs[continuation[0]],
new_run_idxs[continuation[1]])
if was_closed:
self.close()
return new_run_idxs
def join(self, other_h5):
"""Given another WepyHDF5 file object does a left join on this
file, renumbering the runs starting from this file.
This function uses the H5O function for copying. Data will be
copied not linked.
Parameters
----------
other_h5 : WepyHDF5
The WepyHDF5 object for the file whose runs you want to join
to this one.
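Examples
--------
A sketch, assuming 'other_results.wepy.h5' is a compatible file:
>>> other_wepy_h5 = WepyHDF5('other_results.wepy.h5', mode='r')
>>> wepy_h5.join(other_wepy_h5)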
"""
with other_h5 as h5:
for run_idx in h5.run_idxs:
# the other run group handle
other_run = h5.run(run_idx)
# copy this run to this file in the next run_idx group
self.h5.copy(other_run, '{}/{}'.format(RUNS, self.next_run_idx()))
### initialization and data generation
def add_metadata(self, key, value):
"""Add metadata for the whole file.
Parameters
----------
key : str
value : h5py value
h5py valid metadata value.
"""
self._h5.attrs[key] = value
def init_record_fields(self, run_record_key, record_fields):
"""Initialize the settings record fields for a record group in the
settings group.
Save which records are to be considered from a run record group's
datasets to be in the table like representation. This exists
to allow there to be large and small datasets for records to be
stored together but allow for a more compact single table like
representation to be produced for serialization.
Parameters
----------
run_record_key : str
Name of the record group you want to set this for.
record_fields : list of str
Names of the fields you want to set as record fields.
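Examples
--------
A sketch; the record group key and field names here are
illustrative and should match your resampler or boundary conditions:
>>> wepy_h5.init_record_fields('warping', ['walker_idx', 'target_idx', 'weight'])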
"""
record_fields_grp = self.settings_grp[RECORD_FIELDS]
# make a dataset for the record field names. this requires
# a 'special' datatype for variable length strings. This is
# supported by HDF5 but not numpy.
vlen_str_dt = h5py.special_dtype(vlen=str)
# create the dataset with the strings of the fields which are records
record_group_fields_ds = record_fields_grp.create_dataset(run_record_key,
(len(record_fields),),
dtype=vlen_str_dt,
maxshape=(None,))
# set the field names
for i, record_field in enumerate(record_fields):
record_group_fields_ds[i] = record_field
def init_resampling_record_fields(self, resampler):
"""Initialize the record fields for this record group.
Parameters
----------
resampler : object implementing the Resampler interface
The resampler which contains the data for which record fields to set.
"""
self.init_record_fields(RESAMPLING, resampler.resampling_record_field_names())
def init_resampler_record_fields(self, resampler):
"""Initialize the record fields for this record group.
Parameters
----------
resampler : object implementing the Resampler interface
The resampler which contains the data for which record fields to set.
"""
self.init_record_fields(RESAMPLER, resampler.resampler_record_field_names())
def init_bc_record_fields(self, bc):
"""Initialize the record fields for this record group.
Parameters
----------
bc : object implementing the BoundaryConditions interface
The boundary conditions object which contains the data for which record fields to set.
"""
self.init_record_fields(BC, bc.bc_record_field_names())
def init_warping_record_fields(self, bc):
"""Initialize the record fields for this record group.
Parameters
----------
bc : object implementing the BoundaryConditions interface
The boundary conditions object which contains the data for which record fields to set.
"""
self.init_record_fields(WARPING, bc.warping_record_field_names())
def init_progress_record_fields(self, bc):
"""Initialize the record fields for this record group.
Parameters
----------
bc : object implementing the BoundaryConditions interface
The boundary conditions object which contains the data for which record fields to set.
"""
self.init_record_fields(PROGRESS, bc.progress_record_field_names())
def add_continuation(self, continuation_run, base_run):
"""Add a continuation between runs.
Parameters
----------
continuation_run : int
The run index of the run that will be continuing another
base_run : int
The run that is being continued.
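Examples
--------
For example, to record that run 1 continues run 0:
>>> wepy_h5.add_continuation(1, 0)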
"""
continuations_dset = self.settings_grp[CONTINUATIONS]
continuations_dset.resize((continuations_dset.shape[0] + 1, continuations_dset.shape[1],))
continuations_dset[continuations_dset.shape[0] - 1] = np.array([continuation_run, base_run])
def new_run(self, init_walkers, continue_run=None, **kwargs):
"""Initialize a new run.
Parameters
----------
init_walkers : list of objects implementing the Walker interface
The walkers that will be the start of this run.
continue_run : int, optional
If this run is a continuation of another set which one it is continuing.
kwargs : dict
Metadata to set for the run.
Returns
-------
run_grp : h5py.Group
The group of the newly created run.
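Examples
--------
A sketch, assuming `init_walkers` is a list of Walker objects and
the file is open in a write mode:
>>> with wepy_h5:
...     run_grp = wepy_h5.new_run(init_walkers)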
"""
# check to see if the continue_run is actually in this file
if continue_run is not None:
if continue_run not in self.run_idxs:
raise ValueError("The continue_run idx given, {}, is not present in this file".format(
continue_run))
# get the index for this run
new_run_idx = self.next_run_idx()
# create a new group named the next integer in the counter
run_grp = self._h5.create_group('{}/{}'.format(RUNS, new_run_idx))
# set the initial walkers group
init_walkers_grp = run_grp.create_group(INIT_WALKERS)
self._add_init_walkers(init_walkers_grp, init_walkers)
# initialize the walkers group
traj_grp = run_grp.create_group(TRAJECTORIES)
# run the initialization routines for adding a run
self._add_run_init(new_run_idx, continue_run=continue_run)
# add metadata if given
for key, val in kwargs.items():
if key != RUN_IDX:
run_grp.attrs[key] = val
else:
warn('run_idx metadata is set by wepy and cannot be used', RuntimeWarning)
return run_grp
# application level methods for setting the fields for run record
# groups given the objects themselves
def init_run_resampling(self, run_idx, resampler):
"""Initialize data for resampling records.
Initializes the run record group as well as settings for the
fields.
This method also creates the decision group for the run.
Parameters
----------
run_idx : int
resampler : object implementing the Resampler interface
The resampler which contains the data for which record fields to set.
Returns
-------
record_grp : h5py.Group
"""
# set the enumeration of the decisions
self.init_run_resampling_decision(0, resampler)
# set the data fields that can be used for table like records
resampler.resampler_record_field_names()
resampler.resampling_record_field_names()
# then make the records group
fields = resampler.resampling_fields()
grp = self.init_run_record_grp(run_idx, RESAMPLING, fields)
return grp
def init_run_resampling_decision(self, run_idx, resampler):
"""Initialize the decision group for the run resampling records.
Parameters
----------
run_idx : int
resampler : object implementing the Resampler interface
The resampler which contains the data for which record fields to set.
"""
self.init_run_fields_resampling_decision(run_idx, resampler.DECISION.enum_dict_by_name())
def init_run_resampler(self, run_idx, resampler):
"""Initialize data for this record group in a run.
Initializes the run record group as well as settings for the
fields.
Parameters
----------
run_idx : int
resampler : object implementing the Resampler interface
The resampler which contains the data for which record fields to set.
Returns
-------
record_grp : h5py.Group
"""
fields = resampler.resampler_fields()
grp = self.init_run_record_grp(run_idx, RESAMPLER, fields)
return grp
def init_run_warping(self, run_idx, bc):
"""Initialize data for this record group in a run.
Initializes the run record group as well as settings for the
fields.
Parameters
----------
run_idx : int
bc : object implementing the BoundaryConditions interface
The boundary conditions object which contains the data for which record fields to set.
Returns
-------
record_grp : h5py.Group
"""
fields = bc.warping_fields()
grp = self.init_run_record_grp(run_idx, WARPING, fields)
return grp
def init_run_progress(self, run_idx, bc):
"""Initialize data for this record group in a run.
Initializes the run record group as well as settings for the
fields.
Parameters
----------
run_idx : int
bc : object implementing the BoundaryConditions interface
The boundary conditions object which contains the data for which record fields to set.
Returns
-------
record_grp : h5py.Group
"""
fields = bc.progress_fields()
grp = self.init_run_record_grp(run_idx, PROGRESS, fields)
return grp
def init_run_bc(self, run_idx, bc):
"""Initialize data for this record group in a run.
Initializes the run record group as well as settings for the
fields.
Parameters
----------
run_idx : int
bc : object implementing the BoundaryConditions interface
The boundary conditions object which contains the data for which record fields to set.
Returns
-------
record_grp : h5py.Group
"""
fields = bc.bc_fields()
grp = self.init_run_record_grp(run_idx, BC, fields)
return grp
# application level methods for initializing the run records
# groups with just the fields and without the objects
def init_run_fields_resampling(self, run_idx, fields):
"""Initialize this record group fields datasets.
Parameters
----------
run_idx : int
fields : list of str
Names of the fields to initialize
Returns
-------
record_grp : h5py.Group
"""
grp = self.init_run_record_grp(run_idx, RESAMPLING, fields)
return grp
def init_run_fields_resampling_decision(self, run_idx, decision_enum_dict):
"""Initialize the decision group for this run.
Parameters
----------
run_idx : int
decision_enum_dict : dict of str : int
Mapping of decision ID strings to integer representation.
"""
decision_grp = self.run(run_idx).create_group(DECISION)
for name, value in decision_enum_dict.items():
decision_grp.create_dataset(name, data=value)
def init_run_fields_resampler(self, run_idx, fields):
"""Initialize this record group fields datasets.
Parameters
----------
run_idx : int
fields : list of str
Names of the fields to initialize
Returns
-------
record_grp : h5py.Group
"""
grp = self.init_run_record_grp(run_idx, RESAMPLER, fields)
return grp
def init_run_fields_warping(self, run_idx, fields):
"""Initialize this record group fields datasets.
Parameters
----------
run_idx : int
fields : list of str
Names of the fields to initialize
Returns
-------
record_grp : h5py.Group
"""
grp = self.init_run_record_grp(run_idx, WARPING, fields)
return grp
def init_run_fields_progress(self, run_idx, fields):
"""Initialize this record group fields datasets.
Parameters
----------
run_idx : int
fields : list of str
Names of the fields to initialize
Returns
-------
record_grp : h5py.Group
"""
grp = self.init_run_record_grp(run_idx, PROGRESS, fields)
return grp
def init_run_fields_bc(self, run_idx, fields):
"""Initialize this record group fields datasets.
Parameters
----------
run_idx : int
fields : list of str
Names of the fields to initialize
Returns
-------
record_grp : h5py.Group
"""
grp = self.init_run_record_grp(run_idx, BC, fields)
return grp
def init_run_record_grp(self, run_idx, run_record_key, fields):
"""Initialize a record group for a run.
Parameters
----------
run_idx : int
run_record_key : str
The name of the record group.
fields : list of str
The names of the fields to set for the record group.
Returns
-------
record_grp : h5py.Group
"""
# initialize the record group based on whether it is sporadic
# or continual
if self._is_sporadic_records(run_record_key):
grp = self._init_run_sporadic_record_grp(run_idx, run_record_key,
fields)
else:
grp = self._init_run_continual_record_grp(run_idx, run_record_key,
fields)
return grp
def add_traj(self, run_idx, data, weights=None, sparse_idxs=None, metadata=None):
"""Add a full trajectory to a run.
Parameters
----------
run_idx : int
data : dict of str : arraylike
Mapping of trajectory fields to the data for them to add.
weights : 1-D arraylike of float
The weights of each frame. If None defaults all frames to 1.0.
sparse_idxs : list of int
Cycle indices the data corresponds to.
metadata : dict of str : value
Metadata for the trajectory.
Returns
-------
traj_grp : h5py.Group
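Examples
--------
A sketch adding a 10-frame trajectory of zeroed positions to run 0;
the 'positions' field name is the standard main representation:
>>> import numpy as np
>>> positions = np.zeros((10, wepy_h5.num_atoms, wepy_h5.num_dims))
>>> with wepy_h5:
...     traj_grp = wepy_h5.add_traj(0, {'positions': positions})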
"""
# convenient alias
traj_data = data
# initialize None kwargs
if sparse_idxs is None:
sparse_idxs = {}
if metadata is None:
metadata = {}
# positions are mandatory
assert POSITIONS in traj_data, "positions must be given to create a trajectory"
assert isinstance(traj_data[POSITIONS], np.ndarray)
n_frames = traj_data[POSITIONS].shape[0]
# if weights are None then we assume they are 1.0
if weights is None:
weights = np.ones((n_frames, 1), dtype=float)
else:
assert isinstance(weights, np.ndarray), "weights must be a numpy.ndarray"
assert weights.shape[0] == n_frames,\
"weights and the number of frames must be the same length"
# current traj_idx
traj_idx = self.next_run_traj_idx(run_idx)
# make a group for this trajectory, with the current traj_idx
# for this run
traj_grp = self._h5.create_group(
'{}/{}/{}/{}'.format(RUNS, run_idx, TRAJECTORIES, traj_idx))
# add the run_idx as metadata
traj_grp.attrs[RUN_IDX] = run_idx
# add the traj_idx as metadata
traj_grp.attrs[TRAJ_IDX] = traj_idx
# add the rest of the metadata if given
for key, val in metadata.items():
if not key in [RUN_IDX, TRAJ_IDX]:
traj_grp.attrs[key] = val
else:
warn("run_idx and traj_idx are used by wepy and cannot be set", RuntimeWarning)
# check to make sure the positions are the right shape
assert traj_data[POSITIONS].shape[1] == self.num_atoms, \
"positions given have different number of atoms: {}, should be {}".format(
traj_data[POSITIONS].shape[1], self.num_atoms)
assert traj_data[POSITIONS].shape[2] == self.num_dims, \
"positions given have different number of dims: {}, should be {}".format(
traj_data[POSITIONS].shape[2], self.num_dims)
# add datasets to the traj group
# weights
traj_grp.create_dataset(WEIGHTS, data=weights, dtype=WEIGHT_DTYPE,
maxshape=(None, *WEIGHT_SHAPE))
# positions
positions_shape = traj_data[POSITIONS].shape
# add the rest of the traj_data
for field_path, field_data in traj_data.items():
# if there were sparse idxs for this field pass them in
if field_path in sparse_idxs:
field_sparse_idxs = sparse_idxs[field_path]
# if this is a sparse field and no sparse_idxs were given
# we still need to initialize it as a sparse field so it
# can be extended properly so we make sparse_idxs to match
# the full length of this initial trajectory data
elif field_path in self.sparse_fields:
field_sparse_idxs = np.arange(positions_shape[0])
# otherwise it is not a sparse field so we just pass in None
else:
field_sparse_idxs = None
self._add_traj_field_data(run_idx, traj_idx, field_path, field_data,
sparse_idxs=field_sparse_idxs)
## initialize empty sparse fields
# get the sparse field datasets that haven't been initialized
traj_init_fields = list(sparse_idxs.keys()) + list(traj_data.keys())
uninit_sparse_fields = set(self.sparse_fields).difference(traj_init_fields)
# the shapes
uninit_sparse_shapes = [self.field_feature_shapes[field] for field in uninit_sparse_fields]
# the dtypes
uninit_sparse_dtypes = [self.field_feature_dtypes[field] for field in uninit_sparse_fields]
# initialize the sparse fields in the hdf5
self._init_traj_fields(run_idx, traj_idx,
uninit_sparse_fields, uninit_sparse_shapes, uninit_sparse_dtypes)
return traj_grp
def extend_traj(self, run_idx, traj_idx, data, weights=None):
"""Extend a trajectory with data for all fields.
Parameters
----------
run_idx : int
traj_idx : int
data : dict of str : arraylike
The data to add for each field of the trajectory. Must all
have the same first dimension.
weights : arraylike
Weights for the frames of the trajectory. If None defaults all frames to 1.0.
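Examples
--------
A sketch appending 5 more frames to trajectory 0 of run 0:
>>> import numpy as np
>>> new_positions = np.zeros((5, wepy_h5.num_atoms, wepy_h5.num_dims))
>>> wepy_h5.extend_traj(0, 0, {'positions': new_positions})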
"""
if self._wepy_mode == 'c-':
assert self._append_flags[dataset_key], "dataset is not available for appending to"
# convenient alias
traj_data = data
# number of frames to add
n_new_frames = traj_data[POSITIONS].shape[0]
n_frames = self.num_traj_frames(run_idx, traj_idx)
# calculate the new sparse idxs for sparse fields that may be
# being added
sparse_idxs = np.array(range(n_frames, n_frames + n_new_frames))
# get the trajectory group
traj_grp = self._h5['{}/{}/{}/{}'.format(RUNS, run_idx, TRAJECTORIES, traj_idx)]
## weights
# if weights are None then we assume they are 1.0
if weights is None:
weights = np.ones((n_new_frames, 1), dtype=float)
else:
assert isinstance(weights, np.ndarray), "weights must be a numpy.ndarray"
assert weights.shape[0] == n_new_frames,\
"weights and the number of frames must be the same length"
# add the weights
weights_ds = traj_grp[WEIGHTS]
# append to the dataset on the first dimension, keeping the
# others the same, if they exist
if len(weights_ds.shape) > 1:
weights_ds.resize( (weights_ds.shape[0] + n_new_frames, *weights_ds.shape[1:]) )
else:
weights_ds.resize( (weights_ds.shape[0] + n_new_frames, ) )
# add the new data
weights_ds[-n_new_frames:, ...] = weights
# add the other fields
for field_path, field_data in traj_data.items():
# if the field hasn't been initialized yet initialize it,
# unless we are in SWMR mode
if not field_path in traj_grp:
# if in SWMR mode you cannot create groups so if we
# are in SWMR mode raise a warning that the data won't
# be recorded
if self.swmr_mode:
warn("New datasets cannot be created while in SWMR mode. The field {} will"
"not be saved. If you want to save this it must be"
"previously created".format(field_path))
else:
feature_shape = field_data.shape[1:]
feature_dtype = field_data.dtype
# not specified as sparse_field, no settings
if (not field_path in self.field_feature_shapes) and \
(not field_path in self.field_feature_dtypes) and \
not field_path in self.sparse_fields:
# only save if it is an observable
is_observable = False
if '/' in field_path:
group_name = field_path.split('/')[0]
if group_name == OBSERVABLES:
is_observable = True
if is_observable:
warn("the field '{}' was received but not previously specified"
" but is being added because it is in observables.".format(field_path))
# save sparse_field flag, shape, and dtype
self._add_sparse_field_flag(field_path)
self._set_field_feature_shape(field_path, feature_shape)
self._set_field_feature_dtype(field_path, feature_dtype)
else:
raise ValueError("the field '{}' was received but not previously specified"
"it is being ignored because it is not an observable.".format(field_path))
# specified as sparse_field but no settings given
elif (self.field_feature_shapes[field_path] is None and
self.field_feature_dtypes[field_path] is None) and \
field_path in self.sparse_fields:
# set the feature shape and dtype since these
# should be 0 in the settings
self._set_field_feature_shape(field_path, feature_shape)
self._set_field_feature_dtype(field_path, feature_dtype)
# initialize
self._init_traj_field(run_idx, traj_idx, field_path, feature_shape, feature_dtype)
# extend it either as a sparse field or a contiguous field
if field_path in self.sparse_fields:
self._extend_sparse_traj_field(run_idx, traj_idx, field_path, field_data, sparse_idxs)
else:
self._extend_contiguous_traj_field(run_idx, traj_idx, field_path, field_data)
## application level append methods for run records groups
def extend_cycle_warping_records(self, run_idx, cycle_idx, warping_data):
"""Add records for each field for this record group.
Parameters
----------
run_idx : int
cycle_idx : int
The cycle index these records correspond to.
warping_data : dict of str : arraylike
Mapping of the record group fields to a collection of
values for each field.
"""
self.extend_cycle_run_group_records(run_idx, WARPING, cycle_idx, warping_data)
def extend_cycle_bc_records(self, run_idx, cycle_idx, bc_data):
"""Add records for each field for this record group.
Parameters
----------
run_idx : int
cycle_idx : int
The cycle index these records correspond to.
bc_data : dict of str : arraylike
Mapping of the record group fields to a collection of
values for each field.
"""
self.extend_cycle_run_group_records(run_idx, BC, cycle_idx, bc_data)
def extend_cycle_progress_records(self, run_idx, cycle_idx, progress_data):
"""Add records for each field for this record group.
Parameters
----------
run_idx : int
cycle_idx : int
The cycle index these records correspond to.
progress_data : dict of str : arraylike
Mapping of the record group fields to a collection of
values for each field.
"""
self.extend_cycle_run_group_records(run_idx, PROGRESS, cycle_idx, progress_data)
def extend_cycle_resampling_records(self, run_idx, cycle_idx, resampling_data):
"""Add records for each field for this record group.
Parameters
----------
run_idx : int
cycle_idx : int
The cycle index these records correspond to.
resampling_data : dict of str : arraylike
Mapping of the record group fields to a collection of
values for each field.
"""
self.extend_cycle_run_group_records(run_idx, RESAMPLING, cycle_idx, resampling_data)
def extend_cycle_resampler_records(self, run_idx, cycle_idx, resampler_data):
"""Add records for each field for this record group.
Parameters
----------
run_idx : int
cycle_idx : int
The cycle index these records correspond to.
resampler_data : dict of str : arraylike
Mapping of the record group fields to a collection of
values for each field.
"""
self.extend_cycle_run_group_records(run_idx, RESAMPLER, cycle_idx, resampler_data)
def extend_cycle_run_group_records(self, run_idx, run_record_key, cycle_idx, fields_data):
"""Extend data for a whole records group.
This must have the cycle index for the data it is appending as
this is done for sporadic and continual datasets.
Parameters
----------
run_idx : int
run_record_key : str
Name of the record group.
cycle_idx : int
The cycle index these records correspond to.
fields_data : list of dict of str : arraylike
The records to add for this cycle, each a mapping of the
field names to their values.
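Examples
--------
A sketch adding a single warping record for cycle 10 of run 0; the
record group name and field names are illustrative:
>>> wepy_h5.extend_cycle_run_group_records(
...     0, 'warping', 10,
...     [{'walker_idx': 3, 'target_idx': 0, 'weight': 0.01}])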
"""
record_grp = self.records_grp(run_idx, run_record_key)
# if it is sporadic add the cycle idx
if self._is_sporadic_records(run_record_key):
# get the cycle idxs dataset
record_cycle_idxs_ds = record_grp[CYCLE_IDXS]
# number of old and new records
n_new_records = len(fields_data)
n_existing_records = record_cycle_idxs_ds.shape[0]
# make a new chunk for the new records
record_cycle_idxs_ds.resize( (n_existing_records + n_new_records,) )
# add an array of the cycle idx for each record
record_cycle_idxs_ds[n_existing_records:] = np.full((n_new_records,), cycle_idx)
# then add all the data for the field
for record_dict in fields_data:
for field_name, field_data in record_dict.items():
self._extend_run_record_data_field(run_idx, run_record_key,
field_name, np.array([field_data]))
### Analysis Routines
## Record Getters
def run_records(self, run_idx, run_record_key):
"""Get the records for a record group for a single run.
Parameters
----------
run_idx : int
run_record_key : str
The name of the record group.
Returns
-------
records : list of namedtuple objects
The list of records for the run's record group.
"""
# wrap this in a list since the underlying functions accept a
# list of run indices
run_idxs = [run_idx]
return self.run_contig_records(run_idxs, run_record_key)
def run_contig_records(self, run_idxs, run_record_key):
"""Get the records for a record group for the contig that is formed by
the run indices.
This alters the cycle indices for the records so that they
appear to have come from a single run. That is they are the
cycle indices of the contig.
Parameters
----------
run_idxs : list of int
The run indices that form a contig. (i.e. element 1
continues element 0)
run_record_key : str
Name of the record group.
Returns
-------
records : list of namedtuple objects
The list of records for the contig's record group.
"""
# if there are no fields return an empty list
record_fields = self.record_fields[run_record_key]
if len(record_fields) == 0:
return []
# get the iterator for the record idxs, if the group is
# sporadic then we just use the cycle idxs
if self._is_sporadic_records(run_record_key):
records = self._run_records_sporadic(run_idxs, run_record_key)
else:
records = self._run_records_continual(run_idxs, run_record_key)
return records
def run_records_dataframe(self, run_idx, run_record_key):
"""Get the records for a record group for a single run in the form of
a pandas DataFrame.
Parameters
----------
run_idx : int
run_record_key : str
Name of record group.
Returns
-------
record_df : pandas.DataFrame
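Examples
--------
For example, to get the resampling records of run 0 as a table:
>>> df = wepy_h5.run_records_dataframe(0, 'resampling')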
"""
records = self.run_records(run_idx, run_record_key)
return pd.DataFrame(records)
def run_contig_records_dataframe(self, run_idxs, run_record_key):
"""Get the records for a record group for a contig of runs in the form
of a pandas DataFrame.
Parameters
----------
run_idxs : list of int
The run indices that form a contig. (i.e. element 1
continues element 0)
run_record_key : str
The name of the record group.
Returns
-------
records_df : pandas.DataFrame
"""
records = self.run_contig_records(run_idxs, run_record_key)
return pd.DataFrame(records)
# application level specific methods for each main group
# resampling
def resampling_records(self, run_idxs):
"""Get the records this record group for the contig that is formed by
the run indices.
This alters the cycle indices for the records so that they
appear to have come from a single run. That is they are the
cycle indices of the contig.
Parameters
----------
run_idxs : list of int
The run indices that form a contig. (i.e. element 1
continues element 0)
Returns
-------
records : list of namedtuple objects
The list of records for the contig's record group.
"""
return self.run_contig_records(run_idxs, RESAMPLING)
def resampling_records_dataframe(self, run_idxs):
"""Get the records for this record group for a contig of runs in the
form of a pandas DataFrame.
Parameters
----------
run_idxs : list of int
The run indices that form a contig. (i.e. element 1
continues element 0)
Returns
-------
records_df : pandas.DataFrame
"""
return pd.DataFrame(self.resampling_records(run_idxs))
# resampler records
def resampler_records(self, run_idxs):
"""Get the records this record group for the contig that is formed by
the run indices.
This alters the cycle indices for the records so that they
appear to have come from a single run. That is they are the
cycle indices of the contig.
Parameters
----------
run_idxs : list of int
The run indices that form a contig. (i.e. element 1
continues element 0)
Returns
-------
records : list of namedtuple objects
The list of records for the contig's record group.
"""
return self.run_contig_records(run_idxs, RESAMPLER)
def resampler_records_dataframe(self, run_idxs):
"""Get the records for this record group for a contig of runs in the
form of a pandas DataFrame.
Parameters
----------
run_idxs : list of int
The run indices that form a contig. (i.e. element 1
continues element 0)
Returns
-------
records_df : pandas.DataFrame
"""
return pd.DataFrame(self.resampler_records(run_idxs))
# warping
def warping_records(self, run_idxs):
"""Get the records this record group for the contig that is formed by
the run indices.
This alters the cycle indices for the records so that they
appear to have come from a single run. That is they are the
cycle indices of the contig.
Parameters
----------
run_idxs : list of int
The run indices that form a contig. (i.e. element 1
continues element 0)
Returns
-------
records : list of namedtuple objects
The list of records for the contig's record group.
"""
return self.run_contig_records(run_idxs, WARPING)
def warping_records_dataframe(self, run_idxs):
"""Get the records for this record group for a contig of runs in the
form of a pandas DataFrame.
Parameters
----------
run_idxs : list of int
The run indices that form a contig. (i.e. element 1
continues element 0)
Returns
-------
records_df : pandas.DataFrame
"""
return pd.DataFrame(self.warping_records(run_idxs))
# boundary conditions
def bc_records(self, run_idxs):
"""Get the records this record group for the contig that is formed by
the run indices.
This alters the cycle indices for the records so that they
appear to have come from a single run. That is they are the
cycle indices of the contig.
Parameters
----------
run_idxs : list of int
The run indices that form a contig. (i.e. element 1
continues element 0)
Returns
-------
records : list of namedtuple objects
The list of records for the contig's record group.
"""
return self.run_contig_records(run_idxs, BC)
def bc_records_dataframe(self, run_idxs):
"""Get the records for this record group for a contig of runs in the
form of a pandas DataFrame.
Parameters
----------
run_idxs : list of int
The run indices that form a contig. (i.e. element 1
continues element 0)
Returns
-------
records_df : pandas.DataFrame
"""
return pd.DataFrame(self.bc_records(run_idxs))
# progress
def progress_records(self, run_idxs):
"""Get the records this record group for the contig that is formed by
the run indices.
This alters the cycle indices for the records so that they
appear to have come from a single run. That is they are the
cycle indices of the contig.
Parameters
----------
run_idxs : list of int
The run indices that form a contig. (i.e. element 1
continues element 0)
Returns
-------
records : list of namedtuple objects
The list of records for the contig's record group.
"""
return self.run_contig_records(run_idxs, PROGRESS)
def progress_records_dataframe(self, run_idxs):
"""Get the records for this record group for a contig of runs in the
form of a pandas DataFrame.
Parameters
----------
run_idxs : list of int
The run indices that form a contig. (i.e. element 1
continues element 0)
Returns
-------
records_df : pandas.DataFrame
"""
return pd.DataFrame(self.progress_records(run_idxs))
def run_resampling_panel(self, run_idx):
"""Generate a resampling panel from the resampling records of a run.
Parameters
----------
run_idx : int
Returns
-------
resampling_panel : list of list of list of namedtuple records
The panel (list of tables) of resampling records in order
(cycle, step, walker)
"""
return self.run_contig_resampling_panel([run_idx])
def run_contig_resampling_panel(self, run_idxs):
"""Generate a resampling panel from the resampling records of a
contig, which is a series of runs.
Parameters
----------
run_idxs : list of int
The run indices that form a contig. (i.e. element 1
continues element 0)
Returns
-------
resampling_panel : list of list of list of namedtuple records
The panel (list of tables) of resampling records in order
(cycle, step, walker)
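Examples
--------
For example, for a contig made of run 0 followed by run 1:
>>> panel = wepy_h5.run_contig_resampling_panel([0, 1])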
"""
# check the contig to make sure it is a valid contig
if not self.is_run_contig(run_idxs):
raise ValueError("The run_idxs provided are not a valid contig, {}.".format(
run_idxs))
# make the resampling panel from the resampling records for the contig
contig_resampling_panel = resampling_panel(self.resampling_records(run_idxs),
is_sorted=False)
return contig_resampling_panel
# Trajectory Field Setters
def add_run_observable(self, run_idx, observable_name, data, sparse_idxs=None):
"""Add a trajectory sub-field in the compound field "observables" for
a single run.
Parameters
----------
run_idx : int
observable_name : str
What to name the observable subfield.
data : arraylike of shape (n_trajs, feature_vector_shape[0], ...)
The data for all of the trajectories that will be set to
this observable field.
sparse_idxs : list of int, optional
If not None, specifies the cycle indices this data corresponds to.
"""
obs_path = '{}/{}'.format(OBSERVABLES, observable_name)
self._add_run_field(run_idx, obs_path, data, sparse_idxs=sparse_idxs)
def add_traj_observable(self, observable_name, data, sparse_idxs=None):
"""Add a trajectory sub-field in the compound field "observables" for
an entire file, on a trajectory basis.
Parameters
----------
observable_name : str
What to name the observable subfield.
data : list of arraylike
The data for each trajectory are the elements of this
argument, ordered by run and then by trajectory within each
run. Each element is an arraylike of shape
(n_traj_frames, feature_vector_shape[0], ...) where
n_traj_frames is the number of frames in that trajectory.
sparse_idxs : list of list of int, optional
If not None, specifies the cycle indices this data
corresponds to, given as one list per trajectory in the same
order as the data.
"""
obs_path = '{}/{}'.format(OBSERVABLES, observable_name)
run_results = []
for run_idx in range(self.num_runs):
run_num_trajs = self.num_run_trajs(run_idx)
run_results.append([])
for traj_idx in range(run_num_trajs):
run_results[run_idx].append(data[(run_idx * run_num_trajs) + traj_idx])
run_sparse_idxs = None
if sparse_idxs is not None:
run_sparse_idxs = []
for run_idx in range(self.num_runs):
run_num_trajs = self.num_run_trajs(run_idx)
run_sparse_idxs.append([])
for traj_idx in range(run_num_trajs):
run_sparse_idxs[run_idx].append(sparse_idxs[(run_idx * run_num_trajs) + traj_idx])
self.add_observable(observable_name, run_results,
sparse_idxs=run_sparse_idxs)
def add_observable(self, observable_name, data, sparse_idxs=None):
"""Add a trajectory sub-field in the compound field "observables" for
an entire file, on a compound run and trajectory basis.
Parameters
----------
observable_name : str
What to name the observable subfield.
data : list of list of arraylike
The data for each run are the elements of this
argument. Each element is a list of the trajectory
observable arraylikes of shape (n_traj_frames,
feature_vector_shape[0],...).
sparse_idxs : list of list of int, optional
If not None, specifies the cycle indices this data
corresponds to. First by run, then by trajectory.
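Examples
--------
A sketch for a file with a single run, adding a zeroed scalar
observable for every frame (the name and shapes are illustrative):
>>> import numpy as np
>>> obs = [[np.zeros((wepy_h5.num_traj_frames(0, traj_idx), 1))
...         for traj_idx in range(wepy_h5.num_run_trajs(0))]]
>>> wepy_h5.add_observable('my_observable', obs)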
"""
obs_path = '{}/{}'.format(OBSERVABLES, observable_name)
self._add_field(
obs_path,
data,
sparse_idxs=sparse_idxs,
)
def compute_observable(self, func, fields, args,
map_func=map,
traj_sel=None,
save_to_hdf5=None, idxs=False, return_results=True):
"""Compute an observable on the trajectory data according to a
function. Optionally save that data in the observables data group for
the trajectory.
Parameters
----------
func : callable
The function to apply to the trajectory fields (by
cycle). Must accept a dictionary mapping string trajectory
field names to a feature vector for that cycle and return
an arraylike. May accept other positional arguments as well.
fields : list of str
A list of trajectory field names to pass to the mapped function.
args : tuple
A single tuple of arguments which will be expanded and
passed to the mapped function for every evaluation.
map_func : callable
The mapping function. The implementation of how to map the
computation function over the data. Default is the python
builtin `map` function. Can be a parallel implementation
for example.
traj_sel : list of tuple, optional
If not None, a list of trajectory identifier tuple
(run_idx, traj_idx) to restrict the computation to.
save_to_hdf5 : None or string, optional
If not None, a string that specifies the name of the
observables sub-field that the computed values will be saved to.
idxs : bool
If True will return the trajectory identifier tuple
(run_idx, traj_idx) along with other return values.
return_results : bool
If True will return the results of the mapping. If not
using the 'save_to_hdf5' option, be sure to use this or
results will be lost.
Returns
-------
traj_id_tuples : list of tuple of int, if 'idxs' option is True
A list of the tuple identifiers for each trajectory result.
results : list of arraylike, if 'return_results' option is True
A list of arraylike feature vectors for each trajectory.
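Examples
--------
A sketch computing a per-frame norm of the positions and saving it
as an observable; the function and observable name are illustrative:
>>> import numpy as np
>>> def pos_norm(fields_d, *args):
...     return np.linalg.norm(fields_d['positions'], axis=(1, 2))[:, None]
>>> results = wepy_h5.compute_observable(pos_norm, ['positions'], (),
...                                      save_to_hdf5='pos_norm')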
"""
if save_to_hdf5 is not None:
assert self.mode in ['w', 'w-', 'x', 'r+', 'c', 'c-'],\
"File must be in a write mode"
assert isinstance(save_to_hdf5, str),\
"`save_to_hdf5` should be the field name to save the data in the `observables`"\
" group in each trajectory"
# the field name comes from this kwarg if it satisfies the
# string condition above
field_name = save_to_hdf5
# calculate the results and accumulate them here
results = []
# and the indices of the results
result_idxs = []
# map over the trajectories and apply the function and save
# the results
for result in self.traj_fields_map(func, fields, args,
map_func=map_func, traj_sel=traj_sel, idxs=True):
idx_tup, obs_features = result
results.append(obs_features)
result_idxs.append(idx_tup)
# we want to separate writing and computation so we can do it
# in parallel without having multiple writers. So if we are
# writing directly to the HDF5 we add the results to it.
# if we are saving this to the trajectories observables add it as a dataset
if save_to_hdf5:
# reshape the results to be in the observable shape:
observable = [[] for run_idx in self.run_idxs]
for result_idx, traj_results in zip(result_idxs, results):
run_idx, traj_idx = result_idx
observable[run_idx].append(traj_results)
self.add_observable(
field_name,
observable,
sparse_idxs=None,
)
if return_results:
if idxs:
return result_idxs, results
else:
return results
## Trajectory Getters
def get_traj_field(self, run_idx, traj_idx, field_path, frames=None, masked=True):
"""Returns a numpy array for the given trajectory field.
You can control how sparse fields are returned using the
`masked` option. When True (default) a masked numpy array will
be returned such that you can get which cycles it is from,
when False an unmasked array of the data will be returned
which has no cycle information.
Parameters
----------
run_idx : int
traj_idx : int
field_path : str
Name of the trajectory field to get
frames : None or list of int
If not None, a list of the frame indices of the trajectory
to return values for.
masked : bool
If true will return sparse field values as masked arrays,
otherwise just returns the compacted data.
Returns
-------
field_data : arraylike
The data for the trajectory field.
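Examples
--------
For example, to get the weights of trajectory 0 in run 0:
>>> weights = wepy_h5.get_traj_field(0, 0, 'weights')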
"""
traj_path = '{}/{}/{}/{}'.format(RUNS, run_idx, TRAJECTORIES, traj_idx)
# the field must exist, otherwise raise an error
if field_path not in self._h5[traj_path]:
raise KeyError("key for field {} not found".format(field_path))
# get the field depending on whether it is sparse or not
if field_path in self.sparse_fields:
return self._get_sparse_traj_field(run_idx, traj_idx, field_path,
frames=frames, masked=masked)
else:
return self._get_contiguous_traj_field(run_idx, traj_idx, field_path,
frames=frames)
def get_trace_fields(self,
frame_tups,
fields,
same_order=True,
):
"""Get trajectory field data for the frames specified by the trace.
Parameters
----------
frame_tups : list of tuple of int
The trace values. Each tuple is of the form
(run_idx, traj_idx, frame_idx).
fields : list of str
The names of the fields to get for each frame.
same_order : bool
(Default = True)
If True will ensure that the results will be sorted exactly
as the order of the frame_tups were. If False will return
them in an arbitrary implementation determined order that
should be more efficient.
Returns
-------
trace_fields : dict of str : arraylike
Mapping of the field names to the array of feature vectors
for the trace.
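Examples
--------
A sketch getting positions and weights for two frames of run 0:
>>> trace = [(0, 0, 10), (0, 1, 10)]
>>> trace_fields = wepy_h5.get_trace_fields(trace, ['positions', 'weights'])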
"""
# TODO, WARN: this is known to not work properly in all
# cases. While this is an important feature, we defer the
# implementation of chunking to another function or interface
if False:
def argsort(seq):
return sorted(range(len(seq)), key=seq.__getitem__)
def apply_argsorted(shuffled_seq, sorted_idxs):
return [shuffled_seq[i] for i in sorted_idxs]
# first sort the frame_tups so we can chunk them up by
# (run, traj) to get more efficient reads since these are
# chunked by these datasets.
# we do an argsort here so that we can map fields back to
# the order they came in (if requested)
sorted_idxs = argsort(frame_tups)
# then sort them as we will iterate through them
sorted_frame_tups = apply_argsorted(frame_tups, sorted_idxs)
# generate the chunks by (run, traj)
read_chunks = defaultdict(list)
for run_idx, traj_idx, frame_idx in sorted_frame_tups:
read_chunks[(run_idx, traj_idx)].append(frame_idx)
# go through each chunk and read data for each field
frame_fields = {}
for field in fields:
# for each field collect the chunks
field_chunks = []
for chunk_key, frames in read_chunks.items():
run_idx, traj_idx = chunk_key
frames_field = self.get_traj_field(run_idx, traj_idx, field,
frames=frames)
field_chunks.append(frames_field)
# then aggregate them
field_unsorted = np.concatenate(field_chunks)
del field_chunks; gc.collect()
# if we want them sorted sort them back to the
# original (unsorted) order, otherwise just return
# them
if same_order:
frame_fields[field] = field_unsorted[sorted_idxs]
else:
frame_fields[field] = field_unsorted
del field_unsorted; gc.collect()
else:
frame_fields = {field : [] for field in fields}
for run_idx, traj_idx, cycle_idx in frame_tups:
for field in fields:
frame_field = self.get_traj_field(
run_idx,
traj_idx,
field,
frames=[cycle_idx],
)
# the first dimension doesn't matter here since we
# only get one frame at a time.
frame_fields[field].append(frame_field[0])
# combine all the parts of each field into single arrays
for field in fields:
frame_fields[field] = np.array(frame_fields[field])
return frame_fields
def get_run_trace_fields(self, run_idx, frame_tups, fields):
"""Get trajectory field data for the frames specified by the trace
within a single run.
Parameters
----------
run_idx : int
frame_tups : list of tuple of int
The trace values. Each tuple is of the form
(traj_idx, frame_idx).
fields : list of str
The names of the fields to get for each frame.
Returns
-------
trace_fields : dict of str : arraylike
Mapping of the field names to the array of feature vectors
for the trace.
"""
frame_fields = {field : [] for field in fields}
for traj_idx, cycle_idx in frame_tups:
for field in fields:
frame_field = self.get_traj_field(run_idx, traj_idx, field, frames=[cycle_idx])
# the first dimension doesn't matter here since we
# only get one frame at a time.
frame_fields[field].append(frame_field[0])
# combine all the parts of each field into single arrays
for field in fields:
frame_fields[field] = np.array(frame_fields[field])
return frame_fields
def get_contig_trace_fields(self, contig_trace, fields):
"""Get field data for all trajectories of a contig for the frames
specified by the contig trace.
Parameters
----------
contig_trace : list of tuple of int
The trace values. Each tuple is of the form
(run_idx, frame_idx).
fields : list of str
The names of the fields to get for each cycle.
Returns
-------
contig_fields : dict of str : arraylike
of shape (n_cycles, n_trajs, field_feature_shape[0],...)
Mapping of the field names to the array of feature vectors
for contig trace.
"""
# to be efficient we want to group our grabbing of fields by run
# so we group them by run
runs_frames = defaultdict(list)
# and we get the runs in the order to fetch them
run_idxs = []
for run_idx, cycle_idx in contig_trace:
runs_frames[run_idx].append(cycle_idx)
if not run_idx in run_idxs:
run_idxs.append(run_idx)
# (there must be the same number of trajectories in each run)
n_trajs_test = self.num_run_trajs(run_idxs[0])
assert all(n_trajs_test == self.num_run_trajs(run_idx)
for run_idx in run_idxs)
# then using this we go run by run and get all the
# trajectories
field_values = {}
for field in fields:
# we gather trajectories in "bundles" (think sticks
# strapped together) and each bundle represents a run, we
# will concatenate the ends of the bundles together to get
# the full array at the end
bundles = []
for run_idx in run_idxs:
run_bundle = []
for traj_idx in self.run_traj_idxs(run_idx):
# get the values for this (field, run, trajectory)
traj_field_vals = self.get_traj_field(run_idx, traj_idx, field,
frames=runs_frames[run_idx],
masked=True)
run_bundle.append(traj_field_vals)
# convert this "bundle" of trajectory values (think
# sticks side by side) into an array
run_bundle = np.array(run_bundle)
bundles.append(run_bundle)
# stick the bundles together end to end to make the value
# for this field , the first dimension currently is the
# trajectory_index, but we want to make the cycles the
# first dimension. So we stack them along that axis then
# transpose the first two axes (not the rest of them which
# should stay the same). Pardon the log terminology, but I
# don't know a name for a bunch of bundles taped together.
field_log = np.hstack(tuple(bundles))
field_log = np.swapaxes(field_log, 0, 1)
field_values[field] = field_log
return field_values
def iter_trajs_fields(self, fields, idxs=False, traj_sel=None):
"""Generator for iterating over fields trajectories in a file.
Parameters
----------
fields : list of str
Names of the trajectory fields you want to yield.
idxs : bool
If True will also return the tuple identifier of the
trajectory the field data is from.
traj_sel : list of tuple of int
If not None, a list of trajectory identifiers to restrict
iteration over.
Yields
------
traj_identifier : tuple of int if 'idxs' option is True
Tuple identifying the trajectory the data belongs to
(run_idx, traj_idx).
fields_data : dict of str : arraylike
Mapping of the field name to the array of feature vectors
of that field for this trajectory.
"""
for idx_tup, traj in self.iter_trajs(idxs=True, traj_sel=traj_sel):
run_idx, traj_idx = idx_tup
dsets = {}
for field in fields:
try:
dset = traj[field][:]
except KeyError:
warn("field \"{}\" not found in \"{}\"".format(field, traj.name), RuntimeWarning)
dset = None
dsets[field] = dset
if idxs:
yield (run_idx, traj_idx), dsets
else:
yield dsets
def traj_fields_map(self, func, fields, args,
map_func=map, idxs=False, traj_sel=None):
"""Function for mapping work onto field of trajectories.
Parameters
----------
func : callable
The function to apply to the trajectory fields (by
cycle). Must accept a dictionary mapping string trajectory
field names to a feature vector for that cycle and return
an arraylike. May accept other positional arguments as well.
fields : list of str
A list of trajectory field names to pass to the mapped function.
args : None or or tuple
A single tuple of arguments which will be
passed to the mapped function for every evaluation.
map_func : callable
The mapping function. The implementation of how to map the
computation function over the data. Default is the python
builtin `map` function. Can be a parallel implementation
for example.
traj_sel : list of tuple, optional
If not None, a list of trajectory identifier tuple
(run_idx, traj_idx) to restrict the computation to.
idxs : bool
If True will return the trajectory identifier tuple
(run_idx, traj_idx) along with other return values.
Returns
-------
traj_id_tuples : list of tuple of int, if 'idxs' option is True
A list of the tuple identifiers for each trajectory result.
results : list of arraylike
A list of arraylike feature vectors for each trajectory.
"""
# check the args and kwargs to see if they need expanded for
# mapping inputs
#first go through each run and get the number of cycles
n_cycles = 0
for run_idx in self.run_idxs:
n_cycles += self.num_run_cycles(run_idx)
mapped_args = []
for arg in args:
# make a generator out of it to map as inputs
mapped_arg = (arg for i in range(n_cycles))
mapped_args.append(mapped_arg)
# make a generator for the arguments to pass to the function
# from the mapper, for the extra arguments we just have an
# endless generator
map_args = (self.iter_trajs_fields(fields, traj_sel=traj_sel, idxs=False),
*(it.repeat(arg) for arg in args))
results = map_func(func, *map_args)
if idxs:
if traj_sel is None:
traj_sel = self.run_traj_idx_tuples()
return zip(traj_sel, results)
else:
return results
def to_mdtraj(self, run_idx, traj_idx, frames=None, alt_rep=None):
"""Convert a trajectory to an mdtraj Trajectory object.
Works if the right trajectory fields are defined. Minimally
this is a representation, including the 'positions' field or
an 'alt_rep' subfield.
Will also set the unitcell lengths and angle if the
'box_vectors' field is present.
Will also set the time for the frames if the 'time' field is
present, although this is likely not useful since walker
segments have the time reset.
Parameters
----------
run_idx : int
traj_idx : int
frames : None or list of int
If not None, a list of the frames to include.
alt_rep : str
If not None, an 'alt_reps' subfield name to use for
positions instead of the 'positions' field.
Returns
-------
traj : mdtraj.Trajectory
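Examples
--------
A sketch converting trajectory 0 of run 0 and writing it out with
mdtraj (the output file name is hypothetical):
>>> traj = wepy_h5.to_mdtraj(0, 0)
>>> traj.save_dcd('run0_traj0.dcd')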
"""
traj_grp = self.traj(run_idx, traj_idx)
# the default for alt_rep is the main rep
if alt_rep is None:
rep_key = POSITIONS
rep_path = rep_key
else:
rep_key = alt_rep
rep_path = '{}/{}'.format(ALT_REPS, alt_rep)
topology = self.get_mdtraj_topology(alt_rep=rep_key)
# get the frames if they are not given
if frames is None:
frames = self.get_traj_field_cycle_idxs(run_idx, traj_idx, rep_path)
# get the data for all or for the frames specified
positions = self.get_traj_field(run_idx, traj_idx, rep_path,
frames=frames, masked=False)
try:
time = self.get_traj_field(run_idx, traj_idx, TIME,
frames=frames, masked=False)[:, 0]
except KeyError:
warn("time not in this trajectory, ignoring")
time = None
try:
box_vectors = self.get_traj_field(run_idx, traj_idx, BOX_VECTORS,
frames=frames, masked=False)
except KeyError:
warn("box_vectors not in this trajectory, ignoring")
box_vectors = None
if box_vectors is not None:
unitcell_lengths, unitcell_angles = traj_box_vectors_to_lengths_angles(box_vectors)
if (box_vectors is not None) and (time is not None):
traj = mdj.Trajectory(positions, topology,
time=time,
unitcell_lengths=unitcell_lengths, unitcell_angles=unitcell_angles)
elif box_vectors is not None:
traj = mdj.Trajectory(positions, topology,
unitcell_lengths=unitcell_lengths, unitcell_angles=unitcell_angles)
elif time is not None:
traj = mdj.Trajectory(positions, topology,
time=time)
else:
traj = mdj.Trajectory(positions, topology)
return traj
def trace_to_mdtraj(self, trace, alt_rep=None):
"""Generate an mdtraj Trajectory from a trace of frames from the runs.
Uses the default fields for positions (unless an alternate
representation is specified) and box vectors which are assumed
to be present in the trajectory fields.
The time value for the mdtraj trajectory is set to the cycle
indices for each trace frame.
This is useful for converting WepyHDF5 data to common
molecular dynamics data formats accessible through the mdtraj
library.
Parameters
----------
trace : list of tuple of int
The trace values. Each tuple is of the form
(run_idx, traj_idx, frame_idx).
alt_rep : None or str
If None uses default 'positions' representation otherwise
chooses the representation from the 'alt_reps' compound field.
Returns
-------
traj : mdtraj.Trajectory
"""
rep_path = self._choose_rep_path(alt_rep)
trace_fields = self.get_trace_fields(trace, [rep_path, BOX_VECTORS])
return self.traj_fields_to_mdtraj(trace_fields, alt_rep=alt_rep)
def run_trace_to_mdtraj(self, run_idx, trace, alt_rep=None):
"""Generate an mdtraj Trajectory from a trace of frames from the runs.
Uses the default fields for positions (unless an alternate
representation is specified) and box vectors which are assumed
to be present in the trajectory fields.
The time value for the mdtraj trajectory is set to the cycle
indices for each trace frame.
This is useful for converting WepyHDF5 data to common
molecular dynamics data formats accessible through the mdtraj
library.
Parameters
----------
run_idx : int
The run the trace is over.
run_trace : list of tuple of int
The trace values. Each tuple is of the form
(traj_idx, frame_idx).
alt_rep : None or str
If None uses default 'positions' representation otherwise
chooses the representation from the 'alt_reps' compound field.
Returns
-------
traj : mdtraj.Trajectory
"""
rep_path = self._choose_rep_path(alt_rep)
trace_fields = self.get_run_trace_fields(run_idx, trace, [rep_path, BOX_VECTORS])
return self.traj_fields_to_mdtraj(trace_fields, alt_rep=alt_rep)
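    # Example (a minimal sketch; assumes `wepy_h5` is an open WepyHDF5 instance with a
    # run 0 that has an 'all_atoms' alt_rep -- names other than the methods above are
    # hypothetical):
    #
    #   walker_trace = [(0, 0, frame_idx) for frame_idx in range(10)]
    #   traj = wepy_h5.trace_to_mdtraj(walker_trace, alt_rep='all_atoms')
    #   traj.save_dcd('walker_trace.dcd')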
def _choose_rep_path(self, alt_rep):
"""Given a positions specification string, gets the field name/path
for it.
Parameters
----------
alt_rep : str
The short name (non relative path) for a representation of
the positions.
Returns
-------
rep_path : str
The relative field path to that representation.
E.g.:
If you give it 'positions' or None it will simply return
'positions', however if you ask for 'all_atoms' it will return
'alt_reps/all_atoms'.
"""
# the default for alt_rep is the main rep
if alt_rep == POSITIONS:
rep_path = POSITIONS
elif alt_rep is None:
rep_key = POSITIONS
rep_path = rep_key
# if it is already a path we don't add more to it and just
# return it.
elif len(alt_rep.split('/')) > 1:
if len(alt_rep.split('/')) > 2:
raise ValueError("unrecognized alt_rep spec")
elif alt_rep.split('/')[0] != ALT_REPS:
raise ValueError("unrecognized alt_rep spec")
else:
rep_path = alt_rep
else:
rep_key = alt_rep
rep_path = '{}/{}'.format(ALT_REPS, alt_rep)
return rep_path
def traj_fields_to_mdtraj(self, traj_fields, alt_rep=POSITIONS):
"""Create an mdtraj.Trajectory from a traj_fields dictionary.
Parameters
----------
traj_fields : dict of str : arraylike
Dictionary of the traj fields to their values
alt_reps : str
The base alt rep name for the positions representation to
use for the topology, should have the corresponding
alt_rep field in the traj_fields
Returns
-------
traj : mdtraj.Trajectory object
This is mainly a convenience function to retrieve the correct
topology for the positions which will be passed to the generic
`traj_fields_to_mdtraj` function.
"""
rep_path = self._choose_rep_path(alt_rep)
json_topology = self.get_topology(alt_rep=rep_path)
return traj_fields_to_mdtraj(traj_fields, json_topology, rep_key=rep_path)
def copy_run_slice(self, run_idx, target_file_path, target_grp_path,
run_slice=None, mode='x'):
"""Copy this run to another HDF5 file (target_file_path) at the group
(target_grp_path)"""
assert mode in ['w', 'w-', 'x', 'r+'], "must be opened in write mode"
if run_slice is not None:
assert run_slice[1] >= run_slice[0], "Must be a contiguous slice"
# get a list of the frames to use
slice_frames = list(range(*run_slice))
# we manually construct an HDF5 wrapper and copy the groups over
new_h5 = h5py.File(target_file_path, mode=mode, libver=H5PY_LIBVER)
# flush the datasets buffers
self.h5.flush()
new_h5.flush()
# get the run group we are interested in
run_grp = self.run(run_idx)
# slice the datasets in the run and set them in the new file
if run_slice is not None:
# initialize the group for the run
new_run_grp = new_h5.require_group(target_grp_path)
# copy the init walkers group
self.h5.copy(run_grp[INIT_WALKERS], new_run_grp,
name=INIT_WALKERS)
# copy the decision group
self.h5.copy(run_grp[DECISION], new_run_grp,
name=DECISION)
# create the trajectories group
new_trajs_grp = new_run_grp.require_group(TRAJECTORIES)
# slice the trajectories and copy them
for traj_idx in run_grp[TRAJECTORIES]:
traj_grp = run_grp[TRAJECTORIES][traj_idx]
traj_id = "{}/{}".format(TRAJECTORIES, traj_idx)
new_traj_grp = new_trajs_grp.require_group(str(traj_idx))
for field_name in _iter_field_paths(run_grp[traj_id]):
field_path = "{}/{}".format(traj_id, field_name)
data = self.get_traj_field(run_idx, traj_idx, field_name,
frames=slice_frames)
# if it is a sparse field we need to create the
# dataset differently
if field_name in self.sparse_fields:
# create a group for the field
new_field_grp = new_traj_grp.require_group(field_name)
# slice the _sparse_idxs from the original
# dataset that are between the slice
cycle_idxs = self.traj(run_idx, traj_idx)[field_name]['_sparse_idxs'][:]
sparse_idx_idxs = np.argwhere(np.logical_and(
cycle_idxs[:] >= run_slice[0], cycle_idxs[:] < run_slice[1]
)).flatten().tolist()
# the cycle idxs there is data for
sliced_cycle_idxs = cycle_idxs[sparse_idx_idxs]
# get the data for these cycles
field_data = data[sliced_cycle_idxs]
# get the information on compression,
# chunking, and filters and use it when we set
# the new data
field_data_dset = traj_grp[field_name]['data']
data_dset_kwargs = {
'chunks' : field_data_dset.chunks,
'compression' : field_data_dset.compression,
'compression_opts' : field_data_dset.compression_opts,
'shuffle' : field_data_dset.shuffle,
'fletcher32' : field_data_dset.fletcher32,
}
# and for the sparse idxs although it is probably overkill
field_idxs_dset = traj_grp[field_name]['_sparse_idxs']
idxs_dset_kwargs = {
'chunks' : field_idxs_dset.chunks,
'compression' : field_idxs_dset.compression,
'compression_opts' : field_idxs_dset.compression_opts,
'shuffle' : field_idxs_dset.shuffle,
'fletcher32' : field_idxs_dset.fletcher32,
}
# then create the datasets
new_field_grp.create_dataset('_sparse_idxs',
data=sliced_cycle_idxs,
**idxs_dset_kwargs)
new_field_grp.create_dataset('data',
data=field_data,
**data_dset_kwargs)
else:
# get the information on compression,
# chunking, and filters and use it when we set
# the new data
field_dset = traj_grp[field_name]
# since we are slicing we want to make sure
# that the chunks are smaller than the
# slices. Normally chunks are (1, ...) for a
# field, but may not be for observables
# (perhaps they should but thats for another issue)
chunks = (1, *field_dset.chunks[1:])
dset_kwargs = {
'chunks' : chunks,
'compression' : field_dset.compression,
'compression_opts' : field_dset.compression_opts,
'shuffle' : field_dset.shuffle,
'fletcher32' : field_dset.fletcher32,
}
# require the dataset first to automatically build
# subpaths for compound fields if necessary
dset = new_traj_grp.require_dataset(field_name,
data.shape, data.dtype,
**dset_kwargs)
# then set the data depending on whether it is
# sparse or not
dset[:] = data
# then do it for the records
for rec_grp_name, rec_fields in self.record_fields.items():
rec_grp = run_grp[rec_grp_name]
# if this is a contiguous record we can skip the cycle
# indices to record indices conversion that is
# necessary for sporadic records
if self._is_sporadic_records(rec_grp_name):
cycle_idxs = rec_grp[CYCLE_IDXS][:]
# get dataset info
cycle_idxs_dset = rec_grp[CYCLE_IDXS]
# we use autochunk, because I can't figure out how
# the chunks are set and I can't reuse them
idxs_dset_kwargs = {
'chunks' : True,
# 'chunks' : cycle_idxs_dset.chunks,
'compression' : cycle_idxs_dset.compression,
'compression_opts' : cycle_idxs_dset.compression_opts,
'shuffle' : cycle_idxs_dset.shuffle,
'fletcher32' : cycle_idxs_dset.fletcher32,
}
# get the indices of the records we are interested in
record_idxs = np.argwhere(np.logical_and(
cycle_idxs >= run_slice[0], cycle_idxs < run_slice[1]
)).flatten().tolist()
# set the cycle indices in the new run group
new_recgrp_cycle_idxs_path = '{}/{}/_cycle_idxs'.format(target_grp_path,
rec_grp_name)
cycle_data = cycle_idxs[record_idxs]
cycle_dset = new_h5.require_dataset(new_recgrp_cycle_idxs_path,
cycle_data.shape, cycle_data.dtype,
**idxs_dset_kwargs)
cycle_dset[:] = cycle_data
# if contiguous just set the record indices as the
# range between the slice
else:
record_idxs = list(range(run_slice[0], run_slice[1]))
# then for each rec_field slice those and set them in the new file
for rec_field in rec_fields:
field_dset = rec_grp[rec_field]
# get dataset info
field_dset_kwargs = {
'chunks' : True,
# 'chunks' : field_dset.chunks,
'compression' : field_dset.compression,
'compression_opts' : field_dset.compression_opts,
'shuffle' : field_dset.shuffle,
'fletcher32' : field_dset.fletcher32,
}
rec_field_path = "{}/{}".format(rec_grp_name, rec_field)
new_recfield_grp_path = '{}/{}'.format(target_grp_path, rec_field_path)
# if it is a variable length dtype make the dtype
# that for the dataset and we also slice the
# dataset differently
vlen_type = h5py.check_dtype(vlen=field_dset.dtype)
if vlen_type is not None:
dtype = h5py.special_dtype(vlen=vlen_type)
else:
dtype = field_dset.dtype
# if there are no records don't attempt to add them
# get the shape
shape = (len(record_idxs), *field_dset.shape[1:])
new_field_dset = new_h5.require_dataset(new_recfield_grp_path,
shape, dtype,
**field_dset_kwargs)
# if there aren't records just don't do anything,
# and if there are get them and add them
if len(record_idxs) > 0:
rec_data = field_dset[record_idxs]
# if it is a variable length data type we have
# to do it 1 by 1
if vlen_type is not None:
for i, vlen_rec in enumerate(rec_data):
new_field_dset[i] = rec_data[i]
# otherwise just set it all at once
else:
new_field_dset[:] = rec_data
# just copy the whole thing over, since this will probably be
# more efficient
else:
            # split off the last bit of the target path; for copying we
            # need its parent group to exist, but not the target itself
            target_grp_path_basename = target_grp_path.split('/')[-1]
            target_grp_path_prefix = '/'.join(target_grp_path.split('/')[:-1]) or '/'
            # require the parent group in the target file, not the source
            new_run_prefix_grp = new_h5.require_group(target_grp_path_prefix)
            # copy the whole thing
            self.h5.copy(run_grp, new_run_prefix_grp,
                         name=target_grp_path_basename)
# flush the datasets buffers
self.h5.flush()
new_h5.flush()
return new_h5
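    # Example usage (a sketch; `wepy_h5` is assumed to be an open WepyHDF5 instance and
    # the file/group names are hypothetical):
    #
    #   new_h5 = wepy_h5.copy_run_slice(0, 'sliced.wepy.h5', 'runs/0',
    #                                   run_slice=(100, 200), mode='w-')
    #   new_h5.close()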
|
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import os
import sys
import shutil
import hashlib
import tarfile
import StringIO
from cuddlefish._version import get_versions
from cuddlefish.docs import apiparser
from cuddlefish.docs import apirenderer
from cuddlefish.docs import webdocs
from documentationitem import get_module_list
from documentationitem import get_devguide_list
from documentationitem import ModuleInfo
from documentationitem import DevGuideItemInfo
from linkrewriter import rewrite_links
import simplejson as json
DIGEST = "status.md5"
TGZ_FILENAME = "addon-sdk-docs.tgz"
def get_sdk_docs_path(env_root):
return os.path.join(env_root, "doc")
def get_base_url(env_root):
sdk_docs_path = get_sdk_docs_path(env_root).lstrip("/")
return "file://"+"/"+"/".join(sdk_docs_path.split(os.sep))+"/"
def clean_generated_docs(docs_dir):
status_file = os.path.join(docs_dir, "status.md5")
if os.path.exists(status_file):
os.remove(status_file)
index_file = os.path.join(docs_dir, "index.html")
if os.path.exists(index_file):
os.remove(index_file)
dev_guide_dir = os.path.join(docs_dir, "dev-guide")
if os.path.exists(dev_guide_dir):
shutil.rmtree(dev_guide_dir)
api_doc_dir = os.path.join(docs_dir, "modules")
if os.path.exists(api_doc_dir):
shutil.rmtree(api_doc_dir)
def generate_static_docs(env_root, override_version=get_versions()["version"]):
clean_generated_docs(get_sdk_docs_path(env_root))
generate_docs(env_root, override_version, stdout=StringIO.StringIO())
tgz = tarfile.open(TGZ_FILENAME, 'w:gz')
tgz.add(get_sdk_docs_path(env_root), "doc")
tgz.close()
return TGZ_FILENAME
def generate_local_docs(env_root):
return generate_docs(env_root, get_versions()["version"], get_base_url(env_root))
def generate_named_file(env_root, filename_and_path):
module_list = get_module_list(env_root)
web_docs = webdocs.WebDocs(env_root, module_list, get_versions()["version"], get_base_url(env_root))
abs_path = os.path.abspath(filename_and_path)
path, filename = os.path.split(abs_path)
if abs_path.startswith(os.path.join(env_root, 'doc', 'module-source')):
module_root = os.sep.join([env_root, "doc", "module-source"])
module_info = ModuleInfo(env_root, module_root, path, filename)
write_module_doc(env_root, web_docs, module_info, False)
elif abs_path.startswith(os.path.join(get_sdk_docs_path(env_root), 'dev-guide-source')):
devguide_root = os.sep.join([env_root, "doc", "dev-guide-source"])
devguideitem_info = DevGuideItemInfo(env_root, devguide_root, path, filename)
write_devguide_doc(env_root, web_docs, devguideitem_info, False)
else:
raise ValueError("Not a valid path to a documentation file")
def generate_docs(env_root, version=get_versions()["version"], base_url=None, stdout=sys.stdout):
docs_dir = get_sdk_docs_path(env_root)
# if the generated docs don't exist, generate everything
if not os.path.exists(os.path.join(docs_dir, "dev-guide")):
print >>stdout, "Generating documentation..."
generate_docs_from_scratch(env_root, version, base_url)
current_status = calculate_current_status(env_root)
open(os.path.join(docs_dir, DIGEST), "w").write(current_status)
else:
current_status = calculate_current_status(env_root)
previous_status_file = os.path.join(docs_dir, DIGEST)
docs_are_up_to_date = False
if os.path.exists(previous_status_file):
docs_are_up_to_date = current_status == open(previous_status_file, "r").read()
# if the docs are not up to date, generate everything
if not docs_are_up_to_date:
print >>stdout, "Regenerating documentation..."
generate_docs_from_scratch(env_root, version, base_url)
open(os.path.join(docs_dir, DIGEST), "w").write(current_status)
return get_base_url(env_root) + "index.html"
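# Example (a sketch; the SDK checkout path below is hypothetical):
#
#   index_url = generate_local_docs("/path/to/addon-sdk")  # regenerates if stale, returns the index URL
#   tarball = generate_static_docs("/path/to/addon-sdk")   # packs the docs into addon-sdk-docs.tgz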
# this function builds a hash of the name and last modification date of:
# * every file under "doc/module-source" which ends in ".md"
# * every file under "doc/dev-guide-source" which ends in ".md"
# * every file under "packages" which ends in ".md"
# * the "doc/static-files/base.html" file
def calculate_current_status(env_root):
docs_dir = get_sdk_docs_path(env_root)
current_status = hashlib.md5()
module_src_dir = os.path.join(env_root, "doc", "module-source")
for (dirpath, dirnames, filenames) in os.walk(module_src_dir):
for filename in filenames:
if filename.endswith(".md"):
current_status.update(filename)
current_status.update(str(os.path.getmtime(os.path.join(dirpath, filename))))
guide_src_dir = os.path.join(docs_dir, "dev-guide-source")
for (dirpath, dirnames, filenames) in os.walk(guide_src_dir):
for filename in filenames:
if filename.endswith(".md"):
current_status.update(filename)
current_status.update(str(os.path.getmtime(os.path.join(dirpath, filename))))
package_dir = os.path.join(env_root, "packages")
for (dirpath, dirnames, filenames) in os.walk(package_dir):
for filename in filenames:
if filename.endswith(".md"):
current_status.update(filename)
current_status.update(str(os.path.getmtime(os.path.join(dirpath, filename))))
base_html_file = os.path.join(docs_dir, "static-files", "base.html")
current_status.update(base_html_file)
    current_status.update(str(os.path.getmtime(base_html_file)))
return current_status.digest()
def generate_docs_from_scratch(env_root, version, base_url):
docs_dir = get_sdk_docs_path(env_root)
module_list = get_module_list(env_root)
web_docs = webdocs.WebDocs(env_root, module_list, version, base_url)
must_rewrite_links = True
if base_url:
must_rewrite_links = False
clean_generated_docs(docs_dir)
# py2.5 doesn't have ignore=, so we delete tempfiles afterwards. If we
# required >=py2.6, we could use ignore=shutil.ignore_patterns("*~")
for (dirpath, dirnames, filenames) in os.walk(docs_dir):
for n in filenames:
if n.endswith("~"):
os.unlink(os.path.join(dirpath, n))
# generate api docs for all modules
if not os.path.exists(os.path.join(docs_dir, "modules")):
os.mkdir(os.path.join(docs_dir, "modules"))
[write_module_doc(env_root, web_docs, module_info, must_rewrite_links) for module_info in module_list]
# generate third-party module index
third_party_index_file = os.sep.join([env_root, "doc", "module-source", "third-party-modules.md"])
third_party_module_list = [module_info for module_info in module_list if module_info.level() == "third-party"]
write_module_index(env_root, web_docs, third_party_index_file, third_party_module_list, must_rewrite_links)
# generate high-level module index
high_level_index_file = os.sep.join([env_root, "doc", "module-source", "high-level-modules.md"])
high_level_module_list = [module_info for module_info in module_list if module_info.level() == "high"]
write_module_index(env_root, web_docs, high_level_index_file, high_level_module_list, must_rewrite_links)
# generate low-level module index
low_level_index_file = os.sep.join([env_root, "doc", "module-source", "low-level-modules.md"])
low_level_module_list = [module_info for module_info in module_list if module_info.level() == "low"]
write_module_index(env_root, web_docs, low_level_index_file, low_level_module_list, must_rewrite_links)
# generate dev-guide docs
devguide_list = get_devguide_list(env_root)
[write_devguide_doc(env_root, web_docs, devguide_info, must_rewrite_links) for devguide_info in devguide_list]
# make /md/dev-guide/welcome.html the top level index file
doc_html = web_docs.create_guide_page(os.path.join(docs_dir, 'dev-guide-source', 'index.md'))
write_file(env_root, doc_html, docs_dir, 'index', False)
def write_module_index(env_root, web_docs, source_file, module_list, must_rewrite_links):
doc_html = web_docs.create_module_index(source_file, module_list)
base_filename, extension = os.path.splitext(os.path.basename(source_file))
destination_path = os.sep.join([env_root, "doc", "modules"])
write_file(env_root, doc_html, destination_path, base_filename, must_rewrite_links)
def write_module_doc(env_root, web_docs, module_info, must_rewrite_links):
doc_html = web_docs.create_module_page(module_info)
write_file(env_root, doc_html, module_info.destination_path(), module_info.base_filename(), must_rewrite_links)
def write_devguide_doc(env_root, web_docs, devguide_info, must_rewrite_links):
doc_html = web_docs.create_guide_page(devguide_info.source_path_and_filename())
write_file(env_root, doc_html, devguide_info.destination_path(), devguide_info.base_filename(), must_rewrite_links)
def write_file(env_root, doc_html, dest_dir, filename, must_rewrite_links):
if not os.path.exists(dest_dir):
os.makedirs(dest_dir)
dest_path_html = os.path.join(dest_dir, filename) + ".html"
replace_file(env_root, dest_path_html, doc_html, must_rewrite_links)
return dest_path_html
def replace_file(env_root, dest_path, file_contents, must_rewrite_links):
if os.path.exists(dest_path):
os.remove(dest_path)
# before we copy the final version, we'll rewrite the links
# I'll do this last, just because we know definitely what the dest_path is at this point
if must_rewrite_links and dest_path.endswith(".html"):
file_contents = rewrite_links(env_root, get_sdk_docs_path(env_root), file_contents, dest_path)
open(dest_path, "w").write(file_contents)
|
If you love privacy, scenic views, beautiful mature hardwoods with gentle rolling hills then Whisper Valley Estates is the exclusive neighborhood for you. You'll have easy access to the new bridge, shopping, restaurants and the majestic St. Croix river. Nearby attractions include White Eagle Golf Course, Perch Lake, Bass Lake Cheese Factory, and the quaint river towns of Hudson, WI and Stillwater, MN. Don't miss this opportunity to bring your builder and live the dream in this covenant protected neighborhood of 3 to 5 acre lots. This is lot 10 at 3.07 acres. |
#! /usr/bin/python3.4
# -*-coding:utf-8 -*
def Count(Motifs):
"""
Returns the count matrix of a list of sequences.
The count matrix is the number of times a nucleotid appears at a position in the pool of sequences (Motifs).
:param Motifs: The sequences to make the count matrix of.
:type Motifs: list of string
:return: the count matrix
:rtype: dict of list of int with nucleotids as keys.
..seealso:: Count()
"""
count = {}
k = len(Motifs[0])
for symbol in "ACGT":
count[symbol] = []
for j in range(k):
count[symbol].append(0)
t = len(Motifs)
for i in range(t):
for j in range(k):
symbol = Motifs[i][j]
count[symbol][j] += 1
return count
def Profile(Motifs):
"""
Returns the profile matrix of a list of sequences.
The profile matrix is the frequency of a nucleotid at a position in the pool of sequences (Motifs).
:param Motifs: The sequences to make the profile matrix of.
:type Motifs: list of string
:return: the profile matrix
:rtype: dict of list of float with nucleotids as keys.
..seealso:: Count()
"""
t = len(Motifs)
k = len(Motifs[0])
profile = {}
count= Count(Motifs)
for symbol in "ACGT":
profile[symbol] = []
for j in range(k):
profile[symbol].append(0)
for symbol in "ACGT":
for j in range(k):
if t >0:
profile[symbol][j]= count[symbol][j]/t
return profile
def Consensus(Motifs):
"""
Returns the consensus sequence of several sequences.
:param Motifs: the sequences to produce a consensus of.
:type Motifs: list of string
:return: the consensus sequence
:rtype: string
..warnings:: the strings in Motifs must only be composed on the letters A,C,G,T.
..seealso:: Count()
"""
consensus = ""
k = len(Motifs[0])
count=Count(Motifs)
for j in range(k):
m = 0
frequentSymbol = ""
for symbol in "ACGT":
if count[symbol][j] > m:
m = count[symbol][j]
frequentSymbol = symbol
consensus += frequentSymbol
return consensus
def Score(Motifs):
"""
    Returns the number of unpopular letters in the motif matrix (Motifs).
:param Motifs: the motif matrix.
:type Motifs: a list of string
:return: the number of unpopular letters in the motif matrix.
:rtype: int
..seealso:: Count(), Consensus()
"""
t = len(Motifs)
k = len(Motifs[0])
score=0
count=Count(Motifs)
consensus = Consensus(Motifs)
for symbol in "ACGT":
for j in range(k):
if symbol != consensus[j]:
score += count[symbol][j]
return score
# Input: String Text and profile matrix Profile
# Output: Pr(Text, Profile)
def Pr(Text, Profile):
    """
    Returns the probability of the sequence Text given the profile matrix Profile,
    i.e. the product over positions of the profile frequency of each letter of Text.
    """
compteur=0
Pr=1
for letter in Text:
Pr=Pr*Profile[letter][compteur]
compteur+=1
return Pr
# Input: String Text, an integer k, and profile matrix Profile
# Output: ProfileMostProbablePattern(Text, k, Profile)
def ProfileMostProbablePattern(Text, k, Profile):
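    """
    Returns the k-mer in Text that is most probable given the profile matrix Profile
    (ties are broken in favour of the earliest occurrence).
    """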
prm=-1
s=""
for i in range(len(Text)-k+1):
pr=Pr(Text[i:i+k],Profile)
if pr>prm:
prm=pr
s=str(Text[i:i+k])
return str(s)
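# Minimal self-check (an illustrative sketch; the motifs and text below are made-up examples):
if __name__ == "__main__":
    example_motifs = ["AACGTA", "CCCGTT", "CACCTT", "GGATTA", "TTCCGG"]
    print(Count(example_motifs))
    print(Consensus(example_motifs))  # "CACCTA" for these motifs
    print(Score(example_motifs))
    example_profile = Profile(example_motifs)
    print(ProfileMostProbablePattern("ACCTGTTTATTGCCTAAGTTCCGAAC", 6, example_profile))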
|
Business customers sometimes need their Internet service configured into a "Bridged mode" where they are putting other routing equipment behind the Broadband CPE. The below information provides general instructions on how to configure the Arris BGW210-700 Internet Gateway for IP Passthrough mode, an effective equivalent to a bridge mode configuration.
IP Passthrough means the Broadband CPE device terminates the VDSL/Fiber connection, authenticates with the network, receives a WAN IP, and shares that IP address with a single customer device connected to the Broadband CPE equipment. IP Passthrough will only allow one connection to be "unfiltered" or pingable from the WAN or internet side of the Broadband CPE equipment.
The IP Passthrough feature allows a single device on the LAN to have the gateway's public address assigned to it. It also provides port address translation (PAT) or network address and port translation (NAPT) via the same public IP address for all other hosts on the private LAN subnet. Using IP Passthrough, the public WAN IP is used to provide IP address translation for private LAN computers. The public WAN IP is assigned and reused on a LAN computer.
Note: Remember to make a copy of all current settings before proceeding.
Open your web-browser from a computer directly connected to the Arris BGW210-700.
Enter http://192.168.1.254 in the browser address location field.
Click the IP Passthrough tab to configure the following settings.
DHCP can automatically serve the WAN IP address to a LAN computer. When DHCP is used for addressing the designated IP Passthrough computer, the acquired or configured WAN address is passed to DHCP, which will dynamically configure a single servable address subnet, and reserve the address for the configured PC's MAC address. This dynamic subnet configuration is based on the local and remote WAN address and subnet mask.
The two DHCP modes assign the needed WAN IP information to the client automatically. In fixed mode, you select the MAC address of the computer you want to be the IP Passthrough client; in first-come-first-served dynamic mode, the first client to renew its address is assigned the WAN IP.
Manual mode is like statically configuring your connected computer. With Manual mode, you configure the TCP/IP Properties of the LAN client device you want to be the IP Passthrough client. You then manually enter the WAN IP address, gateway address, and so on, to match the WAN IP address information of your Broadband CPE device. This mode works the same as the DHCP modes. Unsolicited WAN traffic will get passed to this client. The client is still able to access the BGW210 device and other LAN clients on the 192.168.1.x network.
DHCP Lease: By default, the IP Passthrough host's DHCP leases will be shortened to two minutes. This allows for timely updates of the host's IP address, which will be a private IP address before the WAN connection is established. After the WAN connection is established and has an address, the IP Passthrough host can renew its DHCP address binding to acquire the WAN IP address. You may alter this setting.
Click Save. Changes take effect upon restart.
Note: IP Passthrough Restriction: Since both the BGW210 Internet Gateway and the IP Passthrough host use the same IP address, new sessions that conflict with existing sessions will be rejected by the BGW210. For example, suppose you are working from home using an IPSec tunnel from the router and from the IP Passthrough host. Both tunnels go to the same remote endpoint, such as the VPN access concentrator at your employer's office. In this case, the first one to start the IPSec traffic will be allowed; the second one from the WAN is indistinguishable and will fail.
# hscimgloader.py
# ALS 2017/05/02
import numpy as np
import os
import requests
import astropy.units as u
from astropy.io import fits
import re
from ..loader import imgLoader
from ...filters import surveysetup
from ..get_credential import getCrendential
from . import hscurl
nanomaggy = u.def_unit('nanomaggy', 3.631e-6*u.Jy)
u.add_enabled_units([nanomaggy])
u.nanomaggy=nanomaggy
class hscimgLoader(imgLoader):
def __init__(self, **kwargs):
"""
hscimgLoader, child of imgLoader
download stamps from HSC DAS Quarry
download psf by either:
(iaa) call sumire to infer psf from calexp and download psf from sumire
(online) download calexp from server and infer psf locally
on top of imgLoader init, set self.survey = 'hsc',
add attributes self.img_width_pix, self.img_height_pix
do not load obj.sdss.xid by default unless to_make_obj_sdss= True
Additional Params
-----------------
rerun = 's16a_wide' (str)
release_version = 'dr1' (str)
username (optional) (str): STARs account
password (optional) (str): STARs account
Public Methods
--------------
__init__(self, **kwargs)
make_stamp(self, band, overwrite=False, **kwargs)
make_stamps(self, overwrite=False, **kwargs)
make_psf(self, band, overwrite=False, to_keep_calexp=False)
make_psfs(self, overwrite=False, to_keep_calexp=False)
Instruction for stars username and password
-------------------------------------------
1) as arguments
hscimgLoader(..., username=username, password=password)
2) as environmental variable
$ export HSC_SSP_CAS_USERNAME
$ read -s HSC_SSP_CAS_USERNAME
$ export HSC_SSP_CAS_PASSWORD
$ read -s HSC_SSP_CAS_PASSWORD
3) enter from terminal
Attributes
----------
(in addition to loader attributes)
rerun = s16a_wide
semester = s16a
release_version = dr1
survey = 'hsc'
bands = ['g', 'r', 'i', 'z', 'y']
username
password
status (bool)
whether an hsc object is successfully identified
"""
super(hscimgLoader, self).__init__(**kwargs)
# set data release parameters
self.rerun = kwargs.pop('rerun', 's16a_wide')
self.semester = self.rerun.split('_')[0]
self.release_version = kwargs.pop('release_version', 'dr1')
# set hsc object parameters
self.status = super(self.__class__, self).add_obj_hsc(update=False, release_version=self.release_version, rerun=self.rerun)
self.survey = 'hsc'
self.bands = surveysetup.surveybands[self.survey]
self.pixsize = surveysetup.pixsize[self.survey]
self._add_attr_img_width_pix_arcsec()
# set connection parameters
self.__username = kwargs.pop('username', '')
self.__password = kwargs.pop('password', '')
if self.__username == '' or self.__password == '':
self.__username = getCrendential("HSC_SSP_CAS_USERNAME", cred_name = 'STARs username')
self.__password = getCrendential("HSC_SSP_CAS_PASSWORD", cred_name = 'STARs password')
def _get_fn_calexp(self, band):
return 'calexp-{0}.fits'.format(band)
def _get_filter_name(self, band):
return "HSC-{band}".format(band=band.upper())
def make_stamp(self, band, overwrite=False, **kwargs):
"""
make stamp image of the specified band of the object. takes care of overwrite with argument 'overwrite'. Default: do not overwrite. See _download_stamp() for specific implementation.
Params
----------
band (string) = 'r'
overwrite (boolean) = False
**kwargs: to be passed to _download_stamp()
e.g., imgtype='coadd', tract='', rerun='', see _download_stamp()
if not specified then use self.rerun.
Return
----------
status: True if downloaded or skipped, False if download fails
"""
return self._imgLoader__make_file_core(func_download_file=self._download_stamp, func_naming_file=self.get_fn_stamp, band=band, overwrite=overwrite, **kwargs)
def make_stamps(self, overwrite=False, **kwargs):
"""
make stamps of all bands, see make_stamp()
"""
return self._imgLoader__make_files_core(func_download_file=self._download_stamp, func_naming_file=self.get_fn_stamp, overwrite=overwrite, **kwargs)
def _download_stamp(self, band, imgtype='coadd', tract='', tokeepraw=False, n_trials=5):
"""
        download hsc cutout img using HSC DAS Quarry. Provides only ra, dec to DAS Quarry and downloads the default coadd. Always overwrites.
convert it to stamp images.
ra, dec can be decimal degrees (12.345) or sexagesimal (1:23:35)
for details see hsc query manual
https://hscdata.mtk.nao.ac.jp/das_quarry/manual.html
Args
--------
band
imgtype='coadd'
tract=''
tokeepraw = False (bool):
whether to keep the downloaded raw HSC image, which has four extensions.
n_trials=5
how many times to retry requesting if there is requests errors such as connection error.
Return
----------
status: True if downloaded, False if download fails
"""
rerun = self.rerun
# setting
fp_out = self.get_fp_stamp(band)
semi_width_inarcsec = (self.img_width_arcsec.to(u.arcsec).value/2.)-0.1 # to get pix number right
semi_height_inarcsec = (self.img_height_arcsec.to(u.arcsec).value/2.)-0.1
sw = '%.5f'%semi_width_inarcsec+'asec'
sh = '%.5f'%semi_height_inarcsec+'asec'
# get url
url = hscurl.get_hsc_cutout_url(self.ra, self.dec, band=band, rerun=rerun, tract=tract, imgtype=imgtype, sw=sw, sh=sh)
# query, download, and convert to new unit
# writing two files (if successful): raw img file and stamp img file.
rqst = self._retry_request(url, n_trials=n_trials)
if rqst.status_code == 200:
fp_raw = self._write_request_to_file(rqst)
self._write_fits_unit_specified_in_nanomaggy(filein=fp_raw, fileout=fp_out)
if not tokeepraw:
os.remove(fp_raw)
return True
else:
print("[hscimgloader] image cannot be retrieved")
return False
def make_psf(self, band, overwrite=False, **kwargs):
"""
make psf image of the specified band of the object. See _download_psf() for details.
Params
----------
band (string) = 'r'
overwrite (boolean) = False
**kwargs: to be passed to _download_psf()
e.g., imgtype='coadd'
Return
----------
status: True if downloaded or skipped, False if download fails
"""
return self._imgLoader__make_file_core(func_download_file=self._download_psf, func_naming_file=self.get_fn_psf, band=band, overwrite=overwrite, **kwargs)
def make_psfs(self, overwrite=False, **kwargs):
"""
make psfs of all bands, see make_psf()
"""
return self._imgLoader__make_files_core(func_download_file=self._download_psf, func_naming_file=self.get_fn_psf, overwrite=overwrite, **kwargs)
def _download_psf(self, band, imgtype='coadd', rerun='', tract='', patch_s='', n_trials=5):
"""
        download hsc psf img using HSC DAS Quarry. Provides only ra, dec to DAS Quarry and downloads the default psf of the coadd. Always overwrites. If rerun is not specified then self.rerun is used.
for details see manual https://hscdata.mtk.nao.ac.jp/psf/4/manual.html#Bulk_mode
https://hscdata.mtk.nao.ac.jp/das_quarry/manual.html
Args
--------
band
imgtype='coadd'
rerun=self.rerun
tract=''
patch_s=''
n_trials=5
how many times to retry requesting if there is requests errors such as connection error.
Return
----------
status: True if downloaded, False if download fails
"""
if rerun == '':
rerun = self.rerun
# setting
fp_out = self.get_fp_psf(band)
# get url
url = hscurl.get_hsc_psf_url(ra=self.ra, dec=self.dec, band=band, rerun=rerun, tract=tract, patch=patch_s, imgtype=imgtype)
# download
rqst = self._retry_request(url, n_trials=n_trials)
if rqst.status_code == 200:
self._write_request_to_file(rqst, fn=os.path.basename(fp_out))
return True
else:
print("[hscimgloader] psf cannot be retrieved")
return False
def _retry_request(self, url, n_trials=5):
"""
request url and retries for up to n_trials times if requests exceptions are raised, such as ConnectionErrors. Uses self.__username self.__password as authentication.
"""
        for _ in range(n_trials):
            try:
                rqst = requests.get(url, auth=(self.__username, self.__password))
                return rqst
            except requests.exceptions.RequestException as e:
                print(("[hscimgloader] retrying as error detected: "+str(e)))
        # if every trial raised an exception None is returned implicitly
def _write_request_to_file(self, rqst, fn=''):
"""
write requested file under self.dir_obj with original filename unless filename specified
Args
--------
rqst: request result
fn ='' (str):
the filename to be saved to. default: use original filename.
Return
--------
fp_out (string): the entire filepath to the file written
"""
d = rqst.headers['content-disposition']
if fn == '':
fn = re.findall("filename=(.+)", d)[0][1:-1]
fp_out = self.dir_obj + fn
with open(fp_out, 'wb') as out:
for bits in rqst.iter_content():
out.write(bits)
return fp_out
def _write_fits_unit_converted_to_nanomaggy(self, filein, fileout):
"""
        !!!!!!! WARNING !!!!!!!! this function is not used currently
Convert raw hsc image to an image with unit nanomaggy, changing the data value.
take only the second hdu hdu[1] as data in output
read in fits file filein with no bunit but FLUXMAG0 and convert to one fits file with unit nanomaggy, and write to fileout.
Notes on Unit conversion
-----------
HSC fluxmag0 is set such that a pix value of 1 has a magAB of 27 so:
fluxmag0 = header_combine['FLUXMAG0']
# 63095734448.0194
pixunit = 10.**-19.44 / fluxmag0 * (u.erg * u.s**-1 * u.cm**-2 * u.Hz**-1)
# u.Quantity('5.754399373371546e-31 erg / cm2 / Hz / s')
nanomaggy_per_raw_unit = float((u.nanomaggy/pixunit).decompose())
# 63.099548091890085
But this function should work even with other fluxmag 0, as we set
nanomaggy_per_raw_unit = fluxmag0 * 10**-9
"""
hdu = fits.open(filein)
header_combine = hdu[1].header+hdu[0].header
# sanity check
if header_combine['FLUXMAG0'] != 63095734448.0194:
raise ValueError("HSC FLUXMAG0 different from usual. Although shouldnt be a problem")
if 'BUNIT' in header_combine:
raise ValueError("Input fits file should not have BUNIT")
nanomaggy_per_raw_unit = header_combine['FLUXMAG0']*10.**-9
data_nanomaggy = hdu[1].data/nanomaggy_per_raw_unit
header_combine.set(keyword='BUNIT', value='nanomaggy', comment="1 nanomaggy = 3.631e-6 Jy")
header_combine['COMMENT'] = "Unit converted to nanomaggy by ALS"
header_combine.remove(keyword='FLUXMAG0')
hdu_abbrv = fits.PrimaryHDU(data_nanomaggy, header=header_combine)
hdu_abbrv.writeto(fileout, overwrite=True)
def _write_fits_unit_specified_in_nanomaggy(self, filein, fileout):
"""
Convert a raw hsc image to an image with unit nanomaggy, the data values unchanged.
Take only the second hdu hdu[1] as data in output.
read in fits file filein with no bunit but FLUXMAG0 and convert to one fits file with unit nanomaggy, and write to fileout.
Notes on Unit conversion
-----------
HSC fluxmag0 is set such that a pix value of 1 has a magAB of 27 so:
fluxmag0 = header_combine['FLUXMAG0']
# 63095734448.0194
pixunit = 10.**-19.44 / fluxmag0 * (u.erg * u.s**-1 * u.cm**-2 * u.Hz**-1)
# u.Quantity('5.754399373371546e-31 erg / cm2 / Hz / s')
nanomaggy_per_raw_unit = float((u.nanomaggy/pixunit).decompose())
# 63.099548091890085
raw_unit_per_nanomaggy = 1/nanomaggy_per_raw_unit
# 0.015847974038478506
But this function should work even with other fluxmag 0, as we set
nanomaggy_per_raw_unit = fluxmag0 * 10**-9
"""
hdu = fits.open(filein)
header_combine = hdu[1].header+hdu[0].header
# sanity check
if header_combine['FLUXMAG0'] != 63095734448.0194:
raise ValueError("HSC FLUXMAG0 different from assumed")
if 'BUNIT' in header_combine:
raise ValueError("Input fits file should not have BUNIT")
bunit = '1.58479740e-02 nanomaggy'
header_combine.set(keyword='BUNIT', value=bunit, comment="1 nanomaggy = 3.631e-6 Jy")
header_combine['COMMENT'] = "Unit specified in nanomaggy by ALS"
data = hdu[1].data
hdu_abbrv = fits.PrimaryHDU(data, header=header_combine)
hdu_abbrv.writeto(fileout, overwrite=True)
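# Example usage (a sketch; the constructor kwargs `ra`, `dec`, and `dir_obj` come from the
# parent imgLoader and are shown here as an assumption, with placeholder values):
#
#   loader = hscimgLoader(ra=150.0, dec=2.0, dir_obj='./obj0001/',
#                         rerun='s16a_wide', username='...', password='...')
#   if loader.status:
#       loader.make_stamps(overwrite=False)
#       loader.make_psfs(overwrite=False)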
|
Weustink AC, Nieman K, Pugliese F, Mollet NR, Meijboom BW, van Mieghem C, ten Kate GJ, Cademartiri F, Krestin GP, de Feyter PJ. Diagnostic Accuracy of Computed Tomography Angiography in Patients After Bypass Grafting Comparison With Invasive Coronary Angiography. J Am Coll Cardiol Img 2009;2:816–24.
The author Meijboom BW should have been listed as Meijboom WB. The authors apologize for this error. |
import numpy as np
# import FitsUtils
import FittingUtilities
import HelperFunctions
import matplotlib.pyplot as plt
import sys
import os
from astropy import units
from astropy.io import fits, ascii
import DataStructures
from scipy.interpolate import InterpolatedUnivariateSpline as interp
import MakeModel
from collections import Counter
from sklearn.gaussian_process import GaussianProcess
import warnings
def SmoothData(order, windowsize=91, smoothorder=5, lowreject=3, highreject=3, numiters=10, expand=0, normalize=True):
denoised = HelperFunctions.Denoise(order.copy())
denoised.y = FittingUtilities.Iterative_SV(denoised.y, windowsize, smoothorder, lowreject=lowreject,
highreject=highreject, numiters=numiters, expand=expand)
if normalize:
denoised.y /= denoised.y.max()
return denoised
def roundodd(num):
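    """
    Round num to the nearest odd integer: if round(num) lands on an even number,
    step toward num to the adjacent odd number. Used below to build an odd
    smoothing window size.
    """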
rounded = round(num)
if rounded % 2 != 0:
return rounded
else:
if rounded > num:
return rounded - 1
else:
return rounded + 1
def GPSmooth(data, low=0.1, high=10, debug=False):
"""
This will smooth the data using Gaussian processes. It will find the best
smoothing parameter via cross-validation to be between the low and high.
The low and high keywords are reasonable bounds for A and B stars with
vsini > 100 km/s.
"""
smoothed = data.copy()
# First, find outliers by doing a guess smooth
smoothed = SmoothData(data, normalize=False)
temp = smoothed.copy()
temp.y = data.y / smoothed.y
temp.cont = FittingUtilities.Continuum(temp.x, temp.y, lowreject=2, highreject=2, fitorder=3)
outliers = HelperFunctions.FindOutliers(temp, numsiglow=3, expand=5)
if len(outliers) > 0:
data.y[outliers] = smoothed.y[outliers]
gp = GaussianProcess(corr='squared_exponential',
theta0=np.sqrt(low * high),
thetaL=low,
thetaU=high,
normalize=False,
nugget=(data.err / data.y) ** 2,
random_start=1)
try:
gp.fit(data.x[:, None], data.y)
except ValueError:
#On some orders with large telluric residuals, this will fail.
# Just fall back to the old smoothing method in that case.
return SmoothData(data), 91
if debug:
print "\tSmoothing parameter theta = ", gp.theta_
smoothed.y, smoothed.err = gp.predict(data.x[:, None], eval_MSE=True)
return smoothed, gp.theta_[0][0]
if __name__ == "__main__":
fileList = []
plot = False
vsini_file = "%s/School/Research/Useful_Datafiles/Vsini.csv" % (os.environ["HOME"])
for arg in sys.argv[1:]:
if "-p" in arg:
plot = True
elif "-vsini" in arg:
vsini_file = arg.split("=")[-1]
else:
fileList.append(arg)
#Read in the vsini table
vsini_data = ascii.read(vsini_file)[10:]
if len(fileList) == 0:
fileList = [f for f in os.listdir("./") if f.endswith("telluric_corrected.fits")]
for fname in fileList:
orders = HelperFunctions.ReadFits(fname, extensions=True, x="wavelength", y="flux", cont="continuum",
errors="error")
#Find the vsini of this star
header = fits.getheader(fname)
starname = header["object"].split()[0].replace("_", " ")
found = False
for data in vsini_data:
if data[0] == starname:
vsini = float(data[1])
found = True
        if not found:
            outfile = open("Warnings.log", "a")
            outfile.write("Cannot find %s in the vsini data: %s\n" % (starname, vsini_file))
            outfile.close()
            warnings.warn("Cannot find %s in the vsini data: %s" % (starname, vsini_file))
            # without a vsini value the smoothing window cannot be set, so skip this star
            continue
print starname, vsini
#Begin looping over the orders
column_list = []
header_list = []
for i, order in enumerate(orders):
print "Smoothing order %i/%i" % (i + 1, len(orders))
#Fix errors
order.err[order.err > 1e8] = np.sqrt(order.y[order.err > 1e8])
#Linearize
xgrid = np.linspace(order.x[0], order.x[-1], order.x.size)
order = FittingUtilities.RebinData(order, xgrid)
dx = order.x[1] - order.x[0]
smooth_factor = 0.8
theta = roundodd(vsini / 3e5 * order.x.mean() / dx * smooth_factor)
denoised = SmoothData(order,
windowsize=theta,
smoothorder=3,
lowreject=3,
highreject=3,
expand=10,
numiters=10)
#denoised, theta = GPSmooth(order.copy())
#denoised, theta = CrossValidation(order.copy(), 5, 2, 2, 10)
#denoised, theta = OptimalSmooth(order.copy())
#denoised.y *= order.cont/order.cont.mean()
print "Window size = %.4f nm" % theta
column = {"wavelength": denoised.x,
"flux": order.y / denoised.y,
"continuum": denoised.cont,
"error": denoised.err}
header_list.append((("Smoother", theta, "Smoothing Parameter"),))
column_list.append(column)
if plot:
plt.figure(1)
plt.plot(order.x, order.y / order.y.mean())
plt.plot(denoised.x, denoised.y / denoised.y.mean())
plt.title(starname)
plt.figure(2)
plt.plot(order.x, order.y / denoised.y)
plt.title(starname)
#plt.plot(order.x, (order.y-denoised.y)/np.median(order.y))
#plt.show()
if plot:
plt.show()
outfilename = "%s_smoothed.fits" % (fname.split(".fits")[0])
print "Outputting to %s" % outfilename
HelperFunctions.OutputFitsFileExtensions(column_list, fname, outfilename, mode='new', headers_info=header_list)
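# Example invocation (a sketch; the script and data file names are hypothetical):
#
#   python SmoothOrders.py -p -vsini=/path/to/Vsini.csv star1_telluric_corrected.fits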
|
Don’t spend it! Invest it📈 Do you agree? financialprofessional📈 #entrepreneur #success #business #motivation #millionaire #inspiration #wealth #luxury #rich #hustle #cash #lifestyle #startup #bitcoin #forex #entrepreneurship #money #finance #boss #goals #stocks #invest #successful #investing #work #passion #billionaire #grind #trading #businessman"
Pretty unsuccesful trip to the @partypokerlive Irish Open on the poker front with only @hudders_gtc managing a mincash in the Main Event and the rest bricking everything but we all had a great time regardless. It's always a fun stop and great to get to catch up with so many people at the bar, already looking forward to next year!
We receive payment directly from your attorney or the party who is responsible for paying the settlement (in many cases this is the insurance company). You do not have to pay any monthly payments to CLF. CLF gets paid only at the end of your case.
If I had my father's eyes I would try to reason, but I was born with the urge to overdo things and the need to fly; you said "close your eyes, don't think about it", but people like me close their eyes only to drift further away. |
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.serialization import Model
class ResourceUsageStatistics(Model):
"""the statistics information for resource usage.
Variables are only populated by the server, and will be ignored when
sending a request.
:ivar average: the average value.
:vartype average: float
:ivar minimum: the minimum value.
:vartype minimum: long
:ivar maximum: the maximum value.
:vartype maximum: long
"""
_validation = {
'average': {'readonly': True},
'minimum': {'readonly': True},
'maximum': {'readonly': True},
}
_attribute_map = {
'average': {'key': 'average', 'type': 'float'},
'minimum': {'key': 'minimum', 'type': 'long'},
'maximum': {'key': 'maximum', 'type': 'long'},
}
def __init__(self):
super(ResourceUsageStatistics, self).__init__()
self.average = None
self.minimum = None
self.maximum = None
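# Note (a sketch): instances of this model are built by the generated client's deserializer;
# user code normally just reads the populated attributes of a returned object, e.g.
#   print(stats.average, stats.minimum, stats.maximum)
# where `stats` is a hypothetical ResourceUsageStatistics returned by an API call.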
|
Some Deadpool fans might still be licking their wounds in the wake of tonight’s Oscars (the movie did get snubbed, after all), so here’s a little something to ease the hurt. After all, Disney is the place where dreams begin.
Animator Butch Hartman, the guy behind Fairly Oddparents, has turned his attention to foul-mouthed superhero Wade Wilson, adding him to classic Disney cartoons and movies. You can see Deadpool chilling with Jasmine on a magic carpet, lifting up a bunch of girls Gaston-style, or even replacing Mickey Mouse as the Steamboat Captain.
Hartman’s YouTube is full of fun animated treasures like this one, including sketches imagining what Danny Phantom would look like 10 years in the future, and several videos that turn characters from various shows and games (like Overwatch) into Fairly Oddparents. This is his first, though hopefully not last, foray into L’Art du Deadpool.
In the YouTube comments, Hartman’s currently taking suggestions for which “universe” Deadpool should invade next. If you have any ideas, definitely head over there and share your thoughts. My money’s on classic ‘90s anime like Sailor Moon, One Piece, or Cowboy Bebop. That’d be amazing. |
# -*- coding: utf-8 -*-
# this file is released under public domain and you can use without limitations
#########################################################################
## This is a sample controller
## - index is the default action of any application
## - user is required for authentication and authorization
## - download is for downloading files uploaded in the db (does streaming)
#########################################################################
import itertools
def index():
return dict()
def user():
"""
exposes:
http://..../[app]/default/user/login
http://..../[app]/default/user/logout
http://..../[app]/default/user/register
http://..../[app]/default/user/profile
http://..../[app]/default/user/retrieve_password
http://..../[app]/default/user/change_password
http://..../[app]/default/user/manage_users (requires membership in
use @auth.requires_login()
@auth.requires_membership('group name')
@auth.requires_permission('read','table name',record_id)
to decorate functions that need access control
"""
if request.args(0) == 'register':
db.auth_user.bio.writable = db.auth_user.bio.readable = False
db.auth_user.avatar.writable = db.auth_user.avatar.readable = False
return dict(form=auth())
@cache.action()
def download():
"""
allows downloading of uploaded files
http://..../[app]/default/download/[filename]
"""
return response.download(request, db)
def call():
"""
exposes services. for example:
http://..../[app]/default/call/jsonrpc
decorate with @services.jsonrpc the functions to expose
supports xml, json, xmlrpc, jsonrpc, amfrpc, rss, csv
"""
return service()
##################################################################################
#### ####
#### COURSE PAGES ####
#### ####
##################################################################################
def courses():
courses = db(Course).select()
return dict(courses=courses)
def course():
course_id = request.args(0, cast=int)
course = Course(id=course_id)
open_classes = course.classes(Class.status == 3).select()
limited_classes = [c for c in open_classes if c.available_until]
Interest.course.default = course_id
Interest.course.readable = Interest.course.writable = False
interest_form = SQLFORM(Interest)
if interest_form.process(onvalidation=check_if_exists).accepted:
response.flash = T("Thank you!")
elif interest_form.errors:
response.flash = T("Erros no formulário!")
return dict(
course=course,
open_classes=open_classes,
limited_classes=limited_classes,
interest_form=interest_form)
def enroll():
class_id = request.args(0, cast=int)
if not class_id in session.cart:
session.cart.append(class_id)
else:
session.flash = T('This course is already on your shopping cart!')
redirect(URL('payments', 'shopping_cart'))
@auth.requires_login()
def my_courses():
class_ids = db(Student.student == auth.user.id).select()
my_courses = db(Course.course_owner == auth.user.id).select()
classes = db(Class.id.belongs([x.class_id for x in class_ids])|\
Class.course.belongs([x.id for x in my_courses])).select()
return dict(classes=classes)
@auth.requires(lambda: enrolled_in_class(record_id=request.args(0, cast=int), record_type=1) | auth.has_membership("Admin"))
def my_class():
class_id = request.args(0, cast=int)
my_class = Class(id=class_id)
my_course = my_class.course
modules = db(Module.course_id == my_course).select()
return dict(my_class=my_class,
modules=modules)
@auth.requires(lambda: enrolled_in_class(record_id=request.args(0, cast=int), record_type=2) | auth.has_membership("Admin"))
def lesson():
lesson_id = request.args(0, cast=int)
class_id = request.args(1, cast=int)
lesson = Lesson(id=lesson_id)
if db(Schedule_Lesson.lesson_id == lesson_id).select().first().release_date > request.now.date():
raise HTTP(404)
page = int(request.vars.page or 1)
videos = lesson.videos.select()
texts = lesson.texts.select()
exercises = lesson.exercises.select()
merged_records = itertools.chain(videos, texts, exercises)
contents = sorted(merged_records, key=lambda record: record['place'])
if page <= 0 or page > len(contents):
raise HTTP(404)
is_correct = {}
if request.vars:
keys = request.vars.keys()
for key in keys:
if key != 'page':
q_id = int(key.split('_')[1])
question = Exercise(id=q_id)
if question.correct == int(request.vars[key]):
is_correct[key] = True
else:
is_correct[key] = False
return dict(lesson=lesson,
content=contents[page-1],
total_pages=len(contents),
is_correct=is_correct,
class_id=class_id)
@auth.requires(lambda: enrolled_in_class(record_id=request.args(0, cast=int), record_type=1) | auth.has_membership("Admin"))
def forum():
class_id = request.args(0, cast=int)
topics = db(Forum.class_id == class_id).select(orderby=~Forum.created_on)
return dict(topics=topics,
class_id=class_id)
@auth.requires(lambda: enrolled_in_class(record_id=request.args(0, cast=int), record_type=3) | auth.has_membership("Admin"))
def topic():
topic_id = request.args(0, cast=int)
topic = Forum(id=topic_id)
comments = db(Comment.post == topic_id).select()
Comment.post.default = topic_id
Comment.post.readable = Comment.post.writable = False
form = crud.create(Comment, next=URL('topic', args=topic_id))
return dict(topic=topic,
comments=comments,
form=form)
@auth.requires(lambda: enrolled_in_class(record_id=request.args(0, cast=int), record_type=1) | auth.has_membership("Admin"))
def new_topic():
class_id = request.args(0, cast=int)
Forum.class_id.default = class_id
Forum.class_id.readable = Forum.class_id.writable = False
form = SQLFORM(Forum)
if form.process().accepted:
redirect(URL('topic', args=form.vars.id))
return dict(form=form)
@auth.requires(lambda: enrolled_in_class(record_id=request.args(0, cast=int), record_type=1) | auth.has_membership("Admin"))
def calendar():
class_id = request.args(0, cast=int)
dates = db((Date.class_id == class_id)|(Date.class_id == None)).select()
my_class = Class(id=class_id)
modules = db(Module.course_id == my_class.course).select()
lessons = []
for module in modules:
for lesson in module.lessons.select():
lessons.append(lesson)
return dict(dates=dates,
my_class=my_class,
lessons=lessons)
@auth.requires(lambda: enrolled_in_class(record_id=request.args(0, cast=int), record_type=1) | auth.has_membership("Admin"))
def announcements():
class_id = request.args(0, cast=int)
announcements = db(Announcement.class_id == class_id).select()
return dict(announcements=announcements,
class_id=class_id) |
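# Note: the access decorators above rely on an `enrolled_in_class(record_id, record_type)`
# helper that is not defined in this controller (it presumably lives in a model file).
# A minimal sketch of what it might look like, purely as an assumption about its behaviour
# (record_type 1 = class, 2 = lesson, 3 = forum topic, matching the calls above):
#
#   def enrolled_in_class(record_id, record_type):
#       if record_type == 1:
#           class_id = record_id
#       elif record_type == 2:
#           class_id = request.args(1, cast=int)  # lesson views pass the class as arg 1
#       else:
#           class_id = Forum(id=record_id).class_id
#       return auth.user and db((Student.student == auth.user.id) &
#                               (Student.class_id == class_id)).count() > 0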
1. Product name: tape dispenser. 2. Product No.: LS-100.
"""Thirdparty Packages for internal use.
"""
import sys
import os
def import_thirdparty(lib):
"""
Imports a thirdparty package "lib" by setting all paths correctly.
At the moment, there is only the "pyglet" library, so we just put
pyglet to sys.path temporarily, then import "lib" and then restore the path.
With more packages, we'll just put them to sys.path as well.
"""
seen = set()
def new_import(name, globals={}, locals={}, fromlist=[]):
if name in seen:
return old_import(name, globals, locals, fromlist)
seen.add(name)
sys.path.insert(0, os.path.join(os.path.abspath(os.path.dirname( \
__file__)), "pyglet"))
try:
m = old_import(name, globals, locals, fromlist)
finally:
del sys.path[0]
return m
import __builtin__
old_import = __builtin__.__import__
__builtin__.__import__ = new_import
try:
m = __import__(lib)
finally:
__builtin__.__import__ = old_import
return m
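# Example (a sketch):
#
#   pyglet = import_thirdparty("pyglet")
#   # `pyglet` can then be used as if it had been imported normally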
|
Classical liberalism arose in an era in which kings ruled and everyone else simply obeyed. Rulers and royalty believed that they had a divine right to make all the decisions within a society, and everyone else was resigned to curry favor in order to extract what little bit of self-governance they could squeeze out of the king. As philosophy and economics developed, it became obvious to the learned that leaving people alone to pursue their best sense of wellbeing just so happened to make any kingdom more prosperous. Eventually, the rulers of Western Europe realized that leaving their subjects to the greatest level of freedom possible also happened to produce the best and most prosperous outcome for the rulers themselves. We all know this story relatively well.
We could also tell a story about the same set of events using the lens of "fairness." In a world in which the subjugated are idiots and only the royals know what's best for society, the fairest outcome is that in which the rulers make the best and most reasonable proclamations. As subjects become more educated and less idiotic, fairness demands that they also enjoy some participation in the decisions of a nation. As the education and capability gap - and indeed even the wealth gap - between ruler and ruled becomes even smaller, then fairness demands that all people face more or less the same laws as all other people. Hence the end of monarchy and the rise of egalitarian democracy. Liberalism isn't only more prosperous, it's also fairer.
So, some of us have to worry about money a lot, while others of us do not have to worry much about money. Part of this comes down to different life choices; for example, someone who becomes a school teacher will never make as much as someone who becomes a heart surgeon. In a fair world, we're allowed to pursue different life choices as long as we are willing to live with the consequences of those choices. But another part of our wealth differences comes down to heredity. Some of us inherit an awful lot of wealth, and thus begin their lives with more prosperity than working class people will perhaps ever be able to earn, even if they make nothing but good choices for a hundred years straight. This inequality over the luck of being born doesn't seem fair to most of us.
When that unfairness is coupled with great financial hardship - such as crushing medical debt or the inability to afford decent housing - it's natural for some people to consider the merits of wealth redistribution. Perhaps taxing the very wealthy for the benefit of the very poor could alleviate more suffering than it causes. If so, society can gain, both in terms of fairness and in terms of prosperity. If it were possible, it would be a win-win: the poor would have much of their suffering alleviated while it would cost the wealthy comparatively little, and yet poor and wealthy alike would stand to gain from the benefits of a more egalitarian society, and perhaps a more prosperous one.
In the abstract, that all seems right. In the real world, however, we already live under a well-established regime of progressive taxation and wealth redistribution. This is true in every country I am aware of. Despite that fact, in every country I am aware of, there exists some debate about whether "the rich" should pay even more taxes and whether "the poor" should receive even more redistribution. The best answers to these debates, in my opinion, appear to be empirical. That is, we can analyze with reasonable accuracy the impact of variously imposed tax rates on economic behavior and determine relatively robustly which tax and redistribution rates are better, compared to others.
None of those economic analyses, however, can address the question of fairness.
For about a hundred years, libertarians and their precursors have been alone in the opinion that the wealthy should have some say as to the fairness of any wealth redistribution proposal. Most moral analyses will tell you that it's only morally fair to give money to a beggar who asks; it is only libertarianism that is willing to consider whether it might be immoral of the beggar to ask in the first place. It is most certainly only right-leaning strands of libertarianism that would suggest that the person being begged-from has a moral right to refuse.
This moral right to refuse is something that gives libertarians a bad name. We're often thought to be heartless and cruel because we believe that it's not fair to demand that the wealthy pay literally any tax rate approved through a democratic process. It's not that libertarians think that the poor should suffer, of course; it's just that many of us don't think it's fair to subject the rich to literally any tax rate, no matter how high.
Even some libertarians are uncomfortable phrasing it that way. Many would prefer to talk about the deleterious effects of high tax rates on the economy, or the non-existent benefits and ill effects on the labor market of wealth redistribution. They would much rather say that wealth redistribution is harmful rather than simply unfair.
The cynical explanation would be that libertarians are greedy knaves who want to keep all their money for themselves. This explanation fails mainly because few libertarians are millionaires, and the vast majority of millionaires are non-libertarians. There is something about libertarians that makes us keen to defend the rights of those whose tax burden is steepest on grounds of fairness.
import os
import json
def download_wall(domain, dataset_folder, cut=1000000, access_token=None):
import vk
session = vk.Session(access_token=access_token)
api = vk.API(session)
info = dict()
docs_folder = os.path.join(dataset_folder, "documents")
os.makedirs(docs_folder, exist_ok=True)
os.makedirs(os.path.join(dataset_folder, "meta"), exist_ok=True)
id = 0
offset = 0
while True:
posts = api.wall.get(domain=domain, offset=offset, count=100)
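        # This older vk API call appears to return [total_count, post, post, ...],
        # hence the loop starting at index 1 and the `len(posts) != 101` stop check.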
for i in range(1, len(posts)):
post = posts[i]
text = post["text"].replace("<br>", "\n")
if len(text) > 50:
id += 1
text_id = "%06d.txt" % id
info[text_id] = dict()
info[text_id]["url"] = "https://vk.com/" + domain + \
"?w=wall" + str(post["from_id"]) + "_" + str(post["id"])
info[text_id]["time"] = post["date"]
text_file_name = os.path.join(docs_folder, text_id)
with open(text_file_name, "w", encoding='utf-8') as f:
f.write(text)
if id == cut:
break
offset += 100
# print (offset)
if len(posts) != 101:
break
if id == cut:
break
with open(os.path.join(dataset_folder, "meta", "meta.json"), "wb") as f:
f.write(json.dumps(info).encode("UTF-8"))
if __name__ == "__main__":
domain = "lurkopub_alive"
download_wall(
domain,
"D:\\visartm\\data\\datasets\\" +
domain,
cut=1000000)
So that You May Believe seeks to communicate the theological themes of the gospel according to John while broadening the reader's approach to interpreting the gospel. The sermons offer a fresh way of hearing the gospel proclamation and can help to rejuvenate the creativity of those charged with the weekly presentation of the good news of Jesus Christ.
Are you looking for a resource to enliven your preaching and teaching of the gospel of John? Creatively Preaching the Fourth Gospel is the book for you. Covering the entire gospel and using a variety of homiletical styles, Eaton plumbs the depths of John in a way that speaks clearly and powerfully to today's church. I know of no finer book on preaching the Fourth Gospel.
"In the beginning was the Word . . ." How does the preacher approach the overwhelming word of God? How does the preacher wrestle with the poetry and theological depth of John's gospel? How do preachers find their voice; their way of proclaiming that Word. Brand Eaton invites the preacher on a homiletical journey through the fourth gospel which is oft times ignored by preachers and the Lectionary alike. As his introduction reminds preachers, he often gets his "creative juices flowing" when listening to or reading the sermons of others. Eaton's creative touch will be felt throughout this preaching journey and may it get your preaching juices flowing.
Brand Eaton asks us to locate ourselves in the deep stories and dialogues of John's Gospel where words carry layers of meaning and striking images arrest our attention. Like a well-stocked guide, Eaton provides the contemporary analogies, user-friendly translations, and paths to meaning making to assure success in the journey.
Brand Eaton is Director of Spiritual Wellness at Bethany Village Retirement Community in Mechanicsburg, Pennsylvania. Prior to his current ministry location, Brand served as pastor to various United Methodist congregations in central Pennsylvania for twenty years. He is a graduate of Lycoming College (B.A.), Wesley Theological Seminary (M.Div., D.Min), and a Board Certified Chaplain. Brand and his wife, Susan Eaton, live in Dillsburg, Pennsylvania.
"""
Django settings for django_beautifulseodang project.
Generated by 'django-admin startproject' using Django 1.8.14.
For more information on this file, see
https://docs.djangoproject.com/en/1.8/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.8/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
import sys
import json
from django.core.exceptions import ImproperlyConfigured
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Env for dev / deploy
def get_env(setting, envs):
try:
return envs[setting]
except KeyError:
        error_msg = "You SHOULD set the {} environment variable".format(setting)
        raise ImproperlyConfigured(error_msg)
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.8/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'ukm)tbc+e%#gew3^%wxyk%@@e9&g%3(@zq&crilwlbvh@6n*l$'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
# Favicon
'favicon',
# Disqus
'disqus',
# ckeditor
'ckeditor',
'ckeditor_uploader',
# Bootstrap
'bootstrap3',
'bootstrapform',
'bootstrap_pagination',
'django_social_share',
# Fontawesome
'fontawesome',
# home
'home',
'social_django',
#'social.apps.django_app.default',
'django_beautifulseodang',
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware',
'social_django.middleware.SocialAuthExceptionMiddleware',
)
ROOT_URLCONF = 'django_beautifulseodang.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
os.path.join(BASE_DIR, 'templates'),
#os.path.join(BASE_DIR, 'templates', 'allauth'),
os.path.join(BASE_DIR, 'templates', 'django_social_share'),
],
# 'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.i18n',
'django.template.context_processors.media',
'django.template.context_processors.static',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
'social_django.context_processors.backends',
'social_django.context_processors.login_redirect',
#'social.apps.django_app.context_processors.backends',
#'social.apps.django_app.context_processors.login_redirect',
],
'loaders': [
                # These loaders only take effect when APP_DIRS above is commented out
'django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader',
]
},
},
]
# Django all auth settings
AUTHENTICATION_BACKENDS = (
#'social_core.backends.github.GithubOAuth2', # Github for python-social-auth
'social_core.backends.twitter.TwitterOAuth', # Twitter for python-social-auth
'social_core.backends.google.GoogleOAuth2', # Google for python-social-auth
'social_core.backends.facebook.FacebookOAuth2', # Facebook for python-social-auth
'django.contrib.auth.backends.ModelBackend',
)
WSGI_APPLICATION = 'django_beautifulseodang.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.8/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
TEMPLATE_DIR = (
os.path.join(BASE_DIR, 'templates'),
)
# Internationalization
# https://docs.djangoproject.com/en/1.8/topics/i18n/
LANGUAGE_CODE = 'ko-kr'  # Korean by default
TIME_ZONE = 'Asia/Seoul'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.8/howto/static-files/
STATIC_URL = '/static/'
STATICFILES_DIRS = (
os.path.join(BASE_DIR, 'static'),
)
STATIC_ROOT = os.path.join(BASE_DIR, 'statics')
SITE_ID = 1
AUTH_PASSWORD_VALIDATORS = [
{
'NAME':
'django.contrib.auth.password_validation.MinimumLengthValidator',
'OPTIONS': {
'min_length': 9,
}
}
]
SOCIAL_AUTH_PIPELINE = (
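    # NOTE: these dotted paths come from the legacy python-social-auth package; if only
    # social-auth-core / social-auth-app-django are installed, the equivalent steps would
    # normally live under 'social_core.pipeline.*'.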
'social.pipeline.social_auth.social_details',
'social.pipeline.social_auth.social_uid',
'social.pipeline.social_auth.auth_allowed',
'social.pipeline.social_auth.social_user',
'social.pipeline.user.get_username',
'social.pipeline.user.create_user',
    #'accounts.social.create_user', # overrides the default create_user step
    #'accounts.social.update_avatar', # additional custom step
'social.pipeline.social_auth.associate_user',
'social.pipeline.social_auth.load_extra_data',
'social.pipeline.user.user_details'
)
SOCIAL_AUTH_LOGIN_REDIRECT_URL = '/'
SOCIAL_AUTH_URL_NAMESPACE = 'social'
LOGIN_URL = 'login'
LOGOUT_URL = 'logout'
LOGIN_REDIRECT_URL = '/'
#ACCOUNT_FORMS = {
# 'login': 'home.forms.MyLoginForm',
# 'signup': 'home.forms.MySignupForm'
#}
DEV_ENVS = os.path.join(BASE_DIR, "envs_dev.json")
DEPLOY_ENVS = os.path.join(BASE_DIR, "envs.json")
if os.path.exists(DEV_ENVS): # Develop Env
env_file = open(DEV_ENVS)
elif os.path.exists(DEPLOY_ENVS): # Deploy Env
env_file = open(DEPLOY_ENVS)
else:
env_file = None
if env_file is None: # System environ
try:
FACEBOOK_KEY = os.environ['FACEBOOK_KEY']
FACEBOOK_SECRET = os.environ['FACEBOOK_SECRET']
GOOGLE_KEY = os.environ['GOOGLE_KEY']
GOOGLE_SECRET = os.environ['GOOGLE_SECRET']
except KeyError as error_msg:
raise ImproperlyConfigured(error_msg)
else: # JSON env
envs = json.loads(env_file.read())
FACEBOOK_KEY = get_env('FACEBOOK_KEY', envs)
FACEBOOK_SECRET = get_env('FACEBOOK_SECRET', envs)
GOOGLE_KEY = get_env('GOOGLE_KEY', envs)
GOOGLE_SECRET = get_env('GOOGLE_SECRET', envs)
# SocialLogin: Facebook
SOCIAL_AUTH_FACEBOOK_KEY = FACEBOOK_KEY
SOCIAL_AUTH_FACEBOOK_SECRET = FACEBOOK_SECRET
SOCIAL_AUTH_FACEBOOK_SCOPE = ['email']
SOCIAL_AUTH_FACEBOOK_PROFILE_EXTRA_PARAMS = {
'fields': 'id, name, email'
}
# SocialLogin: Google
SOCIAL_AUTH_GOOGLE_OAUTH2_KEY = GOOGLE_KEY
SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET = GOOGLE_SECRET
SOCIAL_AUTH_GOOGLE_OAUTH2_SCOPE = ['email']
SOCIAL_AUTH_TWITTER_KEY = 'EUQaQkvpr4R22UTNofeqIfqsV'
SOCIAL_AUTH_TWITTER_SECRET = 'QLjJGjCGMxkIPvGaMymAcu7zZ2GcjMxrbHqt019v5FpIs3WTB1'
SESSION_SERIALIZER = 'django.contrib.sessions.serializers.PickleSerializer'
# favicon
FAVICON_PATH = STATIC_URL + 'img/favicon.png'
# ckeditor
MEDIA_URL = '/media/'
CKEDITOR_JQUERY_URL = '//ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js'
CKEDITOR_UPLOAD_PATH = "uploads/"
CKEDITOR_CONFIGS = {
'default': {
'toolbar': None,
}
}
# Disqus
DISQUS_API_KEY = 'zcJshWHxmOREPGjOrCq6r0rviSIIELz2iHWEdwDrpYSpko5wZDVBt60c7kYsvjlP'
DISQUS_WEBSITE_SHORTNAME = 'http-yeongseon-pythonanywhere-com'
#try:
# from .allauth_settings import *
#except ImportError:
# print("ImportError")
# pass
try:
from .bootstrap3_settings import *
except ImportError:
print("ImportError")
pass
The 4D Systems BoosterPack Adaptor suffers from the same reversed /RESET signal as the Arduino Adaptor. The /RESET signal is hard-wired to pin 11: setting pin 11 HIGH drives /RESET LOW and thus resets the screen, and pin 11 must be returned LOW before the screen can be used.
The RX / TX lines of the BoosterPack Adaptor are hard-wired to pins 3 / 4, which limits compatibility with the LaunchPads.
This is the case for the TM4C LaunchPad, which uses pins 3 / 4 for standard Serial. By contrast, the MSP432 LaunchPad is compatible: its default Serial is routed to USB, while Serial1 goes to pins 3 / 4 and is available to drive the screen.
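A minimal Energia-style sketch of the reset workaround described above, targeting an MSP432 LaunchPad. The pin number and the Serial1 wiring come from the text; the hold and boot delays and the 9600 baud rate are assumptions that may need adjusting for a particular display.

// Pulse the reversed /RESET line (pin 11) before talking to the screen.
#define SCREEN_RESET_PIN 11

void setup() {
  pinMode(SCREEN_RESET_PIN, OUTPUT);

  digitalWrite(SCREEN_RESET_PIN, HIGH);  // HIGH here drives /RESET low, so the screen resets
  delay(10);                             // hold the reset briefly (assumed value)
  digitalWrite(SCREEN_RESET_PIN, LOW);   // release /RESET so the screen can run
  delay(3000);                           // give the display time to boot (assumed value)

  Serial1.begin(9600);                   // Serial1 sits on pins 3 / 4 of the MSP432; baud rate assumed
}

void loop() {
  // Drive the screen over Serial1 here.
}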
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from uuid import uuid4
from arche.interfaces import IFlashMessages
from pyramid.httpexceptions import HTTPFound
from arche_pas import _
from arche_pas.models import register_case
def callback_case_1(provider, user, data):
provider.store(user, data)
fm = IFlashMessages(provider.request)
msg = _("data_tied_at_login",
default="Since you already had an account with the same email address validated, "
"you've been logged in as that user. Your accounts have also been linked.")
fm.add(msg, type="success", auto_destruct=False)
# Will return a HTTP 302
return provider.login(user)
def callback_case_2(provider, user, data):
user.email_validated = True
provider.store(user, data)
fm = IFlashMessages(provider.request)
msg = _("accounts_linked_verified_since_logged_in",
default="You've linked your external login to this account.")
fm.add(msg, type="success", auto_destruct=False)
# Will return a HTTP 302
return provider.login(user)
def callback_must_be_logged_in(provider, user, data):
email = provider.get_email(data, validated=False)
msg = _("user_email_present",
default="There's already a user registered here with your email address: '${email}' "
"If this is your account, please login here first to "
"connect the two accounts.",
mapping={'email': email})
fm = IFlashMessages(provider.request)
fm.add(msg, type='danger', auto_destruct=False, require_commit=False)
raise HTTPFound(location=provider.request.resource_url(provider.request.root, 'login'))
def callback_register(provider, user, data):
reg_id = str(uuid4())
provider.request.session[reg_id] = data
# Register this user
return reg_id
def callback_maybe_attach_account(provider, user, data):
""" Only for logged in users."""
reg_id = str(uuid4())
provider.request.session[reg_id] = data
raise HTTPFound(location=provider.request.route_url('pas_link', provider=provider.name, reg_id=reg_id))
def includeme(config):
"""
Different registration cases for email:
1) Validated on server, validated locally and exist
2) Validated on server, exists locally but not validated, user logged in
3) Validated on server, exists locally but not validated, user not logged in
4) Validated on server, doesn't exist locally
5) Validated on server, doesn't match locally but user logged in
- change email?
Serious security breach risk:
6) Not validated/trusted on server, validated locally, user logged in
- Serious risk of hack: cross site scripting or accidental attach of credentials
7) Not validated/trusted on server, validated locally, user not logged in
8) Not validated/trusted on server, exists locally but not validated, user logged in
9) Not validated/trusted on server, local user not matched, user logged in
10) Not validated/trusted on server, exists locally but not validated, user not logged in
11) Not validated/trusted on server, doesn't exist locally, not logged in
12) No email from provider, user logged in
13) No email from provider, user not logged in
"""
register_case(
config.registry,
'case1',
title = "Validated on server, validated locally and user exists",
require_authenticated = None,
email_validated_provider = True,
email_validated_locally = True,
user_exist_locally = True,
provider_validation_trusted = True,
callback=callback_case_1,
)
register_case(
config.registry,
'case2',
title = "Validated on server, exists locally but not validated, user logged in",
require_authenticated = True,
email_validated_provider = True,
email_validated_locally = False,
user_exist_locally = True,
provider_validation_trusted = True,
callback = callback_case_2,
)
register_case(
config.registry,
'case3',
title = "Validated on server, exists locally but not validated, user not logged in",
require_authenticated = False,
email_validated_provider = True,
email_validated_locally = False,
user_exist_locally = True,
provider_validation_trusted = True,
callback = callback_must_be_logged_in,
)
register_case(
config.registry,
'case4',
title = "Validated on server, doesn't exist locally",
require_authenticated = False,
email_validated_provider = True,
#email_validated_locally = False,
user_exist_locally = False,
provider_validation_trusted = True,
callback = callback_register,
)
register_case(
config.registry,
'case5',
title = "Validated on server, doesn't match locally but is authenticated",
require_authenticated = True,
email_validated_provider = True,
#email_validated_locally = False,
user_exist_locally = False,
provider_validation_trusted = True,
callback = callback_maybe_attach_account,
)
register_case(
config.registry,
'case6',
title="Not validated/trusted on server, validated locally, user logged in",
require_authenticated = True,
#email_validated_provider = None,
email_validated_locally = True,
#user_exist_locally = True, Should be caught by email_validated_locally?
email_from_provider = None,
provider_validation_trusted = False,
callback = callback_maybe_attach_account,
)
register_case(
config.registry,
'case7',
title="Not validated/trusted on server, validated locally, user not logged in",
require_authenticated = False,
#email_validated_provider = None,
email_validated_locally = True,
#user_exist_locally = True, Should be caught by email_validated_locally?
email_from_provider = None,
provider_validation_trusted = False,
callback = callback_must_be_logged_in,
)
register_case(
config.registry,
'case8',
title="Not validated/trusted on server, exists locally but not validated, user logged in",
require_authenticated = True,
email_validated_provider = None,
email_validated_locally = False,
user_exist_locally = True,
email_from_provider = True,
provider_validation_trusted = False,
callback = callback_maybe_attach_account,
)
register_case(
config.registry,
'case9',
title="Not validated/trusted on server, local user not matched, user logged in",
require_authenticated = True,
email_validated_provider = None,
email_validated_locally = False,
user_exist_locally = False,
email_from_provider = True,
provider_validation_trusted = False,
callback = callback_maybe_attach_account, #FIXME: And change email?
)
register_case(
config.registry,
'case10',
title="Not validated/trusted on server, exists locally but not validated, user not logged in",
require_authenticated = False,
email_validated_provider = None,
email_validated_locally = False,
user_exist_locally = True,
email_from_provider = None,
provider_validation_trusted = False,
callback = callback_must_be_logged_in,
)
register_case(
config.registry,
'case11',
title="Not validated/trusted on server, doesn't exist locally",
require_authenticated = False,
email_validated_provider = None,
#email_validated_locally = False,
user_exist_locally = False,
email_from_provider = True,
provider_validation_trusted = False,
callback = callback_register,
)
register_case(
config.registry,
'case12',
title="No email from provider, user logged in",
require_authenticated = True,
email_validated_provider = None,
email_validated_locally = None,
# user_exist_locally = True,
email_from_provider = False,
provider_validation_trusted = None,
callback = callback_maybe_attach_account,
)
register_case(
config.registry,
'case13',
title="No email from provider, user not logged in",
require_authenticated = False,
#email_validated_provider = None,
#email_validated_locally = None,
#user_exist_locally = None,
email_from_provider = False,
#provider_validation_trusted = None,
callback=callback_register, #Allow registration here?
)
2017 NBA Draft Prospect Profiles: Is Markelle Fultz really worth the No. 1 pick?
Markelle Fultz is the best prospect in the 2017 NBA Draft, which is not exactly something that you would’ve seen coming had you known him as a sophomore in high school.
That was the year that Fultz failed to make the varsity team at DeMatha (Md.), one of the nation’s best high school basketball programs. From there, he developed not only into a point guard, but into one of the nation’s best high school players, eventually landing in the postseason all-star games and on the Team USA U-18 roster that competed in the FIBA Americas event.
Fultz committed to Lorenzo Romar early in the process and maintained that commitment, even as he watched a Washington team that failed to make the NCAA tournament lose Andrew Andrews to graduation and Marquese Chriss and Dejounte Murray to the NBA Draft. As a result, even though Fultz was putting up insane numbers, the Huskies couldn’t crack 10 wins with him at the helm, and it eventually cost Lorenzo Romar his job, despite the fact that Michael Porter Jr., the favorite for the No. 1 pick in the 2018 NBA Draft, had already signed to play for him.
How will NBA teams weigh that?
Fultz put up ridiculous numbers, but he did it on a team that was the laughing stock of the Pac-12 come February. Is that guy worth the pick?
STRENGTHS: Fultz is an unbelievably well-rounded offensive player. I’m not sure what there is that he can’t do on that end of the floor. He shot 41.3 percent from beyond the arc last year and better than 50 percent inside the arc. At 6-foot-4, he’s big enough — and physical enough — to take smaller defenders into the post and score in the paint or simply shoot over the top of them off the dribble, and he does so effectively. His 6-foot-10 wingspan, huge hands and explosion on the move mean that he can finish in traffic, whether it be with a dunk over a defender — his extension in the lane is reminiscent of Kawhi Leonard — or a finish around the shot-blocker; Fultz has terrific body control, which, combined with his length, allows him to finish contested layups at weird angles.
He’s more than just a scorer, however, as he averaged 5.9 assists last season with a higher assist rate (35.4 vs. 31.4) and lower turnover rate (15.4 vs. 18.9) than Lonzo Ball. That’s startling efficiency considering that he played such a major role on a team with so few options around him. Since 2012, only six guards have bettered his usage rate and offensive rating: Damian Lillard, C.J. McCollum, Nate Wolters, Erick Green, Kay Felder and Jawun Evans.
Fultz is excellent leading the break in transition but may be even better operating in ball-screen actions — according to Synergy, more than 30 percent of his possessions came in the pick and roll last season, and he averaged 1.011 points-per-possession, which was in the 93rd percentile nationally. He is patient, he’s ruthless if you switch a bigger defender onto him and he has terrific vision, whether it’s driving and drawing a help defender, finding the screener rolling to the rim or popping for a jumper or spotting an open shooter on the weak side of the floor.
Ideally, that’s the role that Fultz would play in the NBA, as a ball-dominant lead guard in the mold of a James Harden or Russell Westbrook or John Wall.
But Fultz is also big enough and long enough to share a back court with a smaller guard — Isaiah Thomas? — because he will be able to defend shooting guards. He’s also a good enough shooter that he would be able to play off the ball offensively in that same scenario, meaning that he not only has the ceiling to be a new-age franchise lead guard in the NBA, he has the potential to be a multi-positional defender.
In theory, he’s everything NBA teams are looking for.
WEAKNESSES: The biggest concern with Fultz is on the defensive end of the floor. While he has the tools to be a plus-defender and has shown the ability to be a playmaker on that end — he averaged 1.6 steals and 1.2 blocks, many of which were of the chasedown variety — it was his half court defense that was a concern.
In a word, he was far too lackadaisical on that end of the floor. Whether it was being late on a rotation, getting beat on a close out because his feet were wrong, getting hung up on a screen, switching when he shouldn’t because he didn’t want to chase a player around a screen, failing to sit down in a defensive stance, etc., it’s not difficult to watch tape and find examples of the mistakes that Fultz made. How much of that was playing on a bad team for a coach that didn’t hold him accountable defensively, and how much of that is who Fultz is as a player?
To be frank, my gut says it was more of the former than the latter, but there also is a concern that Fultz’ approach to the game is too casual. He’s the kind of player that needs to grow into a game as opposed to being a guy that takes games over from the jump, but that isn’t necessarily a bad thing for a guy who projects as a lead guard and a distributor.
The bigger issue with Fultz is that he lacks initial burst off the dribble and there are questions about whether or not he can turn the corner against NBA defenders. His game is awkward when you watch him, but that’s because he has this uncanny ability to get defenders off balance. Hesitation moves, hang-dribble pull-ups, splitting the pick-and-roll, euro-steps in traffic. Some might call it crafty or slippery, but the bottom-line is this: Fultz is able to get by defenders because he has them leaning the wrong direction, and once he gets a step on you, his length — both his strides and his extension — make it impossible to catch up.
But he’s not a Russell Westbrook or a John Wall in the sense that he’ll be able to get by any defender simply due to his explosiveness, and that is where the questions about his jumper come into play. If Fultz is going to consistently be able to get to the rim, that jumper is going to have to be a threat, because Fultz’s arsenal won’t be as effective if defenders can play off of him.
On the season, his shooting numbers were impressive, but those percentages took a dip against better competition and on possessions where he was guarded (1.020 PPP, 57th percentile) vs. unguarded (1.636 PPP, 94th percentile), although that may be a result of being on a team that had no other option for offense.
Put another way, Fultz is a tough-shot maker, and there is reason to wonder if he’ll be able to make those tough shots against NBA defenders.
NBA COMPARISON: There really isn’t a perfect comparison for what Fultz could end up being as an NBA player. James Harden is probably the most apt considering that they are roughly the same size with the same physical dimensions, they both are ball-dominant scorers that can see the floor, they both likely needed a smaller guard in the back court with them because, despite their physical tools, they both lack that mean streak defensively.
But comparing any rookie to a guy that could end up being the NBA MVP after a season where he averaged 29.1 points, 11.2 assists and 8.1 boards is probably unfair. Perhaps D'Angelo Russell is more fitting, at least in the sense that it limits some of the expectations.
Whatever the case may be, if Fultz reaches his ceiling, he’ll be a franchise lead guard that has an entire offense built around him. If he decides that he wants to play on the defensive end of the floor as well, he could one day be a top five player in the league.
OUTLOOK: Fultz has the potential to be the face of a franchise at the lead guard spot. His skill-set — the scoring, the ability to operate in pick-and-rolls, the efficiency — makes it easy to picture him one day playing a role similar to that of Harden or Westbrook or Wall. At the same time, I find it hard to envision a world where Fultz doesn’t one day end up averaging 20 points and six assists. It’s hard not to love a prospect whose floor is a bigger, more athletic D’Angelo Russell.
When a player has the least risk and the highest ceiling of anyone in a draft class, it’s no wonder they end up being the consensus pick to go No. 1.
from os import listdir, path
from random import shuffle
SEPARATOR = '_'
class Attrib:
def __init__(self, number, id, path):
self.number, self.id, self.path = number, id, path
def __str__(self):
return 'Number : ' + str(self.number) + ', Id : ' + str(self.id) + ', Path : ' + self.path
def __unicode__(self):
return self.__str__()
def __repr__(self):
return self.__str__()
def __eq__(self, other):
return self.path == other.path
def listdir_nohidden(_path):
output = []
if path.isdir(_path):
for f in listdir(_path):
if not f.startswith('.'):
output.append(f)
return output
def analyze_image(filepath):
filename = filepath[1+filepath.rfind('/'):]
words = filename.split(SEPARATOR)
words[-1] = words[-1][0:words[-1].find('.')]#Remove extension
return Attrib(words[0], words[1], filepath)
def load_image(dirs):
output = []
for d in dirs:
i = 0
for f in listdir_nohidden(d):
for ff in listdir_nohidden(d+f):
output.append(analyze_image(d + f + '/' + ff))
i += 1
print(d, ' contains ', i, ' items')
shuffle(output)
return output
def query(dirs, n, different_than=None):
output = []
i = 0
for f in load_image(dirs):
if i >= n:
break
if different_than is None or (different_than is not None and f not in different_than):
output.append(f)
i += 1
return output
def save_file(path, list):
with open(path, 'w+') as f:
for l in list:
f.write(l.path + '\n')
# The data is from the Machine Learning book
filepath = '/Users/Diego/Github/Digit-Dataset/'
d = [filepath]
nb_training = 270*0.8
nb_validation = 270*0.2
training = query(d, nb_training, different_than=None)
validation = query(d, nb_validation, different_than=training)
print "\nTraining ", len(training), " items\n"
for t in training:
print(t)
print "\nValidation ", len(validation), " items\n"
for v in validation:
print(v)
save_file('/Users/Diego/Desktop/training.list', training)
save_file('/Users/Diego/Desktop/validation.list', validation)
The lowest price of Samsung Galaxy On Max (Black, 32 GB)(4 GB RAM) is Rs. 16,900. You can get the best deal for Samsung Galaxy On Max (Black, 32 GB)(4 GB RAM) on Flipkart, and you can also get the prices of other stores in India. All prices are in INR (Indian Rupees) and are normally valid with EMI & COD for all major cities such as Kolkata, Lucknow, Chennai, Mumbai, Gurgaon, Bangalore, Pune, New Delhi, Hyderabad, Ahmedabad, Jaipur, Chandigarh, Patna and others. Kindly report any errors found in the specifications of Samsung Galaxy On Max (Black, 32 GB)(4 GB RAM). All prices of Samsung Galaxy On Max (Black, 32 GB)(4 GB RAM) were last updated today, i.e., April 23, 2019.
The price of Samsung Galaxy On Max (Black, 32 GB)(4 GB RAM) in the above table is in Indian Rupees.
The lowest and best price of Samsung Galaxy On Max (Black, 32 GB)(4 GB RAM) is Rs. 16,900 on Flipkart.
This product, Samsung Galaxy On Max (Black, 32 GB)(4 GB RAM), is available on Flipkart.
The prices of Samsung Galaxy On Max (Black, 32 GB)(4 GB RAM) change often; to stay updated, please check your search regularly and get all the latest prices of Samsung Galaxy On Max (Black, 32 GB)(4 GB RAM).
This price of Samsung Galaxy On Max (Black, 32 GB)(4 GB RAM) is valid for all major cities of India including Kolkata, Chennai, Lucknow, Gurgaon, Mumbai, Bangalore, New Delhi, Pune, Hyderabad, Ahmedabad, Jaipur, Chandigarh, Patna and others.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This module implements the Presto protocol to submit SQL statements, track
their state and retrieve their result as described in
https://github.com/prestodb/presto/wiki/HTTP-Protocol
and Presto source code.
The outline of a query is:
- Send HTTP POST to the coordinator
- Retrieve HTTP response with ``nextUri``
- Get status of the query execution by sending a HTTP GET to the coordinator
Presto queries are managed by the ``PrestoQuery`` class. HTTP requests are
managed by the ``PrestoRequest`` class. the status of a query is represented
by ``PrestoStatus`` and the result by ``PrestoResult``.
The main interface is :class:`PrestoQuery`: ::
>> request = PrestoRequest(host='coordinator', port=8080, user='test')
>> query = PrestoQuery(request, sql)
>> rows = list(query.execute())
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
from typing import Any, Dict, List, Optional, Text, Tuple, Union # NOQA for mypy types
import requests
from prestodb import constants
from prestodb import exceptions
import prestodb.logging
from prestodb.transaction import NO_TRANSACTION
import prestodb.redirect
__all__ = ['PrestoQuery', 'PrestoRequest']
logger = prestodb.logging.get_logger(__name__)
MAX_ATTEMPTS = constants.DEFAULT_MAX_ATTEMPTS
SOCKS_PROXY = os.environ.get('SOCKS_PROXY')
if SOCKS_PROXY:
PROXIES = {
'http': 'socks5://' + SOCKS_PROXY,
'https': 'socks5://' + SOCKS_PROXY,
}
else:
PROXIES = None
class ClientSession(object):
def __init__(
self,
catalog,
schema,
source,
user,
properties=None,
headers=None,
transaction_id=None,
):
self.catalog = catalog
self.schema = schema
self.source = source
self.user = user
if properties is None:
properties = {}
self._properties = properties
self._headers = headers or {}
self.transaction_id = transaction_id
@property
def properties(self):
return self._properties
@property
def headers(self):
return self._headers
def get_header_values(headers, header):
return [val.strip() for val in headers[header].split(',')]
def get_session_property_values(headers, header):
kvs = get_header_values(headers, header)
return [
(k.strip(), v.strip()) for k, v
in (kv.split('=', 1) for kv in kvs)
]
class PrestoStatus(object):
def __init__(self, id, stats, info_uri, next_uri, rows, columns=None):
self.id = id
self.stats = stats
self.info_uri = info_uri
self.next_uri = next_uri
self.rows = rows
self.columns = columns
def __repr__(self):
return (
'PrestoStatus('
'id={}, stats={{...}}, info_uri={}, next_uri={}, rows=<count={}>'
')'.format(
self.id,
self.info_uri,
self.next_uri,
len(self.rows),
)
)
class PrestoRequest(object):
"""
Manage the HTTP requests of a Presto query.
:param host: name of the coordinator
:param port: TCP port to connect to the coordinator
:param user: associated with the query. It is useful for access control
and query scheduling.
:param source: associated with the query. It is useful for access
control and query scheduling.
:param catalog: to query. The *catalog* is associated with a Presto
connector. This variable sets the default catalog used
by SQL statements. For example, if *catalog* is set
to ``some_catalog``, the SQL statement
``SELECT * FROM some_schema.some_table`` will actually
query the table
``some_catalog.some_schema.some_table``.
:param schema: to query. The *schema* is a logical abstraction to group
table. This variable sets the default schema used by
                   SQL statements. For example, if *schema* is set to
``some_schema``, the SQL statement
``SELECT * FROM some_table`` will actually query the
table ``some_catalog.some_schema.some_table``.
:param session_properties: set specific Presto behavior for the current
session. Please refer to the output of
``SHOW SESSION`` to check the available
properties.
:param http_headers: HTTP headers to post/get in the HTTP requests
:param http_scheme: "http" or "https"
:param auth: class that manages user authentication. ``None`` means no
authentication.
    :param max_attempts: maximum number of attempts when sending HTTP requests. An
                         attempt is an HTTP request. 5 attempts means 4 retries.
    :param request_timeout: how long (in seconds) to wait for the server to send
                            data before giving up, as a float or a
                            ``(connect timeout, read timeout)`` tuple.
The client initiates a query by sending an HTTP POST to the
coordinator. It then gets a response back from the coordinator with:
- An URI to query to get the status for the query and the remaining
data
- An URI to get more information about the execution of the query
- Statistics about the current query execution
Please refer to :class:`PrestoStatus` to access the status returned by
:meth:`PrestoRequest.process`.
When the client makes an HTTP request, it may encounter the following
errors:
- Connection or read timeout:
- There is a network partition and TCP segments are
either dropped or delayed.
- The coordinator stalled because of an OS level stall (page allocation
stall, long time to page in pages, etc...), a JVM stall (full GC), or
an application level stall (thread starving, lock contention)
- Connection refused: Configuration or runtime issue on the coordinator
    - Connection closed
    As most of these errors are transient, the caller should configure retries
    with respect to when they want to notify the application that uses the
    client.
"""
http = requests
HTTP_EXCEPTIONS = (
http.ConnectionError, # type: ignore
http.Timeout, # type: ignore
)
def __init__(
self,
host, # type: Text
port, # type: int
user, # type: Text
source=None, # type: Text
catalog=None, # type: Text
schema=None, # type: Text
session_properties=None, # type: Optional[Dict[Text, Any]]
http_session=None, # type: Any
http_headers=None, # type: Optional[Dict[Text, Text]]
transaction_id=NO_TRANSACTION, # type: Optional[Text]
http_scheme=constants.HTTP, # type: Text
auth=constants.DEFAULT_AUTH, # type: Optional[Any]
redirect_handler=prestodb.redirect.GatewayRedirectHandler(),
max_attempts=MAX_ATTEMPTS, # type: int
request_timeout=constants.DEFAULT_REQUEST_TIMEOUT, # type: Union[float, Tuple[float, float]]
handle_retry=exceptions.RetryWithExponentialBackoff(),
):
# type: (...) -> None
self._client_session = ClientSession(
catalog,
schema,
source,
user,
session_properties,
http_headers,
transaction_id,
)
self._host = host
self._port = port
self._next_uri = None # type: Optional[Text]
if http_session is not None:
self._http_session = http_session
else:
# mypy cannot follow module import
self._http_session = self.http.Session() # type: ignore
self._http_session.headers.update(self.http_headers)
self._exceptions = self.HTTP_EXCEPTIONS
self._auth = auth
if self._auth:
if http_scheme == constants.HTTP:
raise ValueError('cannot use authentication with HTTP')
self._auth.set_http_session(self._http_session)
self._exceptions += self._auth.get_exceptions()
self._redirect_handler = redirect_handler
self._request_timeout = request_timeout
self._handle_retry = handle_retry
self.max_attempts = max_attempts
self._http_scheme = http_scheme
@property
def transaction_id(self):
return self._client_session.transaction_id
@transaction_id.setter
def transaction_id(self, value):
self._client_session.transaction_id = value
@property
def http_headers(self):
# type: () -> Dict[Text, Text]
headers = {}
headers[constants.HEADER_CATALOG] = self._client_session.catalog
headers[constants.HEADER_SCHEMA] = self._client_session.schema
headers[constants.HEADER_SOURCE] = self._client_session.source
headers[constants.HEADER_USER] = self._client_session.user
headers[constants.HEADER_SESSION] = ','.join(
# ``name`` must not contain ``=``
'{}={}'.format(name, value)
for name, value in self._client_session.properties.items()
)
# merge custom http headers
for key in self._client_session.headers:
if key in headers.keys():
raise ValueError('cannot override reserved HTTP header {}'.format(key))
headers.update(self._client_session.headers)
transaction_id = self._client_session.transaction_id
headers[constants.HEADER_TRANSACTION] = transaction_id
return headers
@property
def max_attempts(self):
# type: () -> int
return self._max_attempts
@max_attempts.setter
def max_attempts(self, value):
# type: (int) -> None
self._max_attempts = value
if value == 1: # No retry
self._get = self._http_session.get
self._post = self._http_session.post
self._delete = self._http_session.delete
return
with_retry = exceptions.retry_with(
self._handle_retry,
exceptions=self._exceptions,
conditions=(
# need retry when there is no exception but the status code is 503
lambda response: getattr(response, 'status_code', None) == 503,
),
max_attempts=self._max_attempts,
)
self._get = with_retry(self._http_session.get)
self._post = with_retry(self._http_session.post)
self._delete = with_retry(self._http_session.delete)
def get_url(self, path):
# type: (Text) -> Text
return "{protocol}://{host}:{port}{path}".format(
protocol=self._http_scheme,
host=self._host,
port=self._port,
path=path
)
@property
def statement_url(self):
# type: () -> Text
return self.get_url(constants.URL_STATEMENT_PATH)
@property
def next_uri(self):
# type: () -> Text
return self._next_uri
def post(self, sql):
data = sql.encode('utf-8')
http_headers = self.http_headers
http_response = self._post(
self.statement_url,
data=data,
headers=http_headers,
timeout=self._request_timeout,
allow_redirects=self._redirect_handler is None,
proxies=PROXIES,
)
if self._redirect_handler is not None:
while http_response is not None and http_response.is_redirect:
location = http_response.headers['Location']
url = self._redirect_handler.handle(location)
logger.info('redirect {} from {} to {}'.format(
http_response.status_code,
location,
url,
))
http_response = self._post(
url,
data=data,
headers=http_headers,
timeout=self._request_timeout,
allow_redirects=False,
proxies=PROXIES,
)
return http_response
def get(self, url):
return self._get(
url,
headers=self.http_headers,
timeout=self._request_timeout,
proxies=PROXIES,
)
def delete(self, url):
return self._delete(
url,
timeout=self._request_timeout,
proxies=PROXIES,
)
def _process_error(self, error, query_id):
error_type = error['errorType']
if error_type == 'EXTERNAL':
raise exceptions.PrestoExternalError(error, query_id)
elif error_type == 'USER_ERROR':
return exceptions.PrestoUserError(error, query_id)
return exceptions.PrestoQueryError(error, query_id)
def raise_response_error(self, http_response):
if http_response.status_code == 503:
raise exceptions.Http503Error('error 503: service unavailable')
raise exceptions.HttpError(
'error {}{}'.format(
http_response.status_code,
': {}'.format(http_response.content) if http_response.content else '',
)
)
def process(self, http_response):
# type: (requests.Response) -> PrestoStatus
if not http_response.ok:
self.raise_response_error(http_response)
http_response.encoding = 'utf-8'
response = http_response.json()
logger.debug('HTTP {}: {}'.format(http_response.status_code, response))
if 'error' in response:
raise self._process_error(response['error'], response.get('id'))
if constants.HEADER_CLEAR_SESSION in http_response.headers:
for prop in get_header_values(
http_response.headers,
constants.HEADER_CLEAR_SESSION,
):
self._client_session.properties.pop(prop, None)
if constants.HEADER_SET_SESSION in http_response.headers:
for key, value in get_session_property_values(
http_response.headers,
constants.HEADER_SET_SESSION,
):
self._client_session.properties[key] = value
self._next_uri = response.get('nextUri')
return PrestoStatus(
id=response['id'],
stats=response['stats'],
info_uri=response['infoUri'],
next_uri=self._next_uri,
rows=response.get('data', []),
columns=response.get('columns'),
)
class PrestoResult(object):
"""
Represent the result of a Presto query as an iterator on rows.
This class implements the iterator protocol as a generator type
https://docs.python.org/3/library/stdtypes.html#generator-types
"""
def __init__(self, query, rows=None):
self._query = query
self._rows = rows or []
self._rownumber = 0
@property
def rownumber(self):
# type: () -> int
return self._rownumber
def __iter__(self):
# Initial fetch from the first POST request
for row in self._rows:
self._rownumber += 1
yield row
self._rows = None
# Subsequent fetches from GET requests until next_uri is empty.
while not self._query.is_finished():
rows = self._query.fetch()
for row in rows:
self._rownumber += 1
logger.debug('row {}'.format(row))
yield row
class PrestoQuery(object):
"""Represent the execution of a SQL statement by Presto."""
def __init__(
self,
request, # type: PrestoRequest
sql, # type: Text
):
# type: (...) -> None
self.query_id = None # type: Optional[Text]
self._stats = {} # type: Dict[Any, Any]
self._columns = None # type: Optional[List[Text]]
self._finished = False
self._cancelled = False
self._request = request
self._sql = sql
self._result = PrestoResult(self)
@property
def columns(self):
return self._columns
@property
def stats(self):
return self._stats
@property
def result(self):
return self._result
def execute(self):
# type: () -> PrestoResult
"""Initiate a Presto query by sending the SQL statement
This is the first HTTP request sent to the coordinator.
It sets the query_id and returns a Result object used to
track the rows returned by the query. To fetch all rows,
call fetch() until is_finished is true.
"""
if self._cancelled:
raise exceptions.PrestoUserError(
"Query has been cancelled",
self.query_id,
)
response = self._request.post(self._sql)
status = self._request.process(response)
self.query_id = status.id
self._stats.update({u'queryId': self.query_id})
self._stats.update(status.stats)
if status.next_uri is None:
self._finished = True
self._result = PrestoResult(self, status.rows)
return self._result
def fetch(self):
# type: () -> List[List[Any]]
"""Continue fetching data for the current query_id"""
response = self._request.get(self._request.next_uri)
status = self._request.process(response)
if status.columns:
self._columns = status.columns
self._stats.update(status.stats)
logger.debug(status)
if status.next_uri is None:
self._finished = True
return status.rows
def cancel(self):
# type: () -> None
"""Cancel the current query"""
if self.is_finished():
return
self._cancelled = True
if self._request.next_uri is None:
return
response = self._request.delete(self._request.next_uri)
if response.status_code == requests.codes.no_content:
return
self._request.raise_response_error(response)
def is_finished(self):
# type: () -> bool
return self._finished
Packed with features and truly a pleasure to drive! All of the premium features expected of a Mazda are offered, including: heated door mirrors, a power convertible roof, and much more. Mazda made sure to keep road-handling and sportiness at the top of its priority list. It features an automatic transmission, rear-wheel drive, and a 2-liter 4-cylinder engine.
#!/usr/bin/python
import sys
import json_model
import pyncs
def Run(argv):
    if len(argv) < 2:
        print "Usage: %s <model_file>" % argv[0]
        return
    model = json_model.JSONModel(argv[1])
if not model.valid:
print "Failed to load model"
return
model_specification = model.model_specification
simulation_parameters = pyncs.SimulationParameters()
simulation_parameters.thisown = False;
simulation = pyncs.Simulation(model_specification,
simulation_parameters)
if not simulation.init(pyncs.string_list(argv)):
print "Failed to initialize simulator."
return
print "Injecting pre-specified inputs."
for name, group in model.input_groups.items():
simulation.addInput(group)
print "Injection complete."
print "Adding reports."
sinks = {}
for name, report in model.reports.items():
source = simulation.addReport(report)
if not source:
print "Failed to add report %s" % name
return
#sinks[name] = pyncs.NullSink(source)
#sinks[name] = pyncs.AsciiStreamSink(source)
#sinks[name] = pyncs.AsciiFileSink(source, "/dev/fd/0")
sinks[name] = pyncs.AsciiFileSink(source, "/dev/fd/0")
print "Starting simulation."
for i in range(0,100):
simulation.step()
del simulation
if __name__ == "__main__":
Run(sys.argv)
Another mixed media art journal painting, Girl with a Balloon, is a tiny sketch created in pastel and paint on torn book pages. She is running free, balloon in hand, towards her dreams.
Find tons of inspiration for your own art journal with Art Journal Courage by Dina Wakley.
Full of techniques, prompts and encouragement to use your own handwriting, Art Journal Courage is a highly recommended must-have for the art journal newbie and enthusiast.
from __future__ import absolute_import, division, print_function
from collections import defaultdict
from operator import getitem
from datetime import datetime
from time import time
from ..compatibility import MutableMapping
from ..core import istask, ishashable
from ..utils_test import add # noqa: F401
class Store(MutableMapping):
""" Store - A storage of data and computation
Examples
--------
Store data like a dictionary
>>> import dask.store as ds
>>> s = ds.Store()
>>> s['x'] = 10
>>> s['x']
10
Also store computation on that data
>>> s['y'] = (add, 'x', 5)
Accessing these keys results in computations. Results may be cached for
reuse.
>>> s['y']
15
Design
------
A Store maintains the following state
dsk: dict
A dask to define all computation
cache: dict-like
Stores both ground data and cached intermediate values
data: set
The keys in the cache that can not be removed for correctness.
compute_time: dict:: {key: float}
dict mapping the time it took to compute each key
access_times: dict:: {key: [datetimes]}
The times at which a key was accessed
"""
def __init__(self, cache=None):
self.dsk = dict()
if cache is None:
cache = dict()
self.cache = cache
self.data = set()
self.compute_time = dict()
self.access_times = defaultdict(list)
def __setitem__(self, key, value):
if key in self.dsk:
if (self.dsk[key] == value or
self.dsk[key] == (getitem, self.cache, key) and
self.cache[key] == value):
return
else:
raise KeyError("Can not overwrite data")
if istask(value):
self.dsk[key] = value
else:
self.cache[key] = value
self.dsk[key] = (getitem, self.cache, key)
self.data.add(key)
def __getitem__(self, key):
if isinstance(key, list):
return (self[item] for item in key)
if not ishashable(key):
return key
if key not in self.dsk:
return key
self.access_times[key].append(datetime.now())
if key in self.cache:
return self.cache[key]
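        # Not cached yet: recursively resolve the task's arguments, run the task,
        # then cache the result and record how long it took.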
task = self.dsk[key]
func, args = task[0], task[1:]
if func == getitem and args[0] is self.cache:
return self.cache[args[1]]
args = [self[arg] for arg in args]
start = time()
result = func(*args)
end = time()
self.cache[key] = result
self.compute_time[key] = end - start
return result
def __len__(self):
return len(self.dsk)
def __iter__(self):
return iter(self.dsk)
def __delitem__(self, key):
raise ValueError("Dask Store does not support deletion")
There are no real sprites in SDL. As it’s a cross-platform library and sprites are heavily hardware-dependent, there’s no easy way to work them in. You could try to take advantage of a platform’s hardware by intercepting attempts at creating SDL surfaces and seeing if its properties fit within a hardware sprite, but it’d be tricky to make something like that work with existing code. Everything is done with bobs instead.
Anyway, that’s academic, as - from what I’ve seen so far - the GP2x doesn’t have sprites either.
Getting sprites working in SDL is pretty easy. Load a bitmap, convert it to a surface that matches the current display, then blit it out. It even supports a transparent value, so empty regions of a bitmap won’t be rendered to the display. I’ve built a Sprite class that does all of that business that inherits from a simplistic SpriteBase class.
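For reference, here is a minimal sketch of that load-convert-blit flow using the SDL 1.2-era C API. It is not the project's actual Sprite class; the magenta colour key and the bitmap path are placeholder assumptions.

#include <SDL/SDL.h>

// Load a bitmap, convert it to the display's pixel format and mark magenta as transparent.
SDL_Surface* loadSprite(const char* path) {
    SDL_Surface* raw = SDL_LoadBMP(path);
    if (!raw) return NULL;

    SDL_Surface* sprite = SDL_DisplayFormat(raw);  // match the current display for fast blits
    SDL_FreeSurface(raw);
    if (!sprite) return NULL;

    // Pixels of this colour are skipped when blitting.
    SDL_SetColorKey(sprite, SDL_SRCCOLORKEY | SDL_RLEACCEL,
                    SDL_MapRGB(sprite->format, 255, 0, 255));
    return sprite;
}

// Blit the sprite to the screen at (x, y).
void drawSprite(SDL_Surface* screen, SDL_Surface* sprite, int x, int y) {
    SDL_Rect dest = { (Sint16)x, (Sint16)y, 0, 0 };
    SDL_BlitSurface(sprite, NULL, screen, &dest);
}

An animated sprite then simply changes which source rectangle gets blitted each frame.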
As some of the sprites are animated (SAM and UFO at the moment), I’ve stolen the Animation class from Woopsi and re-jigged it to work with SDL’s types and surfaces instead of u16 bitmaps. I’ll integrate it into an AnimSprite class, which will inherit from the SpriteBase class.
I spent some time getting SVN working on my Linux box before starting coding today. I’ve got the SVN VMWare appliance created by Young Technologies installed. Getting it working was more fiddly than I’d hoped - authorisation seems to be broken (or there’s something cryptic in the config files I’ve missed) and the WebSVN install doesn’t seem to work properly (or, again, there’s a config problem somewhere). In any case, I don’t need authorisation and don’t care about the web interface. It gives me a source control solution, which is all I want. It’s separate from the underlying hardware and OS install as it’s a VM, so I can easily move it to another machine and back it up as needed. The code for ChromaX now lives in SVN.