``tda-api``: A TD Ameritrade API Wrapper
========================================
.. image:: https://img.shields.io/discord/720378361880248621.svg?label=&logo=discord&logoColor=ffffff&color=7389D8&labelColor=6A7EC2
:target: https://discord.gg/BEr6y6Xqyv
.. image:: https://img.shields.io/endpoint.svg?url=https%3A%2F%2Fshieldsio-patreon.vercel.app%2Fapi%3Fusername%3Dalexgolec%26type%3Dpatrons&style=flat
:target: https://patreon.com/alexgolec
``schwab-api``
--------------
Background
==========
In 2020, Charles Schwab completed its purchase of TDAmeritrade. Prior to this
purchase, TDAmeritrade operated (in the author's personal opinion) the
highest-quality, most accessible, and most cost-effective trading API. It offers
developers access to their TDAmeritrade accounts, trading in equities, ETFs, and
options, plus a wide range of historical data.
Since this purchase, Schwab has begun to transition TDAmeritrade customers onto
its own service. In late 2022, Schwab announced that the TDAmeritrade REST API
will be included in this transition, which will happen over the course of 2023.
What is ``schwab-api``?
=======================
The author of this repo (Alex Golec) had previously authored ``tda-api``, an
unofficial Python wrapper around the previous TDAmeritrade API. This library is
currently the most popular method of accessing this API, with a community of
hundreds of active users on our Discord server.
While the details of the forthcoming Schwab API have not yet been announced,
this repository serves as a placeholder for a Python wrapper around it. It
currently has no functionality. Stay tuned for updates as they become available.
**Disclaimer:** *schwab-api is an unofficial API wrapper. It is in no way
endorsed by or affiliated with TD Ameritrade, Charles Schwab or any associated
organization. Make sure to read and understand the terms of service of the
underlying API before using this package. The authors accept no responsibility
for any damage that might stem from use of this package. See the LICENSE file
for more details.*
/schwab-py-0.0.0a0.tar.gz/schwab-py-0.0.0a0/README.rst
schwarzlog
=======================
Library to add some missing functionality in Python's `logging` module.
$ pip install schwarzlog
Caveat: Most functionality is currently not documented. I'll try to write some docs going forward, though.
Motivation / Background
--------------------------------
Logging is often helpful to find problems in deployed code.
However, Python's logging infrastructure is a bit annoying at times. For example, if a library starts logging data but the application/unit test did not configure the logging infrastructure, Python will emit warnings. If the library supports conditional logging (e.g. passing a flag that tells it whether to use logging, to avoid the "no logging handler installed" issue mentioned above), this might complicate the library code (due to "is logging enabled" checks).
Also, I find it a bit cumbersome to test Python's logging in libraries because one has to install global handlers (and clean them up when the test is done!).
This library should solve all these problems with a helper function:
- It can just return a new logger with a specified name.
- If logging should be disabled entirely it just returns a fake logger which will discard all messages. The application doesn't have to be aware of this and no global state will be changed.
- The caller can also pass a pre-configured logger (e.g. to test the emitted log messages easily or to use customized logging mechanisms).
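A minimal sketch of how this helper can be used (based on the `get_logger()` implementation in `log_proxy.py`, which is part of this package; the logger names and the `'request-42'` context below are just examples):
```python
import logging
from schwarz.log_utils import get_logger

# plain named logger, equivalent to logging.getLogger('foo.bar')
log = get_logger('foo.bar')

# disable logging entirely: returns a NullLogger which silently discards
# all messages without touching any global logging state
log = get_logger('foo.bar', log=False)

# pass a pre-configured logger (e.g. in a test) - it is used as-is
test_logger = logging.getLogger('test')
log = get_logger('foo.bar', log=test_logger)

# optionally set a level and prefix all messages with a context marker,
# e.g. "[request-42] something happened"
log = get_logger('foo.bar', context='request-42', level=logging.INFO)
log.info('something happened')
```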
Since its inception this library was extended with a few useful helper functions and specialized logging classes.
CallbackLogger
--------------------------------
A `Logger`-like class which can trigger an additional callback in addition to passing a log message through the logging infrastructure. I'm using this to ensure severe problems logged by lower-level libraries will be displayed in the UI. If you set `merge_arguments = True` the callback only gets the final message (as `str`), otherwise it'll get the `logging.LogRecord`.
**Usage:**
```python
import logging
from schwarz.log_utils import CallbackLogger
_l = logging.getLogger('foo')
logged_msgs = []
cb = logged_msgs.append
log = CallbackLogger(log=_l, callback=cb, callback_minlevel=logging.ERROR, merge_arguments=True)
log.info('info message')
log.error('error message')
logged_msgs == ['error message']
```
ForwardingLogger
--------------------------------
This logger forwards messages above a certain level (by default: all messages) to a configured parent logger. Optionally it can prepend the configured `forward_prefix` to all *forwarded* log messages. `forward_suffix` works like `forward_prefix` but appends some string.
This can be helpful if you need to log contextualized messages. For example you could log detailed messages related to a specific file in "imgfile.log" but you want more important messages (e.g. warnings, errors) in another log file used by your application. In that scenario you can quickly spot problems in your main log file while detailed data is available in separate log files.
Python's default logging module cannot handle this because:
- A `Logger`'s log level is only applied to messages emitted directly on that logger (not to propagated log messages), see this [blog post by Marius Gedminas](https://mg.pov.lt/blog/logging-levels.html).
- Adding a log prefix only for certain loggers can only be done by duplicating handler configuration. Python's handlers are quite basic, so if the duplicated handlers access a shared resource (e.g. a log file) Python will open it twice (which causes data loss if `mode='w'` is used).
**Usage:**
```python
import logging
from schwarz.log_utils import get_logger, ForwardingLogger
parent_logger = logging.getLogger('foo')
log = ForwardingLogger(
forward_to=parent_logger,
forward_prefix='[ABC] ',
forward_minlevel=logging.INFO
)
log.info('foo')
# parent_logger sees a log message like "[ABC] foo"
```
Support for writing tests
--------------------------------
The library also contains some helpers to ease writing logging-related tests.
```python
import logging
from schwarz.log_utils.testutils import *
# "lc" works a bit similar to a LogCapture instance
log, lc = build_collecting_logger()
log.info('foo')
log.debug('bar')
assert_did_log_message(lc, 'foo')
# this raises an AssertionError as "foo" was logged with INFO
assert_did_log_message(lc, 'foo', level=logging.DEBUG)
lr = assert_did_log_message(lc, 'foo', level=logging.INFO)
# you can also inspect the actual "LogRecord" instance "lr" if you need to
assert_no_log_messages(lc, min_level=logging.WARN)
```
Changes
--------------------------------
**0.6.2** (2022-05-25)
- `assert_did_log_message(…)` now returns the `LogRecord` instance which can
be used by the caller for more detailed checks.
- `ForwardingLogger` now also forwards `.exc_info` correctly so that the main
logger can also log exceptions.
/schwarzlog-0.6.2.tar.gz/schwarzlog-0.6.2/README.md
import logging
from logging.handlers import MemoryHandler
__all__ = [
'get_logger', 'l_', 'log_',
'CollectingHandler',
'NullLogger',
]
# This is added for backwards-compatibility with Python 2.6
class NullLogger(logging.Logger):
def _log(self, *args, **kwargs):
pass
def handle(self, record):
pass
class CollectingHandler(MemoryHandler):
"""
This handler collects log messages until the buffering capacity is
exhausted or a message equal/above a certain level was logged. After the
first (automatic) flush buffering is disabled (manual calls to .flush()
do not disable buffering).
Flushing only works if a target was set.
"""
def __init__(self, capacity=10000, flush_level=logging.ERROR, target=None):
super(CollectingHandler, self).__init__(capacity, flushLevel=flush_level, target=target)
def shouldFlush(self, record):
should_flush = super(CollectingHandler, self).shouldFlush(record)
if should_flush and self.capacity > 0:
# disable buffering after the first flush was necessary...
self.capacity = 0
return should_flush
def set_target(self, target, disable_buffering=False):
self.target = target
if disable_buffering:
self.capacity = 0
self.flush()
setTarget = set_target
class ContextAdapter(logging.LoggerAdapter):
def process(self, msg, kwargs):
if not self.extra:
return (msg, kwargs)
extra_data = tuple(self.extra.items())
assert len(extra_data) == 1
ctx_value = extra_data[0][1]
adapted_msg = '[%s] %s' % (ctx_value, msg)
return (adapted_msg, kwargs)
def get_logger(name, log=True, context=None, level=None):
if not log:
log = NullLogger('__log_proxy')
elif not isinstance(log, logging.Logger):
log = logging.getLogger(name)
if level is not None:
log.setLevel(level)
if context is None:
return log
adapter = ContextAdapter(log, {'context': context})
return adapter
def log_(name, get_logger_=None):
"""Return a Logger for the specified name. If get_logger is None, use
Python's default getLogger.
"""
get_func = get_logger_ if (get_logger_ is not None) else logging.getLogger
return get_func(name)
def l_(log, fallback=None):
"""Return a NullLogger if log is None.
This is useful if logging should only happen to optional loggers passed
from callers and you don't want clutter the code with "if log is not None"
conditions."""
if log is None:
return NullLogger('__log_proxy') if (fallback is None) else fallback
return log
/schwarzlog-0.6.2.tar.gz/schwarzlog-0.6.2/schwarz/log_utils/log_proxy.py
import logging
import sys
__all__ = ['contextfile_logger', 'ForwardingLogger']
class ForwardingLogger(logging.Logger):
"""
This logger forwards messages above a certain level (by default: all messages)
to a configured parent logger. Optionally it can prepend the configured
"forward_prefix" to all *forwarded* log messages.
"forward_suffix" works like "forward_prefix" but appends some string.
Python's default logging module can not handle this because
a) a logger's log level is only applied for messages emitted directly on
that logger (not for propagated log messages), see
https://mg.pov.lt/blog/logging-levels.html
b) adding a log prefix only for certain loggers can only by done by
duplicating handler configuration. Python's handlers are quite basic
so if the duplicated handlers access a shared resource (e.g. a log file)
Python will open it twice (which causes data loss if mode='w' is
used).
c) and last but not least we often need to configure the specific logging
handlers dynamically (e.g. log to a context-dependent file) which is
not doable via Python's fileConfig either - so we can go fully dynamic
here...
"""
def __init__(self, *args, **kwargs):
self._forward_to = kwargs.pop('forward_to')
self._forward_prefix = kwargs.pop('forward_prefix', None)
self._forward_suffix = kwargs.pop('forward_suffix', None)
self._forward_minlevel = kwargs.pop('forward_minlevel', logging.NOTSET)
if (not args) and ('name' not in kwargs):
name = self.__class__.__name__
args = (name, )
super(ForwardingLogger, self).__init__(*args, **kwargs)
def callHandlers(self, record):
nr_handlers = self._call_handlers(record)
if self._forward_to is None:
self._emit_last_resort_message(record, nr_handlers)
# "logging.NOTSET" (default) is defined as 0 so that works here just fine
if (record.levelno >= self._forward_minlevel) and (self._forward_to is not None):
msg = record.msg
if self._forward_prefix:
msg = self._forward_prefix + msg
if self._forward_suffix:
msg += self._forward_suffix
record_kwargs = {
'exc_info': record.exc_info,
}
if hasattr(record, 'stack_info'):
# Python 3
record_kwargs['stack_info'] = record.stack_info
self._forward_to.log(record.levelno, msg, *record.args, **record_kwargs)
def _call_handlers(self, record):
# ,--- mostly copied from logging.Logger.callHandlers -----------------
logger = self
nr_found = 0
while logger:
for handler in logger.handlers:
nr_found = nr_found + 1
if record.levelno >= handler.level:
handler.handle(record)
if logger.propagate:
logger = logger.parent
else:
break
return nr_found
# `--- end copy -------------------------------------------------------
def _emit_last_resort_message(self, record, nr_handlers):
# ,--- mostly copied from logging.Logger.callHandlers -----------------
if nr_handlers > 0:
return
if logging.lastResort:
if record.levelno >= logging.lastResort.level:
logging.lastResort.handle(record)
elif logging.raiseExceptions and not self.manager.emittedNoHandlerWarning:
sys.stderr.write("No handlers could be found for logger"
" \"%s\"\n" % self.name)
self.manager.emittedNoHandlerWarning = True
# `--- end copy -------------------------------------------------------
def contextfile_logger(logger_name, log_path=None, handler=None, **kwargs):
"""
Return a ForwardingLogger which logs to the given logfile.
This is a generic example how to use the ForwardingLogger and can be used
to create log files which are placed near the data they are referring to.
"""
log = ForwardingLogger(logger_name,
forward_to=kwargs.pop('forward_to', None),
forward_prefix=kwargs.pop('forward_prefix', None),
forward_minlevel=kwargs.pop('forward_minlevel', logging.NOTSET),
**kwargs
)
if handler is None:
# The logging module does not keep a reference to this FileHandler anywhere
# as we are instantiating it directly (not by name or fileconfig).
# That means Python's garbage collection will work just fine and the
# underlying log file will be closed when our batch-specific
# ForwardingLogger goes out of scope.
handler = logging.FileHandler(log_path, delay=True)
handler.setFormatter(logging.Formatter(
fmt='%(asctime)s %(message)s',
datefmt='%Y-%m-%d %H:%M:%S'
))
log.addHandler(handler)
return log
/schwarzlog-0.6.2.tar.gz/schwarzlog-0.6.2/schwarz/log_utils/forwarding_logger.py
# SchwiCV
## Installation
You can install the SchwiCV Tools from [PyPI](https://pypi.org/project/schwicv/):
pip install schwicv
The package is supported on Python 3.6 and above.
# How to use
## Timer Lib
from schwicv import Timer
tmr = Timer(0.1) # Makes Instance of Timer class with 0.1 seconds init time
tmr.remaining_time # Output of remaining time in seconds
tmr.remaining_time_ms # Output of remaining time in milliseconds
tmr.execution_time # Output of execution time since last start in seconds
tmr.execution_time_ms # Output of execution time since last start in milliseconds
tmr.time_stamp # Output of actual time stamp as datetime
tmr.time_stamp_str # Output of actual time stamp yearmonthday-hhmmss-µs example: 20210708-075514-612456
tmr.remaining_percent # Output of remaining time in percent
tmr.time_over # True if code needed more or equal 0.1 seconds
tmr.restart() # Restarts the timer with previous init time
tmr.start(1) # Restart timer with 1 second init time, if re-use instance
/schwicv-0.0.9.tar.gz/schwicv-0.0.9/README.md
.. image:: https://img.shields.io/pypi/v/schwifty.svg?style=flat-square
:target: https://pypi.python.org/pypi/schwifty
.. image:: https://img.shields.io/github/actions/workflow/status/mdomke/schwifty/lint-and-test.yml?branch=main&style=flat-square
:target: https://github.com/mdomke/schwifty/actions?query=workflow%3Alint-and-test
.. image:: https://img.shields.io/pypi/l/schwifty.svg?style=flat-square
:target: https://pypi.python.org/pypi/schwifty
.. image:: https://readthedocs.org/projects/schwifty/badge/?version=latest&style=flat-square
:target: https://schwifty.readthedocs.io
.. image:: https://img.shields.io/badge/code%20style-black-000000.svg?style=flat-square
:target: https://black.readthedocs.io/en/stable/index.html
Gotta get schwifty with your IBANs
==================================
.. teaser-begin
``schwifty`` is a Python library that lets you easily work with IBANs and BICs
as specified by the ISO. IBAN is the International Bank Account Number and BIC
the Business Identifier Code. Both are used for international money transfers.
Features
--------
``schwifty`` lets you
* validate check-digits and the country-specific format of IBANs
* validate format and country codes of BICs
* generate BICs from country and bank-code
* generate IBANs from country-code, bank-code and account-number
* get the BIC associated with an IBAN's bank-code
* access all relevant components as attributes
.. teaser-end
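For illustration, here is a short sketch of what this looks like in practice (based
on ``schwifty``'s documented ``IBAN`` and ``BIC`` classes; the concrete IBAN and BIC
values below are only examples):

.. code-block:: python

    from schwifty import IBAN, BIC

    # Construction validates the check digits and the country-specific format.
    iban = IBAN("DE89 3704 0044 0532 0130 00")
    iban.country_code   # 'DE'
    iban.bank_code      # '37040044'
    iban.account_code   # '0532013000'
    iban.bic            # the BIC looked up from the bank code, if known

    # Generate an IBAN; the check digits are calculated automatically.
    iban = IBAN.generate("DE", bank_code="37040044", account_code="532013000")

    bic = BIC("PBNKDEFFXXX")
    bic.country_code    # 'DE'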
Versioning
----------
Since the IBAN specification and the mapping from BIC to bank_code are updated from time to time,
``schwifty`` uses `CalVer <http://www.calver.org/>`_ for versioning with the scheme ``YY.0M.Micro``.
.. installation-begin
Installation
------------
To install ``schwifty``, simply:
.. code-block:: bash
$ pip install schwifty
.. installation-end
Development
-----------
We use `black`_ as the code formatter. This avoids discussions about style preferences in the same
way as ``gofmt`` does for Golang. Conformance to the formatting rules is checked in the
CI pipeline, so it is recommended to install the configured `pre-commit`_-hook in order to
avoid long feedback cycles.
.. code-block:: bash
$ pre-commit install
You can also use the ``fmt`` Makefile-target to format the code or use one of the available `editor
integrations`_.
Project Information
-------------------
``schwifty`` is released under the `MIT`_ license and its documentation lives at `Read the Docs`_. The
code is maintained on `GitHub`_ and packages are distributed on `PyPI`_.
Name
~~~~
Since ``swift`` and ``swiftly`` were already taken by the OpenStack project, and we somehow wanted
to point out the connection to SWIFT, Rick and Morty came up with the idea to name the project
``schwifty``.
.. image:: https://i.cdn.turner.com/adultswim/big/video/get-schwifty-pt-2/rickandmorty_ep205_002_vbnuta15a755dvash8.jpg
.. _black: https://black.readthedocs.io/en/stable/index.html
.. _pre-commit: https://pre-commit.com
.. _editor integrations: https://black.readthedocs.io/en/stable/editor_integration.html
.. _MIT: https://choosealicense.com/licenses/mit/
.. _Read the Docs: https://schwifty.readthedocs.io
.. _GitHub: https://github.com/mdomke/schwifty
.. _PyPI: https://pypi.org/project/schwifty
/schwifty-2023.6.0.tar.gz/schwifty-2023.6.0/README.rst
Troubleshooting
===============
UnicodeDecodeError on import
----------------------------
Since ``schwifty``'s bank registry contains bank names with non-ASCII characters, the corresponding
JSON files are encoded in UTF-8. In some Docker container setups (and possibly elsewhere) it might
occur that a ``UnicodeDecodeError`` is raised on import. This can be fixed in most cases by
adjusting the locales setup to support UTF-8. See `this blog post
<http://jaredmarkell.com/docker-and-locales/>`_ for more information. Since version ``2022.04.0``
this issue has been fixed, so that ``schwifty`` uses the correct encoding by default.
/schwifty-2023.6.0.tar.gz/schwifty-2023.6.0/docs/source/troubleshooting.rst
Get schwifty with IBANs and BICs
================================
Release v\ |release| (:ref:`What's new <changelog>`)
.. include:: ../../README.rst
:start-after: teaser-begin
:end-before: teaser-end
.. include:: ../../README.rst
:start-after: installation-begin
:end-before: installation-end
.. note::
Starting from version 2021.01.0 schwifty only supports Python 3.6+
API documentation
-----------------
.. toctree::
:maxdepth: 2
examples
api
troubleshooting
.. toctree::
:maxdepth: 1
changelog
/schwifty-2023.6.0.tar.gz/schwifty-2023.6.0/docs/source/index.rst
import json
from typing import Tuple
import pandas as pd
import requests
BRANCH_URL = "https://bank.gov.ua/NBU_BankInfo/get_data_branch?json"
PARENT_URL = "https://bank.gov.ua/NBU_BankInfo/get_data_branch_glbank?json"
def split_names(s) -> Tuple[str, str]:
"""This will split the `NAME_E` line from the API into a name and a short name"""
name, short_name = [name.strip() for name in s[:-1].split(" (скорочена назва - ")]
return name, short_name
def get_data(filter_insolvent: bool = True) -> pd.DataFrame:
# Get raw dataframes for parent banks and branches
with requests.get(PARENT_URL) as r:
parents = pd.read_json(r.text)
with requests.get(BRANCH_URL) as r:
branches = pd.read_json(r.text)
# Filter out insolvent branches and branches of insolvent banks
if filter_insolvent:
branches = branches.loc[
(branches["N_STAN"] == "Нормальний") & (branches["NSTAN_GOL"] == "Нормальний")
]
# Note that the National Bank of Ukraine provides English names for banking
# institutions, but not for branches. Therefore we enrich the `branches`
# dataframe with the English name for the parent bank
# Add empty column to `branches` for full and short English name for head bank
branches["NGOL_E"] = ""
branches["NGOL_E_SHORT"] = ""
for idx, row in branches.iterrows():
# Get parent bank identifier
glmfo = row["GLMFO"]
# Get the name of parent bank from
parent_names = parents.loc[parents["GLMFO"] == glmfo]["NAME_E"].iloc[0]
parent_full_name, parent_short_name = split_names(parent_names)
branches.loc[idx, "NGOL_E"] = parent_full_name # type: ignore
branches.loc[idx, "NGOL_E_SHORT"] = parent_short_name # type: ignore
return branches
def process():
branches = get_data()
registry = []
for idx, row in branches.iterrows():
registry.append(
{
"country_code": "UA",
"primary": row["TYP"] == 0,
"bic": "",
"bank_code": str(row["MFO"]),
"name": row["FULLNAME"],
"short_name": row["NGOL_E_SHORT"],
}
)
print(f"Fetched {len(registry)} bank records")
return registry
if __name__ == "__main__":
with open("schwifty/bank_registry/generated_ua.json", "w+") as fp:
json.dump(process(), fp, indent=2, ensure_ascii=False)
/schwifty-2023.6.0.tar.gz/schwifty-2023.6.0/scripts/get_bank_registry_ua.py
import json
import re
from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup
COUNTRY_CODE_PATTERN = r"[A-Z]{2}"
EMPTY_RANGE = (0, 0)
URL = "https://www.swift.com/standards/data-standards/iban"
def get_raw():
soup = BeautifulSoup(requests.get(URL).content, "html.parser")
link = soup.find("a", attrs={"data-tracking-title": "IBAN Registry (TXT)"})
return requests.get(urljoin(URL, link["href"])).content.decode(encoding="latin1")
def parse_int(raw):
return int(re.search(r"\d+", raw).group())
def parse_range(raw):
pattern = r".*?(?P<from>\d+)\s*-\s*(?P<to>\d+)"
match = re.search(pattern, raw)
if not match:
return EMPTY_RANGE
return (int(match["from"]) - 1, int(match["to"]))
def parse(raw):
columns = {}
for line in raw.split("\r\n"):
header, *rows = line.split("\t")
if header == "IBAN prefix country code (ISO 3166)":
columns["country"] = [re.search(COUNTRY_CODE_PATTERN, item).group() for item in rows]
elif header == "Country code includes other countries/territories":
columns["other_countries"] = [re.findall(COUNTRY_CODE_PATTERN, item) for item in rows]
elif header == "BBAN structure":
columns["bban_spec"] = rows
elif header == "BBAN length":
columns["bban_length"] = [parse_int(item) for item in rows]
elif header == "Bank identifier position within the BBAN":
columns["bank_code_position"] = [parse_range(item) for item in rows]
elif header == "Branch identifier position within the BBAN":
columns["branch_code_position"] = [parse_range(item) for item in rows]
elif header == "IBAN structure":
columns["iban_spec"] = rows
elif header == "IBAN length":
columns["iban_length"] = [parse_int(item) for item in rows]
return [dict(zip(columns.keys(), row)) for row in zip(*columns.values())]
def process(records):
registry = {}
for record in records:
country_codes = [record["country"]]
country_codes.extend(record["other_countries"])
for code in country_codes:
registry[code] = {
"bban_spec": record["bban_spec"],
"iban_spec": record["iban_spec"],
"bban_length": record["bban_length"],
"iban_length": record["iban_length"],
"positions": process_positions(record),
}
return registry
def process_positions(record):
bank_code = record["bank_code_position"]
branch_code = record["branch_code_position"]
if branch_code == EMPTY_RANGE:
branch_code = (bank_code[1], bank_code[1])
return {
"account_code": (max(bank_code[1], branch_code[1]), record["bban_length"]),
"bank_code": bank_code,
"branch_code": branch_code,
}
if __name__ == "__main__":
with open("schwifty/iban_registry/generated.json", "w+") as fp:
json.dump(process(parse(get_raw())), fp, indent=2)
/schwifty-2023.6.0.tar.gz/schwifty-2023.6.0/scripts/get_iban_registry.py
The Schwimmbad
==============
.. image:: https://travis-ci.org/adrn/schwimmbad.svg?branch=master
:target: https://travis-ci.org/adrn/schwimmbad
.. image:: http://img.shields.io/pypi/v/schwimmbad.svg?style=flat
:target: https://pypi.python.org/pypi/schwimmbad/
.. image:: http://img.shields.io/badge/license-MIT-blue.svg?style=flat
:target: https://github.com/adrn/schwimmbad/blob/master/LICENSE
.. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.885577.svg
:target: https://zenodo.org/record/885577#.Wa9WVBZSy2w
.. image:: http://joss.theoj.org/papers/10.21105/joss.00357/status.svg
:target: http://dx.doi.org/10.21105/joss.00357
``schwimmbad`` provides a uniform interface to parallel processing pools
and enables switching easily between local development (e.g., serial processing
or with ``multiprocessing``) and deployment on a cluster or supercomputer
(via, e.g., MPI or JobLib).
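As a brief sketch of the idea (the ``worker`` function and task list here are made up
for illustration; ``SerialPool`` and ``MultiPool`` are two of the pool classes provided
by ``schwimmbad``):

.. code-block:: python

    from schwimmbad import SerialPool, MultiPool

    def worker(task):
        a, b = task
        return a**2 + b**2

    tasks = [(i, 2 * i) for i in range(16)]

    # Serial processing, e.g. for local development and debugging:
    pool = SerialPool()
    results = list(pool.map(worker, tasks))

    # The same code, distributed over multiple cores via multiprocessing:
    with MultiPool() as pool:
        results = list(pool.map(worker, tasks))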
Installation
------------
The easiest way to install is via `pip`::
pip install schwimmbad
See the `installation
instructions <http://schwimmbad.readthedocs.io/en/latest/install.html>`_ in the
`documentation <http://schwimmbad.readthedocs.io>`_ for more information.
Documentation
-------------
.. image:: https://readthedocs.org/projects/schwimmbad/badge/?version=latest
:target: http://schwimmbad.readthedocs.io/en/latest/?badge=latest
The documentation for ``schwimmbad`` is hosted on `Read the docs
<http://schwimmbad.readthedocs.io/>`_.
Attribution
-----------
If you use this software in a scientific publication, please cite the `JOSS
<http://joss.theoj.org/>`_ article:
.. code-block:: tex
@article{schwimmbad,
doi = {10.21105/joss.00357},
url = {https://doi.org/10.21105/joss.00357},
year = {2017},
month = {sep},
publisher = {The Open Journal},
volume = {2},
number = {17},
author = {Adrian M. Price-Whelan and Daniel Foreman-Mackey},
title = {schwimmbad: A uniform interface to parallel processing pools in Python},
journal = {The Journal of Open Source Software}
}
License
-------
Copyright 2016-2017 Adrian Price-Whelan and contributors.
schwimmbad is free software made available under the MIT License. For details
see the LICENSE file.
/schwimmbad-0.3.1.tar.gz/schwimmbad-0.3.1/README.rst
.. _install:
************
Installation
************
.. image:: http://img.shields.io/pypi/v/schwimmbad.svg?style=flat
:target: https://pypi.python.org/pypi/schwimmbad/
Dependencies
============
For running in serial, or using Python's built-in `multiprocessing` module,
`schwimmbad` has no third-party dependencies.
To run with MPI, you must have a compiled MPI library (e.g., `OpenMPI
<https://www.open-mpi.org/>`_) and ``mpi4py``.
To run with joblib, you must have ``joblib`` installed.
Each of these dependencies is either ``pip`` or ``conda`` installable.
With `conda`
============
To install with `conda <http://continuum.io/downloads>`_, use the
`conda-forge <https://conda-forge.github.io/>`_ channel::
conda install -c conda-forge schwimmbad
With `pip`
==========
To install the latest stable version using ``pip``, use::
pip install schwimmbad
To install the development version::
pip install git+https://github.com/adrn/schwimmbad
From source
===========
The latest development version can be cloned from
`GitHub <https://github.com/>`_ using ``git``::
git clone git://github.com/adrn/schwimmbad.git
To install the project (from the root of the source tree, e.g., inside
the cloned ``schwimmbad`` directory)::
python setup.py install
/schwimmbad-0.3.1.tar.gz/schwimmbad-0.3.1/docs/install.rst
.. _contribute:
**********************************
How to Contribute or Report Issues
**********************************
Contributions to the project are always welcome!
If you would like to contribute new features, and it is a significant change to
the code (left open to interpretation), please start by `opening an issue
<https://github.com/adrn/schwimmbad/issues>`_ on the GitHub page for this
project describing your idea. If it's a bug fix or minor change to the source
code, or a change to the documentation, feel free to instead submit a `pull
request <https://github.com/adrn/schwimmbad/pulls>`_ on GitHub.
If you've found a bug or issue with the code, or something that is unclear in
the documentation, please `open an issue
<https://github.com/adrn/schwimmbad/issues>`_ on the GitHub page for this
project describing the bug or issue.
/schwimmbad-0.3.1.tar.gz/schwimmbad-0.3.1/docs/contributing.rst
---
title: 'schwimmbad: A uniform interface to parallel processing pools in Python'
tags:
- Python
- multiprocessing
- parallel computing
authors:
- name: Adrian M. Price-Whelan
orcid: 0000-0003-0872-7098
affiliation: 1
- name: Daniel Foreman-Mackey
orcid: 0000-0002-9328-5652
affiliation: 2
affiliations:
- name: Lyman Spitzer, Jr. Fellow, Princeton University
index: 1
- name: Sagan Fellow, University of Washington
index: 2
date: 11 August 2017
bibliography: paper.bib
---
# Summary
Many scientific and computing problems require doing some calculation on all
elements of some data set. If the calculations can be executed in parallel
(i.e. without any communication between calculations), these problems are said
to be [*perfectly
parallel*](https://en.wikipedia.org/wiki/Embarrassingly_parallel). On computers
with multiple processing cores, these tasks can be distributed and executed in
parallel to greatly improve performance. A common paradigm for handling these
distributed computing problems is to use a processing "pool": the "tasks" (the
data) are passed in bulk to the pool, and the pool handles distributing the
tasks to a number of worker processes when available.
In Python, the built-in ``multiprocessing`` package provides a ``Pool`` class
for exactly this design case, but only supports distributing the tasks amongst
multiple cores of a single processor. To extend to large cluster computing
environments, other protocols are required, such as the Message Passing
Interface [MPI; @Forum1994]. ``schwimmbad`` provides new ``Pool`` classes for a
number of parallel processing environments with a consistent interface. This
enables easily switching between local development (e.g., serial processing
or with Python's built-in ``multiprocessing``) and deployment on a cluster or
supercomputer (via, e.g., MPI or JobLib). This library supports processing pools
with a number of backends:
* Serial processing: ``SerialPool``
* ``Python`` standard-library ``multiprocessing``: ``MultiPool``
* [``OpenMPI``](https://www.open-mpi.org/) [@Gabriel2004] and
[``mpich2``](https://www.mpich.org/) [@Lusk1996] via the ``mpi4py``
package [@Dalcin2005; @Dalcin2008]: ``MPIPool``
* [``joblib``](http://pythonhosted.org/joblib/): ``JoblibPool``
All pool classes provide a ``.map()`` method to distribute tasks to a specified
worker function (or callable), and support specifying a callback function that
is executed on the master process to enable post-processing or caching the
results as they are delivered.
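For illustration, a short sketch of that interface (assuming the ``callback`` keyword
to ``map()`` described above; the worker and callback functions here are made up):

```python
from schwimmbad import MultiPool

def worker(x):
    # A perfectly parallel task: no communication with other tasks.
    return x**2

def on_result(result):
    # Executed on the master process as each result is delivered,
    # e.g. for incremental post-processing or caching.
    print("got", result)

with MultiPool() as pool:
    results = list(pool.map(worker, range(8), callback=on_result))
```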
# References
/schwimmbad-0.3.1.tar.gz/schwimmbad-0.3.1/paper/paper.md
==========
schwurbler
==========
Mangle strings by repeated Google Translate
Description
===========
This module offers functions to destroy text by feeding it through multiple
languages in Google Translate.
Installation
============
The project is available on PyPI, so simply invoke the following to install the
package:
.. code-block::
pip install schwurbler
Usage
=====
Schwurbler's functions are contained in the ``schwurbler`` package, so simply
import it:
.. code-block:: python
import schwurbler
The two main functions are fixed path schwurbles and set ratio schwurbles. The
former translates text through a fixed set of languages and the latter randomly
picks languages to translate a string through until it only resembles the
original by a certain token set ratio:
.. code-block:: python
import schwurbler
translated = schwurbler.path_schwurbel(['en', 'ja', 'en'], 'Hello world!')
    translated = schwurbler.set_ratio_schwurbel('Hello world!', 'en', ratio=50)
More information on the usage can be found in the `API reference`_.
.. _API reference: https://schwurbler.readthedocs.io/en/latest/
| schwurbler | /schwurbler-1.0.3.tar.gz/schwurbler-1.0.3/README.rst | README.rst | quality_prob: 0.822937 | learning_prob: 0.266966 |
==========
schwurbler
==========
This is the documentation of **schwurbler**.
.. note::
This is the main page of your project's `Sphinx <http://sphinx-doc.org/>`_
documentation. It is formatted in `reStructuredText
<http://sphinx-doc.org/rest.html>`__. Add additional pages by creating
rst-files in ``docs`` and adding them to the `toctree
<http://sphinx-doc.org/markup/toctree.html>`_ below. Use then
`references <http://sphinx-doc.org/markup/inline.html>`__ in order to link
them from this page, e.g. :ref:`authors <authors>` and :ref:`changes`.
It is also possible to refer to the documentation of other Python packages
with the `Python domain syntax
<http://sphinx-doc.org/domains.html#the-python-domain>`__. By default you
can reference the documentation of `Sphinx <http://sphinx.pocoo.org>`__,
`Python <http://docs.python.org/>`__, `NumPy
<http://docs.scipy.org/doc/numpy>`__, `SciPy
<http://docs.scipy.org/doc/scipy/reference/>`__, `matplotlib
<http://matplotlib.sourceforge.net>`__, `Pandas
<http://pandas.pydata.org/pandas-docs/stable>`__, `Scikit-Learn
<http://scikit-learn.org/stable>`__. You can add more by
extending the ``intersphinx_mapping`` in your Sphinx's ``conf.py``.
The pretty useful extension `autodoc
<http://www.sphinx-doc.org/en/stable/ext/autodoc.html>`__ is activated by
default and lets you include documentation from docstrings. Docstrings can
be written in `Google
<http://google.github.io/styleguide/pyguide.html#Comments>`__
(recommended!), `NumPy
<https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt>`__
and `classical
<http://www.sphinx-doc.org/en/stable/domains.html#info-field-lists>`__
style.
Contents
========
.. toctree::
:maxdepth: 2
License <license>
Authors <authors>
Changelog <changelog>
Module Reference <api/modules>
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| schwurbler | /schwurbler-1.0.3.tar.gz/schwurbler-1.0.3/docs/index.rst | index.rst | quality_prob: 0.768516 | learning_prob: 0.343452 |
# sci-analysis
An easy to use and powerful python-based data exploration and analysis tool
## Current Version:
2.2 --- Released January 5, 2019
[](https://pypi.python.org/pypi/sci_analysis)
[](https://pypi.python.org/pypi/sci_analysis)
[](https://pypi.python.org/pypi/sci_analysis)
[](https://travis-ci.org/cmmorrow/sci-analysis)
[](https://coveralls.io/github/cmmorrow/sci-analysis?branch=master)
### What is sci-analysis?
sci-analysis is a python package for quickly performing statistical data analysis. It provides a graphical representation of the supplied data as well as the statistical analysis. sci-analysis is smart enough to determine the correct analysis and tests to perform based on the shape of the data you provide, as well as how the data is distributed.
The types of analysis that can be performed are histograms of numeric or categorical data, bi-variate analysis of two numeric data vectors, and one-way analysis of variance.
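As a rough sketch of that workflow (this assumes `analyze` is importable from the top-level package and uses made-up data; see the documentation for the full set of options):

```python
import numpy as np
from sci_analysis import analyze

np.random.seed(987654321)
x = np.random.normal(size=200)
y = 2.0 * x + np.random.normal(size=200)

# One numeric vector -> distribution analysis (histogram plus summary statistics).
analyze(x)

# Two numeric vectors -> bivariate analysis (scatter plot plus regression/correlation).
analyze(x, y)
```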
### What's new in sci-analysis version 2.2?
* Version 2.2 adds the ability to add data labels to scatter plots.
* The default behavior of the histogram and statistics was changed from assuming a sample to assuming a population.
* Fixed a bug involving the Mann Whitney U test, where the minimum size was set incorrectly.
* Verified compatibility with python 3.7.
### Getting started with sci-analysis
The documentation on how to install and use sci-analysis can be found here:
[http://sci-analysis.readthedocs.io/en/latest/](http://sci-analysis.readthedocs.io/en/latest/)
### Requirements
* Packages: pandas, numpy, scipy, matplotlib, six
* Supports python 2.7, 3.5, 3.6, and 3.7
Bugs can be reported here:
[https://github.com/cmmorrow/sci-analysis/issues](https://github.com/cmmorrow/sci-analysis/issues)
| sci-analysis | /sci_analysis-2.2.1rc0.tar.gz/sci_analysis-2.2.1rc0/README.md | README.md | quality_prob: 0.746601 | learning_prob: 0.891717 |
from math import sqrt
# Pandas imports
from pandas import DataFrame
# Numpy imports
from numpy import mean, std, median, amin, amax, percentile
# Scipy imports
from scipy.stats import skew, kurtosis, sem
from .base import Analysis, std_output
from .exc import NoDataError, MinimumSizeError
from ..data import Vector, Categorical, is_dict, is_group, is_categorical, is_vector, is_tuple
class VectorStatistics(Analysis):
"""Reports basic summary stats for a provided vector."""
_min_size = 1
_name = 'Statistics'
_n = 'n'
_mean = 'Mean'
_std = 'Std Dev'
_ste = 'Std Error'
_range = 'Range'
_skew = 'Skewness'
_kurt = 'Kurtosis'
_iqr = 'IQR'
_q1 = '25%'
_q2 = '50%'
_q3 = '75%'
_min = 'Minimum'
_max = "Maximum"
def __init__(self, data, sample=True, display=True):
self._sample = sample
d = Vector(data)
if d.is_empty():
raise NoDataError("Cannot perform the test because there is no data")
if len(d) <= self._min_size:
raise MinimumSizeError("length of data is less than the minimum size {}".format(self._min_size))
super(VectorStatistics, self).__init__(d, display=display)
self.logic()
def run(self):
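        # ddof=1 yields the unbiased sample estimates; ddof=0 treats the data as the
        # entire population (controlled by the `sample` argument to __init__).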
dof = 1 if self._sample else 0
vmin = amin(self._data.data)
vmax = amax(self._data.data)
vrange = vmax - vmin
q1 = percentile(self._data.data, 25)
q3 = percentile(self._data.data, 75)
iqr = q3 - q1
self._results = {self._n: len(self._data.data),
self._mean: mean(self._data.data),
self._std: std(self._data.data, ddof=dof),
self._ste: sem(self._data.data, 0, dof),
self._q2: median(self._data.data),
self._min: vmin,
self._max: vmax,
self._range: vrange,
self._skew: skew(self._data.data),
self._kurt: kurtosis(self._data.data),
self._q1: q1,
self._q3: q3,
self._iqr: iqr,
}
@property
def count(self):
return self._results[self._n]
@property
def mean(self):
return self._results[self._mean]
@property
def std_dev(self):
return self._results[self._std]
@property
def std_err(self):
return self._results[self._ste]
@property
def median(self):
return self._results[self._q2]
@property
def minimum(self):
return self._results[self._min]
@property
def maximum(self):
return self._results[self._max]
@property
def range(self):
return self._results[self._range]
@property
def skewness(self):
return self._results[self._skew]
@property
def kurtosis(self):
return self._results[self._kurt]
@property
def q1(self):
return self._results[self._q1]
@property
def q3(self):
return self._results[self._q3]
@property
def iqr(self):
return self._results[self._iqr]
def __str__(self):
order = [self._n,
self._mean,
self._std,
self._ste,
self._skew,
self._kurt,
self._max,
self._q3,
self._q2,
self._q1,
self._min,
self._iqr,
self._range,
]
return std_output(self._name, results=self._results, order=order)
class GroupStatistics(Analysis):
"""Reports basic summary stats for a group of vectors."""
_min_size = 1
_name = 'Group Statistics'
_group = 'Group'
_n = 'n'
_mean = 'Mean'
_std = 'Std Dev'
_max = 'Max'
_q2 = 'Median'
_min = 'Min'
_total = 'Total'
_pooled = 'Pooled Std Dev'
_gmean = 'Grand Mean'
_gmedian = 'Grand Median'
_num_of_groups = 'Number of Groups'
def __init__(self, *args, **kwargs):
groups = kwargs.get('groups', None)
display = kwargs.get('display', False)
if is_dict(args[0]):
_data, = args
elif is_group(args,):
_data = dict(zip(groups, args)) if groups else dict(zip(list(range(1, len(args) + 1)), args))
else:
_data = None
data = Vector()
for g, d in _data.items():
if len(d) == 0:
raise NoDataError("Cannot perform test because there is no data")
if len(d) <= self._min_size:
raise MinimumSizeError("length of data is less than the minimum size {}".format(self._min_size))
data.append(Vector(d, groups=[g for _ in range(0, len(d))]))
if data.is_empty():
raise NoDataError("Cannot perform test because there is no data")
self.k = None
self.total = None
self.pooled = None
self.gmean = None
self.gmedian = None
super(GroupStatistics, self).__init__(data, display=display)
self.logic()
def logic(self):
if not self._data:
pass
self._results = []
self.run()
if self._display:
print(self)
def run(self):
out = []
for group, vector in self._data.groups.items():
row_result = {self._group: str(group),
self._n: len(vector),
self._mean: mean(vector),
self._std: std(vector, ddof=1),
self._max: amax(vector),
self._q2: median(vector),
self._min: amin(vector),
}
out.append(row_result)
summ = DataFrame(out).sort_values(self._group)
self.total = len(self._data.data)
self.k = len(summ)
if self.k > 1:
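            # Pooled standard deviation: square root of the average group variance,
            # weighted by each group's degrees of freedom (n_i - 1).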
self.pooled = sqrt(((summ[self._n] - 1) * summ[self._std] ** 2).sum() / (summ[self._n].sum() - self.k))
self.gmean = summ[self._mean].mean()
self.gmedian = median(summ[self._q2])
self._results = ({
self._num_of_groups: self.k,
self._total: self.total,
self._pooled: self.pooled,
self._gmean: self.gmean,
self._gmedian: self.gmedian,
}, summ)
else:
self._results = summ
def __str__(self):
order = (
self._num_of_groups,
self._total,
self._gmean,
self._pooled,
self._gmedian,
)
group_order = (
self._n,
self._mean,
self._std,
self._min,
self._q2,
self._max,
self._group,
)
if is_tuple(self._results):
out = '{}\n{}'.format(
std_output('Overall Statistics', self._results[0], order=order),
std_output(self._name, self._results[1].to_dict(orient='records'), order=group_order),
)
else:
out = std_output(self._name, self._results.to_dict(orient='records'), order=group_order)
return out
@property
def grand_mean(self):
return self.gmean
@property
def grand_median(self):
return self.gmedian
@property
def pooled_std(self):
return self.pooled
class GroupStatisticsStacked(Analysis):
_min_size = 1
_name = 'Group Statistics'
_agg_name = 'Overall Statistics'
_group = 'Group'
_n = 'n'
_mean = 'Mean'
_std = 'Std Dev'
_max = 'Max'
_q2 = 'Median'
_min = 'Min'
_total = 'Total'
_pooled = 'Pooled Std Dev'
_gmean = 'Grand Mean'
_gmedian = 'Grand Median'
_num_of_groups = 'Number of Groups'
def __init__(self, values, groups=None, **kwargs):
display = kwargs['display'] if 'display' in kwargs else True
if groups is None:
if is_vector(values):
data = values
else:
raise AttributeError('ydata argument cannot be None.')
else:
data = Vector(values, groups=groups)
if data.is_empty():
raise NoDataError("Cannot perform test because there is no data")
self.pooled = None
self.gmean = None
self.gmedian = None
self.total = None
self.k = None
super(GroupStatisticsStacked, self).__init__(data, display=display)
self.logic()
def logic(self):
if not self._data:
pass
self._results = []
self.run()
if self._display:
print(self)
def run(self):
out = []
for group, vector in self._data.groups.items():
if len(vector) <= self._min_size:
raise MinimumSizeError("length of data is less than the minimum size {}".format(self._min_size))
row_result = {self._group: group,
self._n: len(vector),
self._mean: mean(vector),
self._std: std(vector, ddof=1),
self._max: amax(vector),
self._q2: median(vector),
self._min: amin(vector),
}
out.append(row_result)
summ = DataFrame(out).sort_values(self._group)
self.total = len(self._data.data)
self.k = len(summ)
if self.k > 1:
self.pooled = sqrt(((summ[self._n] - 1) * summ[self._std] ** 2).sum() / (summ[self._n].sum() - self.k))
self.gmean = summ[self._mean].mean()
self.gmedian = median(summ[self._q2])
self._results = ({
self._num_of_groups: self.k,
self._total: self.total,
self._pooled: self.pooled,
self._gmean: self.gmean,
self._gmedian: self.gmedian,
}, summ)
else:
self._results = summ
def __str__(self):
order = (
self._num_of_groups,
self._total,
self._gmean,
self._pooled,
self._gmedian,
)
group_order = (
self._n,
self._mean,
self._std,
self._min,
self._q2,
self._max,
self._group,
)
if is_tuple(self._results):
out = '{}\n{}'.format(
std_output(self._agg_name, self._results[0], order=order),
std_output(self._name, self._results[1].to_dict(orient='records'), order=group_order),
)
else:
out = std_output(self._name, self._results.to_dict(orient='records'), order=group_order)
return out
@property
def grand_mean(self):
return self.gmean
@property
def grand_median(self):
return self.gmedian
@property
def pooled_std(self):
return self.pooled
class CategoricalStatistics(Analysis):
"""Reports basic summary stats for Categorical data."""
_min_size = 1
_name = 'Statistics'
_agg_name = 'Overall Statistics'
_rank = 'Rank'
_cat = 'Category'
_freq = 'Frequency'
_perc = 'Percent'
_total = 'Total'
_num_of_grps = 'Number of Groups'
def __init__(self, data, **kwargs):
order = kwargs['order'] if 'order' in kwargs else None
dropna = kwargs['dropna'] if 'dropna' in kwargs else False
display = kwargs['display'] if 'display' in kwargs else True
self.ordered = True if order is not None else False
d = data if is_categorical(data) else Categorical(data, order=order, dropna=dropna)
if d.is_empty():
raise NoDataError("Cannot perform the test because there is no data")
super(CategoricalStatistics, self).__init__(d, display=display)
self.logic()
def run(self):
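        # Rename the Categorical summary columns to the display labels used in the report.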
col = dict(categories=self._cat,
counts=self._freq,
percents=self._perc,
ranks=self._rank)
self.data.summary.rename(columns=col, inplace=True)
if self.data.num_of_groups > 1:
self._results = ({
self._total: self.data.total,
self._num_of_grps: self.data.num_of_groups,
}, self.data.summary.to_dict(orient='records'))
else:
self._results = self.data.summary.to_dict(orient='records')
def __str__(self):
order = (
self._total,
self._num_of_grps,
)
grp_order = (
self._rank,
self._freq,
self._perc,
self._cat,
)
if is_tuple(self._results):
out = '{}\n{}'.format(
std_output(self._agg_name, self._results[0], order=order),
std_output(self._name, self._results[1], order=grp_order),
)
else:
out = std_output(self._name, self._results, order=grp_order)
return out
| sci-analysis | /sci_analysis-2.2.1rc0.tar.gz/sci_analysis-2.2.1rc0/sci_analysis/analysis/stats.py | stats.py | quality_prob: 0.906564 | learning_prob: 0.328341 |
from numpy import float_, int_
class Analysis(object):
"""Generic analysis root class.
Members:
_data - the data used for analysis.
_display - flag for whether to display the analysis output.
_results - A dict of the results of the test.
Methods:
logic - This method needs to run the analysis, set the results member, and display the output at bare minimum.
run - This method should return the results of the specific analysis.
output - This method shouldn't return a value and only produce a side-effect.
"""
_name = "Analysis"
def __init__(self, data, display=True):
"""Initialize the data and results members.
Override this method to initialize additional members or perform
checks on data.
"""
self._data = data
self._display = display
self._results = {}
@property
def name(self):
"""The name of the test class"""
return self._name
@property
def data(self):
"""The data used for analysis"""
return self._data
@property
def results(self):
"""A dict of the results returned by the run method"""
return self._results
def logic(self):
"""This method needs to run the analysis, set the results member, and
display the output at bare minimum.
Override this method to modify the execution sequence of the analysis.
"""
if self._data is None:
return
self.run()
if self._display:
print(self)
def run(self):
"""This method should perform the specific analysis and set the results dict.
Override this method to perform a specific analysis or calculation.
"""
raise NotImplementedError
def __str__(self):
return std_output(self._name, self._results, tuple(self._results.keys()))
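# std_output (defined below) renders a results dict or list of row dicts as a plain-text
# report. For example (hypothetical values), std_output('Statistics', {'n': 3, 'Mean': 1.5},
# order=('n', 'Mean')) returns the report name underlined with dashes, followed by one
# "label = value" line per key listed in `order`.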
def std_output(name, results, order, precision=4, spacing=14):
"""
Parameters
----------
name : str
The name of the analysis report.
results : dict or list
The input dict or list to print.
order : list or tuple
The list of keys in results to display and the order to display them in.
precision : int
The number of decimal places to show for float values.
spacing : int
The max number of characters for each printed column.
Returns
-------
output_string : str
The report to be printed to stdout.
"""
def format_header(col_names):
line = ""
for n in col_names:
line += '{:{}s}'.format(n, spacing)
return line
def format_row(_row, _order):
line = ""
for column in _order:
value = _row[column]
t = type(value)
if t in [float, float_]:
line += '{:< {}.{}f}'.format(value, spacing, precision)
            elif t in [int, int_]:
line += '{:< {}d}'.format(value, spacing)
else:
line += '{:<{}s}'.format(str(value), spacing)
return line
def format_items(label, value):
if type(value) in {float, float_}:
line = '{:{}s}'.format(label, max_length) + ' = ' + '{:< .{}f}'.format(value, precision)
elif type(value) in {int, int_}:
line = '{:{}s}'.format(label, max_length) + ' = ' + '{:< d}'.format(value)
else:
line = '{:{}s}'.format(label, max_length) + ' = ' + str(value)
return line
table = list()
header = ''
if isinstance(results, list):
header = format_header(order)
for row in results:
table.append(format_row(row, order))
elif isinstance(results, dict):
max_length = max([len(label) for label in results.keys()])
for key in order:
table.append(format_items(key, results[key]))
out = [
'',
'',
name,
'-' * len(name),
''
]
if len(header) > 0:
out.extend([
header,
'-' * len(header)
])
out.append('\n'.join(table))
return '\n'.join(out)
| sci-analysis | /sci_analysis-2.2.1rc0.tar.gz/sci_analysis-2.2.1rc0/sci_analysis/analysis/base.py | base.py | quality_prob: 0.899621 | learning_prob: 0.594845 |
from scipy.stats import linregress, pearsonr, spearmanr
from pandas import DataFrame
from ..data import Vector, is_vector
from .base import Analysis, std_output
from .exc import NoDataError, MinimumSizeError
from .hypo_tests import NormTest
class Comparison(Analysis):
"""Perform a test on two independent vectors of equal length."""
_min_size = 3
_name = "Comparison"
_h0 = "H0: "
_ha = "HA: "
_default_alpha = 0.05
def __init__(self, xdata, ydata=None, alpha=None, display=True):
self._alpha = alpha or self._default_alpha
if ydata is None:
if is_vector(xdata):
v = xdata
else:
raise AttributeError('ydata argument cannot be None.')
else:
v = Vector(xdata, other=ydata)
if v.data.empty or v.other.empty:
raise NoDataError("Cannot perform test because there is no data")
if len(v.data) <= self._min_size or len(v.other) <= self._min_size:
raise MinimumSizeError("length of data is less than the minimum size {}".format(self._min_size))
super(Comparison, self).__init__(v, display=display)
self.logic()
@property
def xdata(self):
"""The predictor vector for comparison tests"""
return self.data.data
@property
def ydata(self):
"""The response vector for comparison tests"""
return self.data.other
@property
def predictor(self):
"""The predictor vector for comparison tests"""
return self.data.data
@property
def response(self):
"""The response vector for comparison tests"""
return self.data.other
@property
def statistic(self):
"""The test statistic returned by the function called in the run method"""
# TODO: Need to catch the case where self._results is an empty dictionary.
return self._results['statistic']
@property
def p_value(self):
"""The p-value returned by the function called in the run method"""
return self._results['p value']
def __str__(self):
out = list()
order = list()
res = list(self._results.keys())
if 'p value' in res:
order.append('p value')
res.remove('p value')
order.extend(res)
out.append(std_output(self.name, self._results, reversed(order)))
out.append('')
out.append(self._h0 if self.p_value > self._alpha else self._ha)
out.append('')
return '\n'.join(out)
def run(self):
raise NotImplementedError
class LinearRegression(Comparison):
"""Performs a linear regression between two vectors."""
_name = "Linear Regression"
_n = 'n'
_slope = 'Slope'
_intercept = 'Intercept'
_r_value = 'r'
_r_squared = 'r^2'
_std_err = 'Std Err'
_p_value = 'p value'
def __init__(self, xdata, ydata=None, alpha=None, display=True):
super(LinearRegression, self).__init__(xdata, ydata, alpha=alpha, display=display)
def run(self):
slope, intercept, r, p_value, std_err = linregress(self.xdata, self.ydata)
count = len(self.xdata)
self._results.update({
self._n: count,
self._slope: slope,
self._intercept: intercept,
self._r_value: r,
self._r_squared: r ** 2,
self._std_err: std_err,
self._p_value: p_value
})
@property
def slope(self):
return self._results[self._slope]
@property
def intercept(self):
return self._results[self._intercept]
@property
def r_squared(self):
return self._results[self._r_squared]
@property
def r_value(self):
return self._results[self._r_value]
@property
def statistic(self):
return self._results[self._r_squared]
@property
def std_err(self):
return self._results[self._std_err]
    def __str__(self):
        """Return the linear regression summary statistics as a formatted report."""
out = list()
order = [
self._n,
self._slope,
self._intercept,
self._r_value,
self._r_squared,
self._std_err,
self._p_value
]
out.append(std_output(self._name, self._results, order=order))
out.append('')
return '\n'.join(out)
class Correlation(Comparison):
"""Performs a pearson or spearman correlation between two vectors."""
_names = {'pearson': 'Pearson Correlation Coefficient', 'spearman': 'Spearman Correlation Coefficient'}
_h0 = "H0: There is no significant relationship between predictor and response"
_ha = "HA: There is a significant relationship between predictor and response"
_r_value = 'r value'
_p_value = 'p value'
_alpha_name = 'alpha'
def __init__(self, xdata, ydata=None, alpha=None, display=True):
self._test = None
super(Correlation, self).__init__(xdata, ydata, alpha=alpha, display=display)
def run(self):
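        # Choose the parametric Pearson correlation when the data pass the normality
        # test at the chosen alpha; otherwise fall back to the rank-based Spearman test.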
if NormTest(self.xdata, self.ydata, display=False, alpha=self._alpha).p_value > self._alpha:
r_value, p_value = pearsonr(self.xdata, self.ydata)
r = "pearson"
else:
r_value, p_value = spearmanr(self.xdata, self.ydata)
r = "spearman"
self._name = self._names[r]
self._test = r
self._results.update({
self._r_value: r_value,
self._p_value: p_value,
self._alpha_name: self._alpha
})
@property
    def r_value(self):
        """The correlation coefficient returned by the determined test type"""
return self._results[self._r_value]
@property
def statistic(self):
return self._results[self._r_value]
@property
def test_type(self):
"""The test that was used to determine the correlation coefficient"""
return self._test
def __str__(self):
out = list()
out.append(std_output(self.name, self._results, [self._alpha_name, self._r_value, self._p_value]))
out.append('')
out.append(self._h0 if self.p_value > self._alpha else self._ha)
out.append('')
return '\n'.join(out)
class GroupComparison(Analysis):
_min_size = 1
_name = 'Group Comparison'
_default_alpha = 0.05
def __init__(self, xdata, ydata=None, groups=None, alpha=None, display=True):
if ydata is None:
if is_vector(xdata):
vector = xdata
else:
raise AttributeError("ydata argument cannot be None.")
else:
vector = Vector(xdata, other=ydata, groups=groups)
if vector.is_empty():
raise NoDataError("Cannot perform test because there is no data")
super(GroupComparison, self).__init__(vector, display=display)
self._alpha = alpha or self._default_alpha
self.logic()
def run(self):
raise NotImplementedError
class GroupCorrelation(GroupComparison):
_names = {
'pearson': 'Pearson Correlation Coefficient',
'spearman': 'Spearman Correlation Coefficient',
}
_min_size = 2
_r_value = 'r value'
_p_value = 'p value'
_group_name = 'Group'
_n = 'n'
def __init__(self, xdata, ydata=None, groups=None, alpha=None, display=True):
self._test = None
super(GroupCorrelation, self).__init__(xdata, ydata=ydata, groups=groups, alpha=alpha, display=display)
def run(self):
out = []
# Remove any groups that are less than or equal to the minimum value from analysis.
small_grps = [grp for grp, seq in self.data.groups.items() if len(seq) <= self._min_size]
self.data.drop_groups(small_grps)
if NormTest(*self.data.flatten(), display=False, alpha=self._alpha).p_value > self._alpha:
r = "pearson"
func = pearsonr
else:
r = 'spearman'
func = spearmanr
self._name = self._names[r]
self._test = r
for grp, pairs in self.data.paired_groups.items():
r_value, p_value = func(*pairs)
row_results = ({self._r_value: r_value,
self._p_value: p_value,
self._group_name: str(grp),
self._n: str(len(pairs[0]))})
out.append(row_results)
self._results = DataFrame(out).sort_values(self._group_name).to_dict(orient='records')
def __str__(self):
order = (
self._n,
self._r_value,
self._p_value,
self._group_name
)
return std_output(self._name, self._results, order=order)
@property
def counts(self):
return tuple(s[self._n] for s in self._results)
@property
def r_value(self):
return tuple(s[self._r_value] for s in self._results)
@property
def statistic(self):
return tuple(s[self._r_value] for s in self._results)
@property
def p_value(self):
return tuple(s[self._p_value] for s in self._results)
class GroupLinearRegression(GroupComparison):
_name = "Linear Regression"
_n = 'n'
_slope = 'Slope'
_intercept = 'Intercept'
_r_value = 'r'
_r_squared = 'r^2'
_std_err = 'Std Err'
_p_value = 'p value'
_group_name = 'Group'
def run(self):
out = []
# Remove any groups that are less than or equal to the minimum value from analysis.
small_grps = [grp for grp, seq in self.data.groups.items() if len(seq) <= self._min_size]
self.data.drop_groups(small_grps)
for grp, pairs in self.data.paired_groups.items():
slope, intercept, r, p_value, std_err = linregress(*pairs)
count = len(pairs[0])
out.append({
self._n: str(count),
self._slope: slope,
self._intercept: intercept,
self._r_value: r,
self._r_squared: r ** 2,
self._std_err: std_err,
self._p_value: p_value,
self._group_name: str(grp)
})
if not out:
raise NoDataError
self._results = DataFrame(out).sort_values(self._group_name).to_dict(orient='records')
def __str__(self):
order = (
self._n,
self._slope,
self._intercept,
self._r_squared,
self._std_err,
self._p_value,
self._group_name
)
return std_output(self._name, self._results, order=order)
@property
def counts(self):
return tuple(s[self._n] for s in self._results)
@property
def r_value(self):
return tuple(s[self._r_value] for s in self._results)
@property
def statistic(self):
return tuple(s[self._r_squared] for s in self._results)
@property
def p_value(self):
return tuple(s[self._p_value] for s in self._results)
@property
def slope(self):
return tuple(s[self._slope] for s in self._results)
@property
def intercept(self):
return tuple(s[self._intercept] for s in self._results)
@property
def r_squared(self):
return tuple(s[self._r_squared] for s in self._results)
@property
def std_err(self):
return tuple(s[self._std_err] for s in self._results)
| sci-analysis | /sci_analysis-2.2.1rc0.tar.gz/sci_analysis-2.2.1rc0/sci_analysis/analysis/comparison.py | comparison.py | quality_prob: 0.80213 | learning_prob: 0.474509 |
from .hypo_tests import NormTest, KSTest, TwoSampleKSTest, MannWhitney, TTest, Anova, Kruskal, EqualVariance
from .comparison import LinearRegression, Correlation, GroupCorrelation, GroupLinearRegression
from .stats import VectorStatistics, GroupStatistics, GroupStatisticsStacked, CategoricalStatistics
def determine_analysis_type(data, other=None, groups=None, labels=None):
"""Attempts to determine the type of data and returns the corresponding sci_analysis Data object.
Parameters
----------
data : array-like
The sequence of unknown data type.
other : array-like or None
A second sequence of unknown data type.
groups : array-like or None
The group names to include if data is determined to be a Vector.
labels : array-like or None
The sequence of data point labels.
Returns
-------
data : sci_analysis.data.Data
A subclass of sci_analysis Data that corresponds to the analysis type to perform.
"""
from numpy import (
float16, float32, float64,
int8, int16, int32, int64
)
from pandas import Series
from ..data import is_iterable, is_vector, is_categorical, Vector, Categorical
from .exc import NoDataError
numeric_types = [float16, float32, float64, int8, int16, int32, int64]
if not is_iterable(data):
raise ValueError('data cannot be a scalar value.')
elif len(data) == 0:
raise NoDataError
elif is_vector(data):
return data
elif is_categorical(data):
return data
else:
if not hasattr(data, 'dtype'):
data = Series(data)
if other is not None:
if not hasattr(other, 'dtype'):
other = Series(other)
if data.dtype in numeric_types:
if other is not None and other.dtype in numeric_types:
if groups is not None:
return Vector(data, other=other, groups=groups, labels=labels)
else:
return Vector(data, other=other, labels=labels)
else:
if groups is not None:
return Vector(data, groups=groups, labels=labels)
else:
return Vector(data, labels=labels)
else:
return Categorical(data)
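# Sketch of how determine_analysis_type dispatches on dtype (illustrative assumptions;
# numeric input is wrapped in a Vector, string input in a Categorical):
#
#     determine_analysis_type([1.0, 2.0, 3.0])                   # -> Vector of the numbers
#     determine_analysis_type([1.0, 2.0], other=[3.0, 4.0])      # -> Vector with paired 'other' data
#     determine_analysis_type(['red', 'blue', 'red'])             # -> Categorical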
def analyse(xdata, ydata=None, groups=None, labels=None, **kwargs):
"""
Alias for analyze.
Parameters
----------
xdata : array-like
The primary set of data.
ydata : array-like
The response or secondary set of data.
groups : array-like
The group names used for location testing or Bivariate analysis.
labels : array-like or None
The sequence of data point labels.
alpha : float
The sensitivity to use for hypothesis tests.
Returns
-------
    tested : list or None
        A list of the analyses performed when the 'debug' keyword argument is passed, otherwise None.
Notes
-----
xdata : array-like(num), ydata : None --- Distribution
xdata : array-like(str), ydata : None --- Frequencies
xdata : array-like(num), ydata : array-like(num) --- Bivariate
xdata : array-like(num), ydata : array-like(num), groups : array-like --- Group Bivariate
xdata : list(array-like(num)), ydata : None --- Location Test(unstacked)
xdata : list(array-like(num)), ydata : None, groups : array-like --- Location Test(unstacked)
xdata : dict(array-like(num)), ydata : None --- Location Test(unstacked)
xdata : array-like(num), ydata : None, groups : array-like --- Location Test(stacked)
"""
return analyze(xdata, ydata=ydata, groups=groups, labels=labels, **kwargs)
def analyze(xdata, ydata=None, groups=None, labels=None, alpha=0.05, **kwargs):
"""
Automatically performs a statistical analysis based on the input arguments.
Parameters
----------
xdata : array-like
The primary set of data.
ydata : array-like
The response or secondary set of data.
groups : array-like
The group names used for location testing or Bivariate analysis.
labels : array-like or None
The sequence of data point labels.
alpha : float
The sensitivity to use for hypothesis tests.
Returns
-------
    tested : list or None
        A list of the analyses performed when the 'debug' keyword argument is passed, otherwise None.
Notes
-----
xdata : array-like(num), ydata : None --- Distribution
xdata : array-like(str), ydata : None --- Frequencies
xdata : array-like(num), ydata : array-like(num) --- Bivariate
xdata : array-like(num), ydata : array-like(num), groups : array-like --- Group Bivariate
xdata : list(array-like(num)), ydata : None --- Location Test(unstacked)
xdata : list(array-like(num)), ydata : None, groups : array-like --- Location Test(unstacked)
xdata : dict(array-like(num)), ydata : None --- Location Test(unstacked)
xdata : array-like(num), ydata : None, groups : array-like --- Location Test(stacked)
"""
from ..graphs import GraphHisto, GraphScatter, GraphBoxplot, GraphFrequency, GraphGroupScatter
from ..data import (is_dict, is_iterable, is_group, is_dict_group, is_vector)
from .exc import NoDataError
debug = True if 'debug' in kwargs else False
tested = list()
if xdata is None:
raise ValueError("xdata was not provided.")
if not is_iterable(xdata):
raise TypeError("xdata is not an array.")
if len(xdata) == 0:
raise NoDataError("No data was passed to analyze")
# Compare Group Means and Variance
if is_group(xdata) or is_dict_group(xdata):
tested.append('Oneway')
if is_dict(xdata):
if groups is not None:
GraphBoxplot(xdata, groups=groups, **kwargs)
else:
GraphBoxplot(xdata, **kwargs)
groups = list(xdata.keys())
xdata = list(xdata.values())
else:
if groups is not None:
GraphBoxplot(*xdata, groups=groups, **kwargs)
else:
GraphBoxplot(*xdata, **kwargs)
out_stats = GroupStatistics(*xdata, groups=groups, display=False)
# Show the box plot and stats
print(out_stats)
if len(xdata) == 2:
norm = NormTest(*xdata, alpha=alpha, display=False)
if norm.p_value > alpha:
TTest(xdata[0], xdata[1], alpha=alpha)
tested.append('TTest')
elif len(xdata[0]) > 20 and len(xdata[1]) > 20:
MannWhitney(xdata[0], xdata[1], alpha=alpha)
tested.append('MannWhitney')
else:
TwoSampleKSTest(xdata[0], xdata[1], alpha=alpha)
tested.append('TwoSampleKSTest')
else:
e = EqualVariance(*xdata, alpha=alpha)
# If normally distributed and variances are equal, perform one-way ANOVA
# Otherwise, perform a non-parametric Kruskal-Wallis test
if e.test_type == 'Bartlett' and e.p_value > alpha:
Anova(*xdata, alpha=alpha)
tested.append('Anova')
else:
Kruskal(*xdata, alpha=alpha)
tested.append('Kruskal')
return tested if debug else None
if ydata is not None:
_data = determine_analysis_type(xdata, other=ydata, groups=groups, labels=labels)
else:
_data = determine_analysis_type(xdata, groups=groups, labels=labels)
if is_vector(_data) and not _data.other.empty:
# Correlation and Linear Regression
if len(_data.groups) > 1:
tested.append('Group Bivariate')
# Show the scatter plot, correlation and regression stats
GraphGroupScatter(_data, **kwargs)
GroupLinearRegression(_data, alpha=alpha)
GroupCorrelation(_data, alpha=alpha)
return tested if debug else None
else:
tested.append('Bivariate')
# Show the scatter plot, correlation and regression stats
GraphScatter(_data, **kwargs)
LinearRegression(_data, alpha=alpha)
Correlation(_data, alpha=alpha)
return tested if debug else None
elif is_vector(_data) and len(_data.groups) > 1:
# Compare Stacked Group Means and Variance
tested.append('Stacked Oneway')
# Show the box plot and stats
out_stats = GroupStatisticsStacked(_data, display=False)
GraphBoxplot(_data, gmean=out_stats.gmean, gmedian=out_stats.gmedian, **kwargs)
print(out_stats)
group_data = tuple(_data.groups.values())
if len(group_data) == 2:
norm = NormTest(*group_data, alpha=alpha, display=False)
if norm.p_value > alpha:
TTest(*group_data)
tested.append('TTest')
elif len(group_data[0]) > 20 and len(group_data[1]) > 20:
MannWhitney(*group_data)
tested.append('MannWhitney')
else:
TwoSampleKSTest(*group_data)
tested.append('TwoSampleKSTest')
else:
e = EqualVariance(*group_data, alpha=alpha)
if e.test_type == 'Bartlett' and e.p_value > alpha:
Anova(*group_data, alpha=alpha)
tested.append('Anova')
else:
Kruskal(*group_data, alpha=alpha)
tested.append('Kruskal')
return tested if debug else None
else:
# Histogram and Basic Stats or Categories and Frequencies
if is_vector(_data):
tested.append('Distribution')
# Show the histogram and stats
out_stats = VectorStatistics(_data, sample=kwargs.get('sample', False), display=False)
if 'distribution' in kwargs:
distro = kwargs['distribution']
distro_class = getattr(
__import__(
'scipy.stats',
globals(),
locals(),
[distro],
0,
),
distro,
)
parms = distro_class.fit(xdata)
fit = KSTest(xdata, distribution=distro, parms=parms, alpha=alpha, display=False)
tested.append('KSTest')
else:
fit = NormTest(xdata, alpha=alpha, display=False)
tested.append('NormTest')
GraphHisto(_data, mean=out_stats.mean, std_dev=out_stats.std_dev, **kwargs)
print(out_stats)
print(fit)
return tested if debug else None
else:
tested.append('Frequencies')
# Show the histogram and stats
GraphFrequency(_data, **kwargs)
CategoricalStatistics(xdata, **kwargs)
return tested if debug else None
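# Quick examples of the analyze() dispatch described in the Notes above (comments only;
# assumes matplotlib can show or save figures in the current session):
#
#     analyze([1.2, 0.9, 1.4, 1.1, 0.8])                          # Distribution: histogram + NormTest
#     analyze(['a', 'b', 'a', 'c'])                               # Frequencies: bar chart + counts
#     analyze([1, 2, 3, 4], [2.1, 3.9, 6.2, 8.1])                 # Bivariate: scatter, regression, correlation
#     analyze({'grp1': [1, 2, 3], 'grp2': [4, 5, 6]})             # Location test on unstacked groups
#     analyze([1, 2, 3, 4, 5, 6], groups=['a', 'a', 'a', 'b', 'b', 'b'])  # Stacked location test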
|
sci-analysis
|
/sci_analysis-2.2.1rc0.tar.gz/sci_analysis-2.2.1rc0/sci_analysis/analysis/__init__.py
|
__init__.py
|
| 0.898785 | 0.709227 |
from os import path, getcwd
import unittest
import warnings
from .. data import is_iterable
class TestWarnings(unittest.TestCase):
"""A TestCase subclass with assertWarns substitute to cover python 2.7 which doesn't have an assertWarns method."""
_seed = 987654321
@property
def save_path(self):
if getcwd().split('/')[-1] == 'test':
return './images/'
elif getcwd().split('/')[-1] == 'sci_analysis':
if path.exists('./setup.py'):
return './sci_analysis/test/images/'
else:
return './test/images/'
        else:
            return './'
def assertWarnsCrossCompatible(self, expected_warning, *args, **kwargs):
if 'message' in kwargs:
_message = kwargs['message']
kwargs.pop('message')
else:
_message = None
with warnings.catch_warnings(record=True) as warning_list:
warnings.simplefilter('always')
callable_obj = args[0]
args = args[1:]
callable_obj(*args, **kwargs)
            # This has to be done with for loops for py27 compatibility
for caught_warning in warning_list:
self.assertTrue(issubclass(caught_warning.category, expected_warning))
if _message is not None:
if is_iterable(_message):
for i, m in enumerate(_message):
self.assertTrue(m in str(warning_list[i].message))
else:
for caught_warning in warning_list:
self.assertTrue(_message in str(caught_warning.message))
def assertNotWarnsCrossCompatible(self, expected_warning, *args, **kwargs):
with warnings.catch_warnings(record=True) as warning_list:
warnings.simplefilter('always')
callable_obj = args[0]
args = args[1:]
callable_obj(*args, **kwargs)
self.assertFalse(any(item.category == expected_warning for item in warning_list))
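# Example of how a test case might use the cross-compatible assertions above
# (sketch only; noisy() and TestNoisy are placeholders, not part of this module):
#
#     import warnings
#
#     def noisy():
#         warnings.warn('something happened', UserWarning)
#
#     class TestNoisy(TestWarnings):
#         def test_warns(self):
#             self.assertWarnsCrossCompatible(UserWarning, noisy, message='something happened')
#
#         def test_does_not_warn(self):
#             self.assertNotWarnsCrossCompatible(FutureWarning, noisy)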
|
sci-analysis
|
/sci_analysis-2.2.1rc0.tar.gz/sci_analysis-2.2.1rc0/sci_analysis/test/base.py
|
base.py
|
| 0.440951 | 0.311152 |
import warnings
import six
from math import sqrt, fabs
# matplotlib imports
from matplotlib.pyplot import (
show, subplot, yticks, xlabel, ylabel, figure, setp, savefig, close, xticks, subplots_adjust
)
from matplotlib.gridspec import GridSpec
from matplotlib.patches import Circle
# Numpy imports
from numpy import (
polyfit, polyval, sort, arange, array, linspace, mgrid, vstack, std, sum, mean, median
)
# Scipy imports
from scipy.stats import probplot, gaussian_kde, t
# local imports
from .base import Graph
from ..data import Vector, is_dict, is_group, is_vector
from ..analysis.exc import NoDataError
def future(message):
warnings.warn(message, FutureWarning, stacklevel=2)
class VectorGraph(Graph):
def __init__(self, sequence, **kwargs):
"""Converts the data argument to a Vector object and sets it to the Graph
object's vector member. Sets the xname and yname arguments as the axis
labels. The default values are "x" and "y".
"""
if is_vector(sequence):
super(VectorGraph, self).__init__(sequence, **kwargs)
else:
super(VectorGraph, self).__init__(Vector(sequence), **kwargs)
if len(self._data.groups.keys()) == 0:
raise NoDataError("Cannot draw graph because there is no data.")
self.draw()
def draw(self):
"""
Prepares and displays the graph based on the set class members.
"""
raise NotImplementedError
class GraphHisto(VectorGraph):
"""Draws a histogram.
New class members are bins, color and box_plot. The bins member is the number
of histogram bins to draw. The color member is the color of the histogram area.
The box_plot member is a boolean flag for whether to draw the corresponding
box plot.
"""
_xsize = 5
_ysize = 4
def __init__(self, data, **kwargs):
"""GraphHisto constructor.
:param data: The data to be graphed.
:param _bins: The number of histogram bins to draw. This arg sets the bins member.
:param _name: The optional x-axis label.
:param _distribution: The theoretical distribution to fit.
:param _box_plot: Toggle the display of the optional boxplot.
:param _cdf: Toggle the display of the optional cumulative density function plot.
:param _fit: Toggle the display of the best fit line for the specified distribution.
:param _mean: The mean to be displayed on the graph title.
:param _std: The standard deviation to be displayed on the graph title.
:param _sample: Sets x-bar and s if true, else mu and sigma for displaying on the graph title.
:param _title: The title of the graph.
:param _save_to: Save the graph to the specified path.
:return: pass
"""
self._bins = kwargs.get('bins', 20)
self._distribution = kwargs.get('distribution', 'norm')
self._box_plot = kwargs.get('boxplot', True)
self._cdf = kwargs.get('cdf', False)
self._fit = kwargs.get('fit', False)
self._mean = kwargs.get('mean')
self._std = kwargs.get('std_dev')
self._sample = kwargs.get('sample', False)
self._title = kwargs.get('title', 'Distribution')
self._save_to = kwargs.get('save_to')
yname = kwargs.get('yname', 'Probability')
name = kwargs.get('name') or kwargs.get('xname') or 'Data'
super(GraphHisto, self).__init__(data, xname=name, yname=yname)
def fit_distro(self):
"""
Calculate the fit points for a specified distribution.
Returns
-------
fit_parms : tuple
First value - The x-axis points
Second value - The pdf y-axis points
Third value - The cdf y-axis points
"""
distro_class = getattr(
__import__(
'scipy.stats',
globals(),
locals(),
[self._distribution],
0,
),
self._distribution
)
parms = distro_class.fit(self._data.data)
distro = linspace(distro_class.ppf(0.001, *parms), distro_class.ppf(0.999, *parms), 100)
distro_pdf = distro_class.pdf(distro, *parms)
distro_cdf = distro_class.cdf(distro, *parms)
return distro, distro_pdf, distro_cdf
def calc_cdf(self):
"""
        Calculate the cdf points.
Returns
-------
coordinates : tuple
First value - The cdf x-axis points
Second value - The cdf y-axis points
"""
x_sorted_vector = sort(self._data.data)
if len(x_sorted_vector) == 0:
return 0, 0
y_sorted_vector = arange(len(x_sorted_vector) + 1) / float(len(x_sorted_vector))
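        # Repeat each sorted x value and pair it with consecutive y steps below so the
        # CDF plots as a step function instead of straight segments between points.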
x_cdf = array([x_sorted_vector, x_sorted_vector]).T.flatten()
y_cdf = array([y_sorted_vector[:(len(y_sorted_vector)-1)], y_sorted_vector[1:]]).T.flatten()
return x_cdf, y_cdf
def draw(self):
"""
Draws the histogram based on the set parameters.
Returns
-------
pass
"""
# Setup the grid variables
histo_span = 3
box_plot_span = 1
cdf_span = 3
h_ratios = [histo_span]
p = []
if self._box_plot:
self._ysize += 0.5
self._nrows += 1
h_ratios.insert(0, box_plot_span)
if self._cdf:
self._ysize += 2
self._nrows += 1
h_ratios.insert(0, cdf_span)
# Create the figure and grid spec
f = figure(figsize=(self._xsize, self._ysize))
gs = GridSpec(self._nrows, self._ncols, height_ratios=h_ratios, hspace=0)
# Set the title
title = self._title
if self._mean and self._std:
if self._sample:
title = r"{}{}$\bar x = {:.4f}, s = {:.4f}$".format(title, "\n", self._mean, self._std)
else:
title = r"{}{}$\mu = {:.4f}$, $\sigma = {:.4f}$".format(title, "\n", self._mean, self._std)
f.suptitle(title, fontsize=14)
# Adjust the bin size if it's greater than the vector size
if len(self._data.data) < self._bins:
self._bins = len(self._data.data)
# Fit the distribution
if self._fit:
distro, distro_pdf, distro_cdf = self.fit_distro()
else:
distro, distro_pdf, distro_cdf = None, None, None
# Draw the cdf
if self._cdf:
x_cdf, y_cdf = self.calc_cdf()
ax_cdf = subplot(gs[0])
ax_cdf.plot(x_cdf, y_cdf, 'k-')
ax_cdf.xaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
ax_cdf.yaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
p.append(ax_cdf.get_xticklabels())
if self._fit:
ax_cdf.plot(distro, distro_cdf, 'r--', linewidth=2)
yticks(arange(11) * 0.1)
ylabel("Cumulative Probability")
else:
ax_cdf = None
# Draw the box plot
if self._box_plot:
if self._cdf:
ax_box = subplot(gs[len(h_ratios) - 2], sharex=ax_cdf)
else:
ax_box = subplot(gs[len(h_ratios) - 2])
bp = ax_box.boxplot(self._data.data, vert=False, showmeans=True)
setp(bp['boxes'], color='k')
setp(bp['whiskers'], color='k')
vp = ax_box.violinplot(self._data.data, vert=False, showextrema=False, showmedians=False, showmeans=False)
setp(vp['bodies'], facecolors=self.get_color(0))
ax_box.xaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
yticks([])
p.append(ax_box.get_xticklabels())
ax_hist = subplot(gs[len(h_ratios) - 1], sharex=ax_box)
else:
ax_hist = subplot(gs[len(h_ratios) - 1])
# Draw the histogram
        # First try to use the density arg which replaced normed (which is now deprecated) in matplotlib 2.2.2
try:
ax_hist.hist(self._data.data, self._bins, density=True, color=self.get_color(0), zorder=0)
except TypeError:
ax_hist.hist(self._data.data, self._bins, normed=True, color=self.get_color(0), zorder=0)
ax_hist.xaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
ax_hist.yaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
if self._fit:
ax_hist.plot(distro, distro_pdf, 'r--', linewidth=2)
if len(p) > 0:
setp(p, visible=False)
# set the labels and display the figure
ylabel(self._yname)
xlabel(self._xname)
if self._save_to:
savefig(self._save_to)
close(f)
else:
show()
pass
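# Usage sketch for GraphHisto (comments only; the data values are illustrative and
# 'norm_fit.png' is just an assumed output path):
#
#     GraphHisto([1.2, 0.9, 1.4, 1.1, 0.8, 1.3, 1.0],
#                bins=10, cdf=True, fit=True, distribution='norm',
#                title='Example Distribution', save_to='norm_fit.png')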
class GraphScatter(VectorGraph):
"""Draws an x-by-y scatter plot.
Unique class members are fit and style. The fit member is a boolean flag for
whether to draw the linear best fit line. The style member is a tuple of
formatted strings that set the matplotlib point style and line style. It is
also worth noting that the vector member for the GraphScatter class is a
tuple of xdata and ydata.
"""
_nrows = 1
_ncols = 1
_xsize = 6
_ysize = 5
def __init__(self, xdata, ydata=None, **kwargs):
"""GraphScatter constructor.
:param xdata: The x-axis data.
:param ydata: The y-axis data.
:param fit: Display the optional line fit.
:param points: Display the scatter points.
:param contours: Display the density contours
:param boxplot_borders: Display the boxplot borders
:param highlight: an array-like with points to highlight based on labels
:param labels: a vector object with the graph labels
:param title: The title of the graph.
:param save_to: Save the graph to the specified path.
:return: pass
"""
self._fit = kwargs.get('fit', True)
self._points = kwargs.get('points', True)
self._labels = kwargs.get('labels', None)
self._highlight = kwargs.get('highlight', None)
self._contours = kwargs.get('contours', False)
self._contour_props = (31, 1.1)
self._boxplot_borders = kwargs.get('boxplot_borders', False)
self._title = kwargs['title'] if 'title' in kwargs else 'Bivariate'
self._save_to = kwargs.get('save_to', None)
yname = kwargs.get('yname', 'y Data')
xname = kwargs.get('xname', 'x Data')
if ydata is None:
if is_vector(xdata):
super(GraphScatter, self).__init__(xdata, xname=xname, yname=yname)
else:
raise AttributeError('ydata argument cannot be None.')
else:
super(GraphScatter, self).__init__(
Vector(xdata, other=ydata, labels=self._labels),
xname=xname,
yname=yname,
)
def calc_contours(self):
"""
Calculates the density contours.
Returns
-------
contour_parms : tuple
First value - x-axis points
Second value - y-axis points
Third value - z-axis points
Fourth value - The contour levels
"""
xmin = self._data.data.min()
xmax = self._data.data.max()
ymin = self._data.other.min()
ymax = self._data.other.max()
values = vstack([self._data.data, self._data.other])
kernel = gaussian_kde(values)
_x, _y = mgrid[xmin:xmax:100j, ymin:ymax:100j]
positions = vstack([_x.ravel(), _y.ravel()])
_z = kernel.evaluate(positions).T.reshape(_x.shape)
return _x, _y, _z, arange(_z.min(), _z.max(), (_z.max() - _z.min()) / self._contour_props[0])
def calc_fit(self):
"""
Calculates the best fit line using sum of squares.
Returns
-------
fit_coordinates : list
A list of the min and max fit points.
"""
x = self._data.data
y = self._data.other
p = polyfit(x, y, 1)
fit = polyval(p, x)
if p[0] > 0:
return (x.min(), x.max()), (fit.min(), fit.max())
else:
return (x.min(), x.max()), (fit.max(), fit.min())
def draw(self):
"""
Draws the scatter plot based on the set parameters.
Returns
-------
pass
"""
# Setup the grid variables
x = self._data.data
y = self._data.other
h_ratio = [1, 1]
w_ratio = [1, 1]
# Setup the figure and gridspec
if self._boxplot_borders:
self._nrows, self._ncols = 2, 2
self._xsize = self._xsize + 0.5
self._ysize = self._ysize + 0.5
h_ratio, w_ratio = (1.5, 5.5), (5.5, 1.5)
main_plot = 2
else:
main_plot = 0
# Setup the figure
f = figure(figsize=(self._xsize, self._ysize))
f.suptitle(self._title, fontsize=14)
if self._boxplot_borders:
gs = GridSpec(self._nrows, self._ncols, height_ratios=h_ratio, width_ratios=w_ratio, hspace=0, wspace=0)
else:
gs = GridSpec(self._nrows, self._ncols)
ax1 = None
ax3 = None
# Draw the boxplot borders
if self._boxplot_borders:
ax1 = subplot(gs[0])
ax3 = subplot(gs[3])
bpx = ax1.boxplot(x, vert=False, showmeans=True)
bpy = ax3.boxplot(y, vert=True, showmeans=True)
setp(bpx['boxes'], color='k')
setp(bpx['whiskers'], color='k')
setp(bpy['boxes'], color='k')
setp(bpy['whiskers'], color='k')
vpx = ax1.violinplot(x, vert=False, showmedians=False, showmeans=False, showextrema=False)
vpy = ax3.violinplot(y, vert=True, showmedians=False, showmeans=False, showextrema=False)
setp(vpx['bodies'], facecolors=self.get_color(0))
setp(vpy['bodies'], facecolors=self.get_color(0))
ax1.xaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
ax3.yaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
setp(
[
ax1.get_xticklabels(), ax1.get_yticklabels(), ax3.get_xticklabels(), ax3.get_yticklabels()
], visible=False
)
# Draw the main graph
ax2 = subplot(gs[main_plot], sharex=ax1, sharey=ax3)
# Draw the points
if self._points:
# A 2-D array needs to be passed to prevent matplotlib from applying the default cmap if the size < 4.
color = (self.get_color(0),)
alpha_trans = 0.7
if self._highlight is not None:
# Find index of the labels which are in the highlight list
labelmask = self._data.labels.isin(self._highlight)
# Get x and y position of those labels
x_labels = x.loc[labelmask]
y_labels = y.loc[labelmask]
x_nolabels = x.loc[~labelmask]
y_nolabels = y.loc[~labelmask]
ax2.scatter(x_labels, y_labels, c=color, marker='o', linewidths=0, alpha=alpha_trans, zorder=1)
ax2.scatter(x_nolabels, y_nolabels, c=color, marker='o', linewidths=0, alpha=.2, zorder=1)
for k in self._data.labels[labelmask].index:
ax2.annotate(self._data.labels[k], xy=(x[k], y[k]), alpha=1, color=color[0])
else:
ax2.scatter(x, y, c=color, marker='o', linewidths=0, alpha=alpha_trans, zorder=1)
# Draw the contours
if self._contours:
x_prime, y_prime, z, levels = self.calc_contours()
ax2.contour(x_prime, y_prime, z, levels, linewidths=self._contour_props[1], nchunk=16,
extend='both', zorder=2)
# Draw the fit line
if self._fit:
fit_x, fit_y = self.calc_fit()
ax2.plot(fit_x, fit_y, 'r--', linewidth=2, zorder=3)
# Draw the grid lines and labels
ax2.xaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
ax2.yaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
xlabel(self._xname)
ylabel(self._yname)
# Save the figure to disk or display
if self._save_to:
savefig(self._save_to)
close(f)
else:
show()
pass
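# Usage sketch for GraphScatter (comments only; values are illustrative):
#
#     GraphScatter([1, 2, 3, 4, 5], [2.2, 4.1, 5.9, 8.3, 9.8],
#                  fit=True, boxplot_borders=True, title='Example Bivariate')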
class GraphGroupScatter(VectorGraph):
"""Draws an x-by-y scatter plot with more than a single group.
Unique class members are fit and style. The fit member is a boolean flag for
whether to draw the linear best fit line. The style member is a tuple of
formatted strings that set the matplotlib point style and line style. It is
also worth noting that the vector member for the GraphScatter class is a
tuple of xdata and ydata.
"""
_nrows = 1
_ncols = 1
_xsize = 6
_ysize = 5
def __init__(self, xdata, ydata=None, groups=None, **kwargs):
"""GraphScatter constructor.
:param xdata: The x-axis data.
:param ydata: The y-axis data.
:param _fit: Display the optional line fit.
:param _highlight: Give list of groups to highlight in scatter.
:param _points: Display the scatter points.
:param _contours: Display the density contours
:param _boxplot_borders: Display the boxplot borders
:param _labels: a vector object with the graph labels
:param _title: The title of the graph.
:param _save_to: Save the graph to the specified path.
:return: pass
"""
self._fit = kwargs['fit'] if 'fit' in kwargs else True
self._points = kwargs['points'] if 'points' in kwargs else True
self._labels = kwargs['labels'] if 'labels' in kwargs else None
self._highlight = kwargs['highlight'] if 'highlight' in kwargs else None
self._boxplot_borders = kwargs['boxplot_borders'] if 'boxplot_borders' in kwargs else True
self._title = kwargs['title'] if 'title' in kwargs else 'Group Bivariate'
self._save_to = kwargs['save_to'] if 'save_to' in kwargs else None
yname = kwargs['yname'] if 'yname' in kwargs else 'y Data'
xname = kwargs['xname'] if 'xname' in kwargs else 'x Data'
if ydata is None:
if is_vector(xdata):
super(GraphGroupScatter, self).__init__(xdata, xname=xname, yname=yname)
else:
raise AttributeError('ydata argument cannot be None.')
else:
super(GraphGroupScatter, self).__init__(Vector(
xdata,
other=ydata,
groups=groups,
labels=self._labels
), xname=xname, yname=yname)
@staticmethod
def calc_fit(x, y):
"""
Calculates the best fit line using sum of squares.
Returns
-------
fit_coordinates : list
A list of the min and max fit points.
"""
p = polyfit(x, y, 1)
fit = polyval(p, x)
if p[0] > 0:
return (x.min(), x.max()), (fit.min(), fit.max())
else:
return (x.min(), x.max()), (fit.max(), fit.min())
def draw(self):
"""
Draws the scatter plot based on the set parameters.
Returns
-------
pass
"""
# Setup the grid variables
x = self._data.data
y = self._data.other
groups = sorted(self._data.groups.keys())
h_ratio = [1, 1]
w_ratio = [1, 1]
# Setup the figure and gridspec
if self._boxplot_borders:
self._nrows, self._ncols = 2, 2
self._xsize = self._xsize + 0.5
self._ysize = self._ysize + 0.5
h_ratio, w_ratio = (1.5, 5.5), (5.5, 1.5)
main_plot = 2
else:
main_plot = 0
# Setup the figure
f = figure(figsize=(self._xsize, self._ysize))
f.suptitle(self._title, fontsize=14)
if self._boxplot_borders:
gs = GridSpec(self._nrows, self._ncols, height_ratios=h_ratio, width_ratios=w_ratio, hspace=0, wspace=0)
else:
gs = GridSpec(self._nrows, self._ncols)
ax1 = None
ax3 = None
# Draw the boxplot borders
if self._boxplot_borders:
ax1 = subplot(gs[0])
ax3 = subplot(gs[3])
bpx = ax1.boxplot(x, vert=False, showmeans=True)
bpy = ax3.boxplot(y, vert=True, showmeans=True)
setp(bpx['boxes'], color='k')
setp(bpx['whiskers'], color='k')
setp(bpy['boxes'], color='k')
setp(bpy['whiskers'], color='k')
vpx = ax1.violinplot(x, vert=False, showmedians=False, showmeans=False, showextrema=False)
vpy = ax3.violinplot(y, vert=True, showmedians=False, showmeans=False, showextrema=False)
setp(vpx['bodies'], facecolors=self.get_color(0))
setp(vpy['bodies'], facecolors=self.get_color(0))
ax1.xaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
ax3.yaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
setp([ax1.get_xticklabels(), ax1.get_yticklabels(), ax3.get_xticklabels(), ax3.get_yticklabels()],
visible=False)
# Draw the main graph
ax2 = subplot(gs[main_plot], sharex=ax1, sharey=ax3)
for grp, (grp_x, grp_y) in self._data.paired_groups.items():
i = groups.index(grp)
alpha_trans = 0.65
if self._highlight is not None:
try:
if grp not in self._highlight:
alpha_trans = 0.2
except TypeError:
pass
if isinstance(grp, six.string_types) and len(grp) > 20:
grp = grp[0:21] + '...'
# Draw the points
if self._points:
# A 2-D array needs to be passed to prevent matplotlib from applying the default cmap if the size < 4.
color = (self.get_color(i),)
scatter_kwargs = dict(
c=color,
marker='o',
linewidths=0,
zorder=1,
)
# Draw the point labels
if self._data.has_labels and self._highlight is not None:
# If a group is in highlights and labels are also given
if grp in self._highlight:
scatter_kwargs.update(
dict(
alpha=alpha_trans,
label=grp
)
)
ax2.scatter(grp_x, grp_y, **scatter_kwargs)
# Highlight the specified labels
else:
labelmask = self._data.group_labels[grp].isin(self._highlight)
# Get x and y position of those labels
x_labels = grp_x.loc[labelmask]
y_labels = grp_y.loc[labelmask]
x_nolabels = grp_x.loc[~labelmask]
y_nolabels = grp_y.loc[~labelmask]
scatter_kwargs.update(
dict(
alpha=0.65,
label=grp if any(labelmask) else None,
)
)
ax2.scatter(x_labels, y_labels, **scatter_kwargs)
scatter_kwargs.update(
dict(
alpha=0.2,
label=None if any(labelmask) else grp,
)
)
ax2.scatter(x_nolabels, y_nolabels, **scatter_kwargs)
# Add the annotations
for k in self._data.group_labels[grp][labelmask].index:
clr = color[0]
ax2.annotate(self._data.group_labels[grp][k], xy=(grp_x[k], grp_y[k]), alpha=1, color=clr)
else:
scatter_kwargs.update(
dict(
alpha=alpha_trans,
label=grp,
)
)
ax2.scatter(grp_x, grp_y, **scatter_kwargs)
# Draw the fit line
if self._fit:
fit_x, fit_y = self.calc_fit(grp_x, grp_y)
if self._points:
ax2.plot(fit_x, fit_y, linestyle='--', color=self.get_color(i), linewidth=2, zorder=2)
else:
ax2.plot(fit_x, fit_y, linestyle='--', color=self.get_color(i), linewidth=2, zorder=2, label=grp)
# Draw the legend
if (self._fit or self._points) and len(groups) > 1:
ax2.legend(loc='best')
# Draw the grid lines and labels
ax2.xaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
ax2.yaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
xlabel(self._xname)
ylabel(self._yname)
# Save the figure to disk or display
if self._save_to:
savefig(self._save_to)
close(f)
else:
show()
pass
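# Usage sketch for GraphGroupScatter (comments only; values are illustrative):
#
#     x = [1, 2, 3, 1, 2, 3]
#     y = [2.0, 4.1, 6.2, 3.1, 6.0, 9.1]
#     g = ['a', 'a', 'a', 'b', 'b', 'b']
#     GraphGroupScatter(x, y, groups=g, fit=True, highlight=['b'])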
class GraphBoxplot(VectorGraph):
"""Draws box plots of the provided data as well as an optional probability plot.
Unique class members are groups, nqp and prob. The groups member is a list of
labels for each boxplot. If groups is an empty list, sequentially ascending
numbers are used for each boxplot. The nqp member is a flag that turns the
probability plot on or off. The prob member is a list of tuples that contains
the data used to graph the probability plot. It is also worth noting that the
vector member for the GraphBoxplot is a list of lists that contain the data
for each boxplot.
"""
_nrows = 1
_ncols = 1
_xsize = 5.75
_ysize = 5
_default_alpha = 0.05
def __init__(self, *args, **kwargs):
"""GraphBoxplot constructor. NOTE: If vectors is a dict, the boxplots are
graphed in random order instead of the provided order.
:param groups: An optional list of boxplot labels. The order should match the order in vectors.
:param nqp: Display the optional probability plot.
:param _title: The title of the graph.
:param _save_to: Save the graph to the specified path.
:return: pass
"""
name = kwargs['name'] if 'name' in kwargs else 'Values'
categories = kwargs['categories'] if 'categories' in kwargs else 'Categories'
xname = kwargs['xname'] if 'xname' in kwargs else categories
yname = kwargs['yname'] if 'yname' in kwargs else name
self._nqp = kwargs['nqp'] if 'nqp' in kwargs else True
self._save_to = kwargs['save_to'] if 'save_to' in kwargs else None
self._gmean = kwargs['gmean'] if 'gmean' in kwargs else True
self._gmedian = kwargs['gmedian'] if 'gmedian' in kwargs else True
self._circles = kwargs['circles'] if 'circles' in kwargs else True
self._alpha = kwargs['alpha'] if 'alpha' in kwargs else self._default_alpha
if 'title' in kwargs:
self._title = kwargs['title']
elif self._nqp:
self._title = 'Oneway and Normal Quantile Plot'
else:
self._title = 'Oneway'
if is_vector(args[0]):
data = args[0]
elif is_dict(args[0]):
data = Vector()
for g, d in args[0].items():
data.append(Vector(d, groups=[g] * len(d)))
else:
if is_group(args) and len(args) > 1:
future('Graphing boxplots by passing multiple arguments will be removed in a future version. '
'Instead, pass unstacked arguments as a dictionary.')
data = Vector()
if 'groups' in kwargs:
if len(kwargs['groups']) != len(args):
                        raise AttributeError('The length of the passed groups does not match the number of passed data arguments.')
for g, d in zip(kwargs['groups'], args):
data.append(Vector(d, groups=[g] * len(d)))
else:
for d in args:
data.append(Vector(d))
else:
if 'groups' in kwargs:
if len(kwargs['groups']) != len(args[0]):
                    raise AttributeError('The length of the passed groups does not match the number of passed data arguments.')
data = Vector(args[0], groups=kwargs['groups'])
else:
data = Vector(args[0])
super(GraphBoxplot, self).__init__(data, xname=xname, yname=yname, save_to=self._save_to)
@staticmethod
def grand_mean(data):
return mean([mean(sample) for sample in data])
@staticmethod
def grand_median(data):
return median([median(sample) for sample in data])
def tukey_circles(self, data):
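        # Sketch of the Tukey-Kramer computation performed below: pool the within-group
        # variances into a mean squared error (MSE), then set each group's circle radius
        # to |t_crit| * sqrt(MSE / n_i), centered on that group's mean.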
num = []
den = []
crit = []
radii = []
xbar = []
for sample in data:
df = len(sample) - 1
num.append(std(sample, ddof=1) ** 2 * df)
den.append(df)
crit.append(t.ppf(1 - self._alpha, df))
mse = sum(num) / sum(den)
for i, sample in enumerate(data):
radii.append(fabs(crit[i]) * sqrt(mse / len(sample)))
xbar.append(mean(sample))
return tuple(zip(xbar, radii))
def draw(self):
"""Draws the boxplots based on the set parameters."""
# Setup the grid variables
w_ratio = [1]
if self._circles:
w_ratio = [4, 1]
self._ncols += 1
if self._nqp:
w_ratio.append(4 if self._circles else 1)
self._ncols += 1
groups, data = zip(*[
(g, v['ind'].reset_index(drop=True)) for g, v in self._data.values.groupby('grp') if not v.empty]
)
# Create the quantile plot arrays
prob = [probplot(v) for v in data]
# Create the figure and gridspec
if self._nqp and len(prob) > 0:
self._xsize *= 2
f = figure(figsize=(self._xsize, self._ysize))
f.suptitle(self._title, fontsize=14)
gs = GridSpec(self._nrows, self._ncols, width_ratios=w_ratio, wspace=0)
# Draw the boxplots
ax1 = subplot(gs[0])
bp = ax1.boxplot(data, showmeans=True, labels=groups)
setp(bp['boxes'], color='k')
setp(bp['whiskers'], color='k')
vp = ax1.violinplot(data, showextrema=False, showmedians=False, showmeans=False)
for i in range(len(groups)):
setp(vp['bodies'][i], facecolors=self.get_color(i))
ax1.yaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
if self._gmean:
ax1.axhline(float(self.grand_mean(data)), c='k', linestyle='--', alpha=0.4)
if self._gmedian:
ax1.axhline(float(self.grand_median(data)), c='k', linestyle=':', alpha=0.4)
        if any(len(str(g)) > 9 for g in groups) or len(groups) > 5:
xticks(rotation=60)
subplots_adjust(bottom=0.2)
ylabel(self._yname)
xlabel(self._xname)
# Draw the Tukey-Kramer circles
if self._circles:
ax2 = subplot(gs[1], sharey=ax1)
for i, (center, radius) in enumerate(self.tukey_circles(data)):
c = Circle((0.5, center), radius=radius, facecolor='none', edgecolor=self.get_color(i))
ax2.add_patch(c)
# matplotlib 2.2.2 requires adjustable='datalim' to display properly.
ax2.set_aspect('equal', adjustable='datalim')
setp(ax2.get_xticklabels(), visible=False)
setp(ax2.get_yticklabels(), visible=False)
ax2.set_xticks([])
# Draw the normal quantile plot
if self._nqp and len(prob) > 0:
ax3 = subplot(gs[2], sharey=ax1) if self._circles else subplot(gs[1], sharey=ax1)
for i, g in enumerate(prob):
osm = g[0][0]
osr = g[0][1]
slope = g[1][0]
intercept = g[1][1]
ax3.plot(osm, osr, marker='^', color=self.get_color(i), label=groups[i])
ax3.plot(osm, slope * osm + intercept, linestyle='--', linewidth=2, color=self.get_color(i))
ax3.xaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
ax3.yaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
ax3.legend(loc='best')
xlabel("Quantiles")
setp(ax3.get_yticklabels(), visible=False)
# Save the figure to disk or display
if self._save_to:
savefig(self._save_to)
close(f)
else:
show()
pass
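# Usage sketch for GraphBoxplot (comments only; values are illustrative):
#
#     GraphBoxplot({'control': [1.1, 0.9, 1.0, 1.2], 'treated': [1.4, 1.6, 1.5, 1.3]},
#                  nqp=True, circles=True, title='Example Oneway')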
|
sci-analysis
|
/sci_analysis-2.2.1rc0.tar.gz/sci_analysis-2.2.1rc0/sci_analysis/graphs/vector.py
|
vector.py
|
import warnings
import six
from math import sqrt, fabs
# matplotlib imports
from matplotlib.pyplot import (
show, subplot, yticks, xlabel, ylabel, figure, setp, savefig, close, xticks, subplots_adjust
)
from matplotlib.gridspec import GridSpec
from matplotlib.patches import Circle
# Numpy imports
from numpy import (
polyfit, polyval, sort, arange, array, linspace, mgrid, vstack, std, sum, mean, median
)
# Scipy imports
from scipy.stats import probplot, gaussian_kde, t
# local imports
from .base import Graph
from ..data import Vector, is_dict, is_group, is_vector
from ..analysis.exc import NoDataError
def future(message):
warnings.warn(message, FutureWarning, stacklevel=2)
class VectorGraph(Graph):
def __init__(self, sequence, **kwargs):
"""Converts the data argument to a Vector object and sets it to the Graph
object's vector member. Sets the xname and yname arguments as the axis
labels. The default values are "x" and "y".
"""
if is_vector(sequence):
super(VectorGraph, self).__init__(sequence, **kwargs)
else:
super(VectorGraph, self).__init__(Vector(sequence), **kwargs)
if len(self._data.groups.keys()) == 0:
raise NoDataError("Cannot draw graph because there is no data.")
self.draw()
def draw(self):
"""
Prepares and displays the graph based on the set class members.
"""
raise NotImplementedError
class GraphHisto(VectorGraph):
"""Draws a histogram.
New class members are bins, color and box_plot. The bins member is the number
of histogram bins to draw. The color member is the color of the histogram area.
The box_plot member is a boolean flag for whether to draw the corresponding
box plot.
"""
_xsize = 5
_ysize = 4
def __init__(self, data, **kwargs):
"""GraphHisto constructor.
:param data: The data to be graphed.
:param _bins: The number of histogram bins to draw. This arg sets the bins member.
:param _name: The optional x-axis label.
:param _distribution: The theoretical distribution to fit.
:param _box_plot: Toggle the display of the optional boxplot.
:param _cdf: Toggle the display of the optional cumulative density function plot.
:param _fit: Toggle the display of the best fit line for the specified distribution.
:param _mean: The mean to be displayed on the graph title.
:param _std: The standard deviation to be displayed on the graph title.
:param _sample: Sets x-bar and s if true, else mu and sigma for displaying on the graph title.
:param _title: The title of the graph.
:param _save_to: Save the graph to the specified path.
:return: pass
"""
self._bins = kwargs.get('bins', 20)
self._distribution = kwargs.get('distribution', 'norm')
self._box_plot = kwargs.get('boxplot', True)
self._cdf = kwargs.get('cdf', False)
self._fit = kwargs.get('fit', False)
self._mean = kwargs.get('mean')
self._std = kwargs.get('std_dev')
self._sample = kwargs.get('sample', False)
self._title = kwargs.get('title', 'Distribution')
self._save_to = kwargs.get('save_to')
yname = kwargs.get('yname', 'Probability')
name = kwargs.get('name') or kwargs.get('xname') or 'Data'
super(GraphHisto, self).__init__(data, xname=name, yname=yname)
def fit_distro(self):
"""
Calculate the fit points for a specified distribution.
Returns
-------
fit_parms : tuple
First value - The x-axis points
Second value - The pdf y-axis points
Third value - The cdf y-axis points
"""
distro_class = getattr(
__import__(
'scipy.stats',
globals(),
locals(),
[self._distribution],
0,
),
self._distribution
)
parms = distro_class.fit(self._data.data)
distro = linspace(distro_class.ppf(0.001, *parms), distro_class.ppf(0.999, *parms), 100)
distro_pdf = distro_class.pdf(distro, *parms)
distro_cdf = distro_class.cdf(distro, *parms)
return distro, distro_pdf, distro_cdf
def calc_cdf(self):
"""
        Calculate the cdf points.
Returns
-------
coordinates : tuple
First value - The cdf x-axis points
Second value - The cdf y-axis points
"""
x_sorted_vector = sort(self._data.data)
if len(x_sorted_vector) == 0:
return 0, 0
y_sorted_vector = arange(len(x_sorted_vector) + 1) / float(len(x_sorted_vector))
x_cdf = array([x_sorted_vector, x_sorted_vector]).T.flatten()
y_cdf = array([y_sorted_vector[:(len(y_sorted_vector)-1)], y_sorted_vector[1:]]).T.flatten()
return x_cdf, y_cdf
def draw(self):
"""
Draws the histogram based on the set parameters.
Returns
-------
pass
"""
# Setup the grid variables
histo_span = 3
box_plot_span = 1
cdf_span = 3
h_ratios = [histo_span]
p = []
if self._box_plot:
self._ysize += 0.5
self._nrows += 1
h_ratios.insert(0, box_plot_span)
if self._cdf:
self._ysize += 2
self._nrows += 1
h_ratios.insert(0, cdf_span)
# Create the figure and grid spec
f = figure(figsize=(self._xsize, self._ysize))
gs = GridSpec(self._nrows, self._ncols, height_ratios=h_ratios, hspace=0)
# Set the title
title = self._title
if self._mean and self._std:
if self._sample:
title = r"{}{}$\bar x = {:.4f}, s = {:.4f}$".format(title, "\n", self._mean, self._std)
else:
title = r"{}{}$\mu = {:.4f}$, $\sigma = {:.4f}$".format(title, "\n", self._mean, self._std)
f.suptitle(title, fontsize=14)
# Adjust the bin size if it's greater than the vector size
if len(self._data.data) < self._bins:
self._bins = len(self._data.data)
# Fit the distribution
if self._fit:
distro, distro_pdf, distro_cdf = self.fit_distro()
else:
distro, distro_pdf, distro_cdf = None, None, None
# Draw the cdf
if self._cdf:
x_cdf, y_cdf = self.calc_cdf()
ax_cdf = subplot(gs[0])
ax_cdf.plot(x_cdf, y_cdf, 'k-')
ax_cdf.xaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
ax_cdf.yaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
p.append(ax_cdf.get_xticklabels())
if self._fit:
ax_cdf.plot(distro, distro_cdf, 'r--', linewidth=2)
yticks(arange(11) * 0.1)
ylabel("Cumulative Probability")
else:
ax_cdf = None
# Draw the box plot
if self._box_plot:
if self._cdf:
ax_box = subplot(gs[len(h_ratios) - 2], sharex=ax_cdf)
else:
ax_box = subplot(gs[len(h_ratios) - 2])
bp = ax_box.boxplot(self._data.data, vert=False, showmeans=True)
setp(bp['boxes'], color='k')
setp(bp['whiskers'], color='k')
vp = ax_box.violinplot(self._data.data, vert=False, showextrema=False, showmedians=False, showmeans=False)
setp(vp['bodies'], facecolors=self.get_color(0))
ax_box.xaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
yticks([])
p.append(ax_box.get_xticklabels())
ax_hist = subplot(gs[len(h_ratios) - 1], sharex=ax_box)
else:
ax_hist = subplot(gs[len(h_ratios) - 1])
# Draw the histogram
        # First try to use the density arg which replaced normed (which is now deprecated) in matplotlib 2.2.2
try:
ax_hist.hist(self._data.data, self._bins, density=True, color=self.get_color(0), zorder=0)
except TypeError:
ax_hist.hist(self._data.data, self._bins, normed=True, color=self.get_color(0), zorder=0)
ax_hist.xaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
ax_hist.yaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
if self._fit:
ax_hist.plot(distro, distro_pdf, 'r--', linewidth=2)
if len(p) > 0:
setp(p, visible=False)
# set the labels and display the figure
ylabel(self._yname)
xlabel(self._xname)
if self._save_to:
savefig(self._save_to)
close(f)
else:
show()
pass
class GraphScatter(VectorGraph):
"""Draws an x-by-y scatter plot.
Unique class members are fit and style. The fit member is a boolean flag for
whether to draw the linear best fit line. The style member is a tuple of
formatted strings that set the matplotlib point style and line style. It is
also worth noting that the vector member for the GraphScatter class is a
tuple of xdata and ydata.
"""
_nrows = 1
_ncols = 1
_xsize = 6
_ysize = 5
def __init__(self, xdata, ydata=None, **kwargs):
"""GraphScatter constructor.
:param xdata: The x-axis data.
:param ydata: The y-axis data.
:param fit: Display the optional line fit.
:param points: Display the scatter points.
:param contours: Display the density contours
:param boxplot_borders: Display the boxplot borders
:param highlight: an array-like with points to highlight based on labels
:param labels: a vector object with the graph labels
:param title: The title of the graph.
:param save_to: Save the graph to the specified path.
:return: pass
"""
self._fit = kwargs.get('fit', True)
self._points = kwargs.get('points', True)
self._labels = kwargs.get('labels', None)
self._highlight = kwargs.get('highlight', None)
self._contours = kwargs.get('contours', False)
self._contour_props = (31, 1.1)
self._boxplot_borders = kwargs.get('boxplot_borders', False)
self._title = kwargs['title'] if 'title' in kwargs else 'Bivariate'
self._save_to = kwargs.get('save_to', None)
yname = kwargs.get('yname', 'y Data')
xname = kwargs.get('xname', 'x Data')
if ydata is None:
if is_vector(xdata):
super(GraphScatter, self).__init__(xdata, xname=xname, yname=yname)
else:
raise AttributeError('ydata argument cannot be None.')
else:
super(GraphScatter, self).__init__(
Vector(xdata, other=ydata, labels=self._labels),
xname=xname,
yname=yname,
)
def calc_contours(self):
"""
Calculates the density contours.
Returns
-------
contour_parms : tuple
First value - x-axis points
Second value - y-axis points
Third value - z-axis points
Fourth value - The contour levels
"""
xmin = self._data.data.min()
xmax = self._data.data.max()
ymin = self._data.other.min()
ymax = self._data.other.max()
values = vstack([self._data.data, self._data.other])
kernel = gaussian_kde(values)
_x, _y = mgrid[xmin:xmax:100j, ymin:ymax:100j]
positions = vstack([_x.ravel(), _y.ravel()])
_z = kernel.evaluate(positions).T.reshape(_x.shape)
return _x, _y, _z, arange(_z.min(), _z.max(), (_z.max() - _z.min()) / self._contour_props[0])
def calc_fit(self):
"""
Calculates the best fit line using sum of squares.
Returns
-------
fit_coordinates : list
A list of the min and max fit points.
"""
x = self._data.data
y = self._data.other
p = polyfit(x, y, 1)
fit = polyval(p, x)
if p[0] > 0:
return (x.min(), x.max()), (fit.min(), fit.max())
else:
return (x.min(), x.max()), (fit.max(), fit.min())
def draw(self):
"""
Draws the scatter plot based on the set parameters.
Returns
-------
pass
"""
# Setup the grid variables
x = self._data.data
y = self._data.other
h_ratio = [1, 1]
w_ratio = [1, 1]
# Setup the figure and gridspec
if self._boxplot_borders:
self._nrows, self._ncols = 2, 2
self._xsize = self._xsize + 0.5
self._ysize = self._ysize + 0.5
h_ratio, w_ratio = (1.5, 5.5), (5.5, 1.5)
main_plot = 2
else:
main_plot = 0
# Setup the figure
f = figure(figsize=(self._xsize, self._ysize))
f.suptitle(self._title, fontsize=14)
if self._boxplot_borders:
gs = GridSpec(self._nrows, self._ncols, height_ratios=h_ratio, width_ratios=w_ratio, hspace=0, wspace=0)
else:
gs = GridSpec(self._nrows, self._ncols)
ax1 = None
ax3 = None
# Draw the boxplot borders
if self._boxplot_borders:
ax1 = subplot(gs[0])
ax3 = subplot(gs[3])
bpx = ax1.boxplot(x, vert=False, showmeans=True)
bpy = ax3.boxplot(y, vert=True, showmeans=True)
setp(bpx['boxes'], color='k')
setp(bpx['whiskers'], color='k')
setp(bpy['boxes'], color='k')
setp(bpy['whiskers'], color='k')
vpx = ax1.violinplot(x, vert=False, showmedians=False, showmeans=False, showextrema=False)
vpy = ax3.violinplot(y, vert=True, showmedians=False, showmeans=False, showextrema=False)
setp(vpx['bodies'], facecolors=self.get_color(0))
setp(vpy['bodies'], facecolors=self.get_color(0))
ax1.xaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
ax3.yaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
setp(
[
ax1.get_xticklabels(), ax1.get_yticklabels(), ax3.get_xticklabels(), ax3.get_yticklabels()
], visible=False
)
# Draw the main graph
ax2 = subplot(gs[main_plot], sharex=ax1, sharey=ax3)
# Draw the points
if self._points:
# A 2-D array needs to be passed to prevent matplotlib from applying the default cmap if the size < 4.
color = (self.get_color(0),)
alpha_trans = 0.7
if self._highlight is not None:
# Find index of the labels which are in the highlight list
labelmask = self._data.labels.isin(self._highlight)
# Get x and y position of those labels
x_labels = x.loc[labelmask]
y_labels = y.loc[labelmask]
x_nolabels = x.loc[~labelmask]
y_nolabels = y.loc[~labelmask]
ax2.scatter(x_labels, y_labels, c=color, marker='o', linewidths=0, alpha=alpha_trans, zorder=1)
ax2.scatter(x_nolabels, y_nolabels, c=color, marker='o', linewidths=0, alpha=.2, zorder=1)
for k in self._data.labels[labelmask].index:
ax2.annotate(self._data.labels[k], xy=(x[k], y[k]), alpha=1, color=color[0])
else:
ax2.scatter(x, y, c=color, marker='o', linewidths=0, alpha=alpha_trans, zorder=1)
# Draw the contours
if self._contours:
x_prime, y_prime, z, levels = self.calc_contours()
ax2.contour(x_prime, y_prime, z, levels, linewidths=self._contour_props[1], nchunk=16,
extend='both', zorder=2)
# Draw the fit line
if self._fit:
fit_x, fit_y = self.calc_fit()
ax2.plot(fit_x, fit_y, 'r--', linewidth=2, zorder=3)
# Draw the grid lines and labels
ax2.xaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
ax2.yaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
xlabel(self._xname)
ylabel(self._yname)
# Save the figure to disk or display
if self._save_to:
savefig(self._save_to)
close(f)
else:
show()
pass
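# A standalone sketch (illustrative only, not library code) of the density-contour idea
# used by GraphScatter.calc_contours above: estimate a 2-D kernel density on a regular
# grid, then derive evenly spaced contour levels from the estimated density surface.
import numpy as np
from scipy.stats import gaussian_kde as _demo_kde
_demo_rng = np.random.default_rng(0)
_demo_x = _demo_rng.normal(size=200)
_demo_y = 0.5 * _demo_x + _demo_rng.normal(scale=0.5, size=200)
_demo_kernel = _demo_kde(np.vstack([_demo_x, _demo_y]))
_gx, _gy = np.mgrid[_demo_x.min():_demo_x.max():100j, _demo_y.min():_demo_y.max():100j]
_gz = _demo_kernel.evaluate(np.vstack([_gx.ravel(), _gy.ravel()])).reshape(_gx.shape)
# 31 evenly spaced levels, matching the first value of _contour_props above.
_demo_levels = np.arange(_gz.min(), _gz.max(), (_gz.max() - _gz.min()) / 31)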
class GraphGroupScatter(VectorGraph):
"""Draws an x-by-y scatter plot with more than a single group.
Unique class members are fit and style. The fit member is a boolean flag for
whether to draw the linear best fit line. The style member is a tuple of
formatted strings that set the matplotlib point style and line style. It is
also worth noting that the vector member for the GraphScatter class is a
tuple of xdata and ydata.
"""
_nrows = 1
_ncols = 1
_xsize = 6
_ysize = 5
def __init__(self, xdata, ydata=None, groups=None, **kwargs):
"""GraphScatter constructor.
:param xdata: The x-axis data.
:param ydata: The y-axis data.
:param _fit: Display the optional line fit.
:param _highlight: Give list of groups to highlight in scatter.
:param _points: Display the scatter points.
:param _contours: Display the density contours
:param _boxplot_borders: Display the boxplot borders
:param _labels: a vector object with the graph labels
:param _title: The title of the graph.
:param _save_to: Save the graph to the specified path.
:return: pass
"""
self._fit = kwargs['fit'] if 'fit' in kwargs else True
self._points = kwargs['points'] if 'points' in kwargs else True
self._labels = kwargs['labels'] if 'labels' in kwargs else None
self._highlight = kwargs['highlight'] if 'highlight' in kwargs else None
self._boxplot_borders = kwargs['boxplot_borders'] if 'boxplot_borders' in kwargs else True
self._title = kwargs['title'] if 'title' in kwargs else 'Group Bivariate'
self._save_to = kwargs['save_to'] if 'save_to' in kwargs else None
yname = kwargs['yname'] if 'yname' in kwargs else 'y Data'
xname = kwargs['xname'] if 'xname' in kwargs else 'x Data'
if ydata is None:
if is_vector(xdata):
super(GraphGroupScatter, self).__init__(xdata, xname=xname, yname=yname)
else:
raise AttributeError('ydata argument cannot be None.')
else:
super(GraphGroupScatter, self).__init__(Vector(
xdata,
other=ydata,
groups=groups,
labels=self._labels
), xname=xname, yname=yname)
@staticmethod
def calc_fit(x, y):
"""
Calculates the best fit line using sum of squares.
Returns
-------
fit_coordinates : list
A list of the min and max fit points.
"""
p = polyfit(x, y, 1)
fit = polyval(p, x)
if p[0] > 0:
return (x.min(), x.max()), (fit.min(), fit.max())
else:
return (x.min(), x.max()), (fit.max(), fit.min())
def draw(self):
"""
Draws the scatter plot based on the set parameters.
Returns
-------
pass
"""
# Setup the grid variables
x = self._data.data
y = self._data.other
groups = sorted(self._data.groups.keys())
h_ratio = [1, 1]
w_ratio = [1, 1]
# Setup the figure and gridspec
if self._boxplot_borders:
self._nrows, self._ncols = 2, 2
self._xsize = self._xsize + 0.5
self._ysize = self._ysize + 0.5
h_ratio, w_ratio = (1.5, 5.5), (5.5, 1.5)
main_plot = 2
else:
main_plot = 0
# Setup the figure
f = figure(figsize=(self._xsize, self._ysize))
f.suptitle(self._title, fontsize=14)
if self._boxplot_borders:
gs = GridSpec(self._nrows, self._ncols, height_ratios=h_ratio, width_ratios=w_ratio, hspace=0, wspace=0)
else:
gs = GridSpec(self._nrows, self._ncols)
ax1 = None
ax3 = None
# Draw the boxplot borders
if self._boxplot_borders:
ax1 = subplot(gs[0])
ax3 = subplot(gs[3])
bpx = ax1.boxplot(x, vert=False, showmeans=True)
bpy = ax3.boxplot(y, vert=True, showmeans=True)
setp(bpx['boxes'], color='k')
setp(bpx['whiskers'], color='k')
setp(bpy['boxes'], color='k')
setp(bpy['whiskers'], color='k')
vpx = ax1.violinplot(x, vert=False, showmedians=False, showmeans=False, showextrema=False)
vpy = ax3.violinplot(y, vert=True, showmedians=False, showmeans=False, showextrema=False)
setp(vpx['bodies'], facecolors=self.get_color(0))
setp(vpy['bodies'], facecolors=self.get_color(0))
ax1.xaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
ax3.yaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
setp([ax1.get_xticklabels(), ax1.get_yticklabels(), ax3.get_xticklabels(), ax3.get_yticklabels()],
visible=False)
# Draw the main graph
ax2 = subplot(gs[main_plot], sharex=ax1, sharey=ax3)
for grp, (grp_x, grp_y) in self._data.paired_groups.items():
i = groups.index(grp)
alpha_trans = 0.65
if self._highlight is not None:
try:
if grp not in self._highlight:
alpha_trans = 0.2
except TypeError:
pass
if isinstance(grp, six.string_types) and len(grp) > 20:
grp = grp[0:21] + '...'
# Draw the points
if self._points:
# A 2-D array needs to be passed to prevent matplotlib from applying the default cmap if the size < 4.
color = (self.get_color(i),)
scatter_kwargs = dict(
c=color,
marker='o',
linewidths=0,
zorder=1,
)
# Draw the point labels
if self._data.has_labels and self._highlight is not None:
# If a group is in highlights and labels are also given
if grp in self._highlight:
scatter_kwargs.update(
dict(
alpha=alpha_trans,
label=grp
)
)
ax2.scatter(grp_x, grp_y, **scatter_kwargs)
# Highlight the specified labels
else:
labelmask = self._data.group_labels[grp].isin(self._highlight)
# Get x and y position of those labels
x_labels = grp_x.loc[labelmask]
y_labels = grp_y.loc[labelmask]
x_nolabels = grp_x.loc[~labelmask]
y_nolabels = grp_y.loc[~labelmask]
scatter_kwargs.update(
dict(
alpha=0.65,
label=grp if any(labelmask) else None,
)
)
ax2.scatter(x_labels, y_labels, **scatter_kwargs)
scatter_kwargs.update(
dict(
alpha=0.2,
label=None if any(labelmask) else grp,
)
)
ax2.scatter(x_nolabels, y_nolabels, **scatter_kwargs)
# Add the annotations
for k in self._data.group_labels[grp][labelmask].index:
clr = color[0]
ax2.annotate(self._data.group_labels[grp][k], xy=(grp_x[k], grp_y[k]), alpha=1, color=clr)
else:
scatter_kwargs.update(
dict(
alpha=alpha_trans,
label=grp,
)
)
ax2.scatter(grp_x, grp_y, **scatter_kwargs)
# Draw the fit line
if self._fit:
fit_x, fit_y = self.calc_fit(grp_x, grp_y)
if self._points:
ax2.plot(fit_x, fit_y, linestyle='--', color=self.get_color(i), linewidth=2, zorder=2)
else:
ax2.plot(fit_x, fit_y, linestyle='--', color=self.get_color(i), linewidth=2, zorder=2, label=grp)
# Draw the legend
if (self._fit or self._points) and len(groups) > 1:
ax2.legend(loc='best')
# Draw the grid lines and labels
ax2.xaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
ax2.yaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
xlabel(self._xname)
ylabel(self._yname)
# Save the figure to disk or display
if self._save_to:
savefig(self._save_to)
close(f)
else:
show()
pass
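# A quick standalone check (illustrative only) of the endpoint ordering used by the
# calc_fit helpers above: polyfit returns the slope in p[0], and a negative slope means
# the fitted line is drawn from (x.min(), fit.max()) down to (x.max(), fit.min()).
import numpy as np
_fx = np.array([1.0, 2.0, 3.0, 4.0])
_fy = np.array([8.0, 6.0, 4.0, 2.0])        # clearly negative slope
_fp = np.polyfit(_fx, _fy, 1)               # _fp[0] is the least-squares slope
_ffit = np.polyval(_fp, _fx)
assert _fp[0] < 0 and np.isclose(_ffit[0], _ffit.max())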
class GraphBoxplot(VectorGraph):
"""Draws box plots of the provided data as well as an optional probability plot.
Unique class members are groups, nqp and prob. The groups member is a list of
labels for each boxplot. If groups is an empty list, sequentially ascending
numbers are used for each boxplot. The nqp member is a flag that turns the
probability plot on or off. The prob member is a list of tuples that contains
the data used to graph the probability plot. It is also worth noting that the
vector member for the GraphBoxplot is a list of lists that contain the data
for each boxplot.
"""
_nrows = 1
_ncols = 1
_xsize = 5.75
_ysize = 5
_default_alpha = 0.05
def __init__(self, *args, **kwargs):
"""GraphBoxplot constructor. NOTE: If vectors is a dict, the boxplots are
graphed in random order instead of the provided order.
:param groups: An optional list of boxplot labels. The order should match the order in vectors.
:param nqp: Display the optional probability plot.
:param _title: The title of the graph.
:param _save_to: Save the graph to the specified path.
:return: pass
"""
name = kwargs['name'] if 'name' in kwargs else 'Values'
categories = kwargs['categories'] if 'categories' in kwargs else 'Categories'
xname = kwargs['xname'] if 'xname' in kwargs else categories
yname = kwargs['yname'] if 'yname' in kwargs else name
self._title = kwargs['title'] if 'title' in kwargs else 'Oneway'
self._nqp = kwargs['nqp'] if 'nqp' in kwargs else True
self._save_to = kwargs['save_to'] if 'save_to' in kwargs else None
self._gmean = kwargs['gmean'] if 'gmean' in kwargs else True
self._gmedian = kwargs['gmedian'] if 'gmedian' in kwargs else True
self._circles = kwargs['circles'] if 'circles' in kwargs else True
self._alpha = kwargs['alpha'] if 'alpha' in kwargs else self._default_alpha
if 'title' in kwargs:
self._title = kwargs['title']
elif self._nqp:
self._title = 'Oneway and Normal Quantile Plot'
else:
self._title = 'Oneway'
if is_vector(args[0]):
data = args[0]
elif is_dict(args[0]):
data = Vector()
for g, d in args[0].items():
data.append(Vector(d, groups=[g] * len(d)))
else:
if is_group(args) and len(args) > 1:
future('Graphing boxplots by passing multiple arguments will be removed in a future version. '
'Instead, pass unstacked arguments as a dictionary.')
data = Vector()
if 'groups' in kwargs:
if len(kwargs['groups']) != len(args):
raise AttributeError('The length of passed groups does not match the number passed data.')
for g, d in zip(kwargs['groups'], args):
data.append(Vector(d, groups=[g] * len(d)))
else:
for d in args:
data.append(Vector(d))
else:
if 'groups' in kwargs:
if len(kwargs['groups']) != len(args[0]):
raise AttributeError('The length of passed groups does not match the number passed data.')
data = Vector(args[0], groups=kwargs['groups'])
else:
data = Vector(args[0])
super(GraphBoxplot, self).__init__(data, xname=xname, yname=yname, save_to=self._save_to)
@staticmethod
def grand_mean(data):
return mean([mean(sample) for sample in data])
@staticmethod
def grand_median(data):
return median([median(sample) for sample in data])
def tukey_circles(self, data):
num = []
den = []
crit = []
radii = []
xbar = []
for sample in data:
df = len(sample) - 1
num.append(std(sample, ddof=1) ** 2 * df)
den.append(df)
crit.append(t.ppf(1 - self._alpha, df))
mse = sum(num) / sum(den)
for i, sample in enumerate(data):
radii.append(fabs(crit[i]) * sqrt(mse / len(sample)))
xbar.append(mean(sample))
return tuple(zip(xbar, radii))
def draw(self):
"""Draws the boxplots based on the set parameters."""
# Setup the grid variables
w_ratio = [1]
if self._circles:
w_ratio = [4, 1]
self._ncols += 1
if self._nqp:
w_ratio.append(4 if self._circles else 1)
self._ncols += 1
groups, data = zip(*[
(g, v['ind'].reset_index(drop=True)) for g, v in self._data.values.groupby('grp') if not v.empty]
)
# Create the quantile plot arrays
prob = [probplot(v) for v in data]
# Create the figure and gridspec
if self._nqp and len(prob) > 0:
self._xsize *= 2
f = figure(figsize=(self._xsize, self._ysize))
f.suptitle(self._title, fontsize=14)
gs = GridSpec(self._nrows, self._ncols, width_ratios=w_ratio, wspace=0)
# Draw the boxplots
ax1 = subplot(gs[0])
bp = ax1.boxplot(data, showmeans=True, labels=groups)
setp(bp['boxes'], color='k')
setp(bp['whiskers'], color='k')
vp = ax1.violinplot(data, showextrema=False, showmedians=False, showmeans=False)
for i in range(len(groups)):
setp(vp['bodies'][i], facecolors=self.get_color(i))
ax1.yaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
if self._gmean:
ax1.axhline(float(self.grand_mean(data)), c='k', linestyle='--', alpha=0.4)
if self._gmedian:
ax1.axhline(float(self.grand_median(data)), c='k', linestyle=':', alpha=0.4)
if any([True if len(str(g)) > 9 else False for g in groups]) or len(groups) > 5:
xticks(rotation=60)
subplots_adjust(bottom=0.2)
ylabel(self._yname)
xlabel(self._xname)
# Draw the Tukey-Kramer circles
if self._circles:
ax2 = subplot(gs[1], sharey=ax1)
for i, (center, radius) in enumerate(self.tukey_circles(data)):
c = Circle((0.5, center), radius=radius, facecolor='none', edgecolor=self.get_color(i))
ax2.add_patch(c)
# matplotlib 2.2.2 requires adjustable='datalim' to display properly.
ax2.set_aspect('equal', adjustable='datalim')
setp(ax2.get_xticklabels(), visible=False)
setp(ax2.get_yticklabels(), visible=False)
ax2.set_xticks([])
# Draw the normal quantile plot
if self._nqp and len(prob) > 0:
ax3 = subplot(gs[2], sharey=ax1) if self._circles else subplot(gs[1], sharey=ax1)
for i, g in enumerate(prob):
osm = g[0][0]
osr = g[0][1]
slope = g[1][0]
intercept = g[1][1]
ax3.plot(osm, osr, marker='^', color=self.get_color(i), label=groups[i])
ax3.plot(osm, slope * osm + intercept, linestyle='--', linewidth=2, color=self.get_color(i))
ax3.xaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
ax3.yaxis.grid(True, linestyle='-', which='major', color='grey', alpha=0.75)
ax3.legend(loc='best')
xlabel("Quantiles")
setp(ax3.get_yticklabels(), visible=False)
# Save the figure to disk or display
if self._save_to:
savefig(self._save_to)
close(f)
else:
show()
pass
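# A standalone sketch (illustrative values, not library API) of the Tukey-Kramer circle
# arithmetic used by GraphBoxplot.tukey_circles above: pool the within-sample variance
# into a single MSE, then give each sample a circle centered on its mean with radius
# |t_crit| * sqrt(MSE / n).
import numpy as np
from scipy.stats import t as _t_dist
_alpha = 0.05
_samples = [np.array([5.1, 4.8, 5.4, 5.0]), np.array([6.2, 6.0, 5.9, 6.4, 6.1])]
_dfs = [len(s) - 1 for s in _samples]
_mse = sum(s.std(ddof=1) ** 2 * df for s, df in zip(_samples, _dfs)) / sum(_dfs)
_radii = [abs(_t_dist.ppf(1 - _alpha, df)) * np.sqrt(_mse / len(s))
          for s, df in zip(_samples, _dfs)]
_centers = [s.mean() for s in _samples]
# Each (_centers[i], _radii[i]) pair describes one comparison circle on the shared y-axis.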
_colors = (
(0.0, 0.3, 0.7), # blue
(1.0, 0.1, 0.1), # red
(0.0, 0.7, 0.3), # green
(1.0, 0.5, 0.0), # orange
(0.1, 1.0, 1.0), # cyan
(1.0, 1.0, 0.0), # yellow
(1.0, 0.0, 1.0), # magenta
(0.5, 0.0, 1.0), # purple
(0.5, 1.0, 0.0), # light green
(0.0, 0.0, 0.0) # black
)
_color_names = (
'blue',
'red',
'green',
'orange',
'cyan',
'yellow',
'magenta',
'purple',
'light green',
'black'
)
class Graph(object):
"""The super class all other sci_analysis graphing classes descend from.
Classes that descend from Graph should implement the draw method at bare minimum.
Graph members are _nrows, _ncols, _xsize, _ysize, _data, _xname and _yname. The _nrows
member is the number of graphs that will span vertically. The _ncols member is
the number of graphs that will span horizontally. The _xsize member is the horizontal
size of the graph area. The _ysize member is the vertical size of the graph area.
The _data member is the data to be plotted. The _xname member is the x-axis label.
The _yname member is the y-axis label.
Parameters
----------
_nrows : int, static
The number of graphs that will span vertically.
_ncols : int, static
The number of graphs that will span horizontally.
_xsize : int, static
The horizontal size of the graph area.
_ysize : int, static
The vertical size of the graph area.
_min_size : int, static
The minimum required length of the data to be graphed.
_xname : str
The x-axis label.
_yname : str
The y-axis label.
_data : Data or list(d1, d2, ..., dn)
The data to graph.
Returns
-------
pass
"""
_nrows = 1
_ncols = 1
_xsize = 5
_ysize = 5
_min_size = 1
def __init__(self, data, **kwargs):
self._xname = kwargs['xname'] if 'xname' in kwargs else 'x'
self._yname = kwargs['yname'] if 'yname' in kwargs else 'y'
self._data = data
def get_color_by_name(self, color='black'):
"""Return a color array based on the string color passed.
Parameters
----------
color : str
A string color name.
Returns
-------
color : tuple
A color tuple that corresponds to the passed color string.
"""
return self.get_color(_color_names.index(color))
@staticmethod
def get_color(num):
"""Return a color based on the given num argument.
Parameters
----------
num : int
A non-negative integer that maps to a corresponding color.
Returns
-------
color : tuple
A color tuple calculated from the num argument.
"""
desired_color = []
floor = int(num) // len(_colors)
remainder = int(num) % len(_colors)
selected = _colors[remainder]
if floor > 0:
for value in selected:
desired_color.append(value / (2.0 * floor) + 0.4)
return tuple(desired_color)
else:
return selected
def draw(self):
"""
Prepares and displays the graph based on the set class members.
"""
raise NotImplementedError
Source: sci-analysis 2.2.1rc0, sci_analysis/graphs/base.py
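A small standalone sketch of the arithmetic behind Graph.get_color above (the palette below is truncated for brevity and is not the module's _colors tuple): indexes past the end of the palette wrap around and are washed out toward white, so repeated passes through the palette stay distinguishable.

_palette = (
    (0.0, 0.3, 0.7),   # blue
    (1.0, 0.1, 0.1),   # red
    (0.0, 0.7, 0.3),   # green
)
def _pick(num, palette=_palette):
    # Mirrors get_color: wrap around the palette, fading each repeat pass toward white.
    floor, remainder = divmod(int(num), len(palette))
    base = palette[remainder]
    return base if floor == 0 else tuple(v / (2.0 * floor) + 0.4 for v in base)

print(_pick(2))   # (0.0, 0.7, 0.3): plain green on the first pass
print(_pick(5))   # roughly (0.4, 0.75, 0.55): the same slot, faded, on the second pass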
import math
# matplotlib imports
from matplotlib.pyplot import show, xticks, savefig, close, subplots, subplots_adjust
# local imports
from .base import Graph
from ..data import Categorical, is_group, is_categorical
from ..analysis.exc import MinimumSizeError, NoDataError
class CategoricalGraph(Graph):
def __init__(self, *args, **kwargs):
order = kwargs['order'] if 'order' in kwargs else None
dropna = kwargs['dropna'] if 'dropna' in kwargs else False
seq_name = kwargs['name'] if 'name' in kwargs else None
data = list()
for d in args:
new = d if is_categorical(d) else Categorical(d, name=seq_name, order=order, dropna=dropna)
if new.is_empty():
raise NoDataError('Cannot draw graph because there is no data.')
if len(new) <= self._min_size:
raise MinimumSizeError('Length of data is less than the minimum size {}.'.format(self._min_size))
data.append(new)
if not is_group(data):
raise NoDataError('Cannot draw graph because there is no data.')
if len(data) == 1:
data = data[0]
super(CategoricalGraph, self).__init__(data, **kwargs)
self.draw()
def draw(self):
"""
Prepares and displays the graph based on the set class members.
"""
raise NotImplementedError
class GraphFrequency(CategoricalGraph):
_xsize = 8.5
_ysize = 5.5
def __init__(self, data, **kwargs):
self._percent = kwargs['percent'] if 'percent' in kwargs else False
self._vertical = kwargs['vertical'] if 'vertical' in kwargs else True
self._grid = kwargs['grid'] if 'grid' in kwargs else False
self._labels = kwargs['labels'] if 'labels' in kwargs else True
self._title = kwargs['title'] if 'title' in kwargs else 'Frequencies'
self._save_to = kwargs['save_to'] if 'save_to' in kwargs else None
order = kwargs['order'] if 'order' in kwargs else None
dropna = kwargs['dropna'] if 'dropna' in kwargs else False
yname = 'Percent' if self._percent else 'Frequency'
name = 'Categories'
if 'name' in kwargs:
name = kwargs['name']
elif 'xname' in kwargs:
name = kwargs['xname']
super(GraphFrequency, self).__init__(data, xname=name, yname=yname, order=order, dropna=dropna)
def add_numeric_labels(self, bars, axis):
if self._vertical:
if len(bars) < 3:
size = 'xx-large'
elif len(bars) < 9:
size = 'x-large'
elif len(bars) < 21:
size = 'large'
elif len(bars) < 31:
size = 'medium'
else:
size = 'small'
for bar in bars:
x_pos = bar.get_width()
y_pos = bar.get_y() + bar.get_height() / 2.
x_off = x_pos + 0.05
adjust = .885 if self._percent else .95
if not self._percent and x_pos != 0:
adjust = adjust - math.floor(math.log10(x_pos)) * .035
label = '{:.1f}'.format(x_pos) if self._percent else '{}'.format(x_pos)
col = 'k'
if x_pos != 0 and (x_off / axis.get_xlim()[1]) > .965 - math.floor(math.log10(x_pos)) * .02:
x_off = x_pos * adjust
col = 'w'
axis.annotate(label,
xy=(x_pos, y_pos),
xytext=(x_off, y_pos),
va='center',
color=col,
size=size)
else:
if len(bars) < 21:
size = 'medium'
elif len(bars) < 31:
size = 'small'
else:
size = 'x-small'
for bar in bars:
y_pos = bar.get_height()
x_pos = bar.get_x() + bar.get_width() / 2.
y_off = y_pos + 0.05
label = '{:.1f}'.format(y_pos) if self._percent else '{}'.format(y_pos)
col = 'k'
if (y_off / axis.get_ylim()[1]) > 0.95:
y_off = y_pos * .95
col = 'w'
axis.annotate(label,
xy=(x_pos, y_pos),
xytext=(x_pos, y_off),
ha='center',
size=size,
color=col)
def draw(self):
freq = self._data.percents if self._percent else self._data.counts
categories = self._data.categories.tolist()
nbars = tuple(range(1, len(freq) + 1))
grid_props = dict(linestyle='-', which='major', color='grey', alpha=0.75)
bar_props = dict(color=self.get_color(0), zorder=3)
# Create the figure and axes
if self._vertical:
f, ax = subplots(figsize=(self._ysize, self._xsize))
else:
f, ax = subplots(figsize=(self._xsize, self._ysize))
# Set the title
f.suptitle(self._title, fontsize=14)
# Create the graph, grid and labels
if self._grid:
ax.xaxis.grid(True, **grid_props) if self._vertical else ax.yaxis.grid(True, **grid_props)
categories = ['{}...'.format(cat[:18]) if len(str(cat)) > 20 else cat for cat in categories]
max_len = max([len(str(cat)) for cat in categories])
offset = max_len / 5 * 0.09
if self._vertical:
bars = ax.barh(nbars, freq.tolist(), **bar_props)
ax.set_xlabel(self._yname)
ax.set_yticks(nbars)
ax.set_yticklabels(categories)
subplots_adjust(left=offset)
ax.invert_yaxis()
else:
bars = ax.bar(nbars, freq.tolist(), **bar_props)
ax.set_ylabel(self._yname)
ax.set_xticks(nbars)
angle = 90 if len(nbars) > 15 else 60
xticks(rotation=angle)
ax.set_xticklabels(categories)
subplots_adjust(bottom=offset)
if self._labels:
self.add_numeric_labels(bars, ax)
# Save the figure to disk or display
if self._save_to:
savefig(self._save_to)
close(f)
else:
show()
pass
Source: sci-analysis 2.2.1rc0, sci_analysis/graphs/categorical.py
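A minimal usage sketch for the frequency chart above, assuming the package is importable as sci_analysis.graphs.categorical (matching the path shown); the output file name is just an example. Construction draws immediately because CategoricalGraph.__init__ calls draw().

from sci_analysis.graphs.categorical import GraphFrequency

observations = ['pass', 'pass', 'fail', 'pass', 'unknown', 'fail']
# percent=True reports percentages instead of raw counts; save_to writes the figure
# to disk instead of opening an interactive window.
GraphFrequency(observations, percent=True, save_to='frequencies.png')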
import pandas as pd
import numpy as np
# Import from local
from .data import Data, is_data
from .data_operations import flatten, is_iterable
class EmptyVectorError(Exception):
"""
Exception raised when the length of a Vector object is 0.
"""
pass
class UnequalVectorLengthError(Exception):
"""
Exception raised when the length of two Vector objects are not equal, i.e., len(Vector1) != len(Vector2)
"""
pass
def is_numeric(obj):
"""
Test if the passed array_like argument is a sci_analysis Numeric object.
Parameters
----------
obj : object
The input object.
Returns
-------
test result : bool
The test result of whether seq is a sci_analysis Numeric object or not.
"""
return isinstance(obj, Numeric)
def is_vector(obj):
"""
Test if the passed array_like argument is a sci_analysis Vector object.
Parameters
----------
obj : object
The input object.
Returns
-------
test result : bool
The test result of whether seq is a sci_analysis Vector object or not.
"""
return isinstance(obj, Vector)
class Numeric(Data):
"""An abstract class that all Data classes that represent numeric data should inherit from."""
_ind = 'ind'
_dep = 'dep'
_grp = 'grp'
_lbl = 'lbl'
_col_names = (_ind, _dep, _grp, _lbl)
def __init__(self, sequence=None, other=None, groups=None, labels=None, name=None):
"""Takes an array-like object and converts it to a pandas Series with any non-numeric values converted to NaN.
Parameters
----------
sequence : int | list | set | tuple | np.array | pd.Series
The input object
other : list | set | tuple | np.array | pd.Series, optional
The secondary input object
groups : list | set | tuple | np.array | pd.Series, optional
The sequence of group names for sub-arrays
labels : list | set | tuple | np.array | pd.Series, optional
The sequence of data point labels
name : str, optional
The name of the Numeric object
"""
self._auto_groups = True if groups is None else False
self._values = pd.DataFrame([], columns=self._col_names)
if sequence is None:
super(Numeric, self).__init__(v=self._values, n=name)
self._type = None
self._values.loc[:, self._grp] = self._values[self._grp].astype('category')
elif is_data(sequence):
super(Numeric, self).__init__(v=sequence.values, n=name)
self._type = sequence.data_type
self._auto_groups = sequence.auto_groups
elif isinstance(sequence, pd.DataFrame):
raise ValueError('sequence cannot be a pandas DataFrame object. Use a Series instead.')
else:
sequence = pd.to_numeric(self.data_prep(sequence), errors='coerce')
other = pd.to_numeric(self.data_prep(other), errors='coerce') if other is not None else np.nan
groups = self.data_prep(groups) if groups is not None else 1
# TODO: This try block needs some work
try:
self._values[self._ind] = sequence
self._values[self._dep] = other
self._values[self._grp] = groups
self._values.loc[:, self._grp] = self._values[self._grp].astype('category')
if labels is not None:
self._values[self._lbl] = labels
except ValueError:
raise UnequalVectorLengthError('length of data does not match length of other.')
if any(self._values[self._dep].notnull()):
self._values = self.drop_nan_intersect()
else:
self._values = self.drop_nan()
self._type = self._values[self._ind].dtype
self._name = name
@staticmethod
def data_prep(seq):
"""
Converts the passed sequence to conform to the Data object standards.
Parameters
----------
seq : array-like
The input array to be prepared.
Returns
-------
data : np.array
The enclosed data represented as a numpy array.
"""
if hasattr(seq, 'shape'):
if len(seq.shape) > 1:
return flatten(seq)
else:
return seq
else:
return flatten(seq)
def drop_nan(self):
"""
Removes rows where the independent data is NaN and returns the resulting pandas DataFrame. The length of the
returned object is the original object length minus the number of rows removed from the object.
Returns
-------
arr : pandas.DataFrame
A copy of the Numeric object's internal DataFrame with the NaN rows removed.
"""
return self._values.dropna(how='any', subset=[self._ind])
def drop_nan_intersect(self):
"""
Removes any row where either the independent or dependent value is NaN, so that only complete pairs remain.
Returns
-------
arr : pandas.DataFrame
A copy of the Numeric object's internal DataFrame with all nan values removed.
"""
return self._values.dropna(how='any', subset=[self._ind, self._dep])
def drop_groups(self, grps):
"""Drop the specified group name from the Numeric object.
Parameters
----------
grps : str|int|list[str]|list[int]
The name of the group to remove.
Returns
-------
arr : pandas.DataFrame
A copy of the Numeric object's internal DataFrame with all records belonging to the specified group removed.
"""
if not is_iterable(grps):
grps = [grps]
dropped = self._values.query("{} not in {}".format(self._grp, grps)).copy()
dropped[self._grp] = dropped[self._grp].cat.remove_categories(grps)
self._values = dropped
return dropped
@property
def data_type(self):
return self._type
@property
def data(self):
return self._values[self._ind]
@property
def other(self):
return pd.Series([]) if all(self._values[self._dep].isnull()) else self._values[self._dep]
@property
def groups(self):
groups = self._values.groupby(self._grp)
return {grp: seq[self._ind].rename(grp) for grp, seq in groups if not seq.empty}
@property
def labels(self):
return self._values[self._lbl].fillna('None')
@property
def paired_groups(self):
groups = self._values.groupby(self._grp)
return {grp: (df[self._ind], df[self._dep]) for grp, df in groups if not df.empty}
@property
def group_labels(self):
groups = self._values.groupby(self._grp)
return {grp: df[self._lbl] for grp, df in groups if not df.empty}
@property
def values(self):
return self._values
@property
def auto_groups(self):
return self._auto_groups
@property
def has_labels(self):
return any(pd.notna(self._values[self._lbl]))
class Vector(Numeric):
"""
The sci_analysis representation of continuous, numeric data.
"""
def __init__(self, sequence=None, other=None, groups=None, labels=None, name=None):
"""
Takes an array-like object and converts it to a pandas Series of
dtype float64, with any non-numeric values converted to NaN.
Parameters
----------
sequence : array-like or int or float or None
The input object
other : array-like
The secondary input object
groups : array-like
The sequence of group names for sub-arrays
labels : list | set | tuple | np.array | pd.Series, optional
The sequence of data point labels
name : str, optional
The name of the Vector object
"""
super(Vector, self).__init__(sequence=sequence, other=other, groups=groups, labels=labels, name=name)
if not self._values.empty:
self._values[self._ind] = self._values[self._ind].astype('float')
self._values[self._dep] = self._values[self._dep].astype('float')
def is_empty(self):
"""
Overrides the super class's method to also check for length of zero.
Returns
-------
test_result : bool
The result of whether the length of the Vector object is 0 or not.
Examples
--------
>>> Vector([1, 2, 3, 4, 5]).is_empty()
False
>>> Vector([]).is_empty()
True
"""
return self._values.empty
def append(self, other):
"""
Append the values of another vector to self.
Parameters
----------
other : Vector
The Vector object to be appended to self.
Returns
-------
vector : Vector
The original Vector object with new values.
Examples
--------
>>> Vector([1, 2, 3]).append(Vector([4, 5, 6])).data
pandas.Series([1., 2., 3., 4., 5., 6.])
"""
if not is_vector(other):
raise ValueError("Vector object cannot be added to a non-vector object.")
if other.data.empty:
return self
if self.auto_groups and other.auto_groups and len(self._values) > 0:
new_cat = max(self._values[self._grp].cat.categories) + 1
other.values['grp'] = new_cat
self._values = pd.concat([self._values, other.values], copy=False)
self._values.reset_index(inplace=True, drop=True)
self._values.loc[:, self._grp] = self._values[self._grp].astype('category')
return self
def flatten(self):
"""
Disassociates independent and dependent data into individual groups.
Returns
-------
data : tuple(Series)
A tuple of pandas Series.
"""
if not self.other.empty:
return (tuple(data[self._ind] for grp, data in self.values.groupby(self._grp)) +
tuple(data[self._dep] for grp, data in self.values.groupby(self._grp)))
else:
return tuple(data[self._ind] for grp, data in self.values.groupby(self._grp))
Source: sci-analysis 2.2.1rc0, sci_analysis/data/numeric.py
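A short sketch of the auto-grouping behaviour of Vector.append above, assuming the package is importable as sci_analysis.data.numeric (matching the path shown).

from sci_analysis.data.numeric import Vector

first = Vector([1, 2, 3])
second = Vector([4, 5, 6])
combined = first.append(second)        # second is assigned the next automatic group, 2
print(sorted(combined.groups.keys()))  # [1, 2]
print(len(combined))                   # 6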
def is_data(obj):
"""
Test if the passed array_like argument is a sci_analysis Data object.
Parameters
----------
obj : object
The input object.
Returns
-------
test result : bool
The test result of whether seq is a sci_analysis Data object or not.
"""
return isinstance(obj, Data)
class Data(object):
"""
The super class used by all objects representing data for analysis
in sci_analysis. All analysis classes should expect the data provided through
arguments to be a descendant of this class.
Data members are data_type, data and name. data_type is used for identifying
the container class. The data member stores the data provided through an
argument. The name member is an optional name for the Data object.
"""
def __init__(self, v=None, n=None):
"""
Sets the data and name members.
Parameters
----------
v : array_like
The input object
n : str
The name of the Data object
"""
self._values = v
self._name = n
def is_empty(self):
"""
Tests if this Data object's data member equals 'None' and returns the result.
Returns
-------
test result : bool
The result of whether self._values is set or not
"""
return self._values is None
@property
def data(self):
return self._values
@property
def name(self):
return self._name
def __repr__(self):
"""
Prints the Data object using the same representation as its data member.
Returns
-------
output : str
The string representation of the encapsulated data.
"""
return self._values.__repr__()
def __len__(self):
"""Returns the length of the data member. If data is not defined, 0 is returned. If the data member is a scalar
value, 1 is returned.
Returns
-------
length : int
The length of the encapsulated data.
"""
if self._values is not None:
try:
return len(self._values)
except TypeError:
return 1
else:
return 0
def __getitem__(self, item):
"""
Gets the value of the data member at index item and returns it.
Parameters
----------
item : int
An index of the encapsulating data.
Returns
-------
value : object
The value of the encapsulated data at the specified index, otherwise None if no such index exists.
"""
try:
return self._values[item]
except (IndexError, AttributeError):
return None
def __contains__(self, item):
"""
Tests whether the encapsulated data contains the specified index or not.
Parameters
----------
item : int
An index of the encapsulating data.
Returns
-------
test result : bool
The test result of whether item is a valid index of the encapsulating data or not.
"""
try:
return item in self._values
except AttributeError:
return None
def __iter__(self):
"""
Give this Data object the iterative behavior of its encapsulated data.
Returns
-------
itr : iterator
An iterator based on the encapsulated sequence.
"""
return self._values.__iter__()
Source: sci-analysis 2.2.1rc0, sci_analysis/data/data.py
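A brief sketch of the container behaviour Data gives its subclasses, assuming the sci_analysis.data.data import path shown above.

from sci_analysis.data.data import Data

print(Data().is_empty())              # True: no values were supplied
print(len(Data(42, n='answer')))      # 1: scalars count as a single value
seq = Data([10, 20, 30], n='triple')
print(seq[1], 20 in seq, seq.name)    # 20 True triple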
import six
import numpy as np
import pandas as pd
def to_float(seq):
"""
Takes an argument seq, tries to convert each value to a float, and returns the result. If a value cannot be
converted to a float, it is replaced by 'nan'.
Parameters
----------
seq : array-like
The input object.
Returns
-------
subseq : array_like
seq with values converted to a float or "nan".
>>> to_float(['1', '2', '3', 'four', '5'])
[1.0, 2.0, 3.0, nan, 5.0]
"""
float_list = list()
for i in range(len(seq)):
try:
float_list.append(float(seq[i]))
except ValueError:
float_list.append(float("nan"))
except TypeError:
float_list.append(to_float(seq[i]))
return float_list
def flatten(seq):
"""
Reduces the dimension of seq to one.
Parameters
----------
seq : array-like
The input object.
Returns
-------
subseq : array_like
A flattened copy of the input object.
Flatten a two-dimensional list into a one-dimensional list
>>> flatten([[1, 2, 3], [4, 5, 6]])
array([1, 2, 3, 4, 5, 6])
Flatten a three-dimensional list into a one-dimensional list
>>> flatten([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]])
array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
>>> flatten(([1, 2, 3], [4, 5, 6]))
array([1, 2, 3, 4, 5, 6])
>>> flatten(list(zip([1, 2, 3], [4, 5, 6])))
array([1, 4, 2, 5, 3, 6])
>>> flatten([(1, 2), (3, 4), (5, 6), (7, 8)])
array([1, 2, 3, 4, 5, 6, 7, 8])
"""
return np.array(seq).flatten()
def is_tuple(obj):
"""
Checks if a given sequence is a tuple.
Parameters
----------
obj : object
The input array.
Returns
-------
test result : bool
The test result of whether seq is a tuple or not.
>>> is_tuple(('a', 'b'))
True
>>> is_tuple(['a', 'b'])
False
>>> is_tuple(4)
False
"""
return True if isinstance(obj, tuple) else False
def is_iterable(obj):
"""
Checks if a given variable is iterable, but not a string.
Parameters
----------
obj : Any
The input argument.
Returns
-------
test result : bool
The test result of whether variable is iterable or not.
>>> is_iterable([1, 2, 3])
True
>>> is_iterable((1, 2, 3))
True
>>> is_iterable({'one': 1, 'two': 2, 'three': 3})
True
String arguments return False.
>>> is_iterable('foobar')
False
Scalars return False.
>>> is_iterable(42)
False
"""
if isinstance(obj, six.string_types):
return False
try:
obj.__iter__()
return True
except (AttributeError, TypeError):
return False
def is_array(obj):
"""
Checks if a given sequence is a numpy Array object.
Parameters
----------
obj : object
The input argument.
Returns
-------
test result : bool
The test result of whether seq is a numpy Array or not.
>>> import numpy as np
>>> is_array([1, 2, 3, 4, 5])
False
>>> is_array(np.array([1, 2, 3, 4, 5]))
True
"""
return hasattr(obj, 'dtype')
def is_series(obj):
"""
Checks if a given sequence is a Pandas Series object.
Parameters
----------
obj : object
The input argument.
Returns
-------
bool
>>> is_series([1, 2, 3])
False
>>> is_series(pd.Series([1, 2, 3]))
True
"""
return isinstance(obj, pd.Series)
def is_dict(obj):
"""
Checks if a given sequence is a dictionary.
Parameters
----------
obj : object
The input argument.
Returns
-------
test result : bool
The test result of whether seq is a dictionary or not.
>>> is_dict([1, 2, 3])
False
>>> is_dict((1, 2, 3))
False
>>> is_dict({'one': 1, 'two': 2, 'three': 3})
True
>>> is_dict('foobar')
False
"""
return isinstance(obj, dict)
def is_group(seq):
"""
Checks if a given variable is a list of iterable objects.
Parameters
----------
seq : array_like
The input argument.
Returns
-------
test result : bool
The test result of whether seq is a list of array_like values or not.
>>> is_group([[1, 2, 3], [4, 5, 6]])
True
>>> is_group({'one': 1, 'two': 2, 'three': 3})
False
>>> is_group(([1, 2, 3], [4, 5, 6]))
True
>>> is_group([1, 2, 3, 4, 5, 6])
False
>>> is_group({'foo': [1, 2, 3], 'bar': [4, 5, 6]})
False
"""
try:
if any(is_iterable(x) for x in seq):
return True
else:
return False
except TypeError:
return False
def is_dict_group(seq):
"""
Checks if a given variable is a dictionary of iterable objects.
Parameters
----------
seq : array-like
The input argument.
Returns
-------
test result : bool
The test result of whether seq is a dictionary of array_like values or not.
>>> is_dict_group([[1, 2, 3], [4, 5, 6]])
False
>>> is_dict_group(([1, 2, 3], [4, 5, 6]))
False
>>> is_dict_group([1, 2, 3, 4, 5, 6])
False
>>> is_dict_group({'foo': [1, 2, 3], 'bar': [4, 5, 6]})
True
"""
try:
if is_group(list(seq.values())):
return True
else:
return False
except (AttributeError, TypeError):
return False
def is_number(obj):
"""
Checks if the given object is a number.
Parameters
----------
obj : Any
The input argument.
Returns
-------
test result : bool
The test result of whether obj can be converted to a number or not.
>>> is_number(3)
True
>>> is_number(1.34)
True
>>> is_number('3')
True
>>> is_number(np.array(3))
True
>>> is_number('a')
False
>>> is_number([1, 2, 3])
False
>>> is_number(None)
False
"""
try:
float(obj)
return True
except (ValueError, TypeError):
return False
Source: sci-analysis 2.2.1rc0, sci_analysis/data/data_operations.py
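One behaviour worth calling out in to_float above: a nested sequence raises TypeError inside float(), which triggers the recursive branch, so ragged inputs keep their shape. A quick sketch, assuming the sci_analysis.data.data_operations import path shown above.

from sci_analysis.data.data_operations import to_float, is_group

print(to_float(['1', '2.5', 'oops']))     # [1.0, 2.5, nan]
print(to_float([['1', 'x'], ['2']]))      # [[1.0, nan], [2.0]]: nested lists recurse
print(is_group([[1, 2], [3, 4]]), is_group([1, 2, 3]))   # True False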
True
"""
try:
if is_group(list(seq.values())):
return True
else:
return False
except (AttributeError, TypeError):
return False
def is_number(obj):
"""
Checks if the given object is a number.
Parameters
----------
obj : Any
The input argument.
Returns
-------
test result : bool
The test result of whether obj can be converted to a number or not.
>>> is_number(3)
True
>>> is_number(1.34)
True
>>> is_number('3')
True
>>> is_number(np.array(3))
True
>>> is_number('a')
False
>>> is_number([1, 2, 3])
False
>>> is_number(None)
False
"""
try:
float(obj)
return True
except (ValueError, TypeError):
return False
| 0.886439 | 0.695222 |
from warnings import warn
# Import packages
import pandas as pd
# Import from local
from .data import Data, is_data
from .data_operations import flatten, is_iterable
class NumberOfCategoriesWarning(Warning):
warn_categories = 50
def __str__(self):
return "The number of categories is greater than {} which might make analysis difficult. " \
"If this isn't a mistake, consider subsetting the data first".format(self.warn_categories)
def is_categorical(obj):
"""
Test if the passed array_like argument is a sci_analysis Categorical object.
Parameters
----------
obj : object
The input object.
Returns
-------
test result : bool
The test result of whether obj is a sci_analysis Categorical object.
"""
return isinstance(obj, Categorical)
class Categorical(Data):
"""
The sci_analysis representation of categorical, quantitative or textual data.
"""
def __init__(self, sequence=None, name=None, order=None, dropna=False):
"""Takes an array-like object and converts it to a pandas Categorical object.
Parameters
----------
sequence : array-like or Data or Categorical
The input object.
name : str, optional
The name of the Categorical object.
order : array-like
The order that categories in sequence should appear.
dropna : bool
Remove all occurrences of numpy NaN.
"""
if sequence is None:
self._values = pd.Series([])
self._order = order
self._name = name
self._summary = pd.DataFrame([], columns=['counts', 'ranks', 'percents', 'categories'])
elif is_data(sequence):
new_name = sequence.name or name
super(Categorical, self).__init__(v=sequence.data, n=new_name)
self._order = sequence.order
self._values = sequence.data
self._name = sequence.name
self._summary = sequence.summary
else:
self._name = name
self._values = pd.Series(sequence)
try:
self._values.astype('category')
except TypeError:
self._values = pd.Series(flatten(sequence))
except ValueError:
self._values = pd.Series([])
# Try to preserve the original dtype of the categories.
try:
if not any(self._values % 1):
self._values = self._values.astype(int)
except TypeError:
pass
self._values = self._values.astype('category')
if order is not None:
if not is_iterable(order):
order = [order]
self._values = self._values.cat.set_categories(order).cat.reorder_categories(order, ordered=True)
if dropna:
self._values = self._values.dropna()
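# Note (added for clarity): the increment below is a cheap capability probe on the
# original input. If `sequence` supports numeric addition (e.g. a numeric ndarray or
# Series), the category order is derived from the inferred categories; if it raises
# TypeError (strings, plain Python lists, etc.), the explicitly passed `order` is kept.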
try:
sequence += 1
self._order = None if self._values.empty else self._values.cat.categories
except TypeError:
self._order = order
counts = self._values.value_counts(sort=False, dropna=False, ascending=False)
self._summary = pd.DataFrame({
'counts': counts,
'ranks': counts.rank(method='dense', na_option='bottom', ascending=False).astype('int'),
'percents': (counts / counts.sum() * 100) if not all(counts == 0) else 0.0
})
self._summary['categories'] = self._summary.index.to_series()
if order is not None:
self._summary.sort_index(level=self._order, inplace=True, axis=0, na_position='last')
else:
self._summary.sort_values('ranks', inplace=True)
if not self._summary.empty and len(self.categories) > NumberOfCategoriesWarning.warn_categories:
warn(NumberOfCategoriesWarning())
def is_empty(self):
"""
Overrides the super class's method to also check for length of zero.
Returns
-------
test_result : bool
The result of whether the length of the Categorical object is 0 or not.
"""
return self._values.empty
@property
def summary(self):
return self._summary
@property
def counts(self):
return self._summary.counts
@property
def percents(self):
return self._summary.percents
@property
def order(self):
return self._order
@property
def ranks(self):
return self._summary.ranks
@property
def categories(self):
return self._summary.categories
@property
def total(self):
return len(self._values)
@property
def num_of_groups(self):
return len(self._summary)
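# --- Usage sketch (added for illustration; not part of the original module) ---
# Assuming the package is importable as below, a Categorical can be built straight
# from raw labels and then queried through its summary properties:
#
#   from sci_analysis.data.categorical import Categorical
#   pets = Categorical(['cat', 'dog', 'dog', 'bird'], name='pets')
#   pets.total          # 4 observations
#   pets.num_of_groups  # 3 distinct categories
#   pets.summary        # DataFrame with counts, ranks, percents and categories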
|
sci-analysis
|
/sci_analysis-2.2.1rc0.tar.gz/sci_analysis-2.2.1rc0/sci_analysis/data/categorical.py
|
categorical.py
|
from warnings import warn
# Import packages
import pandas as pd
# Import from local
from .data import Data, is_data
from .data_operations import flatten, is_iterable
class NumberOfCategoriesWarning(Warning):
warn_categories = 50
def __str__(self):
return "The number of categories is greater than {} which might make analysis difficult. " \
"If this isn't a mistake, consider subsetting the data first".format(self.warn_categories)
def is_categorical(obj):
"""
Test if the passed array_like argument is a sci_analysis Categorical object.
Parameters
----------
obj : object
The input object.
Returns
-------
test result : bool
The test result of whether obj is a sci_analysis Categorical object.
"""
return isinstance(obj, Categorical)
class Categorical(Data):
"""
The sci_analysis representation of categorical, quantitative or textual data.
"""
def __init__(self, sequence=None, name=None, order=None, dropna=False):
"""Takes an array-like object and converts it to a pandas Categorical object.
Parameters
----------
sequence : array-like or Data or Categorical
The input object.
name : str, optional
The name of the Categorical object.
order : array-like
The order that categories in sequence should appear.
dropna : bool
Remove all occurrences of numpy NaN.
"""
if sequence is None:
self._values = pd.Series([])
self._order = order
self._name = name
self._summary = pd.DataFrame([], columns=['counts', 'ranks', 'percents', 'categories'])
elif is_data(sequence):
new_name = sequence.name or name
super(Categorical, self).__init__(v=sequence.data, n=new_name)
self._order = sequence.order
self._values = sequence.data
self._name = sequence.name
self._summary = sequence.summary
else:
self._name = name
self._values = pd.Series(sequence)
try:
self._values.astype('category')
except TypeError:
self._values = pd.Series(flatten(sequence))
except ValueError:
self._values = pd.Series([])
# Try to preserve the original dtype of the categories.
try:
if not any(self._values % 1):
self._values = self._values.astype(int)
except TypeError:
pass
self._values = self._values.astype('category')
if order is not None:
if not is_iterable(order):
order = [order]
self._values = self._values.cat.set_categories(order).cat.reorder_categories(order, ordered=True)
if dropna:
self._values = self._values.dropna()
try:
sequence += 1
self._order = None if self._values.empty else self._values.cat.categories
except TypeError:
self._order = order
counts = self._values.value_counts(sort=False, dropna=False, ascending=False)
self._summary = pd.DataFrame({
'counts': counts,
'ranks': counts.rank(method='dense', na_option='bottom', ascending=False).astype('int'),
'percents': (counts / counts.sum() * 100) if not all(counts == 0) else 0.0
})
self._summary['categories'] = self._summary.index.to_series()
if order is not None:
self._summary.sort_index(level=self._order, inplace=True, axis=0, na_position='last')
else:
self._summary.sort_values('ranks', inplace=True)
if not self._summary.empty and len(self.categories) > NumberOfCategoriesWarning.warn_categories:
warn(NumberOfCategoriesWarning())
def is_empty(self):
"""
Overrides the super class's method to also check for length of zero.
Returns
-------
test_result : bool
The result of whether the length of the Categorical object is 0 or not.
"""
return self._values.empty
@property
def summary(self):
return self._summary
@property
def counts(self):
return self._summary.counts
@property
def percents(self):
return self._summary.percents
@property
def order(self):
return self._order
@property
def ranks(self):
return self._summary.ranks
@property
def categories(self):
return self._summary.categories
@property
def total(self):
return len(self._values)
@property
def num_of_groups(self):
return len(self._summary)
| 0.883995 | 0.547646 |
class DefaultPreferences(type):
"""The type for Default Preferences that cannot be modified"""
def __setattr__(cls, key, value):
if key == "defaults":
raise AttributeError("Cannot override defaults")
else:
return type.__setattr__(cls, key, value)
def __delattr__(cls, item):
if item == "defaults":
raise AttributeError("Cannot delete defaults")
else:
return type.__delattr__(cls, item)
class Preferences(object):
"""The base Preferences class"""
__metaclass__ = DefaultPreferences
def list(self):
print(self.__dict__)
return self.__dict__
def defaults(self):
return tuple(self.__dict__.values())
class GraphPreferences(object):
"""Handles graphing preferences."""
class Plot(object):
boxplot = True
histogram = True
cdf = False
oneway = True
probplot = True
scatter = True
tukey = False
histogram_borders = False
boxplot_borders = False
defaults = (boxplot, histogram, cdf, oneway, probplot, scatter, tukey, histogram_borders, boxplot_borders)
distribution = {'counts': False,
'violin': False,
'boxplot': True,
'fit': False,
'fit_style': 'r--',
'fit_width': '2',
'cdf_style': 'k-',
'distribution': 'norm',
'bins': 20,
'color': 'green'
}
bivariate = {'points': True,
'point_style': 'k.',
'contours': False,
'contour_width': 1.25,
'fit': True,
'fit_style': 'r-',
'fit_width': 1,
'boxplot': True,
'violin': True,
'bins': 20,
'color': 'green'
}
oneway = {'boxplot': True,
'violin': False,
'point_style': '^',
'line_style': '-'
}
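# --- Usage sketch (added for illustration; not part of the original module) ---
# Plotting code can read the nested defaults directly, e.g.:
#
#   GraphPreferences.distribution['bins']   # 20
#   GraphPreferences.bivariate['fit']       # True
#   GraphPreferences.Plot.histogram         # True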
|
sci-analysis
|
/sci_analysis-2.2.1rc0.tar.gz/sci_analysis-2.2.1rc0/sci_analysis/preferences/preferences.py
|
preferences.py
|
class DefaultPreferences(type):
"""The type for Default Preferences that cannot be modified"""
def __setattr__(cls, key, value):
if key == "defaults":
raise AttributeError("Cannot override defaults")
else:
return type.__setattr__(cls, key, value)
def __delattr__(cls, item):
if item == "defaults":
raise AttributeError("Cannot delete defaults")
else:
return type.__delattr__(cls, item)
class Preferences(object):
"""The base Preferences class"""
__metaclass__ = DefaultPreferences
def list(self):
print(self.__dict__)
return self.__dict__
def defaults(self):
return tuple(self.__dict__.values())
class GraphPreferences(object):
"""Handles graphing preferences."""
class Plot(object):
boxplot = True
histogram = True
cdf = False
oneway = True
probplot = True
scatter = True
tukey = False
histogram_borders = False
boxplot_borders = False
defaults = (boxplot, histogram, cdf, oneway, probplot, scatter, tukey, histogram_borders, boxplot_borders)
distribution = {'counts': False,
'violin': False,
'boxplot': True,
'fit': False,
'fit_style': 'r--',
'fit_width': '2',
'cdf_style': 'k-',
'distribution': 'norm',
'bins': 20,
'color': 'green'
}
bivariate = {'points': True,
'point_style': 'k.',
'contours': False,
'contour_width': 1.25,
'fit': True,
'fit_style': 'r-',
'fit_width': 1,
'boxplot': True,
'violin': True,
'bins': 20,
'color': 'green'
}
oneway = {'boxplot': True,
'violin': False,
'point_style': '^',
'line_style': '-'
}
| 0.727395 | 0.156427 |
import pandas as pd
import os
from sci_annot_eval.common.bounding_box import AbsoluteBoundingBox, RelativeBoundingBox
from . parsers.parserInterface import Parser
from sci_annot_eval import evaluation
def build_id_file_dict(path: str):
result = {}
for file in os.listdir(path):
no_extension = file.split('.')[0]
result[no_extension] = os.path.join(path, file)
return result
def build_3D_dict(input: dict):
"""
Converts the nested dictionary into an input for pandas' multiIndex.
See https://stackoverflow.com/questions/24988131/nested-dictionary-to-multiindex-dataframe-where-dictionary-keys-are-column-label
"""
return {
(outerKey, innerKey): values
for outerKey, innerDict in input.items()
for innerKey, values in innerDict.items()
}
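# Illustrative example (added; the values are made up): a nested per-page result dict such as
#   {'page-1': {'Figure': (1, 0, 0)}, 'page-2': {'Table': (2, 1, 0)}}
# is flattened by build_3D_dict into
#   {('page-1', 'Figure'): (1, 0, 0), ('page-2', 'Table'): (2, 1, 0)}
# which pandas accepts as the keys of a two-level MultiIndex.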
def benchmark(
render_summary_parquet_path: str,
gtruth_parser: Parser,
pred_parser: Parser,
gtruth_dir: str,
pred_dir: str,
output_parquet_path: str,
IOU_threshold: float = 0.8
):
result_dict = {}
gtruth_file_dict = build_id_file_dict(gtruth_dir)
pred_file_dict = build_id_file_dict(pred_dir)
render_summ = pd.read_parquet(render_summary_parquet_path)
for row in render_summ.itertuples():
id = row.Index
ground_truth = []
if id in gtruth_file_dict.keys():
ground_truth = gtruth_parser.parse_file_relative(gtruth_file_dict[id])
predictions = []
if id in pred_file_dict.keys():
predictions = pred_parser.parse_file_relative(pred_file_dict[id])
result_dict[id] = evaluation.evaluate(predictions, ground_truth, IOU_threshold)
"""
Produces a DF in this shape:
class class2 ... class_1 ...
metric metric_1 metric_2 ... metric_1 metric_2 metric_3 ...
id
id_1 -1 -2 1 2 2
id_2 -3 -4 3 4 0
"""
result_df = pd.DataFrame.from_dict(result_dict, orient='index').stack()
result_df = pd.DataFrame(result_df.values.tolist(), index=result_df.index, columns=['TP', 'FP', 'FN'])\
.unstack()\
.swaplevel(axis=1)\
.sort_index(axis=1, level=0)\
.rename_axis(index='id', columns=['class', 'metric'])
result_df.to_parquet(output_parquet_path)
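# --- Usage sketch (added for illustration; not part of the original module) ---
# Assuming per-page annotation files named like PDF_ID-PAGENR.json in both folders;
# the SciAnnotParser import path is inferred from the package layout:
#
#   from sci_annot_eval.parsers.sci_annot_parser import SciAnnotParser
#   benchmark(
#       render_summary_parquet_path='render_summary.parquet',
#       gtruth_parser=SciAnnotParser(),
#       pred_parser=SciAnnotParser(),
#       gtruth_dir='annotations/ground_truth',
#       pred_dir='annotations/predictions',
#       output_parquet_path='benchmark.parquet',
#       IOU_threshold=0.8,
#   )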
|
sci-annot-eval
|
/sci_annot_eval-0.0.9-py3-none-any.whl/sci_annot_eval/benchmarking.py
|
benchmarking.py
|
import pandas as pd
import os
from sci_annot_eval.common.bounding_box import AbsoluteBoundingBox, RelativeBoundingBox
from . parsers.parserInterface import Parser
from sci_annot_eval import evaluation
def build_id_file_dict(path: str):
result = {}
for file in os.listdir(path):
no_extension = file.split('.')[0]
result[no_extension] = os.path.join(path, file)
return result
def build_3D_dict(input: dict):
"""
Converts the nested dictionary into an input for pandas' multiIndex.
See https://stackoverflow.com/questions/24988131/nested-dictionary-to-multiindex-dataframe-where-dictionary-keys-are-column-label
"""
return {
(outerKey, innerKey): values
for outerKey, innerDict in input.items()
for innerKey, values in innerDict.items()
}
def benchmark(
render_summary_parquet_path: str,
gtruth_parser: Parser,
pred_parser: Parser,
gtruth_dir: str,
pred_dir: str,
output_parquet_path: str,
IOU_threshold: float = 0.8
):
result_dict = {}
gtruth_file_dict = build_id_file_dict(gtruth_dir)
pred_file_dict = build_id_file_dict(pred_dir)
render_summ = pd.read_parquet(render_summary_parquet_path)
for row in render_summ.itertuples():
id = row.Index
ground_truth = []
if id in gtruth_file_dict.keys():
ground_truth = gtruth_parser.parse_file_relative(gtruth_file_dict[id])
predictions = []
if id in pred_file_dict.keys():
predictions = pred_parser.parse_file_relative(pred_file_dict[id])
result_dict[id] = evaluation.evaluate(predictions, ground_truth, IOU_threshold)
"""
Produces a DF in this shape:
class class2 ... class_1 ...
metric metric_1 metric_2 ... metric_1 metric_2 metric_3 ...
id
id_1 -1 -2 1 2 2
id_2 -3 -4 3 4 0
"""
result_df = pd.DataFrame.from_dict(result_dict, orient='index').stack()
result_df = pd.DataFrame(result_df.values.tolist(), index=result_df.index, columns=['TP', 'FP', 'FN'])\
.unstack()\
.swaplevel(axis=1)\
.sort_index(axis=1, level=0)\
.rename_axis(index='id', columns=['class', 'metric'])
result_df.to_parquet(output_parquet_path)
| 0.557845 | 0.216094 |
import argparse
from sci_annot_eval.common.bounding_box import AbsoluteBoundingBox, RelativeBoundingBox
from sci_annot_eval.exporters.sci_annot_exporter import SciAnnotExporter
from . helpers import rasterize_pdfs, pdffigures2_page_splitter, deepfigures_prediction
import coloredlogs
import logging
from enum import Enum
from . benchmarking import benchmark
from . parsers import sci_annot_parser, pdffigures2_parser, parserInterface
from sci_annot_eval.common.prediction_field_mapper import Pdffigures2FieldMapper, DeepfiguresFieldMapper
import os
import pandas as pd
# TODO: Type hint values
class RegisteredParsers(Enum):
SCI_ANNOT = sci_annot_parser.SciAnnotParser()
PDF_FIGURES_2 = pdffigures2_parser.PdfFigures2Parser(Pdffigures2FieldMapper)
DEEPFIGURES = pdffigures2_parser.PdfFigures2Parser(DeepfiguresFieldMapper)
def run_benchmark(
render_summary_parquet_path: str,
gtruth_parser_name: str,
pred_parser_name: str,
gtruth_dir: str,
pred_dir: str,
output_parquet_path: str,
IOU_threshold: float = 0.8,
**kwargs
):
gtruth_parser = RegisteredParsers.__getitem__(gtruth_parser_name)
pred_parser = RegisteredParsers.__getitem__(pred_parser_name)
benchmark(
render_summary_parquet_path,
gtruth_parser.value,
pred_parser.value,
gtruth_dir,
pred_dir,
output_parquet_path,
IOU_threshold
)
def run_deepfigures_prediction(
deepfigures_root: str,
input_folder: str,
output_folder: str,
run_summary_csv_path: str,
**kwargs
):
deepfigures_prediction.run_deepfigures_prediction_for_folder(
deepfigures_root,
input_folder,
output_folder,
run_summary_csv_path
)
def run_transpile(
input_dir: str,
input_parser_name: str,
render_summary_parquet_path: str,
output_dir: str,
**kwargs
):
input_parser = RegisteredParsers.__getitem__(input_parser_name).value
exporter = SciAnnotExporter()
if not os.path.exists(output_dir):
logging.debug('Output directory does not exist. Creating it...')
os.mkdir(output_dir)
files = os.listdir(input_dir)
render_summ = pd.read_parquet(render_summary_parquet_path)
for i, file in enumerate(files):
id = file[:-5]
logging.debug(f'Transpiling file {i+1}/{len(files)} with id {id}')
summary_row = render_summ[render_summ.index == id]
relative_input = input_parser.parse_file_relative(os.path.join(input_dir, file))
exporter.export_to_file(
relative_input,
int(summary_row.width.values[0]),
int(summary_row.height.values[0]),
os.path.join(output_dir, file)
)
logging.info(f'Transpiled {len(files)} files')
# TODO: Make args consistent
def main():
parser = argparse.ArgumentParser(description='Command line tool for managing the sci_annot evaluator and its helper functions', argument_default=argparse.SUPPRESS)
parser.add_argument('--verbose', '-v', dest='verbose', help='Enable verbose logging (info, debug)', action='count', default=0)
subparsers = parser.add_subparsers()
parser_rasterize = subparsers.add_parser(
'rasterize',
description='Rasterize all pdfs in input folder and additionally produce a summary parquet file called render_summary.parquet in the output folder.',
argument_default=argparse.SUPPRESS
)
parser_rasterize.add_argument('-i', dest='input-dir', metavar='input_folder', help='Input folder containing PDFs.', required=True)
parser_rasterize.add_argument('-o' ,dest='output-dir', metavar='output_folder', help='Output folder to save page rasters.', required=True)
parser_rasterize.add_argument('--dpi', metavar='DPI', help='DPI to render at (default is 150).', type=int)
parser_rasterize.add_argument('-f', dest='format', metavar='format', help='Output format for images (default is png).')
parser_rasterize.add_argument('-t', dest='nr-threads', metavar='threads', help='Number of threads to use when rasterizing (default is 8).')
parser_rasterize.set_defaults(func=rasterize_pdfs.rasterize)
parser_pdffig2 = subparsers.add_parser(
'split-pdffigures2',
description='Take original pdffigures2 output and split it into validator-friendly per-page files.',
argument_default=argparse.SUPPRESS
)
parser_pdffig2.add_argument('-i', dest='input-dir', metavar='input_folder', help='Input folder containing the original predictions.', required=True)
parser_pdffig2.add_argument('-o' ,dest='output-dir', metavar='output_folder', help='Output folder to save per-page predictions.', required=True)
parser_pdffig2.add_argument('-p' ,dest='run-prefix', metavar='prefix', help='Prediction prefix specified with -d when running pdffigures2', required=True)
parser_pdffig2.add_argument('-s' ,dest='render_summary_path', metavar='path', help='Path to render summary parquet file', required=True)
parser_pdffig2.set_defaults(func=pdffigures2_page_splitter.split_pages)
parser_benchmark = subparsers.add_parser(
'benchmark',
description='Evaluate predictions against a ground truth and produce TP, FP, and FN metrics for each page',
argument_default=argparse.SUPPRESS
)
parser_benchmark.add_argument('-g', '--ground-truth-dir', dest='gtruth_dir', metavar='DIR', help='Directory containing files with ground truth annotations. Each should be named like PDF_ID-PAGENR.EXTENSION.', required=True)
parser_benchmark.add_argument('-p', '--predictions-dir', dest='pred_dir', metavar='DIR', help='Directory containing files with prediction annotations. Each should be named like: PDF_ID-PAGENR.EXTENSION.', required=True)
parser_benchmark.add_argument('-G', '--ground-truth-parser', dest='gtruth_parser_name', help='Parser to use for each file in the ground truth directory.', choices=RegisteredParsers.__members__, required=True)
parser_benchmark.add_argument('-P', '--predictions-parser', dest='pred_parser_name', help='Parser to use for each file in the parser directory.', choices=RegisteredParsers.__members__, required=True)
parser_benchmark.add_argument('-r', '--render-summary', dest='render_summary_parquet_path', metavar='PATH', help='Path to render_summary.parquet. This table contains all of the pages to test on.', required=True)
parser_benchmark.add_argument('-o', '--output-path', dest='output_parquet_path', metavar='PATH', help='Tells the tool where to create a parquet file which contains the benchmark output', required=True)
parser_benchmark.add_argument('-t', '--IOU-threshold', dest='IOU_threshold', metavar='THRESHOLD', help='Intersection-over-Union (IOU) threshold above which predictions count as valid matches (default is 0.8)', type=float)
parser_benchmark.set_defaults(func= run_benchmark)
parser_deepfigures_predict = subparsers.add_parser(
'deepfigures-predict',
description='Use deepfigures to detect elements from each pdf in the input folder',
argument_default=argparse.SUPPRESS
)
parser_deepfigures_predict.add_argument('deepfigures_root', metavar='DIR', help='Folder containing manage.py and all other requirements for deepfigures-open')
parser_deepfigures_predict.add_argument('input_folder', metavar='DIR', help='Folder containing input PDFs')
parser_deepfigures_predict.add_argument('output_folder', metavar='DIR', help='Folder in which predictions should be saved')
parser_deepfigures_predict.add_argument('run_summary_csv_path', metavar='FILE', help='Path to save run information')
parser_deepfigures_predict.set_defaults(func=run_deepfigures_prediction)
parser_transpile = subparsers.add_parser(
'transpile',
description='Take a folder of predictions in one format and output them in another',
argument_default=argparse.SUPPRESS
)
parser_transpile.add_argument('-i', '--input-dir', dest='input_dir', metavar='DIR', help='Directory containing files with prediction annotations. Each should be named like: PDF_ID-PAGENR.EXTENSION.', required=True)
parser_transpile.add_argument('-I', '--input-parser', dest='input_parser_name', help='Parser to use for each file in the input directory.', choices=RegisteredParsers.__members__, required=True)
parser_transpile.add_argument('-o', '--output-dir', dest='output_dir', metavar='PATH', help='Where to create the transpiled files', required=True)
parser_transpile.add_argument('-r', '--render-summary', dest='render_summary_parquet_path', metavar='PATH', help='Path to render_summary.parquet. This is required in order to create right absolute coordinates', required=True)
parser_transpile.set_defaults(func=run_transpile)
args = parser.parse_args()
logging_config = {"fmt":'%(asctime)s %(levelname)s: %(message)s', "level": logging.WARNING}
if(args.verbose == 1):
logging_config['level'] = logging.INFO
elif(args.verbose == 2):
logging_config['level'] = logging.DEBUG
coloredlogs.install(**logging_config)
logging.debug('DEBUG LOGGING ENABLED')
logging.info('INFO LOGGING ENABLED')
if hasattr(args, 'func'):
args.func(**vars(args))
if __name__ == '__main__':
main()
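# --- Example invocation (added for illustration; the installed command name may differ) ---
# Running the benchmark subcommand by invoking this module directly:
#
#   python -m sci_annot_eval.cli_entrypoint -v benchmark \
#       -g ground_truth/ -p predictions/ \
#       -G SCI_ANNOT -P PDF_FIGURES_2 \
#       -r render_summary.parquet -o benchmark.parquet -t 0.8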
|
sci-annot-eval
|
/sci_annot_eval-0.0.9-py3-none-any.whl/sci_annot_eval/cli_entrypoint.py
|
cli_entrypoint.py
|
import argparse
from sci_annot_eval.common.bounding_box import AbsoluteBoundingBox, RelativeBoundingBox
from sci_annot_eval.exporters.sci_annot_exporter import SciAnnotExporter
from . helpers import rasterize_pdfs, pdffigures2_page_splitter, deepfigures_prediction
import coloredlogs
import logging
from enum import Enum
from . benchmarking import benchmark
from . parsers import sci_annot_parser, pdffigures2_parser, parserInterface
from sci_annot_eval.common.prediction_field_mapper import Pdffigures2FieldMapper, DeepfiguresFieldMapper
import os
import pandas as pd
# TODO: Type hint values
class RegisteredParsers(Enum):
SCI_ANNOT = sci_annot_parser.SciAnnotParser()
PDF_FIGURES_2 = pdffigures2_parser.PdfFigures2Parser(Pdffigures2FieldMapper)
DEEPFIGURES = pdffigures2_parser.PdfFigures2Parser(DeepfiguresFieldMapper)
def run_benchmark(
render_summary_parquet_path: str,
gtruth_parser_name: str,
pred_parser_name: str,
gtruth_dir: str,
pred_dir: str,
output_parquet_path: str,
IOU_threshold: float = 0.8,
**kwargs
):
gtruth_parser = RegisteredParsers.__getitem__(gtruth_parser_name)
pred_parser = RegisteredParsers.__getitem__(pred_parser_name)
benchmark(
render_summary_parquet_path,
gtruth_parser.value,
pred_parser.value,
gtruth_dir,
pred_dir,
output_parquet_path,
IOU_threshold
)
def run_deepfigures_prediction(
deepfigures_root: str,
input_folder: str,
output_folder: str,
run_summary_csv_path: str,
**kwargs
):
deepfigures_prediction.run_deepfigures_prediction_for_folder(
deepfigures_root,
input_folder,
output_folder,
run_summary_csv_path
)
def run_transpile(
input_dir: str,
input_parser_name: str,
render_summary_parquet_path: str,
output_dir: str,
**kwargs
):
input_parser = RegisteredParsers.__getitem__(input_parser_name).value
exporter = SciAnnotExporter()
if not os.path.exists(output_dir):
logging.debug('Output directory does not exist. Creating it...')
os.mkdir(output_dir)
files = os.listdir(input_dir)
render_summ = pd.read_parquet(render_summary_parquet_path)
for i, file in enumerate(files):
id = file[:-5]
logging.debug(f'Transpiling file {i+1}/{len(files)} with id {id}')
summary_row = render_summ[render_summ.index == id]
relative_input = input_parser.parse_file_relative(os.path.join(input_dir, file))
exporter.export_to_file(
relative_input,
int(summary_row.width.values[0]),
int(summary_row.height.values[0]),
os.path.join(output_dir, file)
)
logging.info(f'Transpiled {len(files)} files')
# TODO: Make args consistent
def main():
parser = argparse.ArgumentParser(description='Command line tool for managing the sci_annot evaluator and its helper functions', argument_default=argparse.SUPPRESS)
parser.add_argument('--verbose', '-v', dest='verbose', help='Enable verbose logging (info, debug)', action='count', default=0)
subparsers = parser.add_subparsers()
parser_rasterize = subparsers.add_parser(
'rasterize',
description='Rasterize all pdfs in input folder and additionally produce a summary parquet file called render_summary.parquet in the output folder.',
argument_default=argparse.SUPPRESS
)
parser_rasterize.add_argument('-i', dest='input-dir', metavar='input_folder', help='Input folder containing PDFs.', required=True)
parser_rasterize.add_argument('-o' ,dest='output-dir', metavar='output_folder', help='Output folder to save page rasters.', required=True)
parser_rasterize.add_argument('--dpi', metavar='DPI', help='DPI to render at (default is 150).', type=int)
parser_rasterize.add_argument('-f', dest='format', metavar='format', help='Output format for images (default is png).')
parser_rasterize.add_argument('-t', dest='nr-threads', metavar='threads', help='Number of threads to use when rasterizing (default is 8).')
parser_rasterize.set_defaults(func=rasterize_pdfs.rasterize)
parser_pdffig2 = subparsers.add_parser(
'split-pdffigures2',
description='Take original pdffigures2 output and split it into validator-friendly per-page files.',
argument_default=argparse.SUPPRESS
)
parser_pdffig2.add_argument('-i', dest='input-dir', metavar='input_folder', help='Input folder containing the original predictions.', required=True)
parser_pdffig2.add_argument('-o' ,dest='output-dir', metavar='output_folder', help='Output folder to save per-page predictions.', required=True)
parser_pdffig2.add_argument('-p' ,dest='run-prefix', metavar='prefix', help='Prediction prefix specified with -d when running pdffigures2', required=True)
parser_pdffig2.add_argument('-s' ,dest='render_summary_path', metavar='path', help='Path to render summary parquet file', required=True)
parser_pdffig2.set_defaults(func=pdffigures2_page_splitter.split_pages)
parser_benchmark = subparsers.add_parser(
'benchmark',
description='Evaluate predictions against a ground truth and produce TP, FP, and FN metrics for each page',
argument_default=argparse.SUPPRESS
)
parser_benchmark.add_argument('-g', '--ground-truth-dir', dest='gtruth_dir', metavar='DIR', help='Directory containing files with ground truth annotations. Each should be named like PDF_ID-PAGENR.EXTENSION.', required=True)
parser_benchmark.add_argument('-p', '--predictions-dir', dest='pred_dir', metavar='DIR', help='Directory containing files with prediction annotations. Each should be named like: PDF_ID-PAGENR.EXTENSION.', required=True)
parser_benchmark.add_argument('-G', '--ground-truth-parser', dest='gtruth_parser_name', help='Parser to use for each file in the ground truth directory.', choices=RegisteredParsers.__members__, required=True)
parser_benchmark.add_argument('-P', '--predictions-parser', dest='pred_parser_name', help='Parser to use for each file in the parser directory.', choices=RegisteredParsers.__members__, required=True)
parser_benchmark.add_argument('-r', '--render-summary', dest='render_summary_parquet_path', metavar='PATH', help='Path to render_summary.parquet. This table contains all of the pages to test on.', required=True)
parser_benchmark.add_argument('-o', '--output-path', dest='output_parquet_path', metavar='PATH', help='Tells the tool where to create a parquet file which contains the benchmark output', required=True)
parser_benchmark.add_argument('-t', '--IOU-threshold', dest='IOU_threshold', metavar='THRESHOLD', help='Intersection-over-Union (IOU) threshold above which predictions count as valid matches (default is 0.8)', type=float)
parser_benchmark.set_defaults(func= run_benchmark)
parser_deepfigures_predict = subparsers.add_parser(
'deepfigures-predict',
description='Use deepfigures to detect elements from each pdf in the input folder',
argument_default=argparse.SUPPRESS
)
parser_deepfigures_predict.add_argument('deepfigures_root', metavar='DIR', help='Folder containing manage.py and all other requirements for deepfigures-open')
parser_deepfigures_predict.add_argument('input_folder', metavar='DIR', help='Folder containing input PDFs')
parser_deepfigures_predict.add_argument('output_folder', metavar='DIR', help='Folder in which predictions should be saved')
parser_deepfigures_predict.add_argument('run_summary_csv_path', metavar='FILE', help='Path to save run information')
parser_deepfigures_predict.set_defaults(func=run_deepfigures_prediction)
parser_transpile = subparsers.add_parser(
'transpile',
description='Take a folder of predictions in one format and output them in another',
argument_default=argparse.SUPPRESS
)
parser_transpile.add_argument('-i', '--input-dir', dest='input_dir', metavar='DIR', help='Directory containing files with prediction annotations. Each should be named like: PDF_ID-PAGENR.EXTENSION.', required=True)
parser_transpile.add_argument('-I', '--input-parser', dest='input_parser_name', help='Parser to use for each file in the input directory.', choices=RegisteredParsers.__members__, required=True)
parser_transpile.add_argument('-o', '--output-dir', dest='output_dir', metavar='PATH', help='Where to create the transpiled files', required=True)
parser_transpile.add_argument('-r', '--render-summary', dest='render_summary_parquet_path', metavar='PATH', help='Path to render_summary.parquet. This is required in order to create right absolute coordinates', required=True)
parser_transpile.set_defaults(func=run_transpile)
args = parser.parse_args()
logging_config = {"fmt":'%(asctime)s %(levelname)s: %(message)s', "level": logging.WARNING}
if(args.verbose == 1):
logging_config['level'] = logging.INFO
elif(args.verbose == 2):
logging_config['level'] = logging.DEBUG
coloredlogs.install(**logging_config)
logging.debug('DEBUG LOGGING ENABLED')
logging.info('INFO LOGGING ENABLED')
if hasattr(args, 'func'):
args.func(**vars(args))
if __name__ == '__main__':
main()
| 0.446977 | 0.207235 |
import cv2 as cv
import numpy as np
from ..common.bounding_box import AbsoluteBoundingBox, RelativeBoundingBox
def delete_multiple_elements(list_object, indices):
indices = sorted(indices, reverse=True)
for idx in indices:
list_object.pop(idx)
def make_absolute(
bbox_list: list[RelativeBoundingBox],
canvas_width: int,
canvas_height: int
) -> list[AbsoluteBoundingBox]:
result_dict: dict[RelativeBoundingBox, AbsoluteBoundingBox] = {}
for box in bbox_list:
if type(box) is not RelativeBoundingBox:
raise TypeError(f'Annotation {box} is not of type RelativeBoundingBox!')
abs_box = AbsoluteBoundingBox(
box.type,
box.x*canvas_width,
box.y*canvas_height,
box.height*canvas_height,
box.width*canvas_width,
box.parent
)
result_dict[box] = abs_box
# Replace old parent references with new ones
for id, annotation in result_dict.items():
if annotation.parent:
annotation.parent = result_dict[annotation.parent]
return list(result_dict.values())
def make_relative(
bbox_list: list[AbsoluteBoundingBox],
canvas_width: int,
canvas_height: int
) -> list[RelativeBoundingBox]:
result_dict: dict[AbsoluteBoundingBox, RelativeBoundingBox] = {}
for box in bbox_list:
if type(box) is not AbsoluteBoundingBox:
raise TypeError(f'Annotation {box} is not of type AbsoluteBoundingBox!')
abs_box = RelativeBoundingBox(
box.type,
box.x/float(canvas_width),
box.y/float(canvas_height),
box.height/float(canvas_height),
box.width/float(canvas_width),
box.parent
)
result_dict[box] = abs_box
# Replace old parent references with new ones
for id, annotation in result_dict.items():
if annotation.parent:
annotation.parent = result_dict[annotation.parent]
return list(result_dict.values())
# TODO: Add float32 support!
def crop_to_content(
img: np.ndarray,
orig_coords: AbsoluteBoundingBox,
threshold: int= 248
) -> tuple[float, float, float, float]:
ox = int(orig_coords.x)
oy = int(orig_coords.y)
ow = int(orig_coords.width)
oh = int(orig_coords.height)
selected_slice = img[oy:oy+oh+1, ox:ox+ow+1]
is_color = len(img.shape) == 3 and img.shape[2] == 3
if is_color:
gray = cv.cvtColor(selected_slice, cv.COLOR_BGR2GRAY)
else:
gray = selected_slice
gray = 255 * (gray < threshold).astype(np.uint8)
coords = cv.findNonZero(gray) # Find all non-zero points (text)
x, y, w, h = cv.boundingRect(coords) # Find minimum spanning bounding box
return (ox+x, oy+y, w, h)
def crop_all_to_content(
image: bytes,
orig_annots: list[AbsoluteBoundingBox],
threshold: int= 248
) -> list[AbsoluteBoundingBox]:
"""Takes a page as a bytes object and crops the whitespace out of the provided annotations.
Args:
image (bytes): _description_
orig_annots (list[AbsoluteBoundingBox]): _description_
threshold (int, optional): _description_. Defaults to 248.
Returns:
list[AbsoluteBoundingBox]: _description_
"""
image_as_np = np.frombuffer(image, dtype=np.uint8)
img = cv.imdecode(image_as_np, cv.IMREAD_COLOR)
result_dict = {}
for annot in orig_annots:
x, y, w, h = crop_to_content(img, annot, threshold)
cropped = AbsoluteBoundingBox(
annot.type,
x,
y,
h,
w,
annot.parent
)
result_dict[annot] = cropped
# Replace old parent references with new ones
for id, annotation in result_dict.items():
if annotation.parent:
annotation.parent = result_dict[annotation.parent]
return list(result_dict.values())
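# --- Usage sketch (added for illustration; not part of the original module) ---
# Relative and absolute coordinates convert back and forth for a fixed canvas size
# (up to floating-point rounding). The positional argument order
# (type, x, y, height, width, parent) follows the constructor calls above:
#
#   box = AbsoluteBoundingBox('Figure', 100, 200, 300, 400, None)
#   rel = make_relative([box], canvas_width=1000, canvas_height=1000)
#   back = make_absolute(rel, canvas_width=1000, canvas_height=1000)
#   # back[0] is again roughly x=100, y=200, height=300, width=400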
|
sci-annot-eval
|
/sci_annot_eval-0.0.9-py3-none-any.whl/sci_annot_eval/helpers/helpers.py
|
helpers.py
|
import cv2 as cv
import numpy as np
from ..common.bounding_box import AbsoluteBoundingBox, RelativeBoundingBox
def delete_multiple_elements(list_object, indices):
indices = sorted(indices, reverse=True)
for idx in indices:
list_object.pop(idx)
def make_absolute(
bbox_list: list[RelativeBoundingBox],
canvas_width: int,
canvas_height: int
) -> list[AbsoluteBoundingBox]:
result_dict: dict[RelativeBoundingBox, AbsoluteBoundingBox] = {}
for box in bbox_list:
if type(box) is not RelativeBoundingBox:
raise TypeError(f'Annotation {box} is not of type RelativeBoundingBox!')
abs_box = AbsoluteBoundingBox(
box.type,
box.x*canvas_width,
box.y*canvas_height,
box.height*canvas_height,
box.width*canvas_width,
box.parent
)
result_dict[box] = abs_box
# Replace old parent references with new ones
for id, annotation in result_dict.items():
if annotation.parent:
annotation.parent = result_dict[annotation.parent]
return list(result_dict.values())
def make_relative(
bbox_list: list[AbsoluteBoundingBox],
canvas_width: int,
canvas_height: int
) -> list[RelativeBoundingBox]:
result_dict: dict[AbsoluteBoundingBox, RelativeBoundingBox] = {}
for box in bbox_list:
if type(box) is not AbsoluteBoundingBox:
raise TypeError(f'Annotation {box} is not of type AbsoluteBoundingBox!')
abs_box = RelativeBoundingBox(
box.type,
box.x/float(canvas_width),
box.y/float(canvas_height),
box.height/float(canvas_height),
box.width/float(canvas_width),
box.parent
)
result_dict[box] = abs_box
# Replace old parent references with new ones
for id, annotation in result_dict.items():
if annotation.parent:
annotation.parent = result_dict[annotation.parent]
return list(result_dict.values())
# TODO: Add float32 support!
def crop_to_content(
img: np.ndarray,
orig_coords: AbsoluteBoundingBox,
threshold: int= 248
) -> tuple[float, float, float, float]:
ox = int(orig_coords.x)
oy = int(orig_coords.y)
ow = int(orig_coords.width)
oh = int(orig_coords.height)
selected_slice = img[oy:oy+oh+1, ox:ox+ow+1]
is_color = len(img.shape) == 3 and img.shape[2] == 3
if is_color:
gray = cv.cvtColor(selected_slice, cv.COLOR_BGR2GRAY)
else:
gray = selected_slice
gray = 255 * (gray < threshold).astype(np.uint8)
coords = cv.findNonZero(gray) # Find all non-zero points (text)
x, y, w, h = cv.boundingRect(coords) # Find minimum spanning bounding box
return (ox+x, oy+y, w, h)
def crop_all_to_content(
image: bytes,
orig_annots: list[AbsoluteBoundingBox],
threshold: int= 248
) -> list[AbsoluteBoundingBox]:
"""Takes a page as a bytes object and crops the whitespace out of the provided annotations.
Args:
image (bytes): _description_
orig_annots (list[AbsoluteBoundingBox]): _description_
threshold (int, optional): _description_. Defaults to 248.
Returns:
list[AbsoluteBoundingBox]: _description_
"""
image_as_np = np.frombuffer(image, dtype=np.uint8)
img = cv.imdecode(image_as_np, cv.IMREAD_COLOR)
result_dict = {}
for annot in orig_annots:
x, y, w, h = crop_to_content(img, annot, threshold)
cropped = AbsoluteBoundingBox(
annot.type,
x,
y,
h,
w,
annot.parent
)
result_dict[annot] = cropped
# Replace old parent references with new ones
for id, annotation in result_dict.items():
if annotation.parent:
annotation.parent = result_dict[annotation.parent]
return list(result_dict.values())
| 0.639624 | 0.354629 |
import os
import logging
from collections import Counter
import pandas as pd
import subprocess
import shutil
def run_deepfigures_prediction_for_folder(
deepfigures_root: str,
input_folder: str,
output_folder: str,
run_summary_csv_path: str
):
input_folder = os.path.abspath(input_folder)
output_folder = os.path.abspath(output_folder)
pdfs = [file for file in os.listdir(input_folder) if file.endswith('.pdf')]
logging.info(f'Found {len(pdfs)} pdf files in {input_folder}')
pdf_ids = set([file[:-4] for file in pdfs])
existing_prediction_folders = set(os.listdir(output_folder))
not_predicted_ids = pdf_ids - existing_prediction_folders
logging.info(f'{len(not_predicted_ids)} have not been processed yet')
id_status_dict = {}
for i, id in enumerate(not_predicted_ids):
logging.info(f'{i+1}/{len(not_predicted_ids)} Processing pdf {id}')
output_dir_for_pdf = os.path.join(output_folder, id)
os.mkdir(output_dir_for_pdf)
with open(os.path.join(output_dir_for_pdf, 'log.txt'), 'w') as logfile:
child = subprocess.Popen(
args=['pipenv', 'run', 'python3', './manage.py', 'detectfigures', '-s', output_dir_for_pdf, os.path.join(input_folder, id + '.pdf')],
cwd=deepfigures_root,
stdout=logfile,
stderr=logfile
)
streamdata = child.communicate()[0]
return_code = child.returncode
successful = return_code == 0
if not successful:
logging.error('Prediction unsuccessful!')
else:
# Delete rendered pages
result_folder = os.path.join(output_folder, id, get_weird_output_folder(output_folder, id))
rendered_folder = os.path.join(result_folder, id + '.pdf-images')
shutil.rmtree(rendered_folder)
id_status_dict[id] = successful
logging.info(f'Successful counts: {Counter(id_status_dict.values())}')
summary_series = pd.Series(id_status_dict, name='prediction_successful')
summary_series.to_csv(run_summary_csv_path, index_label='pdf_id')
logging.debug('Extracting result JSONs')
os.mkdir(os.path.join(output_folder, 'results'))
for id, successful in id_status_dict.items():
if successful:
result_folder = os.path.join(output_folder, id, get_weird_output_folder(output_folder, id))
shutil.copy(
os.path.join(result_folder, id + 'deepfigures-results.json'),
os.path.join(output_folder, 'results', id + '.json'),
)
def get_weird_output_folder(output_root: str, pdf_id: str)-> str:
all_entries = os.listdir(os.path.join(output_root, pdf_id))
log_txt_excluded_entries = [entry for entry in all_entries if entry != 'log.txt']
# Yeah I know it's ugly
return log_txt_excluded_entries[0]
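# --- Usage sketch (added for illustration; not part of the original module) ---
# All paths are made-up examples; deepfigures-open must be set up with pipenv
# inside the given root directory:
#
#   run_deepfigures_prediction_for_folder(
#       deepfigures_root='/opt/deepfigures-open',
#       input_folder='pdfs/',
#       output_folder='deepfigures_out/',
#       run_summary_csv_path='deepfigures_run_summary.csv',
#   )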
|
sci-annot-eval
|
/sci_annot_eval-0.0.9-py3-none-any.whl/sci_annot_eval/helpers/deepfigures_prediction.py
|
deepfigures_prediction.py
|
import os
import logging
from collections import Counter
import pandas as pd
import subprocess
import shutil
def run_deepfigures_prediction_for_folder(
deepfigures_root: str,
input_folder: str,
output_folder: str,
run_summary_csv_path: str
):
input_folder = os.path.abspath(input_folder)
output_folder = os.path.abspath(output_folder)
pdfs = [file for file in os.listdir(input_folder) if file.endswith('.pdf')]
logging.info(f'Found {len(pdfs)} pdf files in {input_folder}')
pdf_ids = set([file[:-4] for file in pdfs])
existing_prediction_folders = set(os.listdir(output_folder))
not_predicted_ids = pdf_ids - existing_prediction_folders
logging.info(f'{len(not_predicted_ids)} have not been processed yet')
id_status_dict = {}
for i, id in enumerate(not_predicted_ids):
logging.info(f'{i+1}/{len(not_predicted_ids)} Processing pdf {id}')
output_dir_for_pdf = os.path.join(output_folder, id)
os.mkdir(output_dir_for_pdf)
with open(os.path.join(output_dir_for_pdf, 'log.txt'), 'w') as logfile:
child = subprocess.Popen(
args=['pipenv', 'run', 'python3', './manage.py', 'detectfigures', '-s', output_dir_for_pdf, os.path.join(input_folder, id + '.pdf')],
cwd=deepfigures_root,
stdout=logfile,
stderr=logfile
)
streamdata = child.communicate()[0]
return_code = child.returncode
successful = return_code == 0
if not successful:
logging.error('Prediction unsuccessful!')
else:
# Delete rendered pages
result_folder = os.path.join(output_folder, id, get_weird_output_folder(output_folder, id))
rendered_folder = os.path.join(result_folder, id + '.pdf-images')
shutil.rmtree(rendered_folder)
id_status_dict[id] = successful
logging.info(f'Successful counts: {Counter(id_status_dict.values())}')
summary_series = pd.Series(id_status_dict, name='prediction_successful')
summary_series.to_csv(run_summary_csv_path, index_label='pdf_id')
logging.debug('Extracting result JSONs')
os.mkdir(os.path.join(output_folder, 'results'))
for id, successful in id_status_dict.items():
if successful:
result_folder = os.path.join(output_folder, id, get_weird_output_folder(output_folder, id))
shutil.copy(
os.path.join(result_folder, id + 'deepfigures-results.json'),
os.path.join(output_folder, 'results', id + '.json'),
)
def get_weird_output_folder(output_root: str, pdf_id: str)-> str:
all_entries = os.listdir(os.path.join(output_root, pdf_id))
log_txt_excluded_entries = [entry for entry in all_entries if entry != 'log.txt']
# Yeah I know it's ugly
return log_txt_excluded_entries[0]
| 0.210604 | 0.151467 |
import json
import os
from typing import Any
import pandas as pd
import logging
import numpy as np
# TODO: Assumed by pdffigures2 and cannot be changed, but should be changed for deepfigures to 100
ASSUMED_DPI = 100
def append_entry(result_dict: dict[int, Any], page_nr: int, category: str, entry: dict):
if page_nr not in result_dict.keys():
result_dict[page_nr] = {'figures': [], 'regionless-captions': []}
result_dict[page_nr][category].append(entry)
def split_pages(input_dir: str, output_dir: str, run_prefix: str, render_summary_path: str, **kwargs):
"""
Turn the normal pdffigures2/deepfigures output into per-page output with width/height info.
IMPORTANT: run pdffigures2 with the -c flag!
run_prefix: str - Mandatory prefix that each json file contains,
specified with the -d flag when running pdffigures2.
render_summary_path: str - Path to the parquet file that contains information on rendered pages,
like width, height, DPI etc.
This is used to figure out which size the page would have when rendered at ASSUMED_DPI.
"""
render_summ = pd.read_parquet(render_summary_path)
input_files = [f for f in os.listdir(input_dir) if f.endswith('.json') and f.startswith(run_prefix)]
if not os.path.exists(output_dir):
logging.debug(f'Creating output dir {output_dir}')
os.makedirs(output_dir)
logging.info(f'Parsing {len(input_files)} files...')
logging_points = list(np.linspace(len(input_files), 1, 10, dtype=np.int64))
for file_nr, file in enumerate(input_files):
full_input_file_path = os.path.join(input_dir, file)
with open(full_input_file_path, 'r') as fp:
result: dict[int, Any] = {}
pdf_id = file[len(run_prefix):-5]
parsed_json = json.load(fp)
for figure_entry in parsed_json['figures']:
# Pages are 0-indexed!
append_entry(result, figure_entry['page']+1, 'figures', figure_entry)
if 'regionless-captions' in parsed_json:
for reg_cap_entry in parsed_json['regionless-captions']:
# Pages are 0-indexed!
append_entry(result, reg_cap_entry['page']+1, 'regionless-captions', reg_cap_entry)
if result:
rel_summs = render_summ[render_summ['file'] == pdf_id]
for page_nr, entry_dict in result.items():
summary_row = rel_summs[rel_summs['page_nr'] == page_nr].iloc[0]
scale_factor = ASSUMED_DPI / summary_row['DPI']
scaled_width = scale_factor * summary_row['width']
scaled_height = scale_factor * summary_row['height']
extended_entry = {'width': scaled_width, 'height': scaled_height, **entry_dict}
with open(os.path.join(output_dir, str(summary_row.name)+'.json'), 'w+') as of:
json.dump(extended_entry, of, indent=4)
if file_nr+1 == logging_points[-1]:
logging_points.pop()
logging.info(f'Processed {file_nr+1}/{len(input_files)} files.')
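# Illustrative scaling example (added; the numbers are made up): a page rasterized at
# 150 DPI with a width of 1275 px is reported here at
# 1275 * (ASSUMED_DPI / 150) = 850 px, i.e. the size the figure-extraction output assumes.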
|
sci-annot-eval
|
/sci_annot_eval-0.0.9-py3-none-any.whl/sci_annot_eval/helpers/pdffigures2_page_splitter.py
|
pdffigures2_page_splitter.py
|
import json
import os
from typing import Any
import pandas as pd
import logging
import numpy as np
# TODO: Assumed by pdffigures2 and cannot be changed, but should be changed for deepfigures to 100
ASSUMED_DPI = 100
def append_entry(result_dict: dict[int, Any], page_nr: int, category: str, entry: dict):
if page_nr not in result_dict.keys():
result_dict[page_nr] = {'figures': [], 'regionless-captions': []}
result_dict[page_nr][category].append(entry)
def split_pages(input_dir: str, output_dir: str, run_prefix: str, render_summary_path: str, **kwargs):
"""
Turn the normal pdffigures2/deepfigures output into per-page output with width/height info.
IMPORTANT: run pdffigures2 with the -c flag!
run_prefix: str - Mandatory prefix that each json file contains,
specified with the -d flag when running pdffigures2.
render_summary_path: str - Path to the parquet file that contains information on rendered pages,
like width, height, DPI etc.
This is used to figure out which size the page would have when rendered at ASSUMED_DPI.
"""
render_summ = pd.read_parquet(render_summary_path)
input_files = [f for f in os.listdir(input_dir) if f.endswith('.json') and f.startswith(run_prefix)]
if not os.path.exists(output_dir):
logging.debug(f'Creating output dir {output_dir}')
os.makedirs(output_dir)
logging.info(f'Parsing {len(input_files)} files...')
logging_points = list(np.linspace(len(input_files), 1, 10, dtype=np.int64))
for file_nr, file in enumerate(input_files):
full_input_file_path = os.path.join(input_dir, file)
with open(full_input_file_path, 'r') as fp:
result: dict[int, Any] = {}
pdf_id = file[len(run_prefix):-5]
parsed_json = json.load(fp)
for figure_entry in parsed_json['figures']:
# Pages are 0-indexed!
append_entry(result, figure_entry['page']+1, 'figures', figure_entry)
if 'regionless-captions' in parsed_json:
for reg_cap_entry in parsed_json['regionless-captions']:
# Pages are 0-indexed!
append_entry(result, reg_cap_entry['page']+1, 'regionless-captions', reg_cap_entry)
if result:
rel_summs = render_summ[render_summ['file'] == pdf_id]
for page_nr, entry_dict in result.items():
summary_row = rel_summs[rel_summs['page_nr'] == page_nr].iloc[0]
scale_factor = ASSUMED_DPI / summary_row['DPI']
scaled_width = scale_factor * summary_row['width']
scaled_height = scale_factor * summary_row['height']
extended_entry = {'width': scaled_width, 'height': scaled_height, **entry_dict}
with open(os.path.join(output_dir, str(summary_row.name)+'.json'), 'w+') as of:
json.dump(extended_entry, of, indent=4)
if file_nr+1 == logging_points[-1]:
logging_points.pop()
logging.info(f'Processed {file_nr+1}/{len(input_files)} files.')
| 0.337968 | 0.229816 |
from sci_annot_eval.common.sci_annot_annotation import Annotation, SciAnnotOutput
from ..common.bounding_box import AbsoluteBoundingBox, RelativeBoundingBox
from . exporterInterface import Exporter
import json
from typing import TypedDict, Any
class SciAnnotExporter(Exporter):
def export_to_dict(self, input: list[RelativeBoundingBox], canvas_width: int, canvas_height: int, **kwargs) -> SciAnnotOutput:
result: SciAnnotOutput = {
'canvasHeight': canvas_height,
'canvasWidth': canvas_width,
'annotations': []
}
source = kwargs['source'] if 'source' in kwargs.keys() else 'Unknown'
for annotation in input:
if type(annotation) is not RelativeBoundingBox:
raise TypeError(f'Annotation {annotation} is not of type RelativeBoundingBox!')
absolute_x = annotation.x * canvas_width
absolute_y = annotation.y * canvas_height
absolute_height = annotation.height * canvas_height
absolute_width = annotation.width * canvas_width
generated_anno: Annotation = {
"type": "Annotation",
"body": [
{
"type": "TextualBody",
"purpose": "img-cap-enum",
"value": f"{annotation.type}"
}
],
"target": {
"source": source,
"selector": {
"type": "FragmentSelector",
"conformsTo": "http://www.w3.org/TR/media-frags/",
"value": f"xywh=pixel:{absolute_x},{absolute_y},{absolute_width},{absolute_height}"
}
},
"@context": "http://www.w3.org/ns/anno.jsonld",
"id": f"#{hash(annotation)}"
}
if(annotation.parent):
generated_anno['body'].append({
"type": "TextualBody",
"purpose": "parent",
"value": f"#{hash(annotation.parent)}"
})
result['annotations'].append(generated_anno)
return result
def export_to_str(self, input: list[RelativeBoundingBox], canvas_width: int, canvas_height: int, **kwargs) -> str:
res = self.export_to_dict(input, canvas_width, canvas_height, **kwargs)
return json.dumps(res, indent=4)
def export_to_file(
self,
input: list[RelativeBoundingBox],
canvas_width: int,
canvas_height: int,
file_location: str,
**kwargs
):
res = self.export_to_str(input, canvas_width, canvas_height, **kwargs)
with open(file_location, 'w') as f:
f.write(res)
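# --- Usage sketch (added for illustration; not part of the original module) ---
# Exporting one relative box to the sci-annot JSON format; the RelativeBoundingBox
# argument order (type, x, y, height, width, parent) is inferred from its use
# elsewhere in this package, and 'page-1.png' is a made-up source name:
#
#   exporter = SciAnnotExporter()
#   figure = RelativeBoundingBox('Figure', 0.1, 0.2, 0.3, 0.4, None)
#   json_str = exporter.export_to_str([figure], canvas_width=1000,
#                                     canvas_height=1400, source='page-1.png')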
|
sci-annot-eval
|
/sci_annot_eval-0.0.9-py3-none-any.whl/sci_annot_eval/exporters/sci_annot_exporter.py
|
sci_annot_exporter.py
|
| 0.767777 | 0.270817 |
from . parserInterface import Parser
from sci_annot_eval.common.bounding_box import AbsoluteBoundingBox, BoundingBox, RelativeBoundingBox, TargetType
from sci_annot_eval.common.prediction_field_mapper import PredictionFieldMapper
from .. helpers import helpers
import json
from typing import Any, Type
class PdfFigures2Parser(Parser):
"""This parser works for both Pdffigures2 and Deepfigures
"""
def __init__(self, field_mapper: Type[PredictionFieldMapper]):
self.field_mapper = field_mapper
def extract_x12y12(self, boundaries: dict[str, float]) -> tuple[float, float, float, float]:
x = boundaries['x1']
y = boundaries['y1']
x2 = boundaries['x2']
y2 = boundaries['y2']
w = x2 - x
h = y2 - y
return x, y, w, h
def parse_dict_absolute(self, input: dict[str, Any]) -> list[AbsoluteBoundingBox]:
result: list[AbsoluteBoundingBox] = []
figures = input['figures']
for figure in figures:
fig_x, fig_y, fig_w, fig_h = self.extract_x12y12(figure[self.field_mapper.region_boundary])
fig_type = figure[self.field_mapper.figure_type]
fig_bbox = AbsoluteBoundingBox(fig_type, fig_x, fig_y, fig_h, fig_w, None)
result.append(fig_bbox)
            if self.field_mapper.caption_boundary in figure:
cap_x, cap_y, cap_w, cap_h = self.extract_x12y12(figure[self.field_mapper.caption_boundary])
result.append(AbsoluteBoundingBox(
TargetType.CAPTION.value, cap_x, cap_y, cap_h, cap_w, fig_bbox
))
regionless_captions = []
if 'regionless-captions' in input.keys():
regionless_captions = input['regionless-captions']
for r_caption in regionless_captions:
r_cap_x, r_cap_y, r_cap_w, r_cap_h = self.extract_x12y12(r_caption['boundary'])
result.append(AbsoluteBoundingBox(
TargetType.CAPTION.value, r_cap_x, r_cap_y, r_cap_h, r_cap_w, None
))
return result
def parse_dict_relative(self, input: dict[str, Any]) -> list[RelativeBoundingBox]:
return helpers.make_relative(self.parse_dict_absolute(input), int(input['width']), int(input['height']))
def parse_text_absolute(self, input: str) -> list[AbsoluteBoundingBox]:
return self.parse_dict_absolute(json.loads(input))
def parse_text_relative(self, input: str) -> list[RelativeBoundingBox]:
return self.parse_dict_relative(json.loads(input))
def parse_file_absolute(self, path: str) -> list[AbsoluteBoundingBox]:
with open(path, 'r') as fd:
return self.parse_dict_absolute(json.load(fd))
def parse_file_relative(self, path: str) -> list[RelativeBoundingBox]:
with open(path, 'r') as fd:
return self.parse_dict_relative(json.load(fd))
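A short, hedged sketch of the parser above in use. The concrete field-mapper name (Pdffigures2FieldMapper) and the input file name are assumptions; any PredictionFieldMapper subclass that maps region_boundary, caption_boundary and figure_type onto your tool's JSON keys would do.

# Sketch only -- Pdffigures2FieldMapper is an assumed name for a PredictionFieldMapper subclass.
from sci_annot_eval.common.prediction_field_mapper import Pdffigures2FieldMapper
from sci_annot_eval.parsers.pdffigures2_parser import PdfFigures2Parser

parser = PdfFigures2Parser(Pdffigures2FieldMapper)

# Absolute pixel boxes straight from a pdffigures2/deepfigures JSON dump...
absolute_boxes = parser.parse_file_absolute('paper_figures.json')

# ...or boxes normalised by the page size recorded in the JSON ('width'/'height' keys).
for box in parser.parse_file_relative('paper_figures.json'):
    print(box.type, box.x, box.y, box.width, box.height)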
| sci-annot-eval | /sci_annot_eval-0.0.9-py3-none-any.whl/sci_annot_eval/parsers/pdffigures2_parser.py | pdffigures2_parser.py |
| 0.714927 | 0.430447 |
from sci_annot_eval.common.bounding_box import RelativeBoundingBox
from . parserInterface import Parser
from .. common.bounding_box import AbsoluteBoundingBox, BoundingBox, RelativeBoundingBox, TargetType
from ..common.sci_annot_annotation import Annotation, SciAnnotOutput
from .. helpers import helpers
import re
import json
from typing import Any, Optional
from typing import Mapping
class SciAnnotParser(Parser):
    location_regex = re.compile(r'\d+(?:\.\d+)?')
child_types = [TargetType.CAPTION]
def get_annotation_type(self, annot: Annotation)-> TargetType:
for block in annot['body']:
if block['purpose'] == 'img-cap-enum':
return TargetType(block['value'])
raise ValueError(f'Annotation has no type: {annot}')
    def get_annotation_parent_id(self, annot: Annotation) -> Optional[str]:
for block in annot['body']:
if block['purpose'] == 'parent':
return block['value']
return None
    def parse_location_string(self, annot: Annotation) -> tuple[float, float, float, float]:
loc = annot['target']['selector']['value']
parsed_loc = self.location_regex.findall(loc)
        if len(parsed_loc) != 4:
raise ValueError(f'Location string couldn\'t be parsed: {loc}')
# Python's typing is not so clever yet...
return (float(parsed_loc[0]), float(parsed_loc[1]), float(parsed_loc[2]), float(parsed_loc[3]))
def parse_dict_absolute(self, input: Mapping) -> list[AbsoluteBoundingBox]:
result: dict[str, AbsoluteBoundingBox] = {}
for annotation in input['annotations']:
id = annotation['id']
ann_type = self.get_annotation_type(annotation)
x, y, width, height = self.parse_location_string(annotation)
parent_id = None
if ann_type in self.child_types:
parent_id = self.get_annotation_parent_id(annotation)
result[id] = AbsoluteBoundingBox(
ann_type.value,
x,
y,
height,
width,
parent_id,
)
for id, annotation in result.items():
if annotation.parent:
annotation.parent = result[annotation.parent]
res_list = list(result.values())
return res_list
def parse_dict_relative(self, input: Mapping[str, Any]) -> list[RelativeBoundingBox]:
canvas_height = int(input['canvasHeight'])
canvas_width = int(input['canvasWidth'])
return helpers.make_relative(self.parse_dict_absolute(input), canvas_width, canvas_height)
def parse_text_absolute(self, input: str) -> list[AbsoluteBoundingBox]:
return self.parse_dict_absolute(json.loads(input))
def parse_text_relative(self, input: str) -> list[RelativeBoundingBox]:
return self.parse_dict_relative(json.loads(input))
def parse_file_absolute(self, path: str) -> list[AbsoluteBoundingBox]:
with open(path, 'r') as fd:
return self.parse_dict_absolute(json.load(fd))
def parse_file_relative(self, path: str) -> list[RelativeBoundingBox]:
with open(path, 'r') as fd:
return self.parse_dict_relative(json.load(fd))
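A usage sketch for the parser above, for example to read back a file produced by SciAnnotExporter; the file name is a placeholder.

from sci_annot_eval.parsers.sci_annot_parser import SciAnnotParser

parser = SciAnnotParser()

absolute = parser.parse_file_absolute('page_1.json')   # pixel coordinates
relative = parser.parse_file_relative('page_1.json')   # fractions of canvasWidth/canvasHeight

for box in absolute:
    parent_type = box.parent.type if box.parent else None
    print(box.type, box.x, box.y, box.width, box.height, 'parent:', parent_type)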
| sci-annot-eval | /sci_annot_eval-0.0.9-py3-none-any.whl/sci_annot_eval/parsers/sci_annot_parser.py | sci_annot_parser.py |
| 0.81721 | 0.308359 |
import datetime
from sci_api_req import config
from ..api_provider import ApiProvider
class DONKIProvider(ApiProvider):
"""
    The Space Weather Database Of Notifications, Knowledge, Information (DONKI) is
    a comprehensive on-line tool for space weather forecasters, scientists, and the
    general space science community. DONKI chronicles the daily interpretations of
    space weather observations, analysis, models, forecasts, and notifications
    provided by the Space Weather Research Center (SWRC). It also offers
    comprehensive knowledge-base search functionality to support anomaly resolution
    and space science research, intelligent linkages, relationships and
    cause-and-effects between space weather activities, and comprehensive webservice
    API access to the information stored in DONKI.
    For more information see: https://api.nasa.gov/api.html#DONKI. Requires a NASA API key.
"""
def __init__(self):
        super().__init__()
self._api_url = "https://api.nasa.gov/DONKI/"
@property
def api_key(self) -> str:
return config.get_api_keys('NASA')
def coronal_mass_ejection(self, start_date=datetime.date.today() - datetime.timedelta(30),
end_date=datetime.date.today()):
@self._get_request(
'CME',
startDate=start_date,
endDate=end_date)
def inner(response):
return response
return inner
def coronal_mass_ejection_analysis(
self, start_date=datetime.date.today() - datetime.timedelta(30),
end_date=datetime.date.today(), most_accurate_only=True,
complete_entry_only=True, speed=0, halfAngle=0, catalog="ALL",
keyword="NONE") -> dict:
@self._get_request(
'CMEAnalysis',
startDate=start_date,
endDate=end_date,
mostAccurateOnly=most_accurate_only,
completeEntryOnly=complete_entry_only,
speed=speed, halfAngle=halfAngle,
catalog=catalog,
keyword=keyword)
def inner(response):
return response
return inner
def geomagnetic_storm(self, start_date=datetime.date.today() - datetime.timedelta(30),
end_date=datetime.date.today()):
@self._get_request('GST', startDate=start_date, endDate=end_date)
def inner(response):
return response
return inner
def interplanetary_shock(self, start_date=datetime.date.today() - datetime.timedelta(30),
end_date=datetime.date.today(), location="ALL", catalog="ALL"):
@self._get_request('IPS', startDate=start_date, endDate=end_date, location=location,
catalog=catalog)
def inner(response):
return response
return inner
def solar_flare(self, start_date=datetime.date.today() - datetime.timedelta(30),
end_date=datetime.date.today()):
@self._get_request('FLR', startDate=start_date, endDate=end_date)
def inner(response):
return response
return inner
def solar_energetic_particle(self, start_date=datetime.date.today() - datetime.timedelta(30),
end_date=datetime.date.today()):
@self._get_request('SEP', startDate=start_date, endDate=end_date)
def inner(response):
return response
return inner
def magnetopause_crossing(self, start_date=datetime.date.today() - datetime.timedelta(30),
end_date=datetime.date.today()):
@self._get_request('MPC', startDate=start_date, endDate=end_date)
def inner(response):
return response
return inner
def radiation_belt_enhancment(self, start_date=datetime.date.today() - datetime.timedelta(30),
end_date=datetime.date.today()):
@self._get_request('RBE', startDate=start_date, endDate=end_date)
def inner(response):
return response
return inner
def hight_speed_stream(self, start_date=datetime.date.today() - datetime.timedelta(30),
end_date=datetime.date.today()):
@self._get_request('HSS', startDate=start_date, endDate=end_date)
def inner(response):
return response
return inner
def wsa_enlil_simulation(self, start_date=datetime.date.today() - datetime.timedelta(30),
end_date=datetime.date.today()):
@self._get_request('EnlilSimulations', startDate=start_date, endDate=end_date)
def inner(response):
return response
return inner
def notifications(self, start_date: datetime.date, end_date: datetime.date, type="all"):
@self._get_request('notifications', startDate=start_date, endDate=end_date, type=type)
def inner(response):
return response
return inner
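A hedged usage sketch for the provider above. It assumes the NASA API key has already been registered with sci_api_req.config (in whatever way that package expects) and that ApiProvider._get_request performs the HTTP call and returns the decorated function's result; neither is shown in this file.

# Sketch only -- key registration and the _get_request semantics are assumptions.
import datetime
from sci_api_req.providers.NASA.donki_provider import DONKIProvider

donki = DONKIProvider()

window_start = datetime.date.today() - datetime.timedelta(days=7)
storms = donki.geomagnetic_storm(start_date=window_start, end_date=datetime.date.today())
flares = donki.solar_flare(start_date=window_start)

# Note: the default start/end dates in the methods above are evaluated once, at import
# time, so long-running processes should pass explicit dates as done here.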
| sci-api-req | /sci_api_req-0.1.1-py3-none-any.whl/sci_api_req/providers/NASA/donki_provider.py | donki_provider.py |
| 0.659405 | 0.338651 |
from ..api_provider import ApiProvider
from sci_api_req import config
import datetime
class NeoWsProvider(ApiProvider):
"""
    You can use NeoWs (Near Earth Object Web Service) to search for Asteroids based on
    their closest approach date to Earth, look up a specific Asteroid with its NASA JPL
    small body id, as well as browse the overall data-set. Requires a NASA key. For more
    information see https://api.nasa.gov/api.html#NeoWS
"""
def __init__(self):
        super().__init__()
self._api_url = "https://api.nasa.gov/neo/rest/v1/"
@property
def api_key(self) -> str:
return config.get_api_keys('NASA')
"""Retrieve a list of Asteroids based on their closest approach date to Earth."""
def feed(self, start_date: datetime.date, end_date: datetime.date, detailed=True) -> dict:
@self._get_request('feed',
start_date=start_date,
end_date=end_date,
detailed=detailed)
def inner(response):
return response
return inner
"""Find Near Earth Objects for today"""
    def feed_today(self, detailed=True) -> dict:
@self._get_request('feed/today', detailed=detailed)
def inner(response):
return response
return inner
"""Lookup a specific Asteroid based on its NASA JPL small body (SPK-ID) ID"""
def lookup(self, id) -> dict:
@self._get_request('neo/{}'.format(id))
def inner(response):
return response
return inner
"""Browse the overall Asteroid data-set"""
def browse(self) -> dict:
@self._get_request('neo/browse')
def inner(response):
return response
return inner
"""Retrieve Sentry (Impact Risk) Near Earth Objects"""
def sentry(self, is_active=True, page=0, size=50) -> dict:
@self._get_request('neo/sentry', is_active=str(is_active), page=str(page), size=str(size))
def inner(response):
return response
return inner
"""Retrieve Sentry (Impact Risk) Near Earth Objectby ID"""
def sentry_by_id(self, id) -> dict:
@self._get_request('neo/sentry/{}'.format(id))
def inner(response):
return response
return inner
"""Get the Near Earth Object data set totals"""
def stats(self) -> dict:
@self._get_request('stats')
def inner(response):
return response
return inner
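A matching sketch for NeoWs; the same caveats as for the DONKI example apply (key registration and the _get_request behaviour are assumptions), and the SPK-ID is illustrative.

import datetime
from sci_api_req.providers.NASA.neows_provider import NeoWsProvider

neo = NeoWsProvider()

today = datetime.date.today()
last_week = neo.feed(start_date=today - datetime.timedelta(days=7), end_date=today)
todays_objects = neo.feed_today(detailed=False)
asteroid = neo.lookup(3542519)  # NASA JPL small-body SPK-ID
totals = neo.stats()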
| sci-api-req | /sci_api_req-0.1.1-py3-none-any.whl/sci_api_req/providers/NASA/neows_provider.py | neows_provider.py |
| 0.846578 | 0.35855 |
# sci-dl: help you download SciHub PDF faster
## Features
1. configuration file support.
2. search by Google Scholar (coming soon).
3. download using DOI.
4. custom SciHub mirror url.
5. proxy support.
6. failure retry.
7. captcha support (coming soon).
8. a Python library that can be embedded in your program.
## Installation
### use as command line software
```shell
pip install 'sci-dl[cmd]'
```
### use as Python library
```shell
pip install sci-dl
```
## Usage
### use as command line software
1. initialize the configuration file
```shell
sci-dl init-config
```
follow the prompt to create the configuration file.
2. download using DOI
```shell
sci-dl dl -d '10.1016/j.neuron.2012.02.004'
# 10.1016/j.neuron.2012.02.004 is the article DOI you want to download
```
### use as Python library
> sci_dl.SciDlError is raised when an error occurs.
#### if you don't have a proxy
```python
from sci_dl import dl_by_doi
config = {
'base_url': 'https://sci-hub.se', # sci-hub URL
'retries': 5, # number of failure retries
'use_proxy': False # means you don't want to use a proxy
}
response = dl_by_doi('10.1016/j.neuron.2012.02.004', config)
```
#### if you use a proxy
```python
from sci_dl import dl_by_doi
config = {
'base_url': 'https://sci-hub.se', # sci-hub URL
'retries': 5, # number of failure retries
    'use_proxy': True, # means you want to use a proxy
'proxy_protocol': 'socks5', # available protocols: http https socks5
'proxy_user': None, # proxy user, if your proxy don't need one, you can pass None
'proxy_password': None, # proxy password, if your proxy don't need one, you can pass None
'proxy_host': '127.0.0.1', # proxy host
'proxy_port': 1080 # proxy port
}
response = dl_by_doi('10.1016/j.neuron.2012.02.004', config)
```
### how to save the response?
#### get all content one time
```python
with open('xxx.pdf', 'wb') as fp:
fp.write(response.content)
```
#### chunk by chunk
```python
with open('xxx.pdf', 'wb') as fp:
for chunk in response.iter_content(1024): # 1024 is the chunk size
fp.write(chunk)
```
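### error handling
`dl_by_doi` raises an error when the DOI is malformed, the download keeps failing, or no PDF link can be parsed. A minimal guard (reusing the `config` dict from the examples above, and assuming `SciDlError` is importable from the top-level `sci_dl` package as the note above suggests) looks like this:
```python
from sci_dl import dl_by_doi, SciDlError

try:
    response = dl_by_doi('10.1016/j.neuron.2012.02.004', config)
except SciDlError as e:
    print(f'download failed: {e}')
```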
| sci-dl | /sci-dl-0.1.2.tar.gz/sci-dl-0.1.2/README.md | README.md |
| 0.464659 | 0.953275 |
import logging
from gettext import gettext as _
from urllib.parse import urljoin, quote
import requests
from bs4 import BeautifulSoup
logger = logging.getLogger('sci-dl')
DEFAULT_ENCODING = 'UTF-8'
HEADERS = {
'User-Agent': (
'Mozilla/5.0 (Macintosh; Intel Mac OS X 11_0_1) '
'AppleWebKit/537.36 (KHTML, like Gecko) '
'Chrome/89.0.4389.90 Safari/537.36'
)
}
DEFAULT_CONFIG = {'base_url': 'https://sci-hub.se', 'retries': 5, 'use_proxy': False}
class SciDlError(Exception):
"""SciDlError"""
def is_valid_doi(doi):
return '/' in doi
class Proxy(object):
def __init__(
self, protocol='socks5', user='', password='', host='127.0.0.1', port=1080
):
self.protocol = protocol
self.user = user
self.password = password
self.host = host
self.port = port
def to_url(self):
if self.user and self.password:
return '{protocol}://{user}:{password}@{host}:{port}'.format(
protocol=self.protocol,
user=self.user,
password=quote(self.password),
host=self.host,
port=self.port,
)
return '{protocol}://{host}:{port}'.format(
protocol=self.protocol, host=self.host, port=self.port
)
def __repr__(self):
return self.to_url()
def to_requests(self):
return {'http': self.to_url(), 'https': self.to_url()}
class Dl(object):
def __init__(self, retries=3, proxy=None):
self.retries = retries
self.proxy = proxy
def _dl(self, url):
proxies = self.proxy.to_requests() if self.proxy else None
return requests.get(url, headers=HEADERS, stream=True, proxies=proxies)
def dl(self, url):
for i in range(self.retries):
try:
return self._dl(url)
except Exception as e:
logger.exception(e)
logger.warning(_('retrying...'))
raise SciDlError(_('download %s failure') % url)
class Sci(object):
def __init__(self, base_url):
self.base_url = base_url
def get_protocol(self):
if self.base_url.startswith('https'):
return 'https'
return 'http'
def get_matchmaker_url_for_doi(self, doi):
if not is_valid_doi(doi):
raise SciDlError(_('invalid DOI %s') % doi)
return urljoin(self.base_url, doi)
def clean_pdf_url(self, pdf_url):
        # find where '/downloads' starts in the URL
index = pdf_url.find('/downloads')
return pdf_url[index:-1]
def parse_pdf_url(self, content):
soup = BeautifulSoup(content, features='html.parser')
buttons = soup.find('div', id='buttons')
if not buttons:
return None
pdf_url = None
for button in buttons.find_all('button'):
            if button.string and 'save' in button.string:
pdf_url = button.attrs['onclick']
return (self.base_url + self.clean_pdf_url(pdf_url)) if pdf_url else None
def dl_by_doi(doi, config=None):
"""
download PDF by DOI
Args:
doi: <str> DOI
        config: <dict> must contain the following keys:
when you have a proxy:
1. base_url: <str> SciHub url, eg, https://sci-hub.se
2. retries: <int> number of failure retries, eg, 5
3. use_proxy: <bool> use proxy or not, eg, True
4. proxy_protocol: <str> proxy protocol, eg, socks5
5. proxy_user: <str> proxy username, None if no user need
6. proxy_password: <str> proxy password, None if no password need
7. proxy_host: <str> proxy host, eg, 127.0.0.1
8. proxy_port: <int> proxy port, eg, 1080
when you don't have a proxy:
1. base_url: <str> SciHub url, eg, https://sci-hub.se
2. retries: <int> number of failure retries, eg, 5
3. use_proxy: <bool> use proxy or not, eg, False
default:
{
"base_url": "https://sci-hub.se",
"retries": 5,
"user_proxy": False
}
Returns:
requests.models.Response
Raises:
SciDlError
"""
def get(key):
if key not in config:
raise SciDlError(_("malformed configuration, can't find %s") % key)
return config[key]
if config is None:
config = DEFAULT_CONFIG.copy()
# initialize objects
sci = Sci(get('base_url'))
proxy = None
if get('use_proxy'):
proxy = Proxy(
protocol=get('proxy_protocol'),
user=get('proxy_user'),
password=get('proxy_password'),
host=get('proxy_host'),
port=get('proxy_port'),
)
dl = Dl(get('retries'), proxy)
# get matchmaker url
matchmaker_url = sci.get_matchmaker_url_for_doi(doi)
# download matchmaker response
matchmaker_response = dl.dl(matchmaker_url)
# get parse pdf url
pdf_url = sci.parse_pdf_url(matchmaker_response.content)
if not pdf_url:
raise SciDlError(_('Failed to parse PDF url of DOI %s') % doi)
# download pdf response
return dl.dl(pdf_url)
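A rough sketch of using the pieces above directly instead of dl_by_doi, for example to reuse one Dl instance across many DOIs. The mirror, proxy settings and DOI are placeholders, and the proxy argument can be omitted entirely.

# Sketch mirroring what dl_by_doi does internally.
from sci_dl.sci_dl import Sci, Dl, Proxy, SciDlError

sci = Sci('https://sci-hub.se')
dl = Dl(retries=5, proxy=Proxy(protocol='socks5', host='127.0.0.1', port=1080))

doi = '10.1016/j.neuron.2012.02.004'
page = dl.dl(sci.get_matchmaker_url_for_doi(doi))
pdf_url = sci.parse_pdf_url(page.content)
if pdf_url is None:
    raise SciDlError('no PDF link found for %s' % doi)

with open('paper.pdf', 'wb') as fp:
    for chunk in dl.dl(pdf_url).iter_content(1024):
        fp.write(chunk)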
| sci-dl | /sci-dl-0.1.2.tar.gz/sci-dl-0.1.2/sci_dl/sci_dl.py | sci_dl.py |
| 0.471467 | 0.089455 |
import json
import codecs
import gettext
import logging
from os import makedirs
from pkg_resources import resource_filename
from os.path import join, exists, dirname, expanduser
import click
import validators
from rich.console import Console
from appdirs import user_config_dir, user_log_dir
from rich.prompt import Prompt, IntPrompt, Confirm
from rich.progress import (
BarColumn, DownloadColumn, Progress,
TextColumn, TimeRemainingColumn, TransferSpeedColumn,
)
from .sci_dl import SciDlError, Proxy, Dl, Sci, DEFAULT_ENCODING
# translation configuration
LOCALEDIR = resource_filename('sci_dl', 'locale')
try:
_ = gettext.translation('sci-dl', LOCALEDIR).gettext
except FileNotFoundError:
from gettext import gettext as _
APP_NAME = 'sci-dl'
DEFAULT_BASE_URL = 'https://sci-hub.se'
MIN_RETRIES = 0
MAX_RETRIES = 50
CONFIG_FILE = join(
user_config_dir(APP_NAME),
'sci-dl.json'
)
DEFAULT_LOG_FILE = join(
user_log_dir(APP_NAME),
'sci-dl.log'
)
PROXY_PROTOCOLS = ['socks5', 'http', 'https']
DEFAULT_PROXY_PROTOCOL = 'socks5'
DEFAULT_PROXY_USER = ''
DEFAULT_PROXY_PASSWORD = ''
DEFAULT_PROXY_HOST = '127.0.0.1'
DEFAULT_PROXY_PORT = 1080
CHUNK_SIZE = 128
UNKNOWN_ERROR_MSG = _(
'Unknown error occurred, please refer to log file to get more detail.'
)
logger = logging.getLogger(APP_NAME)
class Config(dict):
@staticmethod
def load(file):
if not exists(file):
raise SciDlError(
_(
'configuration file %s does not exists, '
'please run "sci-dl init-config" to create it.'
) % file
)
with codecs.open(file, encoding=DEFAULT_ENCODING) as fp:
return Config(json.load(fp))
def get_config(self, key):
if key not in self:
raise SciDlError(_("malformed configuration, can't find %s") % key)
return self[key]
def write(self, file):
directory = dirname(file)
if not exists(directory):
makedirs(directory)
with codecs.open(file, 'w', encoding=DEFAULT_ENCODING) as fp:
return json.dump(self, fp, indent=4)
progress = Progress(
TextColumn("[bold blue]{task.fields[filename]}", justify="right"),
BarColumn(bar_width=None),
"[progress.percentage]{task.percentage:>3.1f}%",
"•",
DownloadColumn(),
"•",
TransferSpeedColumn(),
"•",
TimeRemainingColumn(),
)
@click.group()
def sci_dl():
"""
sci-dl helps you download SciHub PDF programmatically
"""
@sci_dl.command(name='init-config')
def sci_dl_init_config():
"""
initialize sci-dl configuration
"""
try:
console = Console()
# base_url
while True:
base_url = Prompt.ask(
_('SciHub base url'),
default=DEFAULT_BASE_URL
)
if validators.url(base_url):
break
console.log(_('Invalid base_url %s') % base_url)
# retries
while True:
retries = IntPrompt.ask(
_('Number of failure download retries'),
default=5
)
if MIN_RETRIES <= retries <= MAX_RETRIES:
break
console.log(
_('invalid number of failure download retries %s, '
'must between %s and %s') % (retries, MIN_RETRIES, MAX_RETRIES)
)
# use_proxy
use_proxy = Confirm.ask(
_('Do you want to use a proxy?'),
default=True
)
proxy_protocol = DEFAULT_PROXY_PROTOCOL
proxy_user = DEFAULT_PROXY_USER
proxy_password = DEFAULT_PROXY_PASSWORD
proxy_host = DEFAULT_PROXY_HOST
proxy_port = DEFAULT_PROXY_PORT
if use_proxy:
# proxy_protocol
proxy_protocol = Prompt.ask(
_('Protocol of your proxy'),
choices=PROXY_PROTOCOLS,
default=DEFAULT_PROXY_PROTOCOL
)
# proxy_user
proxy_user = Prompt.ask(
_('User of your proxy, leave blank if not need'),
default=DEFAULT_PROXY_USER
)
# proxy_password
proxy_password = Prompt.ask(
_('Password of your proxy, leave blank if not need'),
password=True, default=DEFAULT_PROXY_PASSWORD,
)
# proxy_host
while True:
proxy_host = Prompt.ask(
_('Host of your proxy'),
default=DEFAULT_PROXY_HOST
)
                if validators.domain(
                    proxy_host
                ) or validators.ipv4(
                    proxy_host
                ) or validators.ipv6(proxy_host):
break
console.log(_('Invalid host %s') % proxy_host)
# proxy port
while True:
proxy_port = IntPrompt.ask(
_('Port of your proxy'),
default=DEFAULT_PROXY_PORT
)
if 1 <= proxy_port <= 65535:
break
console.log(_('Invalid port %s, should between 1 and 65535') % proxy_port)
# log file
while True:
log_file = Prompt.ask(
_('Log file'),
default=DEFAULT_LOG_FILE
)
try:
log_directory = dirname(log_file)
if not exists(log_directory):
makedirs(log_directory)
break
except Exception:
console.log(_('Invalid log file %s') % log_file)
        # output directory
while True:
outdir = Prompt.ask(
_('Where you want to save PDF file'),
default=expanduser('~')
)
if exists(outdir):
break
console.log(_('Invalid directory %s') % outdir)
        # whether to enable debug mode
debug_mode = Confirm.ask(
_('Enable DEBUG mode?'),
default=False
)
config = Config({
'base_url': base_url,
'retries': retries,
'use_proxy': use_proxy,
'proxy_protocol': proxy_protocol,
'proxy_user': proxy_user,
'proxy_password': proxy_password,
'proxy_host': proxy_host,
'proxy_port': proxy_port,
'log_file': log_file,
'outdir': outdir,
'debug_mode': debug_mode
})
config.write(CONFIG_FILE)
console.log(_('Configurations saved, you can edit "%s" if needed.') % CONFIG_FILE)
except SciDlError as e:
logger.exception(e)
raise click.UsageError(e)
except Exception as e:
logger.exception(e)
raise click.UsageError(UNKNOWN_ERROR_MSG)
return 0
@sci_dl.command('dl')
@click.option(
'-d', '--doi', required=True,
help='DOI, eg, 10.1002/9781118445112.stat06003'
)
def sci_dl_dl(doi):
"""
download SciHub PDF using DOI
"""
try:
config = Config.load(CONFIG_FILE)
if config.get_config('debug_mode'):
logging.basicConfig(
level=logging.DEBUG,
filename=config.get_config('log_file')
)
else:
logging.basicConfig(
filename=config.get_config('log_file')
)
console = Console()
sh = Sci(config.get_config('base_url'))
if config.get_config('use_proxy'):
proxy = Proxy(
protocol=config.get_config('proxy_protocol'),
user=config.get_config('proxy_user'),
password=config.get_config('proxy_password'),
host=config.get_config('proxy_host'),
port=config.get_config('proxy_port'),
)
else:
proxy = None
dl = Dl(config.get_config('retries'), proxy=proxy)
console.log(_('Received DOI [bold][green]%s[/green][/bold]') % doi)
# get matchmaker url and download the page
matchmaker_url = sh.get_matchmaker_url_for_doi(doi)
matchmaker_response = dl.dl(matchmaker_url)
# parse PDF url
pdf_url = sh.parse_pdf_url(matchmaker_response.text)
if pdf_url is None:
msg = _('Failed to parse PDF url of DOI %s') % doi
logger.error(msg)
raise SciDlError(msg)
console.log(_('Find PDF url %s') % pdf_url)
# download PDF
pdf_response = dl.dl(pdf_url)
content_type = pdf_response.headers['Content-Type']
if content_type != 'application/pdf':
msg = _('Failed to Download PDF url %s of DOI %s') % (pdf_url, doi)
logger.error(msg)
raise SciDlError(msg)
content_length = int(pdf_response.headers['Content-Length'])
fn = '%s.pdf' % doi.replace(r'/', '_')
file = join(config.get_config('outdir'), fn)
task_id = progress.add_task('Download', filename=fn)
progress.update(task_id, total=content_length)
with progress, open(file, 'wb') as fp:
for chunk in pdf_response.iter_content(CHUNK_SIZE):
fp.write(chunk)
size = len(chunk)
progress.update(task_id, advance=size)
console.log(_(
'Congratulations, PDF was saved to %s successfully.'
) % file)
except SciDlError as e:
logger.exception(e)
raise click.UsageError(e)
except Exception as e:
logger.exception(e)
raise click.UsageError(UNKNOWN_ERROR_MSG)
if __name__ == '__main__':
sci_dl()
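The commands above can also be exercised programmatically through click's test runner, which is handy for smoke tests; this sketch assumes a configuration file already exists (created by "sci-dl init-config").

# Sketch: invoking the click group defined above without a shell.
from click.testing import CliRunner
from sci_dl.main import sci_dl

runner = CliRunner()
result = runner.invoke(sci_dl, ['dl', '-d', '10.1002/9781118445112.stat06003'])
print(result.exit_code)
print(result.output)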
| sci-dl | /sci-dl-0.1.2.tar.gz/sci-dl-0.1.2/sci_dl/main.py | main.py |
| 0.142739 | 0.063308 |
import logging
from pathlib import Path
import subprocess
import warnings
from typing import Dict, List, Optional, Tuple, Union
from fab.util import string_checksum
logger = logging.getLogger(__name__)
class Compiler(object):
"""
A command-line compiler whose flags we wish to manage.
"""
def __init__(self, exe, compile_flag, module_folder_flag):
self.exe = exe
self.compile_flag = compile_flag
self.module_folder_flag = module_folder_flag
# We should probably extend this for fPIC, two-stage and optimisation levels.
COMPILERS: Dict[str, Compiler] = {
'gfortran': Compiler(exe='gfortran', compile_flag='-c', module_folder_flag='-J'),
'ifort': Compiler(exe='ifort', compile_flag='-c', module_folder_flag='-module'),
}
# todo: We're not sure we actually want to modify incoming flags. Discuss...
# todo: this is compiler specific, rename - and do we want similar functions for other steps?
def remove_managed_flags(compiler, flags_in):
"""
Remove flags which Fab manages.
Fab prefers to specify a few compiler flags itself.
For example, Fab wants to place module files in the `build_output` folder.
The flag to do this differs with compiler.
We don't want duplicate, possibly conflicting flags in our tool invocation so this function is used
to remove any flags which Fab wants to manage.
If the compiler is not known to Fab, we rely on the user to specify these flags in their config.
.. note::
This approach is due for discussion. It might not be desirable to modify user flags at all.
"""
    def remove_flag(flags: List[str], flag: str, flag_len: int):
while flag in flags:
warnings.warn(f'removing managed flag {flag} for compiler {compiler}')
flag_index = flags.index(flag)
            for _ in range(flag_len):
flags.pop(flag_index)
known_compiler = COMPILERS.get(compiler)
if not known_compiler:
logger.warning('Unable to remove managed flags for unknown compiler. User config must specify managed flags.')
return flags_in
flags_out = [*flags_in]
remove_flag(flags_out, known_compiler.compile_flag, 1)
remove_flag(flags_out, known_compiler.module_folder_flag, 2)
return flags_out
def flags_checksum(flags: List[str]):
"""
Return a checksum of the flags.
"""
return string_checksum(str(flags))
def run_command(command: List[str], env=None, cwd: Optional[Union[Path, str]] = None, capture_output=True):
"""
Run a CLI command.
:param command:
List of strings to be sent to :func:`subprocess.run` as the command.
:param env:
Optional env for the command. By default it will use the current session's environment.
:param capture_output:
If True, capture and return stdout. If False, the command will print its output directly to the console.
"""
command = list(map(str, command))
logger.debug(f'run_command: {" ".join(command)}')
res = subprocess.run(command, capture_output=capture_output, env=env, cwd=cwd)
if res.returncode != 0:
msg = f'Command failed with return code {res.returncode}:\n{command}'
if res.stdout:
msg += f'\n{res.stdout.decode()}'
if res.stderr:
msg += f'\n{res.stderr.decode()}'
raise RuntimeError(msg)
if capture_output:
return res.stdout.decode()
def get_tool(tool_str: Optional[str] = None) -> Tuple[str, List[str]]:
"""
Get the compiler, preprocessor, etc, from the given string.
Separate the tool and flags for the sort of value we see in environment variables, e.g. `gfortran -c`.
Returns the tool and a list of flags.
    :param tool_str:
        The string to separate into a tool and its flags, e.g. the value of an environment variable such as FC='gfortran -c'.
"""
tool_str = tool_str or ''
tool_split = tool_str.split()
if not tool_split:
raise ValueError(f"Tool not specified in '{tool_str}'. Cannot continue.")
return tool_split[0], tool_split[1:]
# todo: add more compilers and test with more versions of compilers
def get_compiler_version(compiler: str) -> str:
"""
Try to get the version of the given compiler.
Expects a version in a certain part of the --version output,
which must adhere to the n.n.n format, with at least 2 parts.
    Returns a version string, e.g. '6.10.1', or an empty string.
:param compiler:
The command line tool for which we want a version.
"""
try:
res = run_command([compiler, '--version'])
except FileNotFoundError:
raise ValueError(f'Compiler not found: {compiler}')
except RuntimeError as err:
logger.warning(f"Error asking for version of compiler '{compiler}': {err}")
return ''
# Pull the version string from the command output.
# All the versions of gfortran and ifort we've tried follow the same pattern, it's after a ")".
try:
version = res.split(')')[1].split()[0]
except IndexError:
logger.warning(f"Unexpected version response from compiler '{compiler}': {res}")
return ''
# expect major.minor[.patch, ...]
# validate - this may be overkill
split = version.split('.')
if len(split) < 2:
logger.warning(f"unhandled compiler version format for compiler '{compiler}' is not <n.n[.n, ...]>: {version}")
return ''
# todo: do we care if the parts are integers? Not all will be, but perhaps major and minor?
logger.info(f'Found compiler version for {compiler} = {version}')
return version
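A small sketch of the helpers above working together; the compiler and flags are illustrative, and gfortran must be on PATH for the last two calls to succeed.

from fab.tools import (
    flags_checksum, get_compiler_version, get_tool, remove_managed_flags, run_command)

fc, fflags = get_tool('gfortran -c -O2')      # -> ('gfortran', ['-c', '-O2'])
fflags = remove_managed_flags(fc, fflags)     # drops '-c' (and '-J <dir>' if present), with a warning
print(flags_checksum(fflags))

print(get_compiler_version(fc))               # e.g. '12.2.0', or '' if it cannot be parsed
print(run_command([fc, '--version']).splitlines()[0])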
| sci-fab | /sci_fab-1.0-py3-none-any.whl/fab/tools.py | tools.py |
import logging
from pathlib import Path
import subprocess
import warnings
from typing import Dict, List, Optional, Tuple, Union
from fab.util import string_checksum
logger = logging.getLogger(__name__)
class Compiler(object):
"""
A command-line compiler whose flags we wish to manage.
"""
def __init__(self, exe, compile_flag, module_folder_flag):
self.exe = exe
self.compile_flag = compile_flag
self.module_folder_flag = module_folder_flag
# We should probably extend this for fPIC, two-stage and optimisation levels.
COMPILERS: Dict[str, Compiler] = {
'gfortran': Compiler(exe='gfortran', compile_flag='-c', module_folder_flag='-J'),
'ifort': Compiler(exe='ifort', compile_flag='-c', module_folder_flag='-module'),
}
# todo: We're not sure we actually want to do modify incoming flags. Discuss...
# todo: this is compiler specific, rename - and do we want similar functions for other steps?
def remove_managed_flags(compiler, flags_in):
"""
Remove flags which Fab manages.
Fab prefers to specify a few compiler flags itself.
For example, Fab wants to place module files in the `build_output` folder.
The flag to do this differs with compiler.
We don't want duplicate, possibly conflicting flags in our tool invocation so this function is used
to remove any flags which Fab wants to manage.
If the compiler is not known to Fab, we rely on the user to specify these flags in their config.
.. note::
This approach is due for discussion. It might not be desirable to modify user flags at all.
"""
def remove_flag(flags: List[str], flag: str, len):
while flag in flags:
warnings.warn(f'removing managed flag {flag} for compiler {compiler}')
flag_index = flags.index(flag)
for _ in range(len):
flags.pop(flag_index)
known_compiler = COMPILERS.get(compiler)
if not known_compiler:
logger.warning('Unable to remove managed flags for unknown compiler. User config must specify managed flags.')
return flags_in
flags_out = [*flags_in]
remove_flag(flags_out, known_compiler.compile_flag, 1)
remove_flag(flags_out, known_compiler.module_folder_flag, 2)
return flags_out
def flags_checksum(flags: List[str]):
"""
Return a checksum of the flags.
"""
return string_checksum(str(flags))
def run_command(command: List[str], env=None, cwd: Optional[Union[Path, str]] = None, capture_output=True):
"""
Run a CLI command.
:param command:
List of strings to be sent to :func:`subprocess.run` as the command.
:param env:
Optional env for the command. By default it will use the current session's environment.
:param capture_output:
If True, capture and return stdout. If False, the command will print its output directly to the console.
"""
command = list(map(str, command))
logger.debug(f'run_command: {" ".join(command)}')
res = subprocess.run(command, capture_output=capture_output, env=env, cwd=cwd)
if res.returncode != 0:
msg = f'Command failed with return code {res.returncode}:\n{command}'
if res.stdout:
msg += f'\n{res.stdout.decode()}'
if res.stderr:
msg += f'\n{res.stderr.decode()}'
raise RuntimeError(msg)
if capture_output:
return res.stdout.decode()
def get_tool(tool_str: Optional[str] = None) -> Tuple[str, List[str]]:
"""
Get the compiler, preprocessor, etc, from the given string.
Separate the tool and flags for the sort of value we see in environment variables, e.g. `gfortran -c`.
Returns the tool and a list of flags.
:param tool_str:
A string containing the tool name and (optionally) its flags, e.g. the value of an environment variable such as FC.
"""
tool_str = tool_str or ''
tool_split = tool_str.split()
if not tool_split:
raise ValueError(f"Tool not specified in '{tool_str}'. Cannot continue.")
return tool_split[0], tool_split[1:]
# todo: add more compilers and test with more versions of compilers
def get_compiler_version(compiler: str) -> str:
"""
Try to get the version of the given compiler.
Expects the version to appear in a certain part of the --version output,
and to follow the n.n[.n ...] format, with at least two parts.
Returns a version string, e.g. '6.10.1', or an empty string if it cannot be determined.
:param compiler:
The command line tool for which we want a version.
"""
try:
res = run_command([compiler, '--version'])
except FileNotFoundError:
raise ValueError(f'Compiler not found: {compiler}')
except RuntimeError as err:
logger.warning(f"Error asking for version of compiler '{compiler}': {err}")
return ''
# Pull the version string from the command output.
# All the versions of gfortran and ifort we've tried follow the same pattern: the version appears after a ")".
try:
version = res.split(')')[1].split()[0]
except IndexError:
logger.warning(f"Unexpected version response from compiler '{compiler}': {res}")
return ''
# expect major.minor[.patch, ...]
# validate - this may be overkill
split = version.split('.')
if len(split) < 2:
logger.warning(f"unhandled compiler version format for compiler '{compiler}' is not <n.n[.n, ...]>: {version}")
return ''
# todo: do we care if the parts are integers? Not all will be, but perhaps major and minor?
logger.info(f'Found compiler version for {compiler} = {version}')
return version
| 0.582254 | 0.257199 |
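A minimal usage sketch of the helpers above. It assumes the fab package is importable and that gfortran is on the PATH; the FC value shown is illustrative only.

import os

from fab.tools import get_compiler_version, get_tool, remove_managed_flags

# Split an environment-variable style value into the tool and its flags.
compiler, flags = get_tool(os.environ.get('FC', 'gfortran -c'))

# Drop any flags which Fab prefers to manage itself (e.g. -c and -J for gfortran).
user_flags = remove_managed_flags(compiler, flags)

# Ask the compiler for its version; an empty string means it could not be parsed.
version = get_compiler_version(compiler)
print(compiler, version or 'unknown version', user_flags)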
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Iterable, Union, Dict, List
from fab.constants import BUILD_TREES
from fab.dep_tree import filter_source_tree, AnalysedDependent
from fab.util import suffix_filter
class ArtefactsGetter(ABC):
"""
Abstract base class for artefact getters.
"""
@abstractmethod
def __call__(self, artefact_store):
"""
:param artefact_store:
The artefact store from which to retrieve.
"""
pass
class CollectionGetter(ArtefactsGetter):
"""
A simple artefact getter which returns one :term:`Artefact Collection` from the artefact_store.
Example::
`CollectionGetter('preprocessed_fortran')`
"""
def __init__(self, collection_name):
"""
:param collection_name:
The name of the artefact collection to retrieve.
"""
self.collection_name = collection_name
def __call__(self, artefact_store):
super().__call__(artefact_store)
return artefact_store.get(self.collection_name, [])
class CollectionConcat(ArtefactsGetter):
"""
Returns a concatenated list from multiple :term:`Artefact Collections <Artefact Collection>`
(each expected to be an iterable).
An :class:`~fab.artefacts.ArtefactsGetter` can be provided instead of a collection_name.
Example::
# The default source code getter for the Analyse step might look like this.
DEFAULT_SOURCE_GETTER = CollectionConcat([
'preprocessed_c',
'preprocessed_fortran',
SuffixFilter('all_source', '.f90'),
])
"""
def __init__(self, collections: Iterable[Union[str, ArtefactsGetter]]):
"""
:param collections:
An iterable containing collection names (strings) or other ArtefactsGetters.
"""
self.collections = collections
# todo: ensure the labelled values are iterables
def __call__(self, artefact_store: Dict):
super().__call__(artefact_store)
# todo: this should be a set, in case a file appears in multiple collections
result = []
for collection in self.collections:
if isinstance(collection, str):
result.extend(artefact_store.get(collection, []))
elif isinstance(collection, ArtefactsGetter):
result.extend(collection(artefact_store))
return result
class SuffixFilter(ArtefactsGetter):
"""
Returns the file paths in an :term:`Artefact Collection` (expected to be an iterable),
filtered by suffix.
Example::
# The default source getter for the FortranPreProcessor step.
DEFAULT_SOURCE = SuffixFilter('all_source', '.F90')
"""
def __init__(self, collection_name: str, suffix: Union[str, List[str]]):
"""
:param collection_name:
The name of the artefact collection.
:param suffix:
A suffix string including the dot, or an iterable of such strings.
"""
self.collection_name = collection_name
self.suffixes = [suffix] if isinstance(suffix, str) else suffix
def __call__(self, artefact_store):
super().__call__(artefact_store)
# todo: returning an empty list is probably "dishonest" if the collection doesn't exist - return None instead?
fpaths: Iterable[Path] = artefact_store.get(self.collection_name, [])
return suffix_filter(fpaths, self.suffixes)
class FilterBuildTrees(ArtefactsGetter):
"""
Filter build trees by suffix.
Returns one list of files to compile per build tree, of the form Dict[name, List[AnalysedDependent]]
Example::
# The default source getter for the CompileFortran step.
DEFAULT_SOURCE_GETTER = FilterBuildTrees(suffix='.f90')
"""
def __init__(self, suffix: Union[str, List[str]], collection_name: str = BUILD_TREES):
"""
:param suffix:
A suffix string, or iterable of, including the preceding dot.
:param collection_name:
The name of the artefact collection where we find the source trees.
Defaults to the value in :py:const:`fab.constants.BUILD_TREES`.
"""
self.collection_name = collection_name
self.suffixes = [suffix] if isinstance(suffix, str) else suffix
def __call__(self, artefact_store):
super().__call__(artefact_store)
build_trees = artefact_store[self.collection_name]
build_lists: Dict[str, List[AnalysedDependent]] = {}
for root, tree in build_trees.items():
build_lists[root] = filter_source_tree(source_tree=tree, suffixes=self.suffixes)
return build_lists
|
sci-fab
|
/sci_fab-1.0-py3-none-any.whl/fab/artefacts.py
|
artefacts.py
|
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Iterable, Union, Dict, List
from fab.constants import BUILD_TREES
from fab.dep_tree import filter_source_tree, AnalysedDependent
from fab.util import suffix_filter
class ArtefactsGetter(ABC):
"""
Abstract base class for artefact getters.
"""
@abstractmethod
def __call__(self, artefact_store):
"""
:param artefact_store:
The artefact store from which to retrieve.
"""
pass
class CollectionGetter(ArtefactsGetter):
"""
A simple artefact getter which returns one :term:`Artefact Collection` from the artefact_store.
Example::
`CollectionGetter('preprocessed_fortran')`
"""
def __init__(self, collection_name):
"""
:param collection_name:
The name of the artefact collection to retrieve.
"""
self.collection_name = collection_name
def __call__(self, artefact_store):
super().__call__(artefact_store)
return artefact_store.get(self.collection_name, [])
class CollectionConcat(ArtefactsGetter):
"""
Returns a concatenated list from multiple :term:`Artefact Collections <Artefact Collection>`
(each expected to be an iterable).
An :class:`~fab.artefacts.ArtefactsGetter` can be provided instead of a collection_name.
Example::
# The default source code getter for the Analyse step might look like this.
DEFAULT_SOURCE_GETTER = CollectionConcat([
'preprocessed_c',
'preprocessed_fortran',
SuffixFilter('all_source', '.f90'),
])
"""
def __init__(self, collections: Iterable[Union[str, ArtefactsGetter]]):
"""
:param collections:
An iterable containing collection names (strings) or other ArtefactsGetters.
"""
self.collections = collections
# todo: ensure the labelled values are iterables
def __call__(self, artefact_store: Dict):
super().__call__(artefact_store)
# todo: this should be a set, in case a file appears in multiple collections
result = []
for collection in self.collections:
if isinstance(collection, str):
result.extend(artefact_store.get(collection, []))
elif isinstance(collection, ArtefactsGetter):
result.extend(collection(artefact_store))
return result
class SuffixFilter(ArtefactsGetter):
"""
Returns the file paths in an :term:`Artefact Collection` (expected to be an iterable),
filtered by suffix.
Example::
# The default source getter for the FortranPreProcessor step.
DEFAULT_SOURCE = SuffixFilter('all_source', '.F90')
"""
def __init__(self, collection_name: str, suffix: Union[str, List[str]]):
"""
:param collection_name:
The name of the artefact collection.
:param suffix:
A suffix string including the dot, or an iterable of such strings.
"""
self.collection_name = collection_name
self.suffixes = [suffix] if isinstance(suffix, str) else suffix
def __call__(self, artefact_store):
super().__call__(artefact_store)
# todo: returning an empty list is probably "dishonest" if the collection doesn't exist - return None instead?
fpaths: Iterable[Path] = artefact_store.get(self.collection_name, [])
return suffix_filter(fpaths, self.suffixes)
class FilterBuildTrees(ArtefactsGetter):
"""
Filter build trees by suffix.
Returns one list of files to compile per build tree, of the form Dict[name, List[AnalysedDependent]]
Example::
# The default source getter for the CompileFortran step.
DEFAULT_SOURCE_GETTER = FilterBuildTrees(suffix='.f90')
"""
def __init__(self, suffix: Union[str, List[str]], collection_name: str = BUILD_TREES):
"""
:param suffix:
A suffix string, or iterable of, including the preceding dot.
:param collection_name:
The name of the artefact collection where we find the source trees.
Defaults to the value in :py:const:`fab.constants.BUILD_TREES`.
"""
self.collection_name = collection_name
self.suffixes = [suffix] if isinstance(suffix, str) else suffix
def __call__(self, artefact_store):
super().__call__(artefact_store)
build_trees = artefact_store[self.collection_name]
build_lists: Dict[str, List[AnalysedDependent]] = {}
for root, tree in build_trees.items():
build_lists[root] = filter_source_tree(source_tree=tree, suffixes=self.suffixes)
return build_lists
| 0.793706 | 0.346403 |
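A small illustration of the getters above, run against a hand-built artefact store; in a real build this dictionary is created and populated by the build config and the preceding steps, and the paths shown are made up.

from pathlib import Path

from fab.artefacts import CollectionConcat, SuffixFilter

artefact_store = {
    'all_source': [Path('src/a.F90'), Path('src/b.c'), Path('src/c.f90')],
    'preprocessed_fortran': [Path('build_output/a.f90')],
}

# Combine a named collection with a suffix-filtered view of another collection.
getter = CollectionConcat([
    'preprocessed_fortran',
    SuffixFilter('all_source', '.c'),
])

# Yields the preprocessed fortran path followed by the '.c' source file.
print(getter(artefact_store))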
import getpass
import logging
import os
import sys
import warnings
from argparse import Namespace
from datetime import datetime
from fnmatch import fnmatch
from logging.handlers import RotatingFileHandler
from multiprocessing import cpu_count
from pathlib import Path
from string import Template
from typing import List, Optional, Dict, Any, Iterable
from fab.constants import BUILD_OUTPUT, SOURCE_ROOT, PREBUILD, CURRENT_PREBUILDS
from fab.metrics import send_metric, init_metrics, stop_metrics, metrics_summary
from fab.util import TimerLogger, by_type, get_fab_workspace
logger = logging.getLogger(__name__)
class BuildConfig(object):
"""
Contains and runs a list of build steps.
The user is not expected to instantiate this class directly,
but rather through the build_config() context manager.
"""
def __init__(self, project_label: str, parsed_args: Optional[Namespace] = None,
multiprocessing: bool = True, n_procs: Optional[int] = None, reuse_artefacts: bool = False,
fab_workspace: Optional[Path] = None,):
"""
:param project_label:
Name of the build project. The project workspace folder is created from this name, with spaces replaced
by underscores.
:param parsed_args:
If you want to add arguments to your script, please use common_arg_parser() and add arguments.
This parameter is the result of running :func:`ArgumentParser.parse_args`.
:param multiprocessing:
An option to disable multiprocessing to aid debugging.
:param n_procs:
The number of cores to use for multiprocessing operations. Defaults to the number of available cores.
:param reuse_artefacts:
A flag to avoid reprocessing certain files on subsequent runs.
WARNING: Currently unsophisticated, this flag should only be used by Fab developers.
The logic behind this flag will soon be improved, in a work package called "incremental build".
:param fab_workspace:
Overrides the FAB_WORKSPACE environment variable.
If not set, and FAB_WORKSPACE is not set, the fab workspace defaults to *~/fab-workspace*.
"""
self.parsed_args = vars(parsed_args) if parsed_args else {}
from fab.steps.compile_fortran import get_fortran_compiler
compiler, _ = get_fortran_compiler()
project_label = Template(project_label).substitute(
compiler=compiler,
two_stage=f'{int(self.parsed_args.get("two_stage", 0))+1}stage')
self.project_label: str = project_label.replace(' ', '_')
# workspace folder
if not fab_workspace:
fab_workspace = get_fab_workspace()
logger.info(f"fab workspace is {fab_workspace}")
self.project_workspace: Path = fab_workspace / self.project_label
self.metrics_folder: Path = self.project_workspace / 'metrics' / self.project_label
# source config
self.source_root: Path = self.project_workspace / SOURCE_ROOT
self.prebuild_folder: Path = self.build_output / PREBUILD
# multiprocessing config
self.multiprocessing = multiprocessing
# turn off multiprocessing when debugging
# todo: turn off multiprocessing when running tests, as a good test runner will use multiprocessing itself
if 'pydevd' in str(sys.gettrace()):
logger.info('debugger detected, running without multiprocessing')
self.multiprocessing = False
self.n_procs = n_procs
if self.multiprocessing and not self.n_procs:
try:
self.n_procs = max(1, len(os.sched_getaffinity(0)))
except AttributeError:
logger.error('could not enable multiprocessing')
self.multiprocessing = False
self.n_procs = None
self.reuse_artefacts = reuse_artefacts
# todo: should probably pull the artefact store out of the config
# runtime
# todo: either make this public, add get/setters, or extract into a class.
self._artefact_store: Dict[str, Any] = {}
self.init_artefact_store() # note: the artefact store is reset with every call to run()
def __enter__(self):
logger.info('')
logger.info(f'initialising {self.project_label}')
logger.info('')
if self.parsed_args.get('verbose'):
logging.getLogger('fab').setLevel(logging.DEBUG)
logger.info(f'building {self.project_label}')
self._start_time = datetime.now().replace(microsecond=0)
self._run_prep()
with TimerLogger(f'running {self.project_label} build steps') as build_timer:
# this will return to the build script
self._build_timer = build_timer
return self
def __exit__(self, exc_type, exc_val, exc_tb):
if not exc_type: # None if there's no error.
from fab.steps.cleanup_prebuilds import CLEANUP_COUNT, cleanup_prebuilds
if CLEANUP_COUNT not in self._artefact_store:
logger.info("no housekeeping step was run, using a default hard cleanup")
cleanup_prebuilds(config=self, all_unused=True)
logger.info(f"Building '{self.project_label}' took {datetime.now() - self._start_time}")
# always
self._finalise_metrics(self._start_time, self._build_timer)
self._finalise_logging()
@property
def build_output(self):
return self.project_workspace / BUILD_OUTPUT
def init_artefact_store(self):
# there's no point writing to this from a child process of Step.run_mp() because you'll be modifying a copy.
self._artefact_store = {CURRENT_PREBUILDS: set()}
def add_current_prebuilds(self, artefacts: Iterable[Path]):
"""
Mark the given file paths as being current prebuilds, not to be cleaned during housekeeping.
"""
self._artefact_store[CURRENT_PREBUILDS].update(artefacts)
def _run_prep(self):
self._init_logging()
logger.info('')
logger.info(f'running {self.project_label}')
logger.info('')
self._prep_folders()
init_metrics(metrics_folder=self.metrics_folder)
# note: initialising here gives a new set of artefacts each run
self.init_artefact_store()
def _prep_folders(self):
self.source_root.mkdir(parents=True, exist_ok=True)
self.build_output.mkdir(parents=True, exist_ok=True)
self.prebuild_folder.mkdir(parents=True, exist_ok=True)
def _init_logging(self):
# add a file logger for our run
self.project_workspace.mkdir(parents=True, exist_ok=True)
log_file_handler = RotatingFileHandler(self.project_workspace / 'log.txt', backupCount=5, delay=True)
log_file_handler.doRollover()
logging.getLogger('fab').addHandler(log_file_handler)
logger.info(f"{datetime.now()}")
if self.multiprocessing:
logger.info(f'machine cores: {cpu_count()}')
logger.info(f'available cores: {len(os.sched_getaffinity(0))}')
logger.info(f'using n_procs = {self.n_procs}')
logger.info(f"workspace is {self.project_workspace}")
def _finalise_logging(self):
# remove our file logger
fab_logger = logging.getLogger('fab')
log_file_handlers = list(by_type(fab_logger.handlers, RotatingFileHandler))
if len(log_file_handlers) != 1:
warnings.warn(f'expected to find 1 RotatingFileHandler for removal, found {len(log_file_handlers)}')
fab_logger.removeHandler(log_file_handlers[0])
def _finalise_metrics(self, start_time, steps_timer):
send_metric('run', 'label', self.project_label)
send_metric('run', 'datetime', start_time.isoformat())
send_metric('run', 'time taken', steps_timer.taken)
send_metric('run', 'sysname', os.uname().sysname)
send_metric('run', 'nodename', os.uname().nodename)
send_metric('run', 'machine', os.uname().machine)
send_metric('run', 'user', getpass.getuser())
stop_metrics()
metrics_summary(metrics_folder=self.metrics_folder)
# todo: better name? perhaps PathFlags?
class AddFlags(object):
"""
Add command-line flags when our path filter matches.
Generally used inside a :class:`~fab.build_config.FlagsConfig`.
"""
def __init__(self, match: str, flags: List[str]):
"""
:param match:
The string to match against each file path.
:param flags:
The command-line flags to add for matching files.
Both the *match* and *flags* arguments can make use of templating:
- `$source` for *<project workspace>/source*
- `$output` for *<project workspace>/build_output*
- `$relative` for *<the source file's folder>*
For example::
# For source in the um folder, add an absolute include path
AddFlags(match="$source/um/*", flags=['-I$source/include']),
# For source in the um folder, add an include path relative to each source file.
AddFlags(match="$source/um/*", flags=['-I$relative/include']),
"""
self.match: str = match
self.flags: List[str] = flags
# todo: we don't need the project_workspace, we could just pass in the output folder
def run(self, fpath: Path, input_flags: List[str], config):
"""
Check if our filter matches a given file. If it does, add our flags.
:param fpath:
Filepath to check.
:param input_flags:
The list of command-line flags Fab is building for this file.
:param config:
Contains the folders for templating `$source` and `$output`.
"""
params = {'relative': fpath.parent, 'source': config.source_root, 'output': config.build_output}
# does the file path match our filter?
if not self.match or fnmatch(str(fpath), Template(self.match).substitute(params)):
# use templating to render any relative paths in our flags
add_flags = [Template(flag).substitute(params) for flag in self.flags]
# add our flags
input_flags += add_flags
class FlagsConfig(object):
"""
Return command-line flags for a given path.
Simply allows appending flags but may evolve to also replace and remove flags.
"""
def __init__(self, common_flags: Optional[List[str]] = None, path_flags: Optional[List[AddFlags]] = None):
"""
:param common_flags:
List of flags to apply to all files. E.g `['-O2']`.
:param path_flags:
List of :class:`~fab.build_config.AddFlags` objects which apply flags to selected paths.
"""
self.common_flags = common_flags or []
self.path_flags = path_flags or []
# todo: there's templating both in this method and the run method it calls.
# make sure it's all properly documented and rationalised.
def flags_for_path(self, path: Path, config):
"""
Get all the flags for a given file, in a reproducible order.
:param path:
The file path for which we want command-line flags.
:param config:
The config contains the source root and project workspace.
"""
# We COULD make the user pass these template params to the constructor
# but we have a design requirement to minimise the config burden on the user,
# so we take care of it for them here instead.
params = {'source': config.source_root, 'output': config.build_output}
flags = [Template(i).substitute(params) for i in self.common_flags]
for flags_modifier in self.path_flags:
flags_modifier.run(path, flags, config=config)
return flags
|
sci-fab
|
/sci_fab-1.0-py3-none-any.whl/fab/build_config.py
|
build_config.py
|
import getpass
import logging
import os
import sys
import warnings
from argparse import Namespace
from datetime import datetime
from fnmatch import fnmatch
from logging.handlers import RotatingFileHandler
from multiprocessing import cpu_count
from pathlib import Path
from string import Template
from typing import List, Optional, Dict, Any, Iterable
from fab.constants import BUILD_OUTPUT, SOURCE_ROOT, PREBUILD, CURRENT_PREBUILDS
from fab.metrics import send_metric, init_metrics, stop_metrics, metrics_summary
from fab.util import TimerLogger, by_type, get_fab_workspace
logger = logging.getLogger(__name__)
class BuildConfig(object):
"""
Contains and runs a list of build steps.
The user is not expected to instantiate this class directly,
but rather through the build_config() context manager.
"""
def __init__(self, project_label: str, parsed_args: Optional[Namespace] = None,
multiprocessing: bool = True, n_procs: Optional[int] = None, reuse_artefacts: bool = False,
fab_workspace: Optional[Path] = None,):
"""
:param project_label:
Name of the build project. The project workspace folder is created from this name, with spaces replaced
by underscores.
:param parsed_args:
If you want to add arguments to your script, please use common_arg_parser() and add arguments.
This parameter is the result of running :func:`ArgumentParser.parse_args`.
:param multiprocessing:
An option to disable multiprocessing to aid debugging.
:param n_procs:
The number of cores to use for multiprocessing operations. Defaults to the number of available cores.
:param reuse_artefacts:
A flag to avoid reprocessing certain files on subsequent runs.
WARNING: Currently unsophisticated, this flag should only be used by Fab developers.
The logic behind this flag will soon be improved, in a work package called "incremental build".
:param fab_workspace:
Overrides the FAB_WORKSPACE environment variable.
If not set, and FAB_WORKSPACE is not set, the fab workspace defaults to *~/fab-workspace*.
"""
self.parsed_args = vars(parsed_args) if parsed_args else {}
from fab.steps.compile_fortran import get_fortran_compiler
compiler, _ = get_fortran_compiler()
project_label = Template(project_label).substitute(
compiler=compiler,
two_stage=f'{int(self.parsed_args.get("two_stage", 0))+1}stage')
self.project_label: str = project_label.replace(' ', '_')
# workspace folder
if not fab_workspace:
fab_workspace = get_fab_workspace()
logger.info(f"fab workspace is {fab_workspace}")
self.project_workspace: Path = fab_workspace / self.project_label
self.metrics_folder: Path = self.project_workspace / 'metrics' / self.project_label
# source config
self.source_root: Path = self.project_workspace / SOURCE_ROOT
self.prebuild_folder: Path = self.build_output / PREBUILD
# multiprocessing config
self.multiprocessing = multiprocessing
# turn off multiprocessing when debugging
# todo: turn off multiprocessing when running tests, as a good test runner will use multiprocessing itself
if 'pydevd' in str(sys.gettrace()):
logger.info('debugger detected, running without multiprocessing')
self.multiprocessing = False
self.n_procs = n_procs
if self.multiprocessing and not self.n_procs:
try:
self.n_procs = max(1, len(os.sched_getaffinity(0)))
except AttributeError:
logger.error('could not enable multiprocessing')
self.multiprocessing = False
self.n_procs = None
self.reuse_artefacts = reuse_artefacts
# todo: should probably pull the artefact store out of the config
# runtime
# todo: either make this public, add get/setters, or extract into a class.
self._artefact_store: Dict[str, Any] = {}
self.init_artefact_store() # note: the artefact store is reset with every call to run()
def __enter__(self):
logger.info('')
logger.info(f'initialising {self.project_label}')
logger.info('')
if self.parsed_args.get('verbose'):
logging.getLogger('fab').setLevel(logging.DEBUG)
logger.info(f'building {self.project_label}')
self._start_time = datetime.now().replace(microsecond=0)
self._run_prep()
with TimerLogger(f'running {self.project_label} build steps') as build_timer:
# this will return to the build script
self._build_timer = build_timer
return self
def __exit__(self, exc_type, exc_val, exc_tb):
if not exc_type: # None if there's no error.
from fab.steps.cleanup_prebuilds import CLEANUP_COUNT, cleanup_prebuilds
if CLEANUP_COUNT not in self._artefact_store:
logger.info("no housekeeping step was run, using a default hard cleanup")
cleanup_prebuilds(config=self, all_unused=True)
logger.info(f"Building '{self.project_label}' took {datetime.now() - self._start_time}")
# always
self._finalise_metrics(self._start_time, self._build_timer)
self._finalise_logging()
@property
def build_output(self):
return self.project_workspace / BUILD_OUTPUT
def init_artefact_store(self):
# there's no point writing to this from a child process of Step.run_mp() because you'll be modifying a copy.
self._artefact_store = {CURRENT_PREBUILDS: set()}
def add_current_prebuilds(self, artefacts: Iterable[Path]):
"""
Mark the given file paths as being current prebuilds, not to be cleaned during housekeeping.
"""
self._artefact_store[CURRENT_PREBUILDS].update(artefacts)
def _run_prep(self):
self._init_logging()
logger.info('')
logger.info(f'running {self.project_label}')
logger.info('')
self._prep_folders()
init_metrics(metrics_folder=self.metrics_folder)
# note: initialising here gives a new set of artefacts each run
self.init_artefact_store()
def _prep_folders(self):
self.source_root.mkdir(parents=True, exist_ok=True)
self.build_output.mkdir(parents=True, exist_ok=True)
self.prebuild_folder.mkdir(parents=True, exist_ok=True)
def _init_logging(self):
# add a file logger for our run
self.project_workspace.mkdir(parents=True, exist_ok=True)
log_file_handler = RotatingFileHandler(self.project_workspace / 'log.txt', backupCount=5, delay=True)
log_file_handler.doRollover()
logging.getLogger('fab').addHandler(log_file_handler)
logger.info(f"{datetime.now()}")
if self.multiprocessing:
logger.info(f'machine cores: {cpu_count()}')
logger.info(f'available cores: {len(os.sched_getaffinity(0))}')
logger.info(f'using n_procs = {self.n_procs}')
logger.info(f"workspace is {self.project_workspace}")
def _finalise_logging(self):
# remove our file logger
fab_logger = logging.getLogger('fab')
log_file_handlers = list(by_type(fab_logger.handlers, RotatingFileHandler))
if len(log_file_handlers) != 1:
warnings.warn(f'expected to find 1 RotatingFileHandler for removal, found {len(log_file_handlers)}')
fab_logger.removeHandler(log_file_handlers[0])
def _finalise_metrics(self, start_time, steps_timer):
send_metric('run', 'label', self.project_label)
send_metric('run', 'datetime', start_time.isoformat())
send_metric('run', 'time taken', steps_timer.taken)
send_metric('run', 'sysname', os.uname().sysname)
send_metric('run', 'nodename', os.uname().nodename)
send_metric('run', 'machine', os.uname().machine)
send_metric('run', 'user', getpass.getuser())
stop_metrics()
metrics_summary(metrics_folder=self.metrics_folder)
# todo: better name? perhaps PathFlags?
class AddFlags(object):
"""
Add command-line flags when our path filter matches.
Generally used inside a :class:`~fab.build_config.FlagsConfig`.
"""
def __init__(self, match: str, flags: List[str]):
"""
:param match:
The string to match against each file path.
:param flags:
The command-line flags to add for matching files.
Both the *match* and *flags* arguments can make use of templating:
- `$source` for *<project workspace>/source*
- `$output` for *<project workspace>/build_output*
- `$relative` for *<the source file's folder>*
For example::
# For source in the um folder, add an absolute include path
AddFlags(match="$source/um/*", flags=['-I$source/include']),
# For source in the um folder, add an include path relative to each source file.
AddFlags(match="$source/um/*", flags=['-I$relative/include']),
"""
self.match: str = match
self.flags: List[str] = flags
# todo: we don't need the project_workspace, we could just pass in the output folder
def run(self, fpath: Path, input_flags: List[str], config):
"""
Check if our filter matches a given file. If it does, add our flags.
:param fpath:
Filepath to check.
:param input_flags:
The list of command-line flags Fab is building for this file.
:param config:
Contains the folders for templating `$source` and `$output`.
"""
params = {'relative': fpath.parent, 'source': config.source_root, 'output': config.build_output}
# does the file path match our filter?
if not self.match or fnmatch(str(fpath), Template(self.match).substitute(params)):
# use templating to render any relative paths in our flags
add_flags = [Template(flag).substitute(params) for flag in self.flags]
# add our flags
input_flags += add_flags
class FlagsConfig(object):
"""
Return command-line flags for a given path.
Simply allows appending flags but may evolve to also replace and remove flags.
"""
def __init__(self, common_flags: Optional[List[str]] = None, path_flags: Optional[List[AddFlags]] = None):
"""
:param common_flags:
List of flags to apply to all files. E.g `['-O2']`.
:param path_flags:
List of :class:`~fab.build_config.AddFlags` objects which apply flags to selected paths.
"""
self.common_flags = common_flags or []
self.path_flags = path_flags or []
# todo: there's templating both in this method and the run method it calls.
# make sure it's all properly documented and rationalised.
def flags_for_path(self, path: Path, config):
"""
Get all the flags for a given file, in a reproducible order.
:param path:
The file path for which we want command-line flags.
:param config:
The config contains the source root and project workspace.
"""
# We COULD make the user pass these template params to the constructor
# but we have a design requirement to minimise the config burden on the user,
# so we take care of it for them here instead.
params = {'source': config.source_root, 'output': config.build_output}
flags = [Template(i).substitute(params) for i in self.common_flags]
for flags_modifier in self.path_flags:
flags_modifier.run(path, flags, config=config)
return flags
| 0.527317 | 0.128635 |
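A sketch of the per-path flag handling provided by FlagsConfig and AddFlags. The SimpleNamespace below stands in for a BuildConfig, which would normally supply source_root and build_output; the folder names are illustrative.

from pathlib import Path
from types import SimpleNamespace

from fab.build_config import AddFlags, FlagsConfig

config = SimpleNamespace(
    source_root=Path('/tmp/proj/source'),
    build_output=Path('/tmp/proj/build_output'),
)

flags_config = FlagsConfig(
    common_flags=['-O2'],
    path_flags=[AddFlags(match='$source/um/*', flags=['-I$relative/include'])],
)

# Files under source/um get an include path rendered relative to each file,
# e.g. ['-O2', '-I/tmp/proj/source/um/control/include'].
print(flags_config.flags_for_path(Path('/tmp/proj/source/um/control/model.F90'), config))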
from argparse import ArgumentParser
from pathlib import Path
from typing import Dict, Optional
from fab.steps.analyse import analyse
from fab.steps.c_pragma_injector import c_pragma_injector
from fab.steps.compile_c import compile_c
from fab.steps.link import link_exe
from fab.steps.root_inc_files import root_inc_files
import fab
from fab.artefacts import CollectionGetter
from fab.build_config import BuildConfig
from fab.constants import PRAGMAD_C
from fab.steps.compile_fortran import compile_fortran, get_fortran_compiler
from fab.steps.find_source_files import find_source_files
from fab.steps.grab.folder import grab_folder
from fab.steps.preprocess import preprocess_c, preprocess_fortran
def _generic_build_config(folder: Path, kwargs: Optional[Dict] = None) -> BuildConfig:
folder = folder.resolve()
kwargs = kwargs or {}
# Within the fab workspace, we'll create a project workspace.
# Ideally we'd just use folder.name, but to avoid clashes, we'll use the full absolute path.
label = '/'.join(folder.parts[1:])
linker, linker_flags = calc_linker_flags()
with BuildConfig(project_label=label, **kwargs) as config:
grab_folder(config, folder)
find_source_files(config)
root_inc_files(config)  # JULES helper, get rid of this eventually
preprocess_fortran(config)
c_pragma_injector(config)
preprocess_c(config, source=CollectionGetter(PRAGMAD_C))
analyse(config, find_programs=True)
compile_fortran(config)
compile_c(config)
link_exe(config, linker=linker, flags=linker_flags)
return config
def calc_linker_flags():
fc, _ = get_fortran_compiler()
# linker and flags depend on compiler
linkers = {
'gfortran': ('gcc', ['-lgfortran']),
# todo: test this and get it running
# 'ifort': (..., [...])
}
try:
linker, linker_flags = linkers[fc]
except KeyError:
raise NotImplementedError(f"Fab's zero configuration mode does not yet work with compiler '{fc}'")
return linker, linker_flags
def cli_fab():
"""
Running Fab from the command line will attempt to build the project in the current or given folder.
"""
# todo: use common_arg_parser()?
arg_parser = ArgumentParser()
arg_parser.add_argument('folder', nargs='?', default='.', type=Path)
arg_parser.add_argument('--version', action='version', version=f'%(prog)s {fab.__version__}')
args = arg_parser.parse_args()
_generic_build_config(args.folder)
|
sci-fab
|
/sci_fab-1.0-py3-none-any.whl/fab/cli.py
|
cli.py
|
from argparse import ArgumentParser
from pathlib import Path
from typing import Dict, Optional
from fab.steps.analyse import analyse
from fab.steps.c_pragma_injector import c_pragma_injector
from fab.steps.compile_c import compile_c
from fab.steps.link import link_exe
from fab.steps.root_inc_files import root_inc_files
import fab
from fab.artefacts import CollectionGetter
from fab.build_config import BuildConfig
from fab.constants import PRAGMAD_C
from fab.steps.compile_fortran import compile_fortran, get_fortran_compiler
from fab.steps.find_source_files import find_source_files
from fab.steps.grab.folder import grab_folder
from fab.steps.preprocess import preprocess_c, preprocess_fortran
def _generic_build_config(folder: Path, kwargs: Optional[Dict] = None) -> BuildConfig:
folder = folder.resolve()
kwargs = kwargs or {}
# Within the fab workspace, we'll create a project workspace.
# Ideally we'd just use folder.name, but to avoid clashes, we'll use the full absolute path.
label = '/'.join(folder.parts[1:])
linker, linker_flags = calc_linker_flags()
with BuildConfig(project_label=label, **kwargs) as config:
grab_folder(config, folder)
find_source_files(config)
root_inc_files(config)  # JULES helper, get rid of this eventually
preprocess_fortran(config)
c_pragma_injector(config)
preprocess_c(config, source=CollectionGetter(PRAGMAD_C))
analyse(config, find_programs=True)
compile_fortran(config)
compile_c(config)
link_exe(config, linker=linker, flags=linker_flags)
return config
def calc_linker_flags():
fc, _ = get_fortran_compiler()
# linker and flags depend on compiler
linkers = {
'gfortran': ('gcc', ['-lgfortran']),
# todo: test this and get it running
# 'ifort': (..., [...])
}
try:
linker, linker_flags = linkers[fc]
except KeyError:
raise NotImplementedError(f"Fab's zero configuration mode does not yet work with compiler '{fc}'")
return linker, linker_flags
def cli_fab():
"""
Running Fab from the command line will attempt to build the project in the current or given folder.
"""
# todo: use common_arg_parser()?
arg_parser = ArgumentParser()
arg_parser.add_argument('folder', nargs='?', default='.', type=Path)
arg_parser.add_argument('--version', action='version', version=f'%(prog)s {fab.__version__}')
args = arg_parser.parse_args()
_generic_build_config(args.folder)
| 0.540196 | 0.239905 |
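A sketch of driving the zero-configuration entry point from Python. The folder path and the argv[0] value are placeholders; calling cli_fab will attempt a full build of the given project, so this is not a no-op.

import sys

from fab.cli import cli_fab

# Simulate a command line such as "<fab-command> /path/to/my/project".
sys.argv = ['fab', '/path/to/my/project']
cli_fab()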
import logging
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Union, Tuple
from fparser.common.readfortran import FortranFileReader # type: ignore
from fparser.two.parser import ParserFactory # type: ignore
from fparser.two.utils import FortranSyntaxError # type: ignore
from fab import FabException
from fab.dep_tree import AnalysedDependent
from fab.parse import EmptySourceFile
from fab.util import log_or_dot, file_checksum
logger = logging.getLogger(__name__)
def iter_content(obj):
"""
Return a generator which yields every node in the tree.
"""
yield obj
if hasattr(obj, "content"):
for child in _iter_content(obj.content):
yield child
def _iter_content(content):
for obj in content:
yield obj
if hasattr(obj, "content"):
for child in _iter_content(obj.content):
yield child
def _has_ancestor_type(obj, obj_type):
# Recursively check if an object has an ancestor of the given type.
if not obj.parent:
return False
if type(obj.parent) == obj_type:
return True
return _has_ancestor_type(obj.parent, obj_type)
def _typed_child(parent, child_type, must_exist=False):
# Look for a child of a certain type.
# Returns the child or None.
# Raises ValueError if more than one child of the given type is found.
children = list(filter(lambda child: type(child) == child_type, parent.children))
if len(children) > 1:
raise ValueError(f"too many children found of type {child_type}")
if children:
return children[0]
if must_exist:
raise FabException(f'Could not find child of type {child_type} in {parent}')
return None
class FortranAnalyserBase(ABC):
"""
Base class for Fortran parse-tree analysers, e.g. FortranAnalyser and X90Analyser.
"""
_intrinsic_modules = ['iso_fortran_env', 'iso_c_binding']
def __init__(self, result_class, std=None):
"""
:param result_class:
The type (class) of the analysis result. Defined by the subclass.
:param std:
The Fortran standard.
"""
self.result_class = result_class
self.f2008_parser = ParserFactory().create(std=std or "f2008")
# todo: this, and perhaps other runtime variables like it, might be better set at construction
# if we construct these objects at runtime instead...
# runtime, for child processes to read
self._config = None
def run(self, fpath: Path) \
-> Union[Tuple[AnalysedDependent, Path], Tuple[EmptySourceFile, None], Tuple[Exception, None]]:
"""
Parse the source file and record what we're interested in (subclass specific).
Reloads previous analysis results if available.
Returns the analysis data and the result file where it was stored/loaded.
"""
# calculate the prebuild filename
file_hash = file_checksum(fpath).file_hash
analysis_fpath = self._get_analysis_fpath(fpath, file_hash)
# do we already have analysis results for this file?
if analysis_fpath.exists():
log_or_dot(logger, f"found analysis prebuild for {fpath}")
# Load the result file into whatever result class we use.
loaded_result = self.result_class.load(analysis_fpath)
if loaded_result:
# This result might have been created by another user; their prebuild folder copied to ours.
# If so, the fpath in the result will *not* point to the file we eventually want to compile,
# it will point to the user's original file, somewhere else. So replace it with our own path.
loaded_result.fpath = fpath
return loaded_result, analysis_fpath
log_or_dot(logger, f"analysing {fpath}")
# parse the file, get a node tree
node_tree = self._parse_file(fpath=fpath)
if isinstance(node_tree, Exception):
return Exception(f"error parsing file '{fpath}':\n{node_tree}"), None
if node_tree.content[0] is None:
logger.debug(f" empty tree found when parsing {fpath}")
# todo: If we don't save the empty result we'll keep analysing it every time!
return EmptySourceFile(fpath), None
# find things in the node tree
analysed_file = self.walk_nodes(fpath=fpath, file_hash=file_hash, node_tree=node_tree)
analysis_fpath = self._get_analysis_fpath(fpath, file_hash)
analysed_file.save(analysis_fpath)
return analysed_file, analysis_fpath
def _get_analysis_fpath(self, fpath, file_hash) -> Path:
return Path(self._config.prebuild_folder / f'{fpath.stem}.{file_hash}.an')
def _parse_file(self, fpath):
"""Get a node tree from a fortran file."""
reader = FortranFileReader(str(fpath), ignore_comments=False)
reader.exit_on_error = False # don't call sys.exit, it messes up the multi-processing
try:
tree = self.f2008_parser(reader)
return tree
except FortranSyntaxError as err:
# we can't return the FortranSyntaxError, it breaks multiprocessing!
logger.error(f"\nfparser raised a syntax error in {fpath}\n{err}")
return Exception(f"syntax error in {fpath}\n{err}")
except Exception as err:
logger.error(f"\nunhandled error '{type(err)}' in {fpath}\n{err}")
return Exception(f"unhandled error '{type(err)}' in {fpath}\n{err}")
@abstractmethod
def walk_nodes(self, fpath, file_hash, node_tree) -> AnalysedDependent:
"""
Examine the nodes in the parse tree, recording things we're interested in.
Return type depends on our subclass, and will be a subclass of AnalysedDependent.
"""
raise NotImplementedError
|
sci-fab
|
/sci_fab-1.0-py3-none-any.whl/fab/parse/fortran_common.py
|
fortran_common.py
|
import logging
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Union, Tuple
from fparser.common.readfortran import FortranFileReader # type: ignore
from fparser.two.parser import ParserFactory # type: ignore
from fparser.two.utils import FortranSyntaxError # type: ignore
from fab import FabException
from fab.dep_tree import AnalysedDependent
from fab.parse import EmptySourceFile
from fab.util import log_or_dot, file_checksum
logger = logging.getLogger(__name__)
def iter_content(obj):
"""
Return a generator which yields every node in the tree.
"""
yield obj
if hasattr(obj, "content"):
for child in _iter_content(obj.content):
yield child
def _iter_content(content):
for obj in content:
yield obj
if hasattr(obj, "content"):
for child in _iter_content(obj.content):
yield child
def _has_ancestor_type(obj, obj_type):
# Recursively check if an object has an ancestor of the given type.
if not obj.parent:
return False
if type(obj.parent) == obj_type:
return True
return _has_ancestor_type(obj.parent, obj_type)
def _typed_child(parent, child_type, must_exist=False):
# Look for a child of a certain type.
# Returns the child or None.
# Raises ValueError if more than one child of the given type is found.
children = list(filter(lambda child: type(child) == child_type, parent.children))
if len(children) > 1:
raise ValueError(f"too many children found of type {child_type}")
if children:
return children[0]
if must_exist:
raise FabException(f'Could not find child of type {child_type} in {parent}')
return None
class FortranAnalyserBase(ABC):
"""
Base class for Fortran parse-tree analysers, e.g. FortranAnalyser and X90Analyser.
"""
_intrinsic_modules = ['iso_fortran_env', 'iso_c_binding']
def __init__(self, result_class, std=None):
"""
:param result_class:
The type (class) of the analysis result. Defined by the subclass.
:param std:
The Fortran standard.
"""
self.result_class = result_class
self.f2008_parser = ParserFactory().create(std=std or "f2008")
# todo: this, and perhaps other runtime variables like it, might be better set at construction
# if we construct these objects at runtime instead...
# runtime, for child processes to read
self._config = None
def run(self, fpath: Path) \
-> Union[Tuple[AnalysedDependent, Path], Tuple[EmptySourceFile, None], Tuple[Exception, None]]:
"""
Parse the source file and record what we're interested in (subclass specific).
Reloads previous analysis results if available.
Returns the analysis data and the result file where it was stored/loaded.
"""
# calculate the prebuild filename
file_hash = file_checksum(fpath).file_hash
analysis_fpath = self._get_analysis_fpath(fpath, file_hash)
# do we already have analysis results for this file?
if analysis_fpath.exists():
log_or_dot(logger, f"found analysis prebuild for {fpath}")
# Load the result file into whatever result class we use.
loaded_result = self.result_class.load(analysis_fpath)
if loaded_result:
# This result might have been created by another user; their prebuild folder copied to ours.
# If so, the fpath in the result will *not* point to the file we eventually want to compile,
# it will point to the user's original file, somewhere else. So replace it with our own path.
loaded_result.fpath = fpath
return loaded_result, analysis_fpath
log_or_dot(logger, f"analysing {fpath}")
# parse the file, get a node tree
node_tree = self._parse_file(fpath=fpath)
if isinstance(node_tree, Exception):
return Exception(f"error parsing file '{fpath}':\n{node_tree}"), None
if node_tree.content[0] is None:
logger.debug(f" empty tree found when parsing {fpath}")
# todo: If we don't save the empty result we'll keep analysing it every time!
return EmptySourceFile(fpath), None
# find things in the node tree
analysed_file = self.walk_nodes(fpath=fpath, file_hash=file_hash, node_tree=node_tree)
analysis_fpath = self._get_analysis_fpath(fpath, file_hash)
analysed_file.save(analysis_fpath)
return analysed_file, analysis_fpath
def _get_analysis_fpath(self, fpath, file_hash) -> Path:
return Path(self._config.prebuild_folder / f'{fpath.stem}.{file_hash}.an')
def _parse_file(self, fpath):
"""Get a node tree from a fortran file."""
reader = FortranFileReader(str(fpath), ignore_comments=False)
reader.exit_on_error = False # don't call sys.exit, it messes up the multi-processing
try:
tree = self.f2008_parser(reader)
return tree
except FortranSyntaxError as err:
# we can't return the FortranSyntaxError, it breaks multiprocessing!
logger.error(f"\nfparser raised a syntax error in {fpath}\n{err}")
return Exception(f"syntax error in {fpath}\n{err}")
except Exception as err:
logger.error(f"\nunhandled error '{type(err)}' in {fpath}\n{err}")
return Exception(f"unhandled error '{type(err)}' in {fpath}\n{err}")
@abstractmethod
def walk_nodes(self, fpath, file_hash, node_tree) -> AnalysedDependent:
"""
Examine the nodes in the parse tree, recording things we're interested in.
Return type depends on our subclass, and will be a subclass of AnalysedDependent.
"""
raise NotImplementedError
| 0.708011 | 0.220804 |
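A sketch of walking an fparser2 parse tree with iter_content, using a tiny program parsed from a string; it assumes fparser2 is installed, as the module above requires.

from fparser.common.readfortran import FortranStringReader
from fparser.two.parser import ParserFactory

from fab.parse.fortran_common import iter_content

reader = FortranStringReader("program hello\nend program hello\n")
tree = ParserFactory().create(std="f2008")(reader)

# Print the node type of every element in the tree, depth first.
for node in iter_content(tree):
    print(type(node).__name__)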
import json
import logging
from abc import ABC
from pathlib import Path
from typing import Union, Optional, Dict, Any, Set
from fab.util import file_checksum
logger = logging.getLogger(__name__)
class ParseException(Exception):
pass
class AnalysedFile(ABC):
"""
Analysis results for a single file. Abstract base class.
"""
def __init__(self, fpath: Union[str, Path], file_hash: Optional[int] = None):
"""
:param fpath:
The path of the file which was analysed.
:param file_hash:
The checksum of the file which was analysed.
If not provided, the `self.file_hash` property is evaluated lazily, in case the file does not yet exist.
"""
self.fpath = Path(fpath)
self._file_hash = file_hash
@property
def file_hash(self):
if self._file_hash is None:
if not self.fpath.exists():
raise ValueError(f"analysed file '{self.fpath}' does not exist")
self._file_hash: int = file_checksum(self.fpath).file_hash
return self._file_hash
def __eq__(self, other):
# todo: better to use self.field_names() instead of vars(self) in order to evaluate any lazy attributes?
return vars(self) == vars(other)
# persistence
def to_dict(self) -> Dict[str, Any]:
"""
Create a dict representing the object.
The dict may be written to json, so can't contain sets.
Lists are sorted for reproducibility in testing.
"""
return {
"fpath": str(self.fpath),
"file_hash": self.file_hash
}
@classmethod
def from_dict(cls, d):
raise NotImplementedError
def save(self, fpath: Union[str, Path]):
# subclasses don't need to override this method
d = self.to_dict()
d["cls"] = self.__class__.__name__
json.dump(d, open(fpath, 'wt'), indent=4)
@classmethod
def load(cls, fpath: Union[str, Path]):
# subclasses don't need to override this method
d = json.load(open(fpath))
found_class = d["cls"]
if found_class != cls.__name__:
raise ValueError(f"Expected class name '{cls.__name__}', found '{found_class}'")
return cls.from_dict(d)
# human readability
@classmethod
def field_names(cls):
"""
Defines the order in which we want fields to appear in str or repr strings.
Calling this helps to ensure any lazy attributes are evaluated before use,
e.g when constructing a string representation of the instance, or generating a hash value.
"""
return ['fpath', 'file_hash']
def __str__(self):
# We use self.field_names() instead of vars(self) in order to evaluate any lazy attributes.
values = [getattr(self, field_name) for field_name in self.field_names()]
return f'{self.__class__.__name__} ' + ' '.join(map(str, values))
def __repr__(self):
params = ', '.join([f'{f}={repr(getattr(self, f))}' for f in self.field_names()])
return f'{self.__class__.__name__}({params})'
# We need to be hashable before we can go into a set, which is useful for our subclasses.
# Note, the numerical result will change with each Python invocation.
def __hash__(self):
# Build up a list of things to hash, from our attributes.
# We use self.field_names() rather than vars(self) because we want to evaluate any lazy attributes.
# We turn dicts and sets into sorted tuples for hashing.
# todo: There's a good reason dicts and sets aren't supposed to be hashable.
# Please see https://github.com/metomi/fab/issues/229
things = set()
for field_name in self.field_names():
thing = getattr(self, field_name)
if isinstance(thing, Dict):
things.add(tuple(sorted(thing.items())))
elif isinstance(thing, Set):
things.add(tuple(sorted(thing)))
else:
things.add(thing)
return hash(tuple(things))
# todo: There's a design weakness relating to this class:
# we don't save empty results, which means we'll keep reanalysing them.
# We should save empty files and allow the loading to detect this, as it already reads the class name.
class EmptySourceFile(AnalysedFile):
"""
An analysis result for a file which resulted in an empty parse tree.
"""
def __init__(self, fpath: Union[str, Path]):
"""
:param fpath:
The path of the file which was analysed.
"""
super().__init__(fpath=fpath)
@classmethod
def from_dict(cls, d):
# todo: load & save should be implemented here and used by the calling code, to save reanalysis.
raise NotImplementedError
|
sci-fab
|
/sci_fab-1.0-py3-none-any.whl/fab/parse/__init__.py
|
__init__.py
|
import json
import logging
from abc import ABC
from pathlib import Path
from typing import Union, Optional, Dict, Any, Set
from fab.util import file_checksum
logger = logging.getLogger(__name__)
class ParseException(Exception):
pass
class AnalysedFile(ABC):
"""
Analysis results for a single file. Abstract base class.
"""
def __init__(self, fpath: Union[str, Path], file_hash: Optional[int] = None):
"""
:param fpath:
The path of the file which was analysed.
:param file_hash:
The checksum of the file which was analysed.
If not provided, the `self.file_hash` property is evaluated lazily, in case the file does not yet exist.
"""
self.fpath = Path(fpath)
self._file_hash = file_hash
@property
def file_hash(self):
if self._file_hash is None:
if not self.fpath.exists():
raise ValueError(f"analysed file '{self.fpath}' does not exist")
self._file_hash: int = file_checksum(self.fpath).file_hash
return self._file_hash
def __eq__(self, other):
# todo: better to use self.field_names() instead of vars(self) in order to evaluate any lazy attributes?
return vars(self) == vars(other)
# persistence
def to_dict(self) -> Dict[str, Any]:
"""
Create a dict representing the object.
The dict may be written to json, so can't contain sets.
Lists are sorted for reproducibility in testing.
"""
return {
"fpath": str(self.fpath),
"file_hash": self.file_hash
}
@classmethod
def from_dict(cls, d):
raise NotImplementedError
def save(self, fpath: Union[str, Path]):
# subclasses don't need to override this method
d = self.to_dict()
d["cls"] = self.__class__.__name__
json.dump(d, open(fpath, 'wt'), indent=4)
@classmethod
def load(cls, fpath: Union[str, Path]):
# subclasses don't need to override this method
d = json.load(open(fpath))
found_class = d["cls"]
if found_class != cls.__name__:
raise ValueError(f"Expected class name '{cls.__name__}', found '{found_class}'")
return cls.from_dict(d)
# human readability
@classmethod
def field_names(cls):
"""
Defines the order in which we want fields to appear in str or repr strings.
Calling this helps to ensure any lazy attributes are evaluated before use,
e.g when constructing a string representation of the instance, or generating a hash value.
"""
return ['fpath', 'file_hash']
def __str__(self):
# We use self.field_names() instead of vars(self) in order to evaluate any lazy attributes.
values = [getattr(self, field_name) for field_name in self.field_names()]
return f'{self.__class__.__name__} ' + ' '.join(map(str, values))
def __repr__(self):
params = ', '.join([f'{f}={repr(getattr(self, f))}' for f in self.field_names()])
return f'{self.__class__.__name__}({params})'
# We need to be hashable before we can go into a set, which is useful for our subclasses.
# Note, the numerical result will change with each Python invocation.
def __hash__(self):
# Build up a list of things to hash, from our attributes.
# We use self.field_names() rather than vars(self) because we want to evaluate any lazy attributes.
# We turn dicts and sets into sorted tuples for hashing.
# todo: There's a good reason dicts and sets aren't supposed to be hashable.
# Please see https://github.com/metomi/fab/issues/229
things = set()
for field_name in self.field_names():
thing = getattr(self, field_name)
if isinstance(thing, Dict):
things.add(tuple(sorted(thing.items())))
elif isinstance(thing, Set):
things.add(tuple(sorted(thing)))
else:
things.add(thing)
return hash(tuple(things))
# todo: There's a design weakness relating to this class:
# we don't save empty results, which means we'll keep reanalysing them.
# We should save empty files and allow the loading to detect this, as it already reads the class name.
class EmptySourceFile(AnalysedFile):
"""
An analysis result for a file which resulted in an empty parse tree.
"""
def __init__(self, fpath: Union[str, Path]):
"""
:param fpath:
The path of the file which was analysed.
"""
super().__init__(fpath=fpath)
@classmethod
def from_dict(cls, d):
# todo: load & save should be implemented here and used by the calling code, to save reanalysis.
raise NotImplementedError
| 0.696578 | 0.257552 |
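A minimal concrete AnalysedFile subclass, sketching the save/load round trip described above. The class name, field values and output filename are illustrative only and are not part of fab.

from pathlib import Path

from fab.parse import AnalysedFile


class AnalysedThing(AnalysedFile):
    @classmethod
    def from_dict(cls, d):
        return cls(fpath=Path(d["fpath"]), file_hash=d["file_hash"])


a = AnalysedThing(fpath='some/file.f90', file_hash=123)
a.save('analysis.an')                  # writes a small json file
b = AnalysedThing.load('analysis.an')  # reads it back, checking the class name
assert a == b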
import logging
from typing import Optional, Iterable
from fab.steps import step
from fab.util import file_walk
logger = logging.getLogger(__name__)
class _PathFilter(object):
# Simple pattern matching using string containment check.
# Deems an incoming path as included or excluded.
def __init__(self, *filter_strings: str, include: bool):
"""
:param filter_strings:
One or more strings to be used as pattern matches.
:param include:
Set to True or False to include or exclude matching paths.
"""
self.filter_strings: Iterable[str] = filter_strings
self.include = include
def check(self, path):
if any(str(i) in str(path) for i in self.filter_strings):
return self.include
return None
class Include(_PathFilter):
"""
A path filter which includes matching paths, this convenience class improves config readability.
"""
def __init__(self, *filter_strings):
"""
:param filter_strings:
One or more strings to be used as pattern matches.
"""
super().__init__(*filter_strings, include=True)
def __str__(self):
return f'Include({", ".join(self.filter_strings)})'
class Exclude(_PathFilter):
"""
A path filter which excludes matching paths, this convenience class improves config readability.
"""
def __init__(self, *filter_strings):
"""
:param filter_strings:
One or more strings to be used as pattern matches.
"""
super().__init__(*filter_strings, include=False)
def __str__(self):
return f'Exclude({", ".join(self.filter_strings)})'
@step
def find_source_files(config, source_root=None, output_collection="all_source",
path_filters: Optional[Iterable[_PathFilter]] = None):
"""
Find the files in the source folder, with filtering.
Files can be included or excluded with simple pattern matching.
Every file is included by default, unless the filters say otherwise.
Path filters are expected to be provided by the user in an *ordered* collection.
The two convenience subclasses, :class:`~fab.steps.find_source_files.Include` and :class:`~fab.steps.find_source_files.Exclude`,
improve readability.
Order matters. For example::
path_filters = [
Exclude('my_folder'),
Include('my_folder/my_file.F90'),
]
In the above example, swapping the order would stop the file being included in the build.
A path matches a filter string simply if it *contains* it,
so the path *my_folder/my_file.F90* would match filters "my_folder", "my_file" and "er/my".
:param config:
The :class:`fab.build_config.BuildConfig` object where we can read settings
such as the project workspace folder or the multiprocessing flag.
:param source_root:
Optional path to source folder, with a sensible default.
:param output_collection:
Name of artefact collection to create, with a sensible default.
:param path_filters:
Iterable of Include and/or Exclude objects, to be processed in order.
"""
path_filters = path_filters or []
"""
Recursively get all files in the given folder, with filtering.
:param artefact_store:
Contains artefacts created by previous Steps, and where we add our new artefacts.
This is where the given :class:`~fab.artefacts.ArtefactsGetter` finds the artefacts to process.
:param config:
The :class:`fab.build_config.BuildConfig` object where we can read settings
such as the project workspace folder or the multiprocessing flag.
"""
source_root = source_root or config.source_root
# file filtering
filtered_fpaths = []
# todo: we shouldn't need to ignore the prebuild folder here, it's not underneath the source root.
for fpath in file_walk(source_root, ignore_folders=[config.prebuild_folder]):
wanted = True
for path_filter in path_filters:
# did this filter have anything to say about this file?
res = path_filter.check(fpath)
if res is not None:
wanted = res
if wanted:
filtered_fpaths.append(fpath)
else:
logger.debug(f"excluding {fpath}")
if not filtered_fpaths:
raise RuntimeError("no source files found after filtering")
config._artefact_store[output_collection] = filtered_fpaths
|
sci-fab
|
/sci_fab-1.0-py3-none-any.whl/fab/steps/find_source_files.py
|
find_source_files.py
|
| 0.812049 | 0.40486 |
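# Hedged usage sketch for the find_source_files step above, assuming the usual
# pattern of driving steps from a BuildConfig context manager and an existing
# source tree under config.source_root. The project label and filter strings are
# purely illustrative.
from fab.build_config import BuildConfig
from fab.steps.find_source_files import Exclude, Include, find_source_files

with BuildConfig(project_label='filter_demo') as config:
    # Order matters: the Exclude is overridden for the single file re-included below.
    find_source_files(
        config,
        path_filters=[
            Exclude('my_folder'),
            Include('my_folder/my_file.F90'),
        ])
    # Matching paths are stored in the 'all_source' artefact collection.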
import logging
from string import Template
from typing import Optional
from fab.build_config import BuildConfig
from fab.constants import OBJECT_FILES, OBJECT_ARCHIVES
from fab.steps import step
from fab.util import log_or_dot
from fab.tools import run_command
from fab.artefacts import ArtefactsGetter, CollectionGetter
logger = logging.getLogger(__name__)
DEFAULT_SOURCE_GETTER = CollectionGetter(OBJECT_FILES)
# todo: two diagrams showing the flow of artefacts in the exe and library use cases
# show how the library has a single build target with None as the name.
# todo: all this documentation for such a simple step - should we split it up somehow?
@step
def archive_objects(config: BuildConfig, source: Optional[ArtefactsGetter] = None, archiver='ar',
output_fpath=None, output_collection=OBJECT_ARCHIVES):
"""
Create an object archive for every build target, from their object files.
An object archive is a set of object (*.o*) files bundled into a single file, typically with a *.a* extension.
Expects one or more build targets from its artefact getter, of the form Dict[name, object_files].
By default, it finds the build targets and their object files in the artefact collection named by
:py:const:`fab.constants.COMPILED_FILES`.
This step has three use cases:
* The **object archive** is the end goal of the build.
* The object archive is a convenience step before linking a **shared object**.
* One or more object archives as convenience steps before linking **executables**.
The benefit of creating an object archive before linking is simply to reduce the size
of the linker command, which might otherwise include thousands of .o files, making any error output
difficult to read. You don't have to use this step before linking.
The linker step has a default artefact getter which will work with or without this preceding step.
**Creating a Static or Shared Library:**
When building a library there is expected to be a single build target with a `None` name.
This typically happens when configuring the :class:`~fab.steps.analyser.Analyser` step *without* a root symbol.
We can assume the list of object files is the entire project source, compiled.
In this case you must specify an *output_fpath*.
**Creating Executables:**
When creating executables, there is expected to be one or more build targets, each with a name.
This typically happens when configuring the :class:`~fab.steps.analyser.Analyser` step *with* a root symbol(s).
We can assume each list of object files is sufficient to build each *<root_symbol>.exe*.
In this case you cannot specify an *output_fpath* path because they are automatically created from the
target name.
:param config:
The :class:`fab.build_config.BuildConfig` object where we can read settings
such as the project workspace folder or the multiprocessing flag.
:param source:
An :class:`~fab.artefacts.ArtefactsGetter` which gives us our lists of objects to archive.
The artefacts are expected to be of the form `Dict[root_symbol_name, list_of_object_files]`.
:param archiver:
The archiver executable. Defaults to 'ar'.
:param output_fpath:
The file path of the archive file to create.
This string can include templating, where "$output" is replaced with the output folder.
* Must be specified when building a library file (no build target name).
* Must not be specified when building linker input (one or more build target names).
:param output_collection:
The name of the artefact collection to create. Defaults to the name in
:const:`fab.constants.OBJECT_ARCHIVES`.
"""
# todo: the output path should not be an abs fpath, it should be relative to the proj folder
source_getter = source or DEFAULT_SOURCE_GETTER
output_fpath = str(output_fpath) if output_fpath else None
target_objects = source_getter(config._artefact_store)
assert target_objects.keys()
if output_fpath and list(target_objects.keys()) != [None]:
raise ValueError("You must not specify an output path (library) when there are root symbols (exes)")
if not output_fpath and list(target_objects.keys()) == [None]:
raise ValueError("You must specify an output path when building a library.")
output_archives = config._artefact_store.setdefault(output_collection, {})
for root, objects in target_objects.items():
if root:
# we're building an object archive for an exe
output_fpath = str(config.build_output / f'{root}.a')
else:
# we're building a single object archive with a given filename
assert len(target_objects) == 1, "unexpected root of None with multiple build targets"
output_fpath = Template(str(output_fpath)).substitute(
output=config.build_output)
command = [archiver]
command.extend(['cr', output_fpath])
command.extend(map(str, sorted(objects)))
log_or_dot(logger, 'CreateObjectArchive running command: ' + ' '.join(command))
try:
run_command(command)
except Exception as err:
raise Exception(f"error creating object archive:\n{err}")
output_archives[root] = [output_fpath]
|
sci-fab
|
/sci_fab-1.0-py3-none-any.whl/fab/steps/archive_objects.py
|
archive_objects.py
|
| 0.572723 | 0.47457 |
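# Hedged sketch of the two archive_objects modes described above. It assumes the
# object files were already produced by compile steps earlier in the same config;
# the elided steps must run first for this to execute.
from fab.build_config import BuildConfig
from fab.steps.archive_objects import archive_objects

with BuildConfig(project_label='archive_demo') as config:
    # ... grab, find_source_files, preprocess, analyse and compile steps here ...

    # Library case: a single unnamed build target, so an output path is required.
    # "$output" is substituted with the build output folder.
    archive_objects(config, output_fpath='$output/libdemo.a')

    # Executable case (alternative): one archive per named build target,
    # so no output path may be given.
    # archive_objects(config)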
import logging
import os
import warnings
import zlib
from collections import defaultdict
from dataclasses import dataclass
from typing import List, Dict, Optional, Tuple
from fab import FabException
from fab.artefacts import ArtefactsGetter, FilterBuildTrees
from fab.build_config import BuildConfig, FlagsConfig
from fab.constants import OBJECT_FILES
from fab.metrics import send_metric
from fab.parse.c import AnalysedC
from fab.steps import check_for_errors, run_mp, step
from fab.tools import flags_checksum, run_command, get_tool, get_compiler_version
from fab.util import CompiledFile, log_or_dot, Timer, by_type
logger = logging.getLogger(__name__)
DEFAULT_SOURCE_GETTER = FilterBuildTrees(suffix='.c')
DEFAULT_OUTPUT_ARTEFACT = ''
@dataclass
class MpCommonArgs(object):
config: BuildConfig
flags: FlagsConfig
compiler: str
compiler_version: str
@step
def compile_c(config, common_flags: Optional[List[str]] = None,
path_flags: Optional[List] = None, source: Optional[ArtefactsGetter] = None):
"""
Compiles all C files in all build trees, creating or extending a set of compiled files for each target.
This step uses multiprocessing.
All C files are compiled in a single pass.
The command line compiler is taken from the environment variable `CC`, and defaults to `gcc -c`.
Uses multiprocessing, unless disabled in the *config*.
:param config:
The :class:`fab.build_config.BuildConfig` object where we can read settings
such as the project workspace folder or the multiprocessing flag.
:param common_flags:
A list of strings to be included in the command line call, for all files.
:param path_flags:
A list of :class:`~fab.build_config.AddFlags`, defining flags to be included in the command line call
for selected files.
:param source:
An :class:`~fab.artefacts.ArtefactsGetter` which gives us our C files to process.
"""
# todo: tell the compiler (and other steps) which artefact name to create?
compiler, compiler_flags = get_tool(os.getenv('CC', 'gcc -c'))
compiler_version = get_compiler_version(compiler)
logger.info(f'c compiler is {compiler} {compiler_version}')
env_flags = os.getenv('CFLAGS', '').split()
common_flags = compiler_flags + env_flags + (common_flags or [])
# make sure we have a -c
# todo: c compiler awareness, like we have with fortran?
if '-c' not in common_flags:
warnings.warn("Adding '-c' to C compiler flags")
common_flags = ['-c'] + common_flags
flags = FlagsConfig(common_flags=common_flags, path_flags=path_flags)
source_getter = source or DEFAULT_SOURCE_GETTER
# gather all the source to compile, for all build trees, into one big lump
build_lists: Dict = source_getter(config._artefact_store)
to_compile: list = sum(build_lists.values(), [])
logger.info(f"compiling {len(to_compile)} c files")
mp_payload = MpCommonArgs(config=config, flags=flags, compiler=compiler, compiler_version=compiler_version)
mp_items = [(fpath, mp_payload) for fpath in to_compile]
# compile everything in one go
compilation_results = run_mp(config, items=mp_items, func=_compile_file)
check_for_errors(compilation_results, caller_label='compile c')
compiled_c = list(by_type(compilation_results, CompiledFile))
logger.info(f"compiled {len(compiled_c)} c files")
# record the prebuild files as being current, so the cleanup knows not to delete them
prebuild_files = {r.output_fpath for r in compiled_c}
config.add_current_prebuilds(prebuild_files)
# record the compilation results for the next step
store_artefacts(compiled_c, build_lists, config._artefact_store)
# todo: very similar code in fortran compiler
def store_artefacts(compiled_files: List[CompiledFile], build_lists: Dict[str, List], artefact_store):
"""
Create our artefact collection; object files for each compiled file, per root symbol.
"""
# add the new object files to the artefact store, by target
lookup = {c.input_fpath: c for c in compiled_files}
object_files = artefact_store.setdefault(OBJECT_FILES, defaultdict(set))
for root, source_files in build_lists.items():
new_objects = [lookup[af.fpath].output_fpath for af in source_files]
object_files[root].update(new_objects)
def _compile_file(arg: Tuple[AnalysedC, MpCommonArgs]):
analysed_file, mp_payload = arg
with Timer() as timer:
flags = mp_payload.flags.flags_for_path(path=analysed_file.fpath, config=mp_payload.config)
obj_combo_hash = _get_obj_combo_hash(mp_payload.compiler, mp_payload.compiler_version, analysed_file, flags)
obj_file_prebuild = mp_payload.config.prebuild_folder / f'{analysed_file.fpath.stem}.{obj_combo_hash:x}.o'
# prebuild available?
if obj_file_prebuild.exists():
log_or_dot(logger, f'CompileC using prebuild: {analysed_file.fpath}')
else:
obj_file_prebuild.parent.mkdir(parents=True, exist_ok=True)
command = mp_payload.compiler.split() # type: ignore
command.extend(flags)
command.append(str(analysed_file.fpath))
command.extend(['-o', str(obj_file_prebuild)])
log_or_dot(logger, f'CompileC compiling {analysed_file.fpath}')
try:
run_command(command)
except Exception as err:
return FabException(f"error compiling {analysed_file.fpath}:\n{err}")
send_metric(
group="compile c",
name=str(analysed_file.fpath),
value={'time_taken': timer.taken, 'start': timer.start})
return CompiledFile(input_fpath=analysed_file.fpath, output_fpath=obj_file_prebuild)
def _get_obj_combo_hash(compiler, compiler_version, analysed_file, flags):
# get a combo hash of things which matter to the object file we define
try:
obj_combo_hash = sum([
analysed_file.file_hash,
flags_checksum(flags),
zlib.crc32(compiler.encode()),
zlib.crc32(compiler_version.encode()),
])
except TypeError:
raise ValueError("could not generate combo hash for object file")
return obj_combo_hash
|
sci-fab
|
/sci_fab-1.0-py3-none-any.whl/fab/steps/compile_c.py
|
compile_c.py
|
| 0.549882 | 0.131982 |
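# Hedged sketch of calling the compile_c step above. The compiler and extra flags
# come from the CC and CFLAGS environment variables; the values here are
# illustrative, and the elided earlier steps must populate the build trees for
# this to execute.
import os

from fab.build_config import BuildConfig
from fab.steps.compile_c import compile_c

os.environ['CC'] = 'gcc -c'    # the default used when CC is unset
os.environ['CFLAGS'] = '-O2'

with BuildConfig(project_label='compile_c_demo') as config:
    # ... find_source_files, c_pragma_injector, preprocess_c and analyse first ...
    compile_c(config, common_flags=['-std=c99'])
    # Object files are recorded per build target in the OBJECT_FILES collection.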
import logging
import os
from string import Template
from typing import Optional
from fab.constants import OBJECT_FILES, OBJECT_ARCHIVES, EXECUTABLES
from fab.steps import step
from fab.util import log_or_dot
from fab.tools import run_command
from fab.artefacts import ArtefactsGetter, CollectionGetter
logger = logging.getLogger(__name__)
class DefaultLinkerSource(ArtefactsGetter):
"""
A source getter specifically for linking.
Looks for the default output from archiving objects, falls back to default compiler output.
This allows a link step to work with or without a preceding object archive step.
"""
def __call__(self, artefact_store):
return CollectionGetter(OBJECT_ARCHIVES)(artefact_store) \
or CollectionGetter(OBJECT_FILES)(artefact_store)
def call_linker(linker, flags, filename, objects):
assert isinstance(linker, str)
command = linker.split()
command.extend(['-o', filename])
# todo: we need to be able to specify flags which appear before the object files
command.extend(map(str, sorted(objects)))
# note: must this come after the list of object files?
command.extend(os.getenv('LDFLAGS', '').split())
command.extend(flags)
log_or_dot(logger, 'Link running command: ' + ' '.join(command))
try:
run_command(command)
except Exception as err:
raise Exception(f"error linking:\n{err}")
@step
def link_exe(config, linker: Optional[str] = None, flags=None, source: Optional[ArtefactsGetter] = None):
"""
Link object files into an executable for every build target.
Expects one or more build targets from its artefact getter, of the form Dict[name, object_files].
The default artefact getter, :py:const:`~fab.steps.link_exe.DefaultLinkerSource`, looks for any output
from an :class:`~fab.steps.archive_objects.ArchiveObjects` step, and falls back to using output from
compiler steps.
:param config:
The :class:`fab.build_config.BuildConfig` object where we can read settings
such as the project workspace folder or the multiprocessing flag.
:param linker:
E.g. 'gcc' or 'ld'.
:param flags:
A list of flags to pass to the linker.
:param source:
An optional :class:`~fab.artefacts.ArtefactsGetter`.
Typically not required, as there is a sensible default.
"""
linker = linker or os.getenv('LD', 'ld')
logger.info(f'linker is {linker}')
flags = flags or []
source_getter = source or DefaultLinkerSource()
target_objects = source_getter(config._artefact_store)
for root, objects in target_objects.items():
exe_path = config.project_workspace / f'{root}.exe'
call_linker(linker=linker, flags=flags, filename=str(exe_path), objects=objects)
config._artefact_store.setdefault(EXECUTABLES, []).append(exe_path)
# todo: the bit about Dict[None, object_files] seems too obscure - try to rethink this.
@step
def link_shared_object(config, output_fpath: str, linker: Optional[str] = None, flags=None,
source: Optional[ArtefactsGetter] = None):
"""
Produce a shared object (*.so*) file from the given build target.
Expects a *single build target* from its artefact getter, of the form Dict[None, object_files].
We can assume the list of object files is the entire project source, compiled.
Params are as for :func:`link_exe`, with the addition of:
:param config:
The :class:`fab.build_config.BuildConfig` object where we can read settings
such as the project workspace folder or the multiprocessing flag.
:param output_fpath:
File path of the shared object to create.
:param linker:
E.g. 'gcc' or 'ld'.
:param flags:
A list of flags to pass to the linker.
:param source:
An optional :class:`~fab.artefacts.ArtefactsGetter`.
Typically not required, as there is a sensible default.
"""
linker = linker or os.getenv('LD', 'ld')
logger.info(f'linker is {linker}')
flags = flags or []
source_getter = source or DefaultLinkerSource()
ensure_flags = ['-fPIC', '-shared']
for f in ensure_flags:
if f not in flags:
flags.append(f)
# We expect a single build target containing the whole codebase, with no name (as it's not a root symbol).
target_objects = source_getter(config._artefact_store)
assert list(target_objects.keys()) == [None]
objects = target_objects[None]
call_linker(
linker=linker, flags=flags,
filename=Template(output_fpath).substitute(output=config.build_output),
objects=objects)
|
sci-fab
|
/sci_fab-1.0-py3-none-any.whl/fab/steps/link.py
|
link.py
|
| 0.623606 | 0.227308 |
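# Hedged sketch of the linking steps above. LDFLAGS is appended to the link
# command automatically; the linker, flags and paths here are illustrative, and
# the elided compile/archive steps must run first for this to execute.
import os

from fab.build_config import BuildConfig
from fab.steps.link import link_exe, link_shared_object

os.environ['LDFLAGS'] = '-L/usr/local/lib'  # hypothetical library search path

with BuildConfig(project_label='link_demo') as config:
    # ... compile steps populate OBJECT_FILES (or OBJECT_ARCHIVES) first ...

    # One executable per named build target:
    link_exe(config, linker='gcc', flags=['-lm'])

    # Or, for a project analysed without a root symbol, a shared object:
    # link_shared_object(config, linker='gcc', output_fpath='$output/libdemo.so')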
import logging
import os
import shutil
from dataclasses import dataclass
from pathlib import Path
from typing import Collection, List, Optional, Tuple
from fab.build_config import BuildConfig, FlagsConfig
from fab.constants import PRAGMAD_C
from fab.metrics import send_metric
from fab.util import log_or_dot_finish, input_to_output_fpath, log_or_dot, suffix_filter, Timer, by_type
from fab.tools import get_tool, run_command
from fab.steps import check_for_errors, run_mp, step
from fab.artefacts import ArtefactsGetter, SuffixFilter, CollectionGetter
logger = logging.getLogger(__name__)
@dataclass
class MpCommonArgs(object):
"""Common args for calling process_artefact() using multiprocessing."""
config: BuildConfig
output_suffix: str
preprocessor: str
flags: FlagsConfig
name: str
def pre_processor(config: BuildConfig, preprocessor: str,
files: Collection[Path], output_collection, output_suffix,
common_flags: Optional[List[str]] = None,
path_flags: Optional[List] = None,
name="preprocess"):
"""
Preprocess Fortran or C files.
Uses multiprocessing, unless disabled in the config.
:param config:
The :class:`fab.build_config.BuildConfig` object where we can read settings
such as the project workspace folder or the multiprocessing flag.
:param preprocessor:
The preprocessor executable.
:param files:
The files to preprocess.
:param output_collection:
The name of the output artefact collection.
:param output_suffix:
Suffix for output files.
:param common_flags:
Used to construct a :class:`~fab.build_config.FlagsConfig` object.
:param path_flags:
Used to construct a :class:`~fab.build_config.FlagsConfig` object.
:param name:
Human friendly name for logger output, with sensible default.
"""
common_flags = common_flags or []
flags = FlagsConfig(common_flags=common_flags, path_flags=path_flags)
logger.info(f'preprocessor is {preprocessor}')
logger.info(f'preprocessing {len(files)} files')
# common args for the child process
mp_common_args = MpCommonArgs(
config=config,
output_suffix=output_suffix,
preprocessor=preprocessor,
flags=flags,
name=name,
)
# bundle files with common args
mp_args = [(file, mp_common_args) for file in files]
results = run_mp(config, items=mp_args, func=process_artefact)
check_for_errors(results, caller_label=name)
log_or_dot_finish(logger)
config._artefact_store[output_collection] = list(by_type(results, Path))
def process_artefact(arg: Tuple[Path, MpCommonArgs]):
"""
Expects an input file in the source folder.
Writes the output file to the output folder, with a lower case extension.
"""
fpath, args = arg
with Timer() as timer:
output_fpath = input_to_output_fpath(config=args.config, input_path=fpath).with_suffix(args.output_suffix)
# already preprocessed?
# todo: remove reuse_artefacts everywhere!
if args.config.reuse_artefacts and output_fpath.exists():
log_or_dot(logger, f'Preprocessor skipping: {fpath}')
else:
output_fpath.parent.mkdir(parents=True, exist_ok=True)
command = [args.preprocessor]
command.extend(args.flags.flags_for_path(path=fpath, config=args.config))
command.append(str(fpath))
command.append(str(output_fpath))
log_or_dot(logger, 'PreProcessor running command: ' + ' '.join(command))
try:
run_command(command)
except Exception as err:
raise Exception(f"error preprocessing {fpath}:\n{err}")
send_metric(args.name, str(fpath), {'time_taken': timer.taken, 'start': timer.start})
return output_fpath
def get_fortran_preprocessor():
"""
Identify the fortran preprocessor and any flags from the environment.
Initially looks for the `FPP` environment variable, then tries to call the `fpp` and `cpp` command line tools.
Returns the executable and flags.
The returned flags will always include `-P` to suppress line numbers.
This fparser ticket requests line number handling https://github.com/stfc/fparser/issues/390 .
"""
fpp: Optional[str] = None
fpp_flags: Optional[List[str]] = None
try:
fpp, fpp_flags = get_tool(os.getenv('FPP'))
logger.info(f"The environment defined FPP as '{fpp}'")
except ValueError:
pass
if not fpp:
try:
run_command(['which', 'fpp'])
fpp, fpp_flags = 'fpp', ['-P']
logger.info('detected fpp')
except RuntimeError:
# fpp not available
pass
if not fpp:
try:
run_command(['which', 'cpp'])
fpp, fpp_flags = 'cpp', ['-traditional-cpp', '-P']
logger.info('detected cpp')
except RuntimeError:
# fpp not available
pass
if not fpp:
raise RuntimeError('no fortran preprocessor specified or discovered')
assert fpp_flags is not None
if '-P' not in fpp_flags:
fpp_flags.append('-P')
return fpp, fpp_flags
# todo: rename preprocess_fortran
@step
def preprocess_fortran(config: BuildConfig, source: Optional[ArtefactsGetter] = None, **kwargs):
"""
Wrapper to pre_processor for Fortran files.
Ensures all preprocessed files are in the build output.
This means *copying* already preprocessed files from source to build output.
Params as per :func:`~fab.steps.preprocess.pre_processor`.
The preprocessor is taken from the `FPP` environment variable, or falls back to `fpp -P`, then `cpp -traditional-cpp -P`.
If source is not provided, it defaults to `SuffixFilter('all_source', ['.F90', '.f90'])`.
"""
source_getter = source or SuffixFilter('all_source', ['.F90', '.f90'])
source_files = source_getter(config._artefact_store)
F90s = suffix_filter(source_files, '.F90')
f90s = suffix_filter(source_files, '.f90')
# get the tool from FPP
fpp, fpp_flags = get_fortran_preprocessor()
# make sure any flags from FPP are included in any common flags specified by the config
try:
common_flags = kwargs.pop('common_flags')
except KeyError:
common_flags = []
for fpp_flag in fpp_flags:
if fpp_flag not in common_flags:
common_flags.append(fpp_flag)
# preprocess big F90s
pre_processor(
config,
preprocessor=fpp,
common_flags=common_flags,
files=F90s,
output_collection='preprocessed_fortran', output_suffix='.f90',
name='preprocess fortran',
**kwargs,
)
# todo: parallel copy?
# copy little f90s from source to output folder
logger.info(f'Fortran preprocessor copying {len(f90s)} files to build_output')
for f90 in f90s:
output_path = input_to_output_fpath(config, input_path=f90)
if output_path != f90:
if not output_path.parent.exists():
output_path.parent.mkdir(parents=True)
log_or_dot(logger, f'copying {f90}')
shutil.copyfile(str(f90), str(output_path))
class DefaultCPreprocessorSource(ArtefactsGetter):
"""
A source getter specifically for c preprocessing.
Looks for the default output from pragma injection, falls back to default source finder.
This allows the step to work with or without a preceding pragma step.
"""
def __call__(self, artefact_store):
return CollectionGetter(PRAGMAD_C)(artefact_store) \
or SuffixFilter('all_source', '.c')(artefact_store)
# todo: rename preprocess_c
@step
def preprocess_c(config: BuildConfig, source=None, **kwargs):
"""
Wrapper to pre_processor for C files.
Params as per :func:`~fab.steps.preprocess.pre_processor`.
The preprocessor is taken from the `CPP` environment variable, or falls back to `cpp`.
If source is not provided, it defaults to :class:`~fab.steps.preprocess.DefaultCPreprocessorSource`.
"""
source_getter = source or DefaultCPreprocessorSource()
source_files = source_getter(config._artefact_store)
pre_processor(
config,
preprocessor=os.getenv('CPP', 'cpp'),
files=source_files,
output_collection='preprocessed_c', output_suffix='.c',
name='preprocess c',
**kwargs,
)
|
sci-fab
|
/sci_fab-1.0-py3-none-any.whl/fab/steps/preprocess.py
|
preprocess.py
|
| 0.727104 | 0.153549 |
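# Hedged sketch of the preprocessing steps above. FPP selects the Fortran
# preprocessor (falling back to fpp, then cpp) and -P is appended automatically;
# the flag values are illustrative, and find_source_files must run first so the
# 'all_source' collection exists.
import os

from fab.build_config import BuildConfig
from fab.steps.preprocess import preprocess_c, preprocess_fortran

os.environ['FPP'] = 'cpp -traditional-cpp'

with BuildConfig(project_label='preprocess_demo') as config:
    # ... find_source_files(config) runs here ...
    preprocess_fortran(config, common_flags=['-DUSE_MPI'])
    # preprocess_c(config) similarly reads CPP, defaulting to 'cpp'.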
import re
from pathlib import Path
from typing import Pattern, Optional, Match
from fab import FabException
from fab.constants import PRAGMAD_C
from fab.steps import run_mp, step
from fab.artefacts import ArtefactsGetter, SuffixFilter
DEFAULT_SOURCE_GETTER = SuffixFilter('all_source', '.c')
# todo: test
@step
def c_pragma_injector(config, source: Optional[ArtefactsGetter] = None, output_name=None):
"""
A build step to inject custom pragmas to mark blocks of user and system include statements.
By default, reads .c files from the *all_source* artefact and creates the *pragmad_c* artefact.
This step does not write to the build output folder, it creates the pragmad c in the same folder as the c file.
This is because a subsequent preprocessing step needs to look in the source folder for header files,
including in paths relative to the c file.
:param config:
The :class:`fab.build_config.BuildConfig` object where we can read settings
such as the project workspace folder or the multiprocessing flag.
:param source:
An :class:`~fab.artefacts.ArtefactsGetter` which gives us our C files to process.
:param output_name:
The name of the artefact collection to create in the artefact store, with a sensible default.
"""
source_getter = source or DEFAULT_SOURCE_GETTER
output_name = output_name or PRAGMAD_C
files = source_getter(config._artefact_store)
results = run_mp(config, items=files, func=_process_artefact)
config._artefact_store[output_name] = list(results)
def _process_artefact(fpath: Path):
prag_output_fpath = fpath.with_suffix('.prag')
with prag_output_fpath.open('w') as outfile:
    outfile.writelines(inject_pragmas(fpath))
return prag_output_fpath
def inject_pragmas(fpath):
"""
Reads a C source file but when encountering an #include
preprocessor directive injects a special Fab-specific
#pragma which can be picked up later by the Analyser
after the preprocessing
"""
_include_re: str = r'^\s*#include\s+(\S+)'
_include_pattern: Pattern = re.compile(_include_re)
for line in open(fpath, 'rt', encoding='utf-8'):
include_match: Optional[Match] = _include_pattern.match(line)
if include_match:
# For valid C the first character of the matched
# part of the group will indicate whether this is
# a system library include or a user include
include: str = include_match.group(1)
if include.startswith('<'):
yield '#pragma FAB SysIncludeStart\n'
yield line
yield '#pragma FAB SysIncludeEnd\n'
elif include.startswith(('"', "'")):
yield '#pragma FAB UsrIncludeStart\n'
yield line
yield '#pragma FAB UsrIncludeEnd\n'
else:
msg = 'Found badly formatted #include'
raise FabException(msg)
else:
yield line
|
sci-fab
|
/sci_fab-1.0-py3-none-any.whl/fab/steps/c_pragma_injector.py
|
c_pragma_injector.py
|
| 0.468791 | 0.173831 |
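# Self-contained sketch showing what inject_pragmas() above yields for a small C
# file; the file content is illustrative and only the #include lines gain pragmas.
from pathlib import Path

from fab.steps.c_pragma_injector import inject_pragmas

demo = Path('demo.c')
demo.write_text('#include <stdio.h>\n#include "util.h"\nint main(void) { return 0; }\n')

for line in inject_pragmas(demo):
    print(line, end='')
# The system include is wrapped in "#pragma FAB SysIncludeStart/End" and the
# user include in "#pragma FAB UsrIncludeStart/End"; other lines pass through.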
import multiprocessing
from fab.metrics import send_metric
from fab.util import by_type, TimerLogger
from functools import wraps
def step(func):
"""Function decorator for steps."""
@wraps(func)
def wrapper(*args, **kwargs):
name = func.__name__
# call the function
with TimerLogger(name) as step:
func(*args, **kwargs)
send_metric('steps', name, step.taken)
return wrapper
def run_mp(config, items, func, no_multiprocessing: bool = False):
"""
Called from a step function to process multiple items in parallel.
For example, a compile step would find a list of source files in the artefact store.
It could then pass those paths to this function, along with a function to compile a *single* file.
The whole set of results are returned in a list-like, with undefined order.
:param items:
An iterable of items to process in parallel.
:param func:
A function to process a single item. Must accept a single argument.
:param no_multiprocessing:
Overrides the config's multiprocessing flag, disabling multiprocessing for this call.
"""
if config.multiprocessing and not no_multiprocessing:
with multiprocessing.Pool(config.n_procs) as p:
results = p.map(func, items)
else:
results = [func(f) for f in items]
return results
def run_mp_imap(config, items, func, result_handler):
"""
Like run_mp, but uses imap instead of map so that we can process each result as it happens.
This is useful for a slow operation where we want to save our progress as we go
instead of waiting for everything to finish, allowing us to pick up where we left off if the program is halted.
:param items:
An iterable of items to process in parallel.
:param func:
A function to process a single item. Must accept a single argument.
:param result_handler:
A function to handle a single result. Must accept a single argument.
"""
if config.multiprocessing:
with multiprocessing.Pool(config.n_procs) as p:
analysis_results = p.imap_unordered(func, items)
result_handler(analysis_results)
else:
analysis_results = (func(a) for a in items) # generator
result_handler(analysis_results)
def check_for_errors(results, caller_label=None):
"""
Check an iterable of results for any exceptions and handle them gracefully.
This is a helper function for steps which use multiprocessing,
getting multiple results back from :meth:`~fab.steps.Step.run_mp` all in one go.
:param results:
An iterable of results.
:param caller_label:
Optional human-friendly name of the caller for logging.
"""
caller_label = f'during {caller_label}' if caller_label else ''
exceptions = list(by_type(results, Exception))
if exceptions:
formatted_errors = "\n\n".join(map(str, exceptions))
raise RuntimeError(
f"{formatted_errors}\n\n{len(exceptions)} error(s) found {caller_label}"
)
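# --- Editor's usage sketch (not part of the original module) ---
# Shows how a build step is typically assembled from the helpers above. The
# 'config' argument and the 'fake_compile' function are hypothetical stand-ins;
# only 'step', 'run_mp' and 'check_for_errors' come from this module.
def fake_compile(path):
    """Pretend to compile a single source file and return its object path."""
    return str(path) + '.o'

@step
def compile_all(config, paths):
    # process every file in parallel (or serially, depending on the config),
    # then surface any exceptions that came back from the workers
    results = run_mp(config, items=paths, func=fake_compile)
    check_for_errors(results, caller_label='compile_all')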
|
sci-fab
|
/sci_fab-1.0-py3-none-any.whl/fab/steps/__init__.py
|
__init__.py
|
| 0.809238 | 0.251441 |
import warnings
from pathlib import Path
from typing import Union
from fab.steps import step
from fab.tools import run_command
def current_commit(folder=None):
folder = folder or '.'
output = run_command(['git', 'log', '--oneline', '-n', '1'], cwd=folder)
commit = output.split()[0]
return commit
def tool_available() -> bool:
"""Is the command line git tool available?"""
try:
run_command(['git', 'help'])
except FileNotFoundError:
return False
return True
def is_working_copy(dst: Union[str, Path]) -> bool:
"""Is the given path is a working copy?"""
try:
run_command(['git', 'status'], cwd=dst)
except RuntimeError:
return False
return True
def fetch(src, revision, dst):
# todo: allow shallow fetch with --depth 1
command = ['git', 'fetch', src]
if revision:
command.append(revision)
run_command(command, cwd=str(dst))
# todo: allow cli args, e.g to set the depth
@step
def git_checkout(config, src: str, dst_label: str = '', revision=None):
"""
Checkout or update a Git repo.
"""
_dst = config.source_root / dst_label
# create folder?
if not _dst.exists():
_dst.mkdir(parents=True)
run_command(['git', 'init', '.'], cwd=_dst)
elif not is_working_copy(_dst): # type: ignore
raise ValueError(f"destination exists but is not a working copy: '{_dst}'")
fetch(src, revision, _dst)
run_command(['git', 'checkout', 'FETCH_HEAD'], cwd=_dst)
try:
_dst.relative_to(config.project_workspace)
run_command(['git', 'clean', '-f'], cwd=_dst)
except ValueError:
warnings.warn(f'not safe to clean git source in {_dst}')
@step
def git_merge(config, src: str, dst_label: str = '', revision=None):
"""
Merge a git repo into a local working copy.
"""
_dst = config.source_root / dst_label
if not _dst or not is_working_copy(_dst):
raise ValueError(f"destination is not a working copy: '{_dst}'")
fetch(src=src, revision=revision, dst=_dst)
try:
run_command(['git', 'merge', 'FETCH_HEAD'], cwd=_dst)
except RuntimeError as err:
run_command(['git', 'merge', '--abort'], cwd=_dst)
raise RuntimeError(f"Error merging {revision}. Merge aborted.\n{err}")
|
sci-fab
|
/sci_fab-1.0-py3-none-any.whl/fab/steps/grab/git.py
|
git.py
|
| 0.465873 | 0.198181 |
import os
import sys
import time
import random
from collections import defaultdict
import click
import colorama
from simple_loggers import SimpleLogger
from scihub import version_info
from scihub.core import SciHub
from scihub.util.host import check_host
colorama.init()
CONTEXT_SETTINGS = dict(help_option_names=['-h', '--help'])
prog = version_info['prog']
author = version_info['author']
author_email = version_info['author_email']
__epilog__ = click.style(f'''\
Examples:
\b
* check the available urls
{colorama.Fore.CYAN} {prog} -c {colorama.Fore.RESET}
\b
* search pmid(s)
{colorama.Fore.CYAN} {prog} -s 1,2,3 {colorama.Fore.RESET}
\b
* search doi(s)
{colorama.Fore.CYAN} {prog} -s 10.1038/s41524-017-0032-0 {colorama.Fore.RESET}
\b
* search with a specific url
{colorama.Fore.CYAN} {prog} -s 1,2,3 -u https://sci-hub.ren {colorama.Fore.RESET}
{colorama.Fore.YELLOW}
Contact: {author} <{author_email}>
{colorama.Fore.RESET}
''')
@click.command(no_args_is_help=True,
context_settings=CONTEXT_SETTINGS,
epilog=__epilog__,
help=click.style(version_info['desc'], fg='green', bold=True))
@click.option('-s', '--search', help='the string or file to search')
@click.option('-O', '--outdir', help='the output directory', default='pdf', show_default=True)
@click.option('-u', '--url', help='the url of sci-hub, e.g. https://sci-hub.ee, automatically detected by default')
@click.option('-l', '--list', help='list only but not download the pdf', is_flag=True)
@click.option('-c', '--check', help='check available urls of scihub', is_flag=True)
@click.option('-ns', '--name-by-search', help='name by search string', is_flag=True)
@click.option('-ow', '--overwrite', help='overwrite or not when file exists', type=click.Choice('YN'))
@click.option('-t', '--timeout', help='the request timeout in seconds', type=int, default=60, show_default=True)
@click.version_option(version=version_info['version'], prog_name=version_info['prog'])
def cli(**kwargs):
start_time = time.time()
logger = SimpleLogger('Main')
# checking the available urls
if kwargs['check']:
urls, update_time = check_host()
print(f'last check time: {update_time}')
print('\n'.join(['\t'.join(item) for item in urls.items()]))
exit(0)
logger.info(f'input arguments: {kwargs}')
# collecting the search list
search = kwargs['search'] or click.prompt('input the search')
if search == '-' and not sys.stdin.isatty():
search_list_temp = [line.strip() for line in sys.stdin]
elif os.path.isfile(search):
search_list_temp = [line.strip() for line in open(search)]
else:
search_list_temp = search.strip().split(',')
# remove duplicate
search_list = []
for each in search_list_temp:
if each not in search_list:
search_list.append(each)
logger.info(f'{len(search_list)} to search: {search_list[:5]} ...')
# checking overwrite or not
overwrite = kwargs['overwrite']
if overwrite == 'Y':
overwrite = True
elif overwrite == 'N':
overwrite = False
sh = SciHub(url=kwargs['url'], timeout=kwargs['timeout'])
stat = defaultdict(list)
for n, search in enumerate(search_list, 1):
logger.debug(f'[{n}/{len(search_list)}] searching: {search}')
url = sh.search(search)
if url:
if kwargs['list']:
logger.info(f'{search}: {url}')
else:
filename = f'{search}.pdf' if kwargs['name_by_search'] else None
sh.download(url, outdir=kwargs['outdir'], filename=filename, overwrite=overwrite)
stat['success'].append(search)
else:
stat['failed'].append(search)
if n < len(search_list):
time.sleep(random.randint(3, 8))
logger.info(f'success: {len(stat["success"])}, failed: {len(stat["failed"])}')
if stat['failed']:
        logger.info(f'failed list: {", ".join(stat["failed"])}')
elapsed = time.time() - start_time
logger.info(f'time elapsed: {elapsed:.2f}s')
def main():
cli()
if __name__ == "__main__":
main()
|
sci-hub
|
/sci-hub-1.0.4.tar.gz/sci-hub-1.0.4/scihub/bin/__init__.py
|
__init__.py
|
| 0.15704 | 0.104295 |
import os
import requests
from webrequests import WebRequest as WR
from simple_loggers import SimpleLogger
from .host import check_host
class SciHub(object):
def __init__(self, url=None):
self.logger = SimpleLogger('SciHub')
self.url = self.check_url(url)
def check_url(self, url, timeout=5):
def _check(url):
try:
resp = requests.head(url, timeout=timeout)
elapsed = resp.elapsed.total_seconds()
self.logger.info(f'good url: {url} [elapsed {elapsed:.3f}s]')
return elapsed
except Exception as e:
self.logger.warning(f'bad url: {url}')
def _post_url(url):
soup = WR.get_soup(url)
post_url = soup.select_one('form[method="POST"]').attrs['action']
if post_url == '/':
post_url = url
self.logger.info(f'post url: {post_url}')
return post_url
if url:
self.logger.info(f'checking url: {url} ...')
if _check(url):
return _post_url(url)
        self.logger.info('checking fastest url automatically ...')
hosts, update_time = check_host()
fastest = 99999999
for host in hosts:
elapsed = _check(host)
if elapsed and elapsed < fastest:
fastest = elapsed
url = host
self.logger.info(f'fastest url: {url} [{fastest}s]')
return _post_url(url)
def search(self, term):
"""
term: URL, PMID, DOI or search string
return: url of pdf
"""
self.logger.info(f'searching: {term}')
payload = {
'sci-hub-plugin-check': '',
'request': term
}
soup = WR.get_soup(self.url, method='POST', data=payload)
pdf = soup.select_one('#pdf')
captcha = soup.select_one('#captcha')
if pdf:
pdf_url = pdf.attrs['src']
        elif captcha:
            captcha_url = captcha.attrs['src']
            print(captcha_url)
            return None
        else:
            self.logger.error('your search string is invalid, please check!')
return None
self.logger.info(f'pdf url of "{term}": {pdf_url}')
return pdf_url
def download(self, url, outdir='.', filename=None, chunk_size=512):
filename = filename or os.path.basename(url).split('#')[0]
if outdir != '.' and not os.path.exists(outdir):
os.makedirs(outdir)
outfile = os.path.join(outdir, filename)
resp = WR.get_response(url, stream=True)
length = int(resp.headers.get('Content-Length'))
self.logger.info(f'downloading pdf: {outfile} [{length/1024/1024:.2f} M]')
with open(outfile, 'wb') as out:
for chunk in resp.iter_content(chunk_size=chunk_size):
out.write(chunk)
self.logger.info(f'save file: {outfile}')
if __name__ == '__main__':
# sh = SciHub(url='https://scihub.bad')
# sh = SciHub()
# sh = SciHub(url='https://sci-hub.ai')
sh = SciHub(url='https://sci-hub.ee')
for term in range(26566462, 26566482):
pdf_url = sh.search(term)
if pdf_url:
sh.download(pdf_url)
|
sci-hub
|
/sci-hub-1.0.4.tar.gz/sci-hub-1.0.4/scihub/util/download.py
|
download.py
|
| 0.213623 | 0.095434 |
import os.path
from collections.abc import Sequence
from contextlib import contextmanager
from typing import Iterable, Optional, Mapping, Any, Iterator
from hbutils.string import plural_word
from hbutils.testing import capture_output
from tqdm import tqdm
from igm.utils import tqdm_ncols
class RenderTask(Sequence):
def __init__(self, jobs: Iterable['RenderJob']):
self.__jobs = list(jobs)
def __len__(self):
return len(self.__jobs)
def __getitem__(self, index):
return self.__jobs[index]
def run(self, silent: bool = False):
raise NotImplementedError # pragma: no cover
class DirectoryBasedTask(RenderTask):
def __init__(self, srcdir: str, dstdir: str, extras: Optional[Mapping[str, Any]] = None):
self.srcdir = srcdir
self.dstdir = dstdir
self._extras = dict(extras or {})
RenderTask.__init__(self, list(self._yield_jobs()))
def _yield_jobs(self) -> Iterator['RenderJob']:
raise NotImplementedError # pragma: no cover
def run(self, silent: bool = False):
# initialize
if not silent:
jobs = tqdm(self, ncols=tqdm_ncols(), leave=True)
pgbar = jobs
else:
jobs = self
pgbar = None
os.makedirs(self.dstdir, exist_ok=True)
# run jobs
for job in jobs:
if pgbar:
pgbar.set_description(os.path.relpath(job.srcpath, self.srcdir))
pgbar.update()
job.run(silent=silent)
# run complete
if pgbar:
pgbar.set_description('Complete.')
pgbar.update()
def __repr__(self):
return f'<{type(self).__name__} {plural_word(len(self), "job")}, srcdir: {self.srcdir!r}>'
class RenderJob:
def __init__(self, srcpath, dstpath=None):
self.srcpath = srcpath
self.dstpath = dstpath
def _run(self):
raise NotImplementedError # pragma: no cover
def run(self, silent: bool = False):
with silent_wrapper(silent):
return self._run()
@contextmanager
def silent_wrapper(silent):
"""
Overview:
A wrapper for silencing the output inside.
:param silent: Silent or not. If ``True``, the output will be captured and ignored.
"""
if silent:
with capture_output():
yield
else:
yield
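# --- Editor's usage sketch (not part of the original module) ---
# A hedged illustration of the intended extension points: a concrete job that
# copies one file, and a task that yields one such job per file in srcdir.
# The class names and behaviour here are assumptions for demonstration only.
import shutil

class CopyFileJob(RenderJob):
    def _run(self):
        os.makedirs(os.path.dirname(self.dstpath) or '.', exist_ok=True)
        shutil.copyfile(self.srcpath, self.dstpath)

class FlatCopyTask(DirectoryBasedTask):
    def _yield_jobs(self):
        for name in os.listdir(self.srcdir):
            src = os.path.join(self.srcdir, name)
            if os.path.isfile(src):
                yield CopyFileJob(src, os.path.join(self.dstdir, name))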
|
sci-igm
|
/sci_igm-0.0.2-py3-none-any.whl/igm/render/base.py
|
base.py
|
| 0.690872 | 0.077657 |
import os
import shutil
import sys
import textwrap
from contextlib import contextmanager
from functools import wraps
from types import ModuleType
from typing import Optional, Mapping, Any, ContextManager, Tuple, Iterable
from urllib.request import urlretrieve
from hbutils.reflection import dynamic_call, mount_pythonpath
from .base import RenderJob
from ..utils import tqdm_ncols, get_globals, get_url_filename, get_archive_type, get_url_ext
from ..utils.retrieve import TqdmForURLDownload, LocalTemporaryDirectory
_SCRIPT_TAG = '__script__'
def _script_append(script, append):
def __script__(dst, **kwargs):
if script is not None:
dynamic_call(script)(dst, **kwargs)
dynamic_call(append)(dst, **kwargs)
return __script__
def igm_script_build(func):
@wraps(func)
def _new_func(*args, **kwargs):
g = get_globals()
_method = func(*args, **kwargs)
g[_SCRIPT_TAG] = _script_append(g.get(_SCRIPT_TAG, None), _method)
return _method
return _new_func
def igm_script(func):
g = get_globals()
g[_SCRIPT_TAG] = _script_append(g.get(_SCRIPT_TAG, None), func)
return func
@contextmanager
def _download_to_temp(url) -> ContextManager[Tuple[str, Optional[str]]]:
filename = get_url_filename(url)
with LocalTemporaryDirectory() as tdir:
dstfile = os.path.join(tdir, filename)
with TqdmForURLDownload(unit='B', unit_scale=True, unit_divisor=1024, miniters=1,
ncols=tqdm_ncols(), leave=True) as t:
local_filename, headers = urlretrieve(url, dstfile, reporthook=t.update_to, data=None)
t.total = t.n
yield dstfile, headers.get('Content-Type', None)
@igm_script_build
def download(url, *, subdir='.', auto_unpack: bool = True):
def _download_file(dst):
path, fname = os.path.split(dst)
with _download_to_temp(url) as (tfile, content_type):
_archive_type = get_archive_type(get_url_filename(url, content_type), content_type)
if auto_unpack and _archive_type:
os.makedirs(os.path.normpath(os.path.join(dst, '..')), exist_ok=True)
with LocalTemporaryDirectory() as tdir:
archive_dir = os.path.join(tdir, 'archive')
os.makedirs(archive_dir, exist_ok=True)
shutil.unpack_archive(tfile, archive_dir, _archive_type)
shutil.move(os.path.normpath(os.path.join(archive_dir, subdir)), dst)
else:
_ext = get_url_ext(url, content_type)
if _ext and not os.path.normcase(fname).endswith(_ext):
fname = f'{fname}{_ext}'
shutil.move(tfile, os.path.join(path, fname))
return _download_file
class _ExtrasModule(ModuleType):
def __init__(self, extras: Mapping[str, Any]) -> None:
ModuleType.__init__(self, 'extras', textwrap.dedent("""
This is a fake module for extra items.
"""))
self.__extras = extras
self.__all__ = sorted(extras.keys())
def __getattr__(self, item):
if item in self.__extras:
self.__dict__[item] = self.__extras[item]
return self.__extras[item]
else:
raise AttributeError(f'module {self.__name__!r} has no attribute {item!r}')
def __dir__(self) -> Iterable[str]:
return self.__all__
class ScriptJob(RenderJob):
def __init__(self, srcpath: str, dstpath: str, extras: Optional[Mapping[str, Any]] = None):
RenderJob.__init__(self, srcpath, dstpath)
self.__extras = dict(extras or {})
def _run(self):
with mount_pythonpath():
sys.modules['extras'] = _ExtrasModule(self.__extras)
meta = {}
with open(self.srcpath, 'r', encoding='utf-8') as f:
exec(f.read(), meta)
script = meta.get(_SCRIPT_TAG, None)
if script:
abs_dstpath = os.path.abspath(self.dstpath)
dstdir, dstfile = os.path.split(abs_dstpath)
curdir = os.path.abspath(os.curdir)
try:
rel_dstpath = os.path.relpath(abs_dstpath, start=dstdir)
if dstdir:
os.makedirs(dstdir, exist_ok=True)
os.chdir(dstdir)
# noinspection PyCallingNonCallable
script(rel_dstpath, **self.__extras)
finally:
os.chdir(curdir)
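# --- Editor's usage sketch (not part of the original module) ---
# What a template script file (for example a hypothetical '.fetch_data.py' in a
# template directory) might contain when it uses the helpers above; the import
# path, URL and callback are all assumptions for illustration only.
#
#     from igm.render.script import download, igm_script
#
#     # queue a download that is unpacked into the rendered destination
#     download('https://example.com/dataset.zip')
#
#     @igm_script
#     def report(dst):
#         print(f'rendered into {dst}')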
|
sci-igm
|
/sci_igm-0.0.2-py3-none-any.whl/igm/render/script.py
|
script.py
|
| 0.474388 | 0.075176 |
import builtins
import os
import warnings
from functools import partial
from typing import List, Dict, Any, Optional, Mapping
from hbutils.system import copy, is_binary_file
from jinja2 import Environment
from potc import transobj as _potc_transobj
from potc.fixture.imports import ImportStatement
from .archive import ArchiveUnpackJob
from .base import RenderJob, DirectoryBasedTask
from .imports import PyImport
from .script import ScriptJob
from ..utils import get_archive_type, splitext
class NotTemplateFile(Exception):
pass
class IGMRenderTask(DirectoryBasedTask):
def __init__(self, srcdir: str, dstdir: str, extras: Optional[Mapping[str, Any]] = None):
DirectoryBasedTask.__init__(self, srcdir, dstdir, extras)
def _load_job_by_file(self, relfile: str):
directory, filename = os.path.split(os.path.normcase(relfile))
if filename.startswith('.') and filename.endswith('.py'): # script file or template
if filename.startswith('..'): # ..xxx.py --> .xxx.py (template)
return get_common_job(
os.path.join(self.srcdir, relfile),
os.path.join(self.dstdir, directory, filename[1:]),
self._extras
)
else: # .xxx.py --> xxx (script)
body, _ = splitext(filename)
return ScriptJob(
os.path.join(self.srcdir, relfile),
os.path.join(self.dstdir, directory, body[1:]),
self._extras
)
elif filename.startswith('.') and get_archive_type(filename): # unpack archive file
body, _ = splitext(filename)
return ArchiveUnpackJob( # .xxx.zip --> xxx (unzip)
os.path.join(self.srcdir, relfile),
os.path.join(self.dstdir, directory, body[1:]),
self._extras
)
else: # common cases
return get_common_job( # xxx.yy --> xxx.yy (template/binary copy)
os.path.join(self.srcdir, relfile),
os.path.join(self.dstdir, relfile),
self._extras
)
def _yield_jobs(self):
for curdir, subdirs, files in os.walk(self.srcdir):
cur_reldir = os.path.relpath(curdir, self.srcdir)
for file in files:
curfile = os.path.join(cur_reldir, file)
try:
yield self._load_job_by_file(curfile)
except NotTemplateFile: # pragma: no cover
pass
def get_common_job(src, dst, extras):
if is_binary_file(src):
return CopyJob(src, dst, extras)
else:
return TemplateJob(src, dst, extras)
class TemplateImportWarning(Warning):
pass
class TemplateJob(RenderJob):
def __init__(self, srcpath: str, dstpath: str, extras: Optional[Mapping[str, Any]] = None):
RenderJob.__init__(self, srcpath, dstpath)
self._imps: List[ImportStatement] = []
self._builtins = {name: getattr(builtins, name) for name in dir(builtins) if not (name.startswith('_'))}
self._extras = dict(extras or {})
self._environ = self._create_environ()
def _yield_extra_funcs(self):
for name, func in self._extras.items():
if callable(func):
yield name, func
def _create_environ(self):
environ = Environment(autoescape=False)
for name, value in self._builtins.items():
# register function filters
if 'a' <= name[0] <= 'z' and name not in environ.filters:
environ.filters[name] = value
# register type tests
if 'a' <= name[0] <= 'z' and isinstance(value, type) and name not in environ.tests:
environ.tests[name] = partial(lambda y, x: isinstance(x, y), value)
environ.filters['potc'] = self._transobj
environ.tests['None'] = lambda x: x is None
for name, func in self._yield_extra_funcs():
environ.filters[name] = func
environ.tests[name] = func
return environ
def _imports(self) -> List[str]:
return sorted(map(str, self._imps))
def _transobj(self, x) -> str:
result = _potc_transobj(x)
if result.imports:
for _import in result.imports:
self._imps.append(_import)
return result.code
def _parameters(self) -> Dict[str, Any]:
from igm.env import sys, env, user
return {
**self._builtins,
**self._extras,
'sys': sys, 'env': env, 'user': user,
'potc': self._transobj, 'py': PyImport(),
}
def _run(self):
with open(self.srcpath, 'r') as rf:
template = self._environ.from_string(rf.read())
dstdir, _ = os.path.split(self.dstpath)
if dstdir:
os.makedirs(dstdir, exist_ok=True)
with open(self.dstpath, 'w+') as wf:
result = template.render(**self._parameters())
wf.write(result)
unimports = []
for imp in self._imports():
if imp not in result:
unimports.append(imp)
if unimports:
warnings.warn(TemplateImportWarning(
                f'These import statements are suggested to be added in template {self.srcpath!r}:{os.linesep}'
f'{os.linesep.join(unimports)}'
))
class CopyJob(RenderJob):
def __init__(self, srcpath: str, dstpath: str, extras: Optional[Mapping[str, Any]] = None):
RenderJob.__init__(self, srcpath, dstpath)
_ = extras
def _run(self):
dstdir, _ = os.path.split(self.dstpath)
if dstdir:
os.makedirs(dstdir, exist_ok=True)
copy(self.srcpath, self.dstpath)
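# --- Editor's note (not part of the original module) ---
# Filename conventions implemented by _load_job_by_file, plus a hedged example
# of driving a render task; 'template/' and 'out/' are hypothetical paths.
#
#   template/..config.py  -> out/.config.py  (rendered as a template)
#   template/.setup.py    -> out/setup       (executed as a script job)
#   template/.data.zip    -> out/data        (unpacked archive)
#   template/README.md    -> out/README.md   (template render, or copy if binary)
#
#   task = IGMRenderTask('template', 'out', extras={'project_name': 'demo'})
#   task.run(silent=True)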
|
sci-igm
|
/sci_igm-0.0.2-py3-none-any.whl/igm/render/template.py
|
template.py
|
| 0.588416 | 0.075312 |
import mimetypes
import os
import shutil
from typing import Optional
try:
import rarfile
except ImportError: # pragma: no cover
rarfile = None
class RARExtractionNotSupported(Exception):
pass
def _rar_extract(filename, extract_dir):
if rarfile is None:
raise RARExtractionNotSupported('RAR file extraction not supported, '
'please install \'rarfile\' package with your pip.')
with rarfile.RarFile(filename) as rf:
rf.extractall(path=extract_dir)
try:
import py7zr
except ImportError: # pragma: no cover
py7zr = None
class SevenZipExtractionNotSupported(Exception):
pass
def _7z_extract(filename, extract_dir):
if py7zr is None:
raise SevenZipExtractionNotSupported('7z file extraction not supported, '
'please install \'py7zr\' package with your pip.')
with py7zr.SevenZipFile(filename) as rf:
rf.extractall(path=extract_dir)
shutil.register_unpack_format('rar', ['.rar'], _rar_extract, [], 'WinRAR file')
shutil.register_unpack_format('7z', ['.7z'], _7z_extract, [], '7z file')
def unpack_archive(filename, dstpath, fmt: Optional[str] = None):
"""
Overview:
        Extract from all kinds of archive files, including ``.zip``, ``.tar``, ``.tar.gz``, ``.tar.xz``, ``.tar.bz2``, \
        ``.rar`` (requires the ``rarfile`` package) and ``.7z`` (requires the ``py7zr`` package).
:param filename: Filename of the archive file.
:param dstpath: Destination path of the extracted file.
:param fmt: Format of the file, default is ``None`` which means the format will be auto-detected with ``filename``.
:return: Destination path of this extraction.
.. note::
Password is not supported at present.
"""
shutil.unpack_archive(filename, dstpath, fmt)
return dstpath
def get_archive_type(filename: str, content_type: Optional[str] = None) -> Optional[str]:
"""
Overview:
Get archive file type of the given ``filename`` and ``content_type``.
:param filename: Filename.
:param content_type: Content-Type information from remote.
    :return: Archive format, which can be used with the :func:`shutil.unpack_archive` method.
"""
if content_type:
ext_guess = mimetypes.guess_extension(content_type)
if ext_guess:
for name, exts, _ in shutil.get_unpack_formats():
if ext_guess in exts:
return name
filename = os.path.normcase(filename)
for name, exts, _ in shutil.get_unpack_formats():
for ext in exts:
if filename.endswith(ext):
return name
return None
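# --- Editor's usage sketch (not part of the original module) ---
# Quick, self-contained demonstration of archive-type detection; the filenames
# are assumptions for illustration only. unpack_archive would really extract,
# so it is only shown commented out.
if __name__ == '__main__':
    for name in ('data.tar.gz', 'data.zip', 'data.rar', 'data.7z', 'data.txt'):
        print(name, '->', get_archive_type(name))
    # unpack_archive('data.zip', 'extracted')  # would unpack into ./extracted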
|
sci-igm
|
/sci_igm-0.0.2-py3-none-any.whl/igm/utils/archive.py
|
archive.py
|
| 0.601477 | 0.182826 |
import builtins
import os
from functools import partial
from typing import Optional, Callable, Mapping, Any
from hbutils.reflection import mount_pythonpath
from hbutils.system import remove
from .inquire import with_user_inquire, inquire_call
from ..render import IGMRenderTask
from ..utils import normpath
_DEFAULT_TEMPLATE_DIR = 'template'
class IGMTemplate:
def __init__(self, name, version, description,
path, template_dir=_DEFAULT_TEMPLATE_DIR,
inquire: Optional[Callable[[], Mapping]] = None,
extras: Optional[Mapping[str, Any]] = None):
self.__name = name
self.__version = version
self.__description = description
self.__path = normpath(path)
self.__template_dir = normpath(self.__path, template_dir)
self.__inquire = (inquire or (lambda: {}))
self.__extras = dict(extras or {})
@property
def name(self):
return self.__name
@property
def version(self):
return self.__version
@property
def description(self) -> str:
return self.__description
@property
def path(self) -> str:
return self.__path
@property
def template_dir(self) -> str:
return self.__template_dir
def print_info(self, file=None):
# print is replaced here to print all the output to ``file``
if file is not None:
# noinspection PyShadowingBuiltins
print = partial(builtins.print, file=file)
else:
# noinspection PyShadowingBuiltins
print = builtins.print
print(f'{self.__name}, v{self.__version}')
print(f'{self.__description}')
print(f'Located at {self.__path!r}.')
def __repr__(self) -> str:
return f'<{type(self).__name__} {self.__name}, v{self.__version}>'
def run(self, dstdir: str, silent: bool = False) -> bool:
if os.path.exists(dstdir):
            raise FileExistsError(f'Path {dstdir!r} already exists.')
ok, inquire_data = inquire_call(self.__inquire)
if ok:
try:
with with_user_inquire(inquire_data), mount_pythonpath(self.__path):
task = IGMRenderTask(
self.__template_dir, dstdir,
{
'template': self,
'project_dir': os.path.abspath(dstdir),
**self.__extras
}
)
task.run(silent=silent)
return True
except BaseException:
if os.path.exists(dstdir):
remove(dstdir)
raise
else:
return False
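# --- Editor's usage sketch (not part of the original module) ---
# How a template package might instantiate IGMTemplate; the name, version,
# inquire function and paths are assumptions for illustration only.
#
#   template = IGMTemplate(
#       name='demo-template',
#       version='0.1.0',
#       description='A demonstration template.',
#       path=os.path.dirname(__file__),   # folder that contains a 'template/' subdirectory
#       inquire=lambda: {'author': 'someone'},
#   )
#   template.print_info()
#   template.run('new-project')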
|
sci-igm
|
/sci_igm-0.0.2-py3-none-any.whl/igm/conf/template.py
|
template.py
|
| 0.572842 | 0.079175 |
import builtins
import datetime
import os.path
import shlex
import subprocess
import sys
from contextlib import contextmanager
from functools import partial
from typing import Union, List, Any, Mapping, Optional, ContextManager, Dict
from hbutils.reflection import mount_pythonpath
from hbutils.string import plural_word
from ..utils import get_globals
class IGMScript:
def describe(self) -> str:
raise NotImplementedError # pragma: no cover
def run(self, pfunc=None):
self._run_with_wrapper(pfunc)
def _run_with_wrapper(self, pfunc=None, prefix=None):
pfunc = pfunc or partial(builtins.print, flush=True)
title = self.describe() if not prefix else f'{prefix} {self.describe()}'
pfunc(title)
self._run()
def _run(self):
raise NotImplementedError # pragma: no cover
class IGMFuncScript(IGMScript):
def __init__(self, func):
self.func = func
def describe(self) -> str:
if getattr(self.func, '__doc__', None) and self.func.__doc__.strip():
return self.func.__doc__.strip()
else:
return f'Call function {self.func.__name__!r}.'
def _run(self):
self.func()
def _trans_command(command: Union[List[str], str]) -> List[str]:
if isinstance(command, str):
return shlex.split(command)
else:
return command
def _repr_command(command: Union[List[str], str]) -> str:
return ' '.join(map(shlex.quote, _trans_command(command)))
class IGMCommandScript(IGMScript):
def __init__(self, command: Union[List[str], str]):
self.args = _trans_command(command)
def _visual_command(self) -> List[str]:
return self.args
def describe(self) -> str:
return f'Command - {_repr_command(self._visual_command())}'
def _run(self):
process = subprocess.run(self.args, stdin=sys.stdin, stderr=sys.stderr, stdout=sys.stdout)
process.check_returncode()
class IGMPythonScript(IGMCommandScript):
def __init__(self, command: Union[List[str], str]):
self._python_command = _trans_command(command)
IGMCommandScript.__init__(self, [sys.executable, *self._python_command])
def _visual_command(self) -> List[str]:
return ['python', *self._python_command]
class IGMPipScript(IGMPythonScript):
def __init__(self, command: Union[List[str], str]):
self._pip_command = _trans_command(command)
IGMPythonScript.__init__(self, ['-m', 'pip', *self._pip_command])
def _visual_command(self) -> List[str]:
return ['pip', *self._pip_command]
class IGMScriptSet(IGMScript):
def __init__(self, *scripts: 'IGMScript', desc: Optional[str] = None):
self.scripts = scripts
self.desc = desc
def describe(self) -> str:
return self.desc or f'Run a set of {plural_word(len(self.scripts), "scripts")} in order.'
def _run_with_wrapper(self, pfunc=None, prefix=None):
pfunc = pfunc or partial(builtins.print, flush=True)
title = self.describe() if not prefix else f'{prefix} {self.describe()}'
pfunc(title)
try:
for i, script in enumerate(self.scripts, start=1):
new_prefix = f'{prefix}{i}.' if prefix else f'{i}.'
script._run_with_wrapper(pfunc, new_prefix)
finally:
print(flush=True)
def _run(self):
raise NotImplementedError # pragma: no cover
def _to_script(v):
if isinstance(v, IGMScript):
return v
elif isinstance(v, str):
return IGMCommandScript(v)
elif callable(v):
return IGMFuncScript(v)
elif isinstance(v, (list, tuple)):
if all([isinstance(x, str) for x in v]):
return IGMCommandScript(v)
else:
return IGMScriptSet(*map(_to_script, v))
else: # pragma: no cover
raise TypeError(f'Unknown script type - {v!r}.')
def cpy(command: Union[List[str], str], *cmd: str) -> IGMPythonScript:
return IGMPythonScript([command, *cmd] if cmd else command)
def cpip(command: Union[List[str], str], *cmd: str) -> IGMPipScript:
return IGMPipScript([command, *cmd] if cmd else command)
def cmds(description: str, v: List) -> IGMScriptSet:
return IGMScriptSet(*map(_to_script, v), desc=description)
def _to_timestamp(v) -> float:
if isinstance(v, str):
return datetime.datetime.fromisoformat(v).timestamp()
elif isinstance(v, (int, float)):
return float(v)
else:
raise TypeError(f'Invalid time type - {v!r}.')
def _timestamp_repr(v) -> str:
_local_timezone = datetime.datetime.now(datetime.timezone.utc).astimezone().tzinfo
return datetime.datetime.fromtimestamp(_to_timestamp(v), _local_timezone).isoformat()
class IGMProject:
def __init__(self, name, version, template_name, template_version, created_at, params, scripts):
self.name = name
self.version = version
self.template_name = template_name
self.template_version = template_version
self.created_at = _to_timestamp(created_at)
self.params = dict(params or {})
self.scripts: Dict[Optional[str], IGMScript] = \
{name: _to_script(s) for name, s in (scripts or {}).items()}
@property
def created_at_repr(self) -> str:
return _timestamp_repr(self.created_at)
_IGM_PROJECT_TAG = '__igm_project__'
def igm_project(
name,
version,
template_name,
template_version,
created_at,
params: Optional[Mapping[str, Any]] = None,
scripts: Optional[Mapping[Optional[str], Any]] = None,
):
g = get_globals()
proj = IGMProject(
name, version,
template_name, template_version, created_at,
params, scripts,
)
g[_IGM_PROJECT_TAG] = proj
return proj
class NotIGMProject(Exception):
pass
@contextmanager
def load_igm_project(directory, meta_filename='igmeta.py') -> ContextManager[IGMProject]:
if not os.path.exists(directory):
raise FileNotFoundError(directory)
if os.path.isfile(directory):
proj_dir, metafile = os.path.split(os.path.abspath(directory))
else:
proj_dir, metafile = os.path.abspath(directory), meta_filename
_globals = {}
with mount_pythonpath(proj_dir):
with open(os.path.join(proj_dir, metafile), 'r') as f:
exec(f.read(), _globals)
_project = _globals.get(_IGM_PROJECT_TAG, None)
if isinstance(_project, IGMProject):
yield _project
else:
raise NotIGMProject(directory)
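# --- Editor's usage sketch (not part of the original module) ---
# What a generated 'igmeta.py' might contain; the import path and every value
# below are assumptions for illustration only.
#
#   from igm.conf import igm_project, cpy, cpip, cmds
#
#   igm_project(
#       name='demo',
#       version='0.1.0',
#       template_name='demo-template',
#       template_version='0.1.0',
#       created_at='2023-01-01T00:00:00',
#       params={'author': 'someone'},
#       scripts={
#           None: cmds('Install dependencies and run tests', [
#               cpip('install', '-r', 'requirements.txt'),
#               cpy('-m', 'pytest'),
#           ]),
#           'run': cpy('main.py'),
#       },
#   )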
|
sci-igm
|
/sci_igm-0.0.2-py3-none-any.whl/igm/conf/project.py
|
project.py
|
| 0.570212 | 0.134378 |
import math
from typing import Optional
from .percentage import Percentage
from .size import SizeScale
class UsedPercentage(Percentage):
pass
class FreePercentage(Percentage):
pass
class AvailPercentage(Percentage):
pass
class MemoryStatus:
def __init__(self, total, used, free=None, avail=None):
self.__total = SizeScale(total)
self.__used = SizeScale(used)
self.__free = SizeScale(free) if free is not None else \
SizeScale(self.__total.bytes - self.__used.bytes)
self.__avail = SizeScale(avail) if avail is not None else None
@property
def total(self) -> SizeScale:
return self.__total
@property
def used(self) -> SizeScale:
return self.__used
@property
def free(self) -> SizeScale:
return self.__free
@property
def used_percentage(self) -> Optional[UsedPercentage]:
try:
ratio = self.used.bytes / self.total.bytes
if math.isnan(ratio):
raise ZeroDivisionError
except ZeroDivisionError:
return None
else:
return UsedPercentage(ratio)
@property
def free_percentage(self) -> Optional[FreePercentage]:
try:
ratio = self.free.bytes / self.total.bytes
if math.isnan(ratio):
raise ZeroDivisionError
except ZeroDivisionError:
return None
else:
            return FreePercentage(ratio)
@property
def avail(self) -> Optional[SizeScale]:
return self.__avail
@property
def avail_percentage(self) -> Optional[AvailPercentage]:
if self.__avail is not None:
try:
ratio = self.avail.bytes / self.total.bytes
if math.isnan(ratio):
raise ZeroDivisionError
except ZeroDivisionError:
return None
else:
return AvailPercentage(ratio)
else:
return None
def __bool__(self):
return bool(self.total)
def __repr__(self):
if not self:
return f'<{type(self).__name__} total: {self.total}>'
else:
if self.__avail is not None:
return f'<{type(self).__name__} total: {self.total}, ' \
f'used: {self.used} ({self.used_percentage}), ' \
f'free: {self.free} ({self.free_percentage}), ' \
f'avail: {self.avail} ({self.avail_percentage})>'
else:
return f'<{type(self).__name__} total: {self.total}, ' \
f'used: {self.used} ({self.used_percentage}), ' \
f'free: {self.free} ({self.free_percentage})>'
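# Illustrative usage sketch (editor's addition, not part of the original module).
# Values are hypothetical (16 GiB total, 6 GiB used, 9 GiB available); SizeScale is
# assumed to accept a raw byte count, as the constructor above already relies on.
if __name__ == '__main__':
    status = MemoryStatus(
        total=16 * 1024 ** 3,
        used=6 * 1024 ** 3,
        avail=9 * 1024 ** 3,
    )
    print(status)                  # repr with total/used/free/avail and percentages
    print(status.used_percentage)  # UsedPercentage of roughly 37.5%
    empty = MemoryStatus(total=0, used=0)
    print(empty.used_percentage)   # None -- a zero total is reported as None, not an error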
|
sci-igm
|
/sci_igm-0.0.2-py3-none-any.whl/igm/model/memory.py
|
memory.py
|
| 0.816004 | 0.132234 |
from typing import Callable, Union
from pkg_resources import parse_version
from .comparable import Comparable
_Version = type(parse_version('0.0.1'))
class VersionInfo(Comparable):
"""
Overview:
Class for wrapping version information.
    .. warning::
        This class is not immutable, because it is designed for dynamic comparison and boolean checks.
        Please take care when using it.
"""
def __init__(self, v: Union['VersionInfo', _Version, Callable, str, tuple, int]):
"""
Constructor of :class:`VersionInfo`.
:param v: Version information, can be a :class:`VersionInfo`, version, function, str, \
tuple or integer.
"""
if isinstance(v, VersionInfo):
self._version, self._func = v._version, v._func
elif isinstance(v, _Version) or v is None:
self._version, self._func = v, None
elif callable(v):
self._version, self._func = None, v
elif isinstance(v, str):
VersionInfo.__init__(self, parse_version(v))
elif isinstance(v, tuple):
VersionInfo.__init__(self, '.'.join(map(str, v)))
elif isinstance(v, int):
VersionInfo.__init__(self, str(v))
else:
raise TypeError(f'Unknown version type - {repr(v)}.')
@property
def _actual_version(self):
if self._func is None:
return self._version
else:
return VersionInfo(self._func())._version
def _value(self):
return self._actual_version
def _cmp_precondition(self, other):
return Comparable._cmp_precondition(self, other) and (self and other)
def __eq__(self, other):
return self._actual_version == VersionInfo(other)._actual_version
def __bool__(self):
return bool(self._actual_version)
def __str__(self):
return str(self._actual_version) if self._actual_version else ''
def __repr__(self):
return f'<{type(self).__name__} {self._actual_version}>'
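# Illustrative usage sketch (editor's addition, not part of the original module).
# VersionInfo wraps strings, tuples, ints, or a callable that is re-evaluated on every
# comparison; the ordering operators are assumed to be provided by the Comparable base
# class via _value().
if __name__ == '__main__':
    a = VersionInfo('1.2.3')
    b = VersionInfo((1, 2, 10))
    print(a == '1.2.3')             # True -- the right-hand side is wrapped automatically
    print(a < b)                    # True under PEP 440 ordering (1.2.3 < 1.2.10)
    lazy = VersionInfo(lambda: '2.0')
    print(b < lazy)                 # the callable is evaluated at comparison time
    print(bool(VersionInfo(None)))  # False -- an unset version is falsy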
|
sci-igm
|
/sci_igm-0.0.2-py3-none-any.whl/igm/model/version.py
|
version.py
|
| 0.851089 | 0.143578 |
import builtins
import itertools
import os
import sys
import traceback
from functools import wraps, partial
from typing import Optional, IO, Callable
import click
from click.exceptions import ClickException
CONTEXT_SETTINGS = dict(
help_option_names=['-h', '--help']
)
class ClickWarningException(ClickException):
def show(self, file: Optional[IO] = None) -> None:
click.secho(self.format_message(), fg='yellow', file=sys.stderr)
class ClickErrorException(ClickException):
def show(self, file: Optional[IO] = None) -> None:
click.secho(self.format_message(), fg='red', file=sys.stderr)
# noinspection PyShadowingBuiltins
def print_exception(err: BaseException, print: Optional[Callable] = None):
# noinspection PyShadowingBuiltins
print = print or builtins.print
lines = list(itertools.chain(*map(
lambda x: x.splitlines(keepends=False),
traceback.format_tb(err.__traceback__)
)))
if lines:
print('Traceback (most recent call last):')
print(os.linesep.join(lines))
if len(err.args) == 0:
print(f'{type(err).__name__}')
elif len(err.args) == 1:
print(f'{type(err).__name__}: {err.args[0]}')
else:
print(f'{type(err).__name__}: {err.args}')
class KeyboardInterrupted(ClickWarningException):
exit_code = 0x7
def __init__(self, msg=None):
ClickWarningException.__init__(self, msg or 'Interrupted.')
def command_wrap():
def _decorator(func):
@wraps(func)
def _new_func(*args, **kwargs):
try:
return func(*args, **kwargs)
except ClickException:
raise
except KeyboardInterrupt:
raise KeyboardInterrupted
except BaseException as err:
click.secho('Unexpected error found when running IGM CLI!', fg='red', file=sys.stderr)
print_exception(err, partial(click.secho, fg='red', file=sys.stderr))
sys.exit(0x1)
return _new_func
return _decorator
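# Illustrative usage sketch (editor's addition, not part of the original module).
# A hypothetical click command wrapped with command_wrap(): ClickExceptions propagate
# unchanged, Ctrl-C becomes KeyboardInterrupted (exit code 0x7), and any other error is
# printed in red with its traceback before exiting with code 0x1.
@click.command(context_settings=CONTEXT_SETTINGS)
@click.option('--fail', is_flag=True, help='Raise an error to demonstrate the wrapper.')
@command_wrap()
def _demo(fail):
    if fail:
        raise RuntimeError('something went wrong')
    click.echo('ok')

if __name__ == '__main__':
    _demo()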
|
sci-igm
|
/sci_igm-0.0.2-py3-none-any.whl/igm/entry/base.py
|
base.py
|
| 0.416678 | 0.069542 |
import hashlib
import sys
import logging
"""
``logging_filters``
-------------------
Python uses `filters`_ to add contextual information to its
:mod:`~python:logging` facility.
Filters defined below are attached to :data:`settings.LOGGING` and
also :class:`~.middleware.LogSetupMiddleware`.
.. _filters:
http://docs.python.org/2.6/library/logging.html#\
adding-contextual-information-to-your-logging-output
"""
class RequestFilter(object):
"""
Filter that adds information about a *request* to the logging record.
:param request:
:type request: :class:`~django.http.HttpRequest`
Extra information can be substituted in the formatter string:
``http_user_agent``
The user agent string, provided by the client.
``path_info``
The requested HTTP path.
``remote_addr``
The remote IP address.
``request_method``
The HTTP request method (*e.g.* GET, POST, PUT, DELETE, *etc.*)
``server_protocol``
The server protocol (*e.g.* HTTP, HTTPS, *etc.*)
``username``
The username for the logged-in user.
"""
def __init__(self, request=None):
"""Saves *request* (a WSGIRequest object) for later."""
self.request = request
def filter(self, record):
"""
Adds information from the request to the logging *record*.
If certain information cannot be extracted from ``self.request``,
a hyphen ``'-'`` is substituted as a placeholder.
"""
request = self.request
# Basic
record.request_method = getattr(request, 'method', '-')
record.path_info = getattr(request, 'path_info', '-')
# User
user = getattr(request, 'user', None)
if user and not user.is_anonymous():
# Hash it
record.username = hashlib.sha1(user.username.encode()).hexdigest()[:8]
record.userid = str(user.id)
else:
record.username = '---'
record.userid = '-'
# Headers
META = getattr(request, 'META', {})
record.remote_addr = META.get('REMOTE_ADDR', '-')
record.server_protocol = META.get('SERVER_PROTOCOL', '-')
record.http_user_agent = META.get('HTTP_USER_AGENT', '-')
return True
import weakref
weakref_type = type(weakref.ref(lambda: None))
def deref(x):
return x() if x and type(x) == weakref_type else x
class LogSetupMiddleware(object):
"""
Adds :class:`.logging_filters.RequestFilter` to every request.
If *root* is a module name, only look at loggers inside that
logging subtree.
This filter adds useful information about `HttpRequest`\ s to log
entries. See :class:`.logging_filters.RequestFilter` for details
about which formatter substitutions are added.
Automatically detects which handlers and logger need
RequestFilter installed, by looking for an unbound RequestFilter
attached to a handler or logger. To configure Django, in your
:envvar:`DJANGO_SETTINGS_MODULE`::
LOGGING = {
'filters': {
# Add an unbound RequestFilter.
'request': {
'()': 'django_requestlogging.logging_filters.RequestFilter',
},
},
'handlers': {
'console': {
'class': 'logging.StreamHandler',
'filters': ['request'],
},
},
'loggers': {
'myapp': {
# Add your handlers that have the unbound request filter
'handlers': ['console'],
# Optionally, add the unbound request filter to your
# application.
'filters': ['request'],
},
},
}
"""
FILTER = RequestFilter
def __init__(self, root=''):
self.root = root
def find_loggers(self):
"""
Returns a :class:`dict` of names and the associated loggers.
"""
# Extract the full logger tree from Logger.manager.loggerDict
# that are under ``self.root``.
result = {}
prefix = self.root + '.'
for name, logger in logging.Logger.manager.loggerDict.items():
if self.root and not name.startswith(prefix):
# Does not fall under self.root
continue
result[name] = logger
# Add the self.root logger
result[self.root] = logging.getLogger(self.root)
return result
def find_handlers(self):
"""
Returns a list of handlers.
"""
return list(logging._handlerList)
def _find_filterer_with_filter(self, filterers, filter_cls):
"""
Returns a :class:`dict` of filterers mapped to a list of filters.
*filterers* should be a list of filterers.
*filter_cls* should be a logging filter that should be matched.
"""
result = {}
for logger in map(deref, filterers):
filters = [f for f in map(deref, getattr(logger, 'filters', []))
if isinstance(f, filter_cls)]
if filters:
result[logger] = filters
return result
def find_loggers_with_filter(self, filter_cls):
"""
Returns a :class:`dict` of loggers mapped to a list of filters.
Looks for instances of *filter_cls* attached to each logger.
If the logger has at least one, it is included in the result.
"""
return self._find_filterer_with_filter(self.find_loggers().values(),
filter_cls)
def find_handlers_with_filter(self, filter_cls):
"""
Returns a :class:`dict` of handlers mapped to a list of filters.
Looks for instances of *filter_cls* attached to each handler.
If the handler has at least one, it is included in the result.
"""
return self._find_filterer_with_filter(self.find_handlers(),
filter_cls)
def add_filter(self, f, filter_cls=None):
"""Add filter *f* to any loggers that have *filter_cls* filters."""
if filter_cls is None:
filter_cls = type(f)
for logger in self.find_loggers_with_filter(filter_cls):
logger.addFilter(f)
for handler in self.find_handlers_with_filter(filter_cls):
handler.addFilter(f)
def remove_filter(self, f):
"""Remove filter *f* from all loggers."""
for logger in self.find_loggers_with_filter(type(f)):
logger.removeFilter(f)
for handler in self.find_handlers_with_filter(type(f)):
handler.removeFilter(f)
def process_request(self, request):
"""Adds a filter, bound to *request*, to the appropriate loggers."""
request.logging_filter = RequestFilter(request)
self.add_filter(request.logging_filter)
def process_response(self, request, response):
"""Removes this *request*'s filter from all loggers."""
f = getattr(request, 'logging_filter', None)
if f:
self.remove_filter(f)
return response
def process_exception(self, request, exception):
"""Removes this *request*'s filter from all loggers."""
f = getattr(request, 'logging_filter', None)
if f:
self.remove_filter(f)
class LoggingConfiguration(object):
def __init__(self, project='NONE'):
self.django_log_config = {
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'console': {
'format': '[DJANGO] - [' + project + '] - [%(asctime)s][%(levelname)s][%(name)s.%(funcName)s:%(lineno)d]'
'[%(username)s][%(userid)s] - %(message)s',
},
},
'filters': {
# Add an unbound RequestFilter.
'request': {
'()': 'scilogging.logging.RequestFilter',
},
},
'handlers': {
'sentry': {
'level': 'ERROR', # To capture more than ERROR, change to WARNING, INFO, etc.
'class': 'raven.contrib.django.raven_compat.handlers.SentryHandler',
'tags': {'custom-tag': 'x'},
},
'console': {
'level': 'DEBUG',
'class': 'logging.StreamHandler',
'formatter': 'console',
'stream': sys.stdout,
'filters': ['request'],
},
},
'root': {
'handlers': ['console'],
'level': 'DEBUG',
'filters': ['request'],
},
'loggers': {
'django': {
'handlers': ['console'],
'level': 'WARNING',
'propagate': True,
},
'raven': {
'level': 'WARNING',
'handlers': ['console'],
'propagate': False,
},
'sentry.errors': {
'level': 'WARNING',
'handlers': ['console'],
'propagate': False,
},
},
}
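# Illustrative configuration sketch (editor's addition, not part of the original module).
# In a hypothetical Django settings module, the dict built above can be plugged in
# directly, with LogSetupMiddleware added so every record carries the request fields:
#
#     from scilogging.logging import LoggingConfiguration
#
#     LOGGING = LoggingConfiguration(project='MYPROJECT').django_log_config
#     MIDDLEWARE_CLASSES = [
#         'scilogging.logging.LogSetupMiddleware',
#         # ... remaining middleware ...
#     ]
#
# Note that the 'sentry' handler above requires the ``raven`` package; remove that
# handler entry if Sentry is not used.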
|
sci-logging
|
/sci-logging-0.2.tar.gz/sci-logging-0.2/scilogging/logging.py
|
logging.py
|
| 0.709724 | 0.268462 |
"Utility functions for handling buffers"
import sys as _sys
import numpy as _numpy
def _ord(byte):
r"""Convert a byte to an integer.
>>> buffer = b'\x00\x01\x02'
>>> [_ord(b) for b in buffer]
[0, 1, 2]
"""
if _sys.version_info >= (3,):
return byte
else:
return ord(byte)
def hex_bytes(buffer, spaces=None):
r"""Pretty-printing for binary buffers.
>>> hex_bytes(b'\x00\x01\x02\x03\x04')
'0001020304'
>>> hex_bytes(b'\x00\x01\x02\x03\x04', spaces=1)
'00 01 02 03 04'
>>> hex_bytes(b'\x00\x01\x02\x03\x04', spaces=2)
'0001 0203 04'
>>> hex_bytes(b'\x00\x01\x02\x03\x04\x05\x06', spaces=2)
'0001 0203 0405 06'
>>> hex_bytes(b'\x00\x01\x02\x03\x04\x05\x06', spaces=3)
'000102 030405 06'
"""
hex_bytes = ['{:02x}'.format(_ord(x)) for x in buffer]
if spaces is None:
return ''.join(hex_bytes)
    elif spaces == 1:
return ' '.join(hex_bytes)
for i in range(len(hex_bytes)//spaces):
hex_bytes.insert((spaces+1)*(i+1)-1, ' ')
return ''.join(hex_bytes)
def assert_null(buffer, strict=True):
r"""Ensure an input buffer is entirely zero.
>>> import sys
>>> assert_null(b'')
>>> assert_null(b'\x00\x00')
>>> assert_null(b'\x00\x01\x02\x03')
Traceback (most recent call last):
...
ValueError: 00 01 02 03
>>> stderr = sys.stderr
>>> sys.stderr = sys.stdout
>>> assert_null(b'\x00\x01\x02\x03', strict=False)
warning: post-data padding not zero: 00 01 02 03
>>> sys.stderr = stderr
"""
if buffer and _ord(max(buffer)) != 0:
hex_string = hex_bytes(buffer, spaces=1)
if strict:
raise ValueError(hex_string)
else:
_sys.stderr.write(
'warning: post-data padding not zero: {}\n'.format(hex_string))
# From ReadWave.c
def byte_order(needToReorderBytes):
little_endian = _sys.byteorder == 'little'
if needToReorderBytes:
little_endian = not little_endian
if little_endian:
return '<' # little-endian
return '>' # big-endian
# From ReadWave.c
def need_to_reorder_bytes(version):
# If the low order byte of the version field of the BinHeader
# structure is zero then the file is from a platform that uses
# different byte-ordering and therefore all data will need to be
# reordered.
return version & 0xFF == 0
# From ReadWave.c
def checksum(buffer, byte_order, oldcksum, numbytes):
x = _numpy.ndarray(
        (numbytes // 2,), # 2 bytes to a short -- ignore trailing odd byte
dtype=_numpy.dtype(byte_order+'h'),
buffer=buffer)
oldcksum += x.sum()
if oldcksum > 2**31: # fake the C implementation's int rollover
oldcksum %= 2**32
if oldcksum > 2**31:
oldcksum -= 2**31
return oldcksum & 0xffff
def _bytes(obj, encoding='utf-8'):
"""Convert bytes or strings into bytes
>>> _bytes(b'123')
'123'
>>> _bytes('123')
'123'
"""
if _sys.version_info >= (3,):
if isinstance(obj, bytes):
return obj
else:
return bytes(obj, encoding)
else:
return bytes(obj)
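# Illustrative usage sketch (editor's addition, not part of the original module).
# A tiny round trip of the helpers above: pretty-printing, byte-order detection from a
# version word, and the 16-bit rolling checksum used by ReadWave.c.
if __name__ == '__main__':
    buf = b'\x01\x00\x02\x00\x03\x00'                  # three little-endian int16 values
    print(hex_bytes(buf, spaces=2))                    # bytes grouped in pairs
    order = byte_order(need_to_reorder_bytes(0x0005))  # low byte nonzero -> keep native order
    print(order)                                       # '<' or '>' depending on the host
    print(checksum(buf, order, 0, len(buf)))           # sum of the shorts, masked to 16 bits
    assert_null(b'\x00\x00')                           # silent: every byte is zero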
|
sci-memex
|
/sci_memex-0.0.3-py3-none-any.whl/memex/translators/igor/util.py
|
util.py
|
"Utility functions for handling buffers"
import sys as _sys
import numpy as _numpy
def _ord(byte):
r"""Convert a byte to an integer.
>>> buffer = b'\x00\x01\x02'
>>> [_ord(b) for b in buffer]
[0, 1, 2]
"""
if _sys.version_info >= (3,):
return byte
else:
return ord(byte)
def hex_bytes(buffer, spaces=None):
r"""Pretty-printing for binary buffers.
>>> hex_bytes(b'\x00\x01\x02\x03\x04')
'0001020304'
>>> hex_bytes(b'\x00\x01\x02\x03\x04', spaces=1)
'00 01 02 03 04'
>>> hex_bytes(b'\x00\x01\x02\x03\x04', spaces=2)
'0001 0203 04'
>>> hex_bytes(b'\x00\x01\x02\x03\x04\x05\x06', spaces=2)
'0001 0203 0405 06'
>>> hex_bytes(b'\x00\x01\x02\x03\x04\x05\x06', spaces=3)
'000102 030405 06'
"""
hex_bytes = ['{:02x}'.format(_ord(x)) for x in buffer]
if spaces is None:
return ''.join(hex_bytes)
elif spaces is 1:
return ' '.join(hex_bytes)
for i in range(len(hex_bytes)//spaces):
hex_bytes.insert((spaces+1)*(i+1)-1, ' ')
return ''.join(hex_bytes)
def assert_null(buffer, strict=True):
r"""Ensure an input buffer is entirely zero.
>>> import sys
>>> assert_null(b'')
>>> assert_null(b'\x00\x00')
>>> assert_null(b'\x00\x01\x02\x03')
Traceback (most recent call last):
...
ValueError: 00 01 02 03
>>> stderr = sys.stderr
>>> sys.stderr = sys.stdout
>>> assert_null(b'\x00\x01\x02\x03', strict=False)
warning: post-data padding not zero: 00 01 02 03
>>> sys.stderr = stderr
"""
if buffer and _ord(max(buffer)) != 0:
hex_string = hex_bytes(buffer, spaces=1)
if strict:
raise ValueError(hex_string)
else:
_sys.stderr.write(
'warning: post-data padding not zero: {}\n'.format(hex_string))
# From ReadWave.c
def byte_order(needToReorderBytes):
little_endian = _sys.byteorder == 'little'
if needToReorderBytes:
little_endian = not little_endian
if little_endian:
return '<' # little-endian
return '>' # big-endian
# From ReadWave.c
def need_to_reorder_bytes(version):
# If the low order byte of the version field of the BinHeader
# structure is zero then the file is from a platform that uses
# different byte-ordering and therefore all data will need to be
# reordered.
return version & 0xFF == 0
# From ReadWave.c
def checksum(buffer, byte_order, oldcksum, numbytes):
x = _numpy.ndarray(
(numbytes/2,), # 2 bytes to a short -- ignore trailing odd byte
dtype=_numpy.dtype(byte_order+'h'),
buffer=buffer)
oldcksum += x.sum()
if oldcksum > 2**31: # fake the C implementation's int rollover
oldcksum %= 2**32
if oldcksum > 2**31:
oldcksum -= 2**31
return oldcksum & 0xffff
def _bytes(obj, encoding='utf-8'):
"""Convert bytes or strings into bytes
>>> _bytes(b'123')
'123'
>>> _bytes('123')
'123'
"""
if _sys.version_info >= (3,):
if isinstance(obj, bytes):
return obj
else:
return bytes(obj, encoding)
else:
return bytes(obj)
| 0.494629 | 0.427337 |
"Read IGOR Binary Wave files into Numpy arrays."
# Based on WaveMetric's Technical Note 003, "Igor Binary Format"
# ftp://ftp.wavemetrics.net/IgorPro/Technical_Notes/TN003.zip
# From ftp://ftp.wavemetrics.net/IgorPro/Technical_Notes/TN000.txt
# We place no restrictions on copying Technical Notes, with the
# exception that you cannot resell them. So read, enjoy, and
# share. We hope IGOR Technical Notes will provide you with lots of
# valuable information while you are developing IGOR applications.
from __future__ import absolute_import
import array as _array
import struct as _struct
import sys as _sys
import types as _types
import numpy as np
from . import LOG as _LOG
from .struct import Structure as _Structure
from .struct import DynamicStructure as _DynamicStructure
from .struct import Field as _Field
from .struct import DynamicField as _DynamicField
from .util import assert_null as _assert_null
from .util import byte_order as _byte_order
from .util import need_to_reorder_bytes as _need_to_reorder_bytes
from .util import checksum as _checksum
# Numpy doesn't support complex integers by default, see
# http://mail.python.org/pipermail/python-dev/2002-April/022408.html
# http://mail.scipy.org/pipermail/numpy-discussion/2007-October/029447.html
# So we roll our own types. See
# http://docs.scipy.org/doc/numpy/user/basics.rec.html
# http://docs.scipy.org/doc/numpy/reference/generated/numpy.dtype.html
complexInt8 = np.dtype([('real', np.int8), ('imag', np.int8)])
complexInt16 = np.dtype([('real', np.int16), ('imag', np.int16)])
complexInt32 = np.dtype([('real', np.int32), ('imag', np.int32)])
complexUInt8 = np.dtype([('real', np.uint8), ('imag', np.uint8)])
complexUInt16 = np.dtype(
[('real', np.uint16), ('imag', np.uint16)])
complexUInt32 = np.dtype(
[('real', np.uint32), ('imag', np.uint32)])
class StaticStringField (_DynamicField):
_null_terminated = False
_array_size_field = None
def __init__(self, *args, **kwargs):
if 'array' not in kwargs:
kwargs['array'] = True
super(StaticStringField, self).__init__(*args, **kwargs)
def post_unpack(self, parents, data):
wave_structure = parents[-1]
wave_data = self._get_structure_data(parents, data, wave_structure)
d = self._normalize_string(wave_data[self.name])
wave_data[self.name] = d
def _normalize_string(self, d):
if isinstance(d, bytes):
pass
elif hasattr(d, 'tobytes'):
d = d.tobytes()
elif hasattr(d, 'tostring'): # Python 2 compatibility
d = d.tostring()
else:
d = b''.join(d)
if self._array_size_field:
start = 0
strings = []
for count in self.counts:
end = start + count
if end > start:
strings.append(d[start:end])
if self._null_terminated:
strings[-1] = strings[-1].split(b'\x00', 1)[0]
start = end
elif self._null_terminated:
d = d.split(b'\x00', 1)[0]
return d
class NullStaticStringField (StaticStringField):
_null_terminated = True
# Begin IGOR constants and typedefs from IgorBin.h
# From IgorMath.h
TYPE_TABLE = { # (key: integer flag, value: numpy dtype)
0:None, # Text wave, not handled in ReadWave.c
1:complex, # NT_CMPLX, makes number complex.
2:np.float32, # NT_FP32, 32 bit fp numbers.
3:np.complex64,
4:np.float64, # NT_FP64, 64 bit fp numbers.
5:np.complex128,
8:np.int8, # NT_I8, 8 bit signed integer. Requires Igor Pro
# 2.0 or later.
9:complexInt8,
0x10:np.int16,# NT_I16, 16 bit integer numbers. Requires Igor
# Pro 2.0 or later.
0x11:complexInt16,
0x20:np.int32,# NT_I32, 32 bit integer numbers. Requires Igor
# Pro 2.0 or later.
0x21:complexInt32,
# 0x40:None, # NT_UNSIGNED, Makes above signed integers
# # unsigned. Requires Igor Pro 3.0 or later.
0x48:np.uint8,
0x49:complexUInt8,
0x50:np.uint16,
0x51:complexUInt16,
0x60:np.uint32,
0x61:complexUInt32,
}
# From wave.h
MAXDIMS = 4
# From binary.h
BinHeader1 = _Structure( # `version` field pulled out into Wave
name='BinHeader1',
fields=[
_Field('l', 'wfmSize', help='The size of the WaveHeader2 data structure plus the wave data plus 16 bytes of padding.'),
_Field('h', 'checksum', help='Checksum over this header and the wave header.'),
])
BinHeader2 = _Structure( # `version` field pulled out into Wave
name='BinHeader2',
fields=[
_Field('l', 'wfmSize', help='The size of the WaveHeader2 data structure plus the wave data plus 16 bytes of padding.'),
_Field('l', 'noteSize', help='The size of the note text.'),
_Field('l', 'pictSize', default=0, help='Reserved. Write zero. Ignore on read.'),
_Field('h', 'checksum', help='Checksum over this header and the wave header.'),
])
BinHeader3 = _Structure( # `version` field pulled out into Wave
name='BinHeader3',
fields=[
_Field('l', 'wfmSize', help='The size of the WaveHeader2 data structure plus the wave data plus 16 bytes of padding.'),
_Field('l', 'noteSize', help='The size of the note text.'),
_Field('l', 'formulaSize', help='The size of the dependency formula, if any.'),
_Field('l', 'pictSize', default=0, help='Reserved. Write zero. Ignore on read.'),
_Field('h', 'checksum', help='Checksum over this header and the wave header.'),
])
BinHeader5 = _Structure( # `version` field pulled out into Wave
name='BinHeader5',
fields=[
_Field('h', 'checksum', help='Checksum over this header and the wave header.'),
_Field('l', 'wfmSize', help='The size of the WaveHeader5 data structure plus the wave data.'),
_Field('l', 'formulaSize', help='The size of the dependency formula, if any.'),
_Field('l', 'noteSize', help='The size of the note text.'),
_Field('l', 'dataEUnitsSize', help='The size of optional extended data units.'),
_Field('l', 'dimEUnitsSize', help='The size of optional extended dimension units.', count=MAXDIMS, array=True),
_Field('l', 'dimLabelsSize', help='The size of optional dimension labels.', count=MAXDIMS, array=True),
_Field('l', 'sIndicesSize', help='The size of string indicies if this is a text wave.'),
_Field('l', 'optionsSize1', default=0, help='Reserved. Write zero. Ignore on read.'),
_Field('l', 'optionsSize2', default=0, help='Reserved. Write zero. Ignore on read.'),
])
# From wave.h
MAX_WAVE_NAME2 = 18 # Maximum length of wave name in version 1 and 2
# files. Does not include the trailing null.
MAX_WAVE_NAME5 = 31 # Maximum length of wave name in version 5
# files. Does not include the trailing null.
MAX_UNIT_CHARS = 3
# Header to an array of waveform data.
# `wData` field pulled out into DynamicWaveDataField1
WaveHeader2 = _DynamicStructure(
name='WaveHeader2',
fields=[
_Field('h', 'type', help='See types (e.g. NT_FP64) above. Zero for text waves.'),
_Field('P', 'next', default=0, help='Used in memory only. Write zero. Ignore on read.'),
NullStaticStringField('c', 'bname', help='Name of wave plus trailing null.', count=MAX_WAVE_NAME2+2),
_Field('h', 'whVersion', default=0, help='Write 0. Ignore on read.'),
_Field('h', 'srcFldr', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('P', 'fileName', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('c', 'dataUnits', default=0, help='Natural data units go here - null if none.', count=MAX_UNIT_CHARS+1, array=True),
_Field('c', 'xUnits', default=0, help='Natural x-axis units go here - null if none.', count=MAX_UNIT_CHARS+1, array=True),
_Field('l', 'npnts', help='Number of data points in wave.'),
_Field('h', 'aModified', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('d', 'hsA', help='X value for point p = hsA*p + hsB'),
_Field('d', 'hsB', help='X value for point p = hsA*p + hsB'),
_Field('h', 'wModified', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('h', 'swModified', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('h', 'fsValid', help='True if full scale values have meaning.'),
_Field('d', 'topFullScale', help='The min full scale value for wave.'), # sic, 'min' should probably be 'max'
_Field('d', 'botFullScale', help='The min full scale value for wave.'),
_Field('c', 'useBits', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('c', 'kindBits', default=0, help='Reserved. Write zero. Ignore on read.'),
_Field('P', 'formula', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('l', 'depID', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('L', 'creationDate', help='DateTime of creation. Not used in version 1 files.'),
_Field('c', 'wUnused', default=0, help='Reserved. Write zero. Ignore on read.', count=2, array=True),
_Field('L', 'modDate', help='DateTime of last modification.'),
_Field('P', 'waveNoteH', help='Used in memory only. Write zero. Ignore on read.'),
])
# `sIndices` pointer unset (use Wave5_data['sIndices'] instead). This
# field is filled in by DynamicStringIndicesDataField.
# `wData` field pulled out into DynamicWaveDataField5
WaveHeader5 = _DynamicStructure(
name='WaveHeader5',
fields=[
_Field('P', 'next', help='link to next wave in linked list.'),
_Field('L', 'creationDate', help='DateTime of creation.'),
_Field('L', 'modDate', help='DateTime of last modification.'),
_Field('l', 'npnts', help='Total number of points (multiply dimensions up to first zero).'),
_Field('h', 'type', help='See types (e.g. NT_FP64) above. Zero for text waves.'),
_Field('h', 'dLock', default=0, help='Reserved. Write zero. Ignore on read.'),
_Field('c', 'whpad1', default=0, help='Reserved. Write zero. Ignore on read.', count=6, array=True),
_Field('h', 'whVersion', default=1, help='Write 1. Ignore on read.'),
NullStaticStringField('c', 'bname', help='Name of wave plus trailing null.', count=MAX_WAVE_NAME5+1),
_Field('l', 'whpad2', default=0, help='Reserved. Write zero. Ignore on read.'),
_Field('P', 'dFolder', default=0, help='Used in memory only. Write zero. Ignore on read.'),
# Dimensioning info. [0] == rows, [1] == cols etc
_Field('l', 'nDim', help='Number of of items in a dimension -- 0 means no data.', count=MAXDIMS, array=True),
_Field('d', 'sfA', help='Index value for element e of dimension d = sfA[d]*e + sfB[d].', count=MAXDIMS, array=True),
_Field('d', 'sfB', help='Index value for element e of dimension d = sfA[d]*e + sfB[d].', count=MAXDIMS, array=True),
# SI units
_Field('c', 'dataUnits', default=0, help='Natural data units go here - null if none.', count=MAX_UNIT_CHARS+1, array=True),
_Field('c', 'dimUnits', default=0, help='Natural dimension units go here - null if none.', count=(MAXDIMS, MAX_UNIT_CHARS+1), array=True),
_Field('h', 'fsValid', help='TRUE if full scale values have meaning.'),
_Field('h', 'whpad3', default=0, help='Reserved. Write zero. Ignore on read.'),
_Field('d', 'topFullScale', help='The max and max full scale value for wave'), # sic, probably "max and min"
_Field('d', 'botFullScale', help='The max and max full scale value for wave.'), # sic, probably "max and min"
_Field('P', 'dataEUnits', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('P', 'dimEUnits', default=0, help='Used in memory only. Write zero. Ignore on read.', count=MAXDIMS, array=True),
_Field('P', 'dimLabels', default=0, help='Used in memory only. Write zero. Ignore on read.', count=MAXDIMS, array=True),
_Field('P', 'waveNoteH', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('l', 'whUnused', default=0, help='Reserved. Write zero. Ignore on read.', count=16, array=True),
# The following stuff is considered private to Igor.
_Field('h', 'aModified', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('h', 'wModified', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('h', 'swModified', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('c', 'useBits', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('c', 'kindBits', default=0, help='Reserved. Write zero. Ignore on read.'),
_Field('P', 'formula', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('l', 'depID', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('h', 'whpad4', default=0, help='Reserved. Write zero. Ignore on read.'),
_Field('h', 'srcFldr', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('P', 'fileName', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('P', 'sIndices', default=0, help='Used in memory only. Write zero. Ignore on read.'),
])
class DynamicWaveDataField1 (_DynamicField):
def pre_pack(self, parents, data):
raise NotImplementedError()
def pre_unpack(self, parents, data):
full_structure = parents[0]
wave_structure = parents[-1]
wave_header_structure = wave_structure.fields[1].format
wave_data = self._get_structure_data(parents, data, wave_structure)
version = data['version']
bin_header = wave_data['bin_header']
wave_header = wave_data['wave_header']
self.count = wave_header['npnts']
self.data_size = self._get_size(bin_header, wave_header_structure.size)
type_ = TYPE_TABLE.get(wave_header['type'], None)
if type_:
self.shape = self._get_shape(bin_header, wave_header)
else: # text wave
type_ = np.dtype('S1')
self.shape = (self.data_size,)
# dtype() wrapping to avoid numpy.generic and
# getset_descriptor issues with the builtin numpy types
# (e.g. int32). It has no effect on our local complex
# integers.
self.dtype = np.dtype(type_).newbyteorder(
wave_structure.byte_order)
if (version == 3 and
self.count > 0 and
bin_header['formulaSize'] > 0 and
self.data_size == 0):
"""From TN003:
Igor Pro 2.00 included support for dependency formulae. If
a wave was governed by a dependency formula then the
actual wave data was not written to disk for that wave,
because on loading the wave Igor could recalculate the
data. However,this prevented the wave from being loaded
into an experiment other than the original
experiment. Consequently, in a version of Igor Pro 3.0x,
we changed it so that the wave data was written even if
the wave was governed by a dependency formula. When
reading a binary wave file, you can detect that the wave
file does not contain the wave data by examining the
wfmSize, formulaSize and npnts fields. If npnts is greater
than zero and formulaSize is greater than zero and
the waveDataSize as calculated above is zero, then this is
a file governed by a dependency formula that was written
without the actual wave data.
"""
self.shape = (0,)
elif TYPE_TABLE.get(wave_header['type'], None) is not None:
assert self.data_size == self.count * self.dtype.itemsize, (
self.data_size, self.count, self.dtype.itemsize, self.dtype)
else:
assert self.data_size >= 0, (
bin_header['wfmSize'], wave_header_structure.size)
def _get_size(self, bin_header, wave_header_size):
return bin_header['wfmSize'] - wave_header_size - 16
def _get_shape(self, bin_header, wave_header):
return (self.count,)
def unpack(self, stream):
data_b = stream.read(self.data_size)
try:
data = np.ndarray(
shape=self.shape,
dtype=self.dtype,
buffer=data_b,
order='F',
)
except:
_LOG.error(
'could not reshape data from {} to {}'.format(
self.shape, data_b))
raise
return data
class DynamicWaveDataField5 (DynamicWaveDataField1):
"Adds support for multidimensional data."
def _get_size(self, bin_header, wave_header_size):
return bin_header['wfmSize'] - wave_header_size
def _get_shape(self, bin_header, wave_header):
return [n for n in wave_header['nDim'] if n > 0] or (0,)
# End IGOR constants and typedefs from IgorBin.h
class DynamicStringField (StaticStringField):
_size_field = None
def pre_unpack(self, parents, data):
size = self._get_size_data(parents, data)
if self._array_size_field:
self.counts = size
self.count = sum(self.counts)
else:
self.count = size
self.setup()
def _get_size_data(self, parents, data):
wave_structure = parents[-1]
wave_data = self._get_structure_data(parents, data, wave_structure)
bin_header = wave_data['bin_header']
return bin_header[self._size_field]
class DynamicWaveNoteField (DynamicStringField):
_size_field = 'noteSize'
class DynamicDependencyFormulaField (DynamicStringField):
"""Optional wave dependency formula
Excerpted from TN003:
A wave has a dependency formula if it has been bound by a
statement such as "wave0 := sin(x)". In this example, the
dependency formula is "sin(x)". The formula is stored with
no trailing null byte.
"""
_size_field = 'formulaSize'
# Except when it is stored with a trailing null byte :p. See, for
# example, test/data/mac-version3Dependent.ibw.
_null_terminated = True
class DynamicDataUnitsField (DynamicStringField):
"""Optional extended data units data
Excerpted from TN003:
dataUnits - Present in versions 1, 2, 3, 5. The dataUnits field
stores the units for the data represented by the wave. It is a C
string terminated with a null character. This field supports
units of 0 to 3 bytes. In version 1, 2 and 3 files, longer units
can not be represented. In version 5 files, longer units can be
stored using the optional extended data units section of the
file.
"""
_size_field = 'dataEUnitsSize'
class DynamicDimensionUnitsField (DynamicStringField):
"""Optional extended dimension units data
Excerpted from TN003:
xUnits - Present in versions 1, 2, 3. The xUnits field stores the
X units for a wave. It is a C string terminated with a null
character. This field supports units of 0 to 3 bytes. In
version 1, 2 and 3 files, longer units can not be represented.
dimUnits - Present in version 5 only. This field is an array of 4
strings, one for each possible wave dimension. Each string
supports units of 0 to 3 bytes. Longer units can be stored using
the optional extended dimension units section of the file.
"""
_size_field = 'dimEUnitsSize'
_array_size_field = True
class DynamicLabelsField (DynamicStringField):
"""Optional dimension label data
From TN003:
If the wave has dimension labels for dimension d then the
dimLabelsSize[d] field of the BinHeader5 structure will be
non-zero.
A wave will have dimension labels if a SetDimLabel command has
been executed on it.
A 3 point 1D wave has 4 dimension labels. The first dimension
label is the label for the dimension as a whole. The next three
dimension labels are the labels for rows 0, 1, and 2. When Igor
writes dimension labels to disk, it writes each dimension label as
a C string (null-terminated) in a field of 32 bytes.
"""
_size_field = 'dimLabelsSize'
_array_size_field = True
def post_unpack(self, parents, data):
wave_structure = parents[-1]
wave_data = self._get_structure_data(parents, data, wave_structure)
bin_header = wave_data['bin_header']
d = wave_data[self.name]
dim_labels = []
start = 0
for size in bin_header[self._size_field]:
end = start + size
if end > start:
dim_data = d[start:end]
chunks = []
for i in range(size//32):
chunks.append(dim_data[32*i:32*(i+1)])
labels = [b'']
for chunk in chunks:
labels[-1] = labels[-1] + b''.join(chunk)
if b'\x00' in chunk:
labels.append(b'')
labels.pop(-1)
start = end
else:
labels = []
dim_labels.append(labels)
wave_data[self.name] = dim_labels
class DynamicStringIndicesDataField (_DynamicField):
"""String indices used for text waves only
"""
def pre_pack(self, parents, data):
raise NotImplementedError()
def pre_unpack(self, parents, data):
wave_structure = parents[-1]
wave_data = self._get_structure_data(parents, data, wave_structure)
bin_header = wave_data['bin_header']
wave_header = wave_data['wave_header']
self.string_indices_size = bin_header['sIndicesSize']
self.count = self.string_indices_size // 4
if self.count: # make sure we're in a text wave
assert TYPE_TABLE[wave_header['type']] is None, wave_header
self.setup()
def post_unpack(self, parents, data):
if not self.count:
return
wave_structure = parents[-1]
wave_data = self._get_structure_data(parents, data, wave_structure)
wave_header = wave_data['wave_header']
wdata = wave_data['wData']
strings = []
start = 0
for i,offset in enumerate(wave_data['sIndices']):
if offset > start:
chars = wdata[start:offset]
strings.append(b''.join(chars))
start = offset
elif offset == start:
strings.append(b'')
else:
raise ValueError((offset, wave_data['sIndices']))
wdata = np.array(strings)
shape = [n for n in wave_header['nDim'] if n > 0] or (0,)
try:
wdata = wdata.reshape(shape)
except ValueError:
_LOG.error(
'could not reshape strings from {} to {}'.format(
shape, wdata.shape))
raise
wave_data['wData'] = wdata
class DynamicVersionField (_DynamicField):
def pre_pack(self, parents, byte_order):
raise NotImplementedError()
def post_unpack(self, parents, data):
wave_structure = parents[-1]
wave_data = self._get_structure_data(parents, data, wave_structure)
version = wave_data['version']
if wave_structure.byte_order in '@=':
need_to_reorder_bytes = _need_to_reorder_bytes(version)
wave_structure.byte_order = _byte_order(need_to_reorder_bytes)
_LOG.debug(
'get byte order from version: {} (reorder? {})'.format(
wave_structure.byte_order, need_to_reorder_bytes))
else:
need_to_reorder_bytes = False
old_format = wave_structure.fields[-1].format
if version == 1:
wave_structure.fields[-1].format = Wave1
elif version == 2:
wave_structure.fields[-1].format = Wave2
elif version == 3:
wave_structure.fields[-1].format = Wave3
elif version == 5:
wave_structure.fields[-1].format = Wave5
elif not need_to_reorder_bytes:
raise ValueError(
'invalid binary wave version: {}'.format(version))
if wave_structure.fields[-1].format != old_format:
_LOG.debug('change wave headers from {} to {}'.format(
old_format, wave_structure.fields[-1].format))
wave_structure.setup()
elif need_to_reorder_bytes:
wave_structure.setup()
# we might need to unpack again with the new byte order
return need_to_reorder_bytes
class DynamicWaveField (_DynamicField):
    def post_unpack(self, parents, data):
        # Checksum verification is not implemented; everything below the
        # early return is unreachable placeholder code.
        return
        raise NotImplementedError() # TODO
checksum_size = bin.size + wave.size
wave_structure = parents[-1]
if version == 5:
# Version 5 checksum does not include the wData field.
checksum_size -= 4
c = _checksum(b, parents[-1].byte_order, 0, checksum_size)
if c != 0:
raise ValueError(
('This does not appear to be a valid Igor binary wave file. '
'Error in checksum: should be 0, is {}.').format(c))
Wave1 = _DynamicStructure(
name='Wave1',
fields=[
_Field(BinHeader1, 'bin_header', help='Binary wave header'),
_Field(WaveHeader2, 'wave_header', help='Wave header'),
DynamicWaveDataField1('f', 'wData', help='The start of the array of waveform data.', count=0, array=True),
])
Wave2 = _DynamicStructure(
name='Wave2',
fields=[
_Field(BinHeader2, 'bin_header', help='Binary wave header'),
_Field(WaveHeader2, 'wave_header', help='Wave header'),
DynamicWaveDataField1('f', 'wData', help='The start of the array of waveform data.', count=0, array=True),
_Field('x', 'padding', help='16 bytes of padding in versions 2 and 3.', count=16, array=True),
DynamicWaveNoteField('c', 'note', help='Optional wave note data', count=0, array=True),
])
Wave3 = _DynamicStructure(
name='Wave3',
fields=[
_Field(BinHeader3, 'bin_header', help='Binary wave header'),
_Field(WaveHeader2, 'wave_header', help='Wave header'),
DynamicWaveDataField1('f', 'wData', help='The start of the array of waveform data.', count=0, array=True),
_Field('x', 'padding', help='16 bytes of padding in versions 2 and 3.', count=16, array=True),
DynamicWaveNoteField('c', 'note', help='Optional wave note data', count=0, array=True),
DynamicDependencyFormulaField('c', 'formula', help='Optional wave dependency formula', count=0, array=True),
])
Wave5 = _DynamicStructure(
name='Wave5',
fields=[
_Field(BinHeader5, 'bin_header', help='Binary wave header'),
_Field(WaveHeader5, 'wave_header', help='Wave header'),
DynamicWaveDataField5('f', 'wData', help='The start of the array of waveform data.', count=0, array=True),
DynamicDependencyFormulaField('c', 'formula', help='Optional wave dependency formula.', count=0, array=True),
DynamicWaveNoteField('c', 'note', help='Optional wave note data.', count=0, array=True),
DynamicDataUnitsField('c', 'data_units', help='Optional extended data units data.', count=0, array=True),
DynamicDimensionUnitsField('c', 'dimension_units', help='Optional dimension label data', count=0, array=True),
DynamicLabelsField('c', 'labels', help="Optional dimension label data", count=0, array=True),
DynamicStringIndicesDataField('P', 'sIndices', help='Dynamic string indices for text waves.', count=0, array=True),
])
Wave = _DynamicStructure(
name='Wave',
fields=[
DynamicVersionField('h', 'version', help='Version number for backwards compatibility.'),
DynamicWaveField(Wave1, 'wave', help='The rest of the wave data.'),
])
def load(filename):
if hasattr(filename, 'read'):
f = filename # filename is actually a stream object
else:
f = open(filename, 'rb')
try:
Wave.byte_order = '='
Wave.setup()
data = Wave.unpack_stream(f)
finally:
if not hasattr(filename, 'read'):
f.close()
return data
def save(filename):
raise NotImplementedError
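# Illustrative usage sketch: reading a binary wave with load(). 'example.ibw'
# is a hypothetical placeholder path. The returned dict mirrors the Wave
# structures above: data['version'] plus data['wave'] with 'wave_header',
# 'wData', and (for version 5) the optional note/units/label fields.
def _example_read_ibw(path='example.ibw'):
    data = load(path)
    header = data['wave']['wave_header']
    return header['bname'], data['wave']['wData']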
|
sci-memex
|
/sci_memex-0.0.3-py3-none-any.whl/memex/translators/igor/binarywave.py
|
binarywave.py
|
"Read IGOR Binary Wave files into Numpy arrays."
# Based on WaveMetric's Technical Note 003, "Igor Binary Format"
# ftp://ftp.wavemetrics.net/IgorPro/Technical_Notes/TN003.zip
# From ftp://ftp.wavemetrics.net/IgorPro/Technical_Notes/TN000.txt
# We place no restrictions on copying Technical Notes, with the
# exception that you cannot resell them. So read, enjoy, and
# share. We hope IGOR Technical Notes will provide you with lots of
# valuable information while you are developing IGOR applications.
from __future__ import absolute_import
import array as _array
import struct as _struct
import sys as _sys
import types as _types
import numpy as np
from . import LOG as _LOG
from .struct import Structure as _Structure
from .struct import DynamicStructure as _DynamicStructure
from .struct import Field as _Field
from .struct import DynamicField as _DynamicField
from .util import assert_null as _assert_null
from .util import byte_order as _byte_order
from .util import need_to_reorder_bytes as _need_to_reorder_bytes
from .util import checksum as _checksum
# Numpy doesn't support complex integers by default, see
# http://mail.python.org/pipermail/python-dev/2002-April/022408.html
# http://mail.scipy.org/pipermail/numpy-discussion/2007-October/029447.html
# So we roll our own types. See
# http://docs.scipy.org/doc/numpy/user/basics.rec.html
# http://docs.scipy.org/doc/numpy/reference/generated/numpy.dtype.html
complexInt8 = np.dtype([('real', np.int8), ('imag', np.int8)])
complexInt16 = np.dtype([('real', np.int16), ('imag', np.int16)])
complexInt32 = np.dtype([('real', np.int32), ('imag', np.int32)])
complexUInt8 = np.dtype([('real', np.uint8), ('imag', np.uint8)])
complexUInt16 = np.dtype(
[('real', np.uint16), ('imag', np.uint16)])
complexUInt32 = np.dtype(
[('real', np.uint32), ('imag', np.uint32)])
class StaticStringField (_DynamicField):
_null_terminated = False
_array_size_field = None
def __init__(self, *args, **kwargs):
if 'array' not in kwargs:
kwargs['array'] = True
super(StaticStringField, self).__init__(*args, **kwargs)
def post_unpack(self, parents, data):
wave_structure = parents[-1]
wave_data = self._get_structure_data(parents, data, wave_structure)
d = self._normalize_string(wave_data[self.name])
wave_data[self.name] = d
def _normalize_string(self, d):
if isinstance(d, bytes):
pass
elif hasattr(d, 'tobytes'):
d = d.tobytes()
elif hasattr(d, 'tostring'): # Python 2 compatibility
d = d.tostring()
else:
d = b''.join(d)
if self._array_size_field:
start = 0
strings = []
for count in self.counts:
end = start + count
if end > start:
strings.append(d[start:end])
if self._null_terminated:
strings[-1] = strings[-1].split(b'\x00', 1)[0]
start = end
elif self._null_terminated:
d = d.split(b'\x00', 1)[0]
return d
class NullStaticStringField (StaticStringField):
_null_terminated = True
# Begin IGOR constants and typedefs from IgorBin.h
# From IgorMath.h
TYPE_TABLE = { # (key: integer flag, value: numpy dtype)
0:None, # Text wave, not handled in ReadWave.c
1:complex, # NT_CMPLX, makes number complex.
2:np.float32, # NT_FP32, 32 bit fp numbers.
3:np.complex64,
4:np.float64, # NT_FP64, 64 bit fp numbers.
5:np.complex128,
8:np.int8, # NT_I8, 8 bit signed integer. Requires Igor Pro
# 2.0 or later.
9:complexInt8,
0x10:np.int16,# NT_I16, 16 bit integer numbers. Requires Igor
# Pro 2.0 or later.
0x11:complexInt16,
0x20:np.int32,# NT_I32, 32 bit integer numbers. Requires Igor
# Pro 2.0 or later.
0x21:complexInt32,
# 0x40:None, # NT_UNSIGNED, Makes above signed integers
# # unsigned. Requires Igor Pro 3.0 or later.
0x48:np.uint8,
0x49:complexUInt8,
0x50:np.uint16,
0x51:complexUInt16,
0x60:np.uint32,
0x61:complexUInt32,
}
# From wave.h
MAXDIMS = 4
# From binary.h
BinHeader1 = _Structure( # `version` field pulled out into Wave
name='BinHeader1',
fields=[
_Field('l', 'wfmSize', help='The size of the WaveHeader2 data structure plus the wave data plus 16 bytes of padding.'),
_Field('h', 'checksum', help='Checksum over this header and the wave header.'),
])
BinHeader2 = _Structure( # `version` field pulled out into Wave
name='BinHeader2',
fields=[
_Field('l', 'wfmSize', help='The size of the WaveHeader2 data structure plus the wave data plus 16 bytes of padding.'),
_Field('l', 'noteSize', help='The size of the note text.'),
_Field('l', 'pictSize', default=0, help='Reserved. Write zero. Ignore on read.'),
_Field('h', 'checksum', help='Checksum over this header and the wave header.'),
])
BinHeader3 = _Structure( # `version` field pulled out into Wave
name='BinHeader3',
fields=[
_Field('l', 'wfmSize', help='The size of the WaveHeader2 data structure plus the wave data plus 16 bytes of padding.'),
_Field('l', 'noteSize', help='The size of the note text.'),
_Field('l', 'formulaSize', help='The size of the dependency formula, if any.'),
_Field('l', 'pictSize', default=0, help='Reserved. Write zero. Ignore on read.'),
_Field('h', 'checksum', help='Checksum over this header and the wave header.'),
])
BinHeader5 = _Structure( # `version` field pulled out into Wave
name='BinHeader5',
fields=[
_Field('h', 'checksum', help='Checksum over this header and the wave header.'),
_Field('l', 'wfmSize', help='The size of the WaveHeader5 data structure plus the wave data.'),
_Field('l', 'formulaSize', help='The size of the dependency formula, if any.'),
_Field('l', 'noteSize', help='The size of the note text.'),
_Field('l', 'dataEUnitsSize', help='The size of optional extended data units.'),
_Field('l', 'dimEUnitsSize', help='The size of optional extended dimension units.', count=MAXDIMS, array=True),
_Field('l', 'dimLabelsSize', help='The size of optional dimension labels.', count=MAXDIMS, array=True),
_Field('l', 'sIndicesSize', help='The size of string indicies if this is a text wave.'),
_Field('l', 'optionsSize1', default=0, help='Reserved. Write zero. Ignore on read.'),
_Field('l', 'optionsSize2', default=0, help='Reserved. Write zero. Ignore on read.'),
])
# From wave.h
MAX_WAVE_NAME2 = 18 # Maximum length of wave name in version 1 and 2
# files. Does not include the trailing null.
MAX_WAVE_NAME5 = 31 # Maximum length of wave name in version 5
# files. Does not include the trailing null.
MAX_UNIT_CHARS = 3
# Header to an array of waveform data.
# `wData` field pulled out into DynamicWaveDataField1
WaveHeader2 = _DynamicStructure(
name='WaveHeader2',
fields=[
_Field('h', 'type', help='See types (e.g. NT_FP64) above. Zero for text waves.'),
_Field('P', 'next', default=0, help='Used in memory only. Write zero. Ignore on read.'),
NullStaticStringField('c', 'bname', help='Name of wave plus trailing null.', count=MAX_WAVE_NAME2+2),
_Field('h', 'whVersion', default=0, help='Write 0. Ignore on read.'),
_Field('h', 'srcFldr', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('P', 'fileName', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('c', 'dataUnits', default=0, help='Natural data units go here - null if none.', count=MAX_UNIT_CHARS+1, array=True),
_Field('c', 'xUnits', default=0, help='Natural x-axis units go here - null if none.', count=MAX_UNIT_CHARS+1, array=True),
_Field('l', 'npnts', help='Number of data points in wave.'),
_Field('h', 'aModified', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('d', 'hsA', help='X value for point p = hsA*p + hsB'),
_Field('d', 'hsB', help='X value for point p = hsA*p + hsB'),
_Field('h', 'wModified', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('h', 'swModified', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('h', 'fsValid', help='True if full scale values have meaning.'),
_Field('d', 'topFullScale', help='The min full scale value for wave.'), # sic, 'min' should probably be 'max'
_Field('d', 'botFullScale', help='The min full scale value for wave.'),
_Field('c', 'useBits', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('c', 'kindBits', default=0, help='Reserved. Write zero. Ignore on read.'),
_Field('P', 'formula', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('l', 'depID', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('L', 'creationDate', help='DateTime of creation. Not used in version 1 files.'),
_Field('c', 'wUnused', default=0, help='Reserved. Write zero. Ignore on read.', count=2, array=True),
_Field('L', 'modDate', help='DateTime of last modification.'),
_Field('P', 'waveNoteH', help='Used in memory only. Write zero. Ignore on read.'),
])
# `sIndices` pointer unset (use Wave5_data['sIndices'] instead). This
# field is filled in by DynamicStringIndicesDataField.
# `wData` field pulled out into DynamicWaveDataField5
WaveHeader5 = _DynamicStructure(
name='WaveHeader5',
fields=[
_Field('P', 'next', help='link to next wave in linked list.'),
_Field('L', 'creationDate', help='DateTime of creation.'),
_Field('L', 'modDate', help='DateTime of last modification.'),
_Field('l', 'npnts', help='Total number of points (multiply dimensions up to first zero).'),
_Field('h', 'type', help='See types (e.g. NT_FP64) above. Zero for text waves.'),
_Field('h', 'dLock', default=0, help='Reserved. Write zero. Ignore on read.'),
_Field('c', 'whpad1', default=0, help='Reserved. Write zero. Ignore on read.', count=6, array=True),
_Field('h', 'whVersion', default=1, help='Write 1. Ignore on read.'),
NullStaticStringField('c', 'bname', help='Name of wave plus trailing null.', count=MAX_WAVE_NAME5+1),
_Field('l', 'whpad2', default=0, help='Reserved. Write zero. Ignore on read.'),
_Field('P', 'dFolder', default=0, help='Used in memory only. Write zero. Ignore on read.'),
# Dimensioning info. [0] == rows, [1] == cols etc
_Field('l', 'nDim', help='Number of of items in a dimension -- 0 means no data.', count=MAXDIMS, array=True),
_Field('d', 'sfA', help='Index value for element e of dimension d = sfA[d]*e + sfB[d].', count=MAXDIMS, array=True),
_Field('d', 'sfB', help='Index value for element e of dimension d = sfA[d]*e + sfB[d].', count=MAXDIMS, array=True),
# SI units
_Field('c', 'dataUnits', default=0, help='Natural data units go here - null if none.', count=MAX_UNIT_CHARS+1, array=True),
_Field('c', 'dimUnits', default=0, help='Natural dimension units go here - null if none.', count=(MAXDIMS, MAX_UNIT_CHARS+1), array=True),
_Field('h', 'fsValid', help='TRUE if full scale values have meaning.'),
_Field('h', 'whpad3', default=0, help='Reserved. Write zero. Ignore on read.'),
_Field('d', 'topFullScale', help='The max and max full scale value for wave'), # sic, probably "max and min"
_Field('d', 'botFullScale', help='The max and max full scale value for wave.'), # sic, probably "max and min"
_Field('P', 'dataEUnits', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('P', 'dimEUnits', default=0, help='Used in memory only. Write zero. Ignore on read.', count=MAXDIMS, array=True),
_Field('P', 'dimLabels', default=0, help='Used in memory only. Write zero. Ignore on read.', count=MAXDIMS, array=True),
_Field('P', 'waveNoteH', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('l', 'whUnused', default=0, help='Reserved. Write zero. Ignore on read.', count=16, array=True),
# The following stuff is considered private to Igor.
_Field('h', 'aModified', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('h', 'wModified', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('h', 'swModified', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('c', 'useBits', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('c', 'kindBits', default=0, help='Reserved. Write zero. Ignore on read.'),
_Field('P', 'formula', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('l', 'depID', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('h', 'whpad4', default=0, help='Reserved. Write zero. Ignore on read.'),
_Field('h', 'srcFldr', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('P', 'fileName', default=0, help='Used in memory only. Write zero. Ignore on read.'),
_Field('P', 'sIndices', default=0, help='Used in memory only. Write zero. Ignore on read.'),
])
class DynamicWaveDataField1 (_DynamicField):
def pre_pack(self, parents, data):
raise NotImplementedError()
def pre_unpack(self, parents, data):
full_structure = parents[0]
wave_structure = parents[-1]
wave_header_structure = wave_structure.fields[1].format
wave_data = self._get_structure_data(parents, data, wave_structure)
version = data['version']
bin_header = wave_data['bin_header']
wave_header = wave_data['wave_header']
self.count = wave_header['npnts']
self.data_size = self._get_size(bin_header, wave_header_structure.size)
type_ = TYPE_TABLE.get(wave_header['type'], None)
if type_:
self.shape = self._get_shape(bin_header, wave_header)
else: # text wave
type_ = np.dtype('S1')
self.shape = (self.data_size,)
# dtype() wrapping to avoid numpy.generic and
# getset_descriptor issues with the builtin numpy types
# (e.g. int32). It has no effect on our local complex
# integers.
self.dtype = np.dtype(type_).newbyteorder(
wave_structure.byte_order)
if (version == 3 and
self.count > 0 and
bin_header['formulaSize'] > 0 and
self.data_size == 0):
"""From TN003:
Igor Pro 2.00 included support for dependency formulae. If
a wave was governed by a dependency formula then the
actual wave data was not written to disk for that wave,
because on loading the wave Igor could recalculate the
data. However,this prevented the wave from being loaded
into an experiment other than the original
experiment. Consequently, in a version of Igor Pro 3.0x,
we changed it so that the wave data was written even if
the wave was governed by a dependency formula. When
reading a binary wave file, you can detect that the wave
file does not contain the wave data by examining the
wfmSize, formulaSize and npnts fields. If npnts is greater
than zero and formulaSize is greater than zero and
the waveDataSize as calculated above is zero, then this is
a file governed by a dependency formula that was written
without the actual wave data.
"""
self.shape = (0,)
elif TYPE_TABLE.get(wave_header['type'], None) is not None:
assert self.data_size == self.count * self.dtype.itemsize, (
self.data_size, self.count, self.dtype.itemsize, self.dtype)
else:
assert self.data_size >= 0, (
bin_header['wfmSize'], wave_header_structure.size)
def _get_size(self, bin_header, wave_header_size):
return bin_header['wfmSize'] - wave_header_size - 16
def _get_shape(self, bin_header, wave_header):
return (self.count,)
def unpack(self, stream):
data_b = stream.read(self.data_size)
try:
data = np.ndarray(
shape=self.shape,
dtype=self.dtype,
buffer=data_b,
order='F',
)
except:
_LOG.error(
'could not reshape data from {} to {}'.format(
self.shape, data_b))
raise
return data
class DynamicWaveDataField5 (DynamicWaveDataField1):
"Adds support for multidimensional data."
def _get_size(self, bin_header, wave_header_size):
return bin_header['wfmSize'] - wave_header_size
def _get_shape(self, bin_header, wave_header):
return [n for n in wave_header['nDim'] if n > 0] or (0,)
# End IGOR constants and typedefs from IgorBin.h
class DynamicStringField (StaticStringField):
_size_field = None
def pre_unpack(self, parents, data):
size = self._get_size_data(parents, data)
if self._array_size_field:
self.counts = size
self.count = sum(self.counts)
else:
self.count = size
self.setup()
def _get_size_data(self, parents, data):
wave_structure = parents[-1]
wave_data = self._get_structure_data(parents, data, wave_structure)
bin_header = wave_data['bin_header']
return bin_header[self._size_field]
class DynamicWaveNoteField (DynamicStringField):
_size_field = 'noteSize'
class DynamicDependencyFormulaField (DynamicStringField):
"""Optional wave dependency formula
Excerpted from TN003:
A wave has a dependency formula if it has been bound by a
statement such as "wave0 := sin(x)". In this example, the
dependency formula is "sin(x)". The formula is stored with
no trailing null byte.
"""
_size_field = 'formulaSize'
# Except when it is stored with a trailing null byte :p. See, for
# example, test/data/mac-version3Dependent.ibw.
_null_terminated = True
class DynamicDataUnitsField (DynamicStringField):
"""Optional extended data units data
Excerpted from TN003:
dataUnits - Present in versions 1, 2, 3, 5. The dataUnits field
stores the units for the data represented by the wave. It is a C
string terminated with a null character. This field supports
units of 0 to 3 bytes. In version 1, 2 and 3 files, longer units
can not be represented. In version 5 files, longer units can be
stored using the optional extended data units section of the
file.
"""
_size_field = 'dataEUnitsSize'
class DynamicDimensionUnitsField (DynamicStringField):
"""Optional extended dimension units data
Excerpted from TN003:
xUnits - Present in versions 1, 2, 3. The xUnits field stores the
X units for a wave. It is a C string terminated with a null
character. This field supports units of 0 to 3 bytes. In
version 1, 2 and 3 files, longer units can not be represented.
dimUnits - Present in version 5 only. This field is an array of 4
strings, one for each possible wave dimension. Each string
supports units of 0 to 3 bytes. Longer units can be stored using
the optional extended dimension units section of the file.
"""
_size_field = 'dimEUnitsSize'
_array_size_field = True
class DynamicLabelsField (DynamicStringField):
"""Optional dimension label data
From TN003:
If the wave has dimension labels for dimension d then the
dimLabelsSize[d] field of the BinHeader5 structure will be
non-zero.
A wave will have dimension labels if a SetDimLabel command has
been executed on it.
A 3 point 1D wave has 4 dimension labels. The first dimension
label is the label for the dimension as a whole. The next three
dimension labels are the labels for rows 0, 1, and 2. When Igor
writes dimension labels to disk, it writes each dimension label as
a C string (null-terminated) in a field of 32 bytes.
"""
_size_field = 'dimLabelsSize'
_array_size_field = True
def post_unpack(self, parents, data):
wave_structure = parents[-1]
wave_data = self._get_structure_data(parents, data, wave_structure)
bin_header = wave_data['bin_header']
d = wave_data[self.name]
dim_labels = []
start = 0
for size in bin_header[self._size_field]:
end = start + size
if end > start:
dim_data = d[start:end]
chunks = []
for i in range(size//32):
chunks.append(dim_data[32*i:32*(i+1)])
labels = [b'']
for chunk in chunks:
labels[-1] = labels[-1] + b''.join(chunk)
if b'\x00' in chunk:
labels.append(b'')
labels.pop(-1)
start = end
else:
labels = []
dim_labels.append(labels)
wave_data[self.name] = dim_labels
class DynamicStringIndicesDataField (_DynamicField):
"""String indices used for text waves only
"""
def pre_pack(self, parents, data):
raise NotImplementedError()
def pre_unpack(self, parents, data):
wave_structure = parents[-1]
wave_data = self._get_structure_data(parents, data, wave_structure)
bin_header = wave_data['bin_header']
wave_header = wave_data['wave_header']
self.string_indices_size = bin_header['sIndicesSize']
self.count = self.string_indices_size // 4
if self.count: # make sure we're in a text wave
assert TYPE_TABLE[wave_header['type']] is None, wave_header
self.setup()
def post_unpack(self, parents, data):
if not self.count:
return
wave_structure = parents[-1]
wave_data = self._get_structure_data(parents, data, wave_structure)
wave_header = wave_data['wave_header']
wdata = wave_data['wData']
strings = []
start = 0
for i,offset in enumerate(wave_data['sIndices']):
if offset > start:
chars = wdata[start:offset]
strings.append(b''.join(chars))
start = offset
elif offset == start:
strings.append(b'')
else:
raise ValueError((offset, wave_data['sIndices']))
wdata = np.array(strings)
shape = [n for n in wave_header['nDim'] if n > 0] or (0,)
try:
wdata = wdata.reshape(shape)
except ValueError:
_LOG.error(
'could not reshape strings from {} to {}'.format(
shape, wdata.shape))
raise
wave_data['wData'] = wdata
class DynamicVersionField (_DynamicField):
def pre_pack(self, parents, byte_order):
raise NotImplementedError()
def post_unpack(self, parents, data):
wave_structure = parents[-1]
wave_data = self._get_structure_data(parents, data, wave_structure)
version = wave_data['version']
if wave_structure.byte_order in '@=':
need_to_reorder_bytes = _need_to_reorder_bytes(version)
wave_structure.byte_order = _byte_order(need_to_reorder_bytes)
_LOG.debug(
'get byte order from version: {} (reorder? {})'.format(
wave_structure.byte_order, need_to_reorder_bytes))
else:
need_to_reorder_bytes = False
old_format = wave_structure.fields[-1].format
if version == 1:
wave_structure.fields[-1].format = Wave1
elif version == 2:
wave_structure.fields[-1].format = Wave2
elif version == 3:
wave_structure.fields[-1].format = Wave3
elif version == 5:
wave_structure.fields[-1].format = Wave5
elif not need_to_reorder_bytes:
raise ValueError(
'invalid binary wave version: {}'.format(version))
if wave_structure.fields[-1].format != old_format:
_LOG.debug('change wave headers from {} to {}'.format(
old_format, wave_structure.fields[-1].format))
wave_structure.setup()
elif need_to_reorder_bytes:
wave_structure.setup()
# we might need to unpack again with the new byte order
return need_to_reorder_bytes
class DynamicWaveField (_DynamicField):
def post_unpack(self, parents, data):
return
raise NotImplementedError() # TODO
checksum_size = bin.size + wave.size
wave_structure = parents[-1]
if version == 5:
# Version 5 checksum does not include the wData field.
checksum_size -= 4
c = _checksum(b, parents[-1].byte_order, 0, checksum_size)
if c != 0:
raise ValueError(
('This does not appear to be a valid Igor binary wave file. '
'Error in checksum: should be 0, is {}.').format(c))
Wave1 = _DynamicStructure(
name='Wave1',
fields=[
_Field(BinHeader1, 'bin_header', help='Binary wave header'),
_Field(WaveHeader2, 'wave_header', help='Wave header'),
DynamicWaveDataField1('f', 'wData', help='The start of the array of waveform data.', count=0, array=True),
])
Wave2 = _DynamicStructure(
name='Wave2',
fields=[
_Field(BinHeader2, 'bin_header', help='Binary wave header'),
_Field(WaveHeader2, 'wave_header', help='Wave header'),
DynamicWaveDataField1('f', 'wData', help='The start of the array of waveform data.', count=0, array=True),
_Field('x', 'padding', help='16 bytes of padding in versions 2 and 3.', count=16, array=True),
DynamicWaveNoteField('c', 'note', help='Optional wave note data', count=0, array=True),
])
Wave3 = _DynamicStructure(
name='Wave3',
fields=[
_Field(BinHeader3, 'bin_header', help='Binary wave header'),
_Field(WaveHeader2, 'wave_header', help='Wave header'),
DynamicWaveDataField1('f', 'wData', help='The start of the array of waveform data.', count=0, array=True),
_Field('x', 'padding', help='16 bytes of padding in versions 2 and 3.', count=16, array=True),
DynamicWaveNoteField('c', 'note', help='Optional wave note data', count=0, array=True),
DynamicDependencyFormulaField('c', 'formula', help='Optional wave dependency formula', count=0, array=True),
])
Wave5 = _DynamicStructure(
name='Wave5',
fields=[
_Field(BinHeader5, 'bin_header', help='Binary wave header'),
_Field(WaveHeader5, 'wave_header', help='Wave header'),
DynamicWaveDataField5('f', 'wData', help='The start of the array of waveform data.', count=0, array=True),
DynamicDependencyFormulaField('c', 'formula', help='Optional wave dependency formula.', count=0, array=True),
DynamicWaveNoteField('c', 'note', help='Optional wave note data.', count=0, array=True),
DynamicDataUnitsField('c', 'data_units', help='Optional extended data units data.', count=0, array=True),
DynamicDimensionUnitsField('c', 'dimension_units', help='Optional dimension label data', count=0, array=True),
DynamicLabelsField('c', 'labels', help="Optional dimension label data", count=0, array=True),
DynamicStringIndicesDataField('P', 'sIndices', help='Dynamic string indices for text waves.', count=0, array=True),
])
Wave = _DynamicStructure(
name='Wave',
fields=[
DynamicVersionField('h', 'version', help='Version number for backwards compatibility.'),
DynamicWaveField(Wave1, 'wave', help='The rest of the wave data.'),
])
def load(filename):
if hasattr(filename, 'read'):
f = filename # filename is actually a stream object
else:
f = open(filename, 'rb')
try:
Wave.byte_order = '='
Wave.setup()
data = Wave.unpack_stream(f)
finally:
if not hasattr(filename, 'read'):
f.close()
return data
def save(filename):
raise NotImplementedError
| 0.576065 | 0.265714 |
"Common code for scripts distributed with the `igor` package."
from __future__ import absolute_import
import argparse as _argparse
import logging as _logging
import sys as _sys
try:
import matplotlib as _matplotlib
import matplotlib.pyplot as _matplotlib_pyplot
except ImportError as _matplotlib_import_error:
_matplotlib = None
from . import __version__
from . import LOG as _LOG
class Script (object):
log_levels = [_logging.ERROR, _logging.WARNING, _logging.INFO, _logging.DEBUG]
def __init__(self, description=None, filetype='IGOR Binary Wave (.ibw) file'):
self.parser = _argparse.ArgumentParser(description=description)
self.parser.add_argument(
'--version', action='version',
version='%(prog)s {}'.format(__version__))
self.parser.add_argument(
'-f', '--infile', metavar='FILE', default='-',
help='input {}'.format(filetype))
self.parser.add_argument(
'-o', '--outfile', metavar='FILE', default='-',
help='file for ASCII output')
self.parser.add_argument(
'-p', '--plot', action='store_const', const=True,
help='use Matplotlib to plot any IGOR waves')
self.parser.add_argument(
'-V', '--verbose', action='count', default=0,
help='increment verbosity')
self._num_plots = 0
def run(self, *args, **kwargs):
args = self.parser.parse_args(*args, **kwargs)
if args.infile == '-':
args.infile = _sys.stdin
if args.outfile == '-':
args.outfile = _sys.stdout
if args.verbose > 1:
log_level = self.log_levels[min(args.verbose-1, len(self.log_levels)-1)]
_LOG.setLevel(log_level)
self._run(args)
self.display_plots()
def _run(self, args):
raise NotImplementedError()
def plot_wave(self, args, wave, title=None):
if not args.plot:
return # no-op
if not _matplotlib:
raise _matplotlib_import_error
if title is None:
title = wave['wave']['wave_header']['bname']
figure = _matplotlib_pyplot.figure()
axes = figure.add_subplot(1, 1, 1)
axes.set_title(title)
try:
axes.plot(wave['wave']['wData'], 'r.')
except ValueError as error:
_LOG.error('error plotting {}: {}'.format(title, error))
self._num_plots += 1
def display_plots(self):
if self._num_plots:
_matplotlib_pyplot.show()
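# Illustrative subclass: a minimal sketch of how a command-line tool might
# build on Script -- load an IGOR binary wave from args.infile, print its
# name, and hand the wave to plot_wave. The relative import of
# binarywave.load is an assumption about the surrounding package layout.
from .binarywave import load as _load_ibw

class ExampleWaveScript (Script):
    def _run(self, args):
        wave = _load_ibw(args.infile)
        _sys.stdout.write('{}\n'.format(
            wave['wave']['wave_header']['bname']))
        self.plot_wave(args, wave)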
|
sci-memex
|
/sci_memex-0.0.3-py3-none-any.whl/memex/translators/igor/script.py
|
script.py
|
"Common code for scripts distributed with the `igor` package."
from __future__ import absolute_import
import argparse as _argparse
import logging as _logging
import sys as _sys
try:
import matplotlib as _matplotlib
import matplotlib.pyplot as _matplotlib_pyplot
except ImportError as _matplotlib_import_error:
_matplotlib = None
from . import __version__
from . import LOG as _LOG
class Script (object):
log_levels = [_logging.ERROR, _logging.WARNING, _logging.INFO, _logging.DEBUG]
def __init__(self, description=None, filetype='IGOR Binary Wave (.ibw) file'):
self.parser = _argparse.ArgumentParser(description=description)
self.parser.add_argument(
'--version', action='version',
version='%(prog)s {}'.format(__version__))
self.parser.add_argument(
'-f', '--infile', metavar='FILE', default='-',
help='input {}'.format(filetype))
self.parser.add_argument(
'-o', '--outfile', metavar='FILE', default='-',
help='file for ASCII output')
self.parser.add_argument(
'-p', '--plot', action='store_const', const=True,
help='use Matplotlib to plot any IGOR waves')
self.parser.add_argument(
'-V', '--verbose', action='count', default=0,
help='increment verbosity')
self._num_plots = 0
def run(self, *args, **kwargs):
args = self.parser.parse_args(*args, **kwargs)
if args.infile == '-':
args.infile = _sys.stdin
if args.outfile == '-':
args.outfile = _sys.stdout
if args.verbose > 1:
log_level = self.log_levels[min(args.verbose-1, len(self.log_levels)-1)]
_LOG.setLevel(log_level)
self._run(args)
self.display_plots()
def _run(self, args):
raise NotImplementedError()
def plot_wave(self, args, wave, title=None):
if not args.plot:
return # no-op
if not _matplotlib:
raise _matplotlib_import_error
if title is None:
title = wave['wave']['wave_header']['bname']
figure = _matplotlib_pyplot.figure()
axes = figure.add_subplot(1, 1, 1)
axes.set_title(title)
try:
axes.plot(wave['wave']['wData'], 'r.')
except ValueError as error:
_LOG.error('error plotting {}: {}'.format(title, error))
pass
self._num_plots += 1
def display_plots(self):
if self._num_plots:
_matplotlib_pyplot.show()
| 0.659844 | 0.109563 |
"Read IGOR Packed Experiment files files into records."
from . import LOG as _LOG
from .struct import Structure as _Structure
from .struct import Field as _Field
from .util import byte_order as _byte_order
from .util import need_to_reorder_bytes as _need_to_reorder_bytes
from .util import _bytes
from .record import RECORD_TYPE as _RECORD_TYPE
from .record.base import UnknownRecord as _UnknownRecord
from .record.base import UnusedRecord as _UnusedRecord
from .record.folder import FolderStartRecord as _FolderStartRecord
from .record.folder import FolderEndRecord as _FolderEndRecord
from .record.variables import VariablesRecord as _VariablesRecord
from .record.wave import WaveRecord as _WaveRecord
# From PTN003:
# Igor writes other kinds of records in a packed experiment file, for
# storing things like pictures, page setup records, and miscellaneous
# settings. The format for these records is quite complex and is not
# described in PTN003. If you are writing a program to read packed
# files, you must skip any record with a record type that is not
# listed above.
PackedFileRecordHeader = _Structure(
name='PackedFileRecordHeader',
fields=[
_Field('H', 'recordType', help='Record type plus superceded flag.'),
_Field('h', 'version', help='Version information depends on the type of record.'),
_Field('l', 'numDataBytes', help='Number of data bytes in the record following this record header.'),
])
#CR_STR = '\x15' (\r)
PACKEDRECTYPE_MASK = 0x7FFF # Record type = (recordType & PACKEDRECTYPE_MASK)
SUPERCEDED_MASK = 0x8000 # Bit is set if the record is superceded by
# a later record in the packed file.
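# Illustrative helper: a minimal sketch of how the two masks above decode a
# recordType value -- the low 15 bits select the record type and the high bit
# marks a record that is superceded by a later record in the file.
def _split_record_type(record_type):
    return (record_type & PACKEDRECTYPE_MASK,
            bool(record_type & SUPERCEDED_MASK))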
def load(filename, strict=True, ignore_unknown=True):
_LOG.debug('loading a packed experiment file from {}'.format(filename))
records = []
if hasattr(filename, 'read'):
f = filename # filename is actually a stream object
else:
f = open(filename, 'rb')
byte_order = None
initial_byte_order = '='
try:
while True:
PackedFileRecordHeader.byte_order = initial_byte_order
PackedFileRecordHeader.setup()
b = bytes(f.read(PackedFileRecordHeader.size))
if not b:
break
if len(b) < PackedFileRecordHeader.size:
raise ValueError(
('not enough data for the next record header ({} < {})'
).format(len(b), PackedFileRecordHeader.size))
_LOG.debug('reading a new packed experiment file record')
header = PackedFileRecordHeader.unpack_from(b)
if header['version'] and not byte_order:
need_to_reorder = _need_to_reorder_bytes(header['version'])
byte_order = initial_byte_order = _byte_order(need_to_reorder)
_LOG.debug(
'get byte order from version: {} (reorder? {})'.format(
byte_order, need_to_reorder))
if need_to_reorder:
PackedFileRecordHeader.byte_order = byte_order
PackedFileRecordHeader.setup()
header = PackedFileRecordHeader.unpack_from(b)
_LOG.debug(
'reordered version: {}'.format(header['version']))
data = bytes(f.read(header['numDataBytes']))
if len(data) < header['numDataBytes']:
raise ValueError(
('not enough data for the next record ({} < {})'
                     ).format(len(data), header['numDataBytes']))
record_type = _RECORD_TYPE.get(
header['recordType'] & PACKEDRECTYPE_MASK, _UnknownRecord)
_LOG.debug('the new record has type {} ({}).'.format(
record_type, header['recordType']))
if record_type in [_UnknownRecord, _UnusedRecord
] and not ignore_unknown:
                raise KeyError('unknown record type {}'.format(
header['recordType']))
records.append(record_type(header, data, byte_order=byte_order))
finally:
_LOG.debug('finished loading {} records from {}'.format(
len(records), filename))
if not hasattr(filename, 'read'):
f.close()
filesystem = _build_filesystem(records)
return (records, filesystem)
def _build_filesystem(records):
# From PTN003:
"""The name must be a valid Igor data folder name. See Object
Names in the Igor Reference help file for name rules.
When Igor Pro reads the data folder start record, it creates a new
data folder with the specified name. Any subsequent variable, wave
or data folder start records cause Igor to create data objects in
this new data folder, until Igor Pro reads a corresponding data
folder end record."""
# From the Igor Manual, chapter 2, section 8, page II-123
# http://www.wavemetrics.net/doc/igorman/II-08%20Data%20Folders.pdf
"""Like the Macintosh file system, Igor Pro's data folders use the
colon character (:) to separate components of a path to an
object. This is analogous to Unix which uses / and Windows which
uses \. (Reminder: Igor's data folders exist wholly in memory
while an experiment is open. It is not a disk file system!)
A data folder named "root" always exists and contains all other
data folders.
"""
# From the Igor Manual, chapter 4, page IV-2
# http://www.wavemetrics.net/doc/igorman/IV-01%20Commands.pdf
"""For waves and data folders only, you can also use "liberal"
names. Liberal names can include almost any character, including
spaces and dots (see Liberal Object Names on page III-415 for
details).
"""
# From the Igor Manual, chapter 3, section 16, page III-416
# http://www.wavemetrics.net/doc/igorman/III-16%20Miscellany.pdf
"""Liberal names have the same rules as standard names except you
may use any character except control characters and the following:
" ' : ;
"""
filesystem = {'root': {}}
dir_stack = [('root', filesystem['root'])]
for record in records:
cwd = dir_stack[-1][-1]
if isinstance(record, _FolderStartRecord):
name = record.null_terminated_text
cwd[name] = {}
dir_stack.append((name, cwd[name]))
elif isinstance(record, _FolderEndRecord):
dir_stack.pop()
elif isinstance(record, (_VariablesRecord, _WaveRecord)):
if isinstance(record, _VariablesRecord):
sys_vars = record.variables['variables']['sysVars'].keys()
for filename,value in record.namespace.items():
if len(dir_stack) > 1 and filename in sys_vars:
# From PTN003:
"""When reading a packed file, any system
variables encountered while the current data
folder is not the root should be ignored.
"""
continue
_check_filename(dir_stack, filename)
cwd[filename] = value
else: # WaveRecord
filename = record.wave['wave']['wave_header']['bname']
_check_filename(dir_stack, filename)
cwd[filename] = record
return filesystem
def _check_filename(dir_stack, filename):
cwd = dir_stack[-1][-1]
if filename in cwd:
raise ValueError('collision on name {} in {}'.format(
filename, ':'.join(d for d,cwd in dir_stack)))
def walk(filesystem, callback, dirpath=None):
"""Walk a packed experiment filesystem, operating on each key,value pair.
"""
if dirpath is None:
dirpath = []
for key,value in sorted((_bytes(k),v) for k,v in filesystem.items()):
callback(dirpath, key, value)
if isinstance(value, dict):
walk(filesystem=value, callback=callback, dirpath=dirpath+[key])
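# Illustrative usage sketch: loading a packed experiment and printing every
# path in the reconstructed filesystem. 'example.pxp' is a hypothetical
# placeholder path.
def _example_list_paths(path='example.pxp'):
    records, filesystem = load(path)
    def _print_entry(dirpath, key, value):
        print(b':'.join(dirpath + [key]).decode('utf-8', 'replace'))
    walk(filesystem, _print_entry)
    return records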
|
sci-memex
|
/sci_memex-0.0.3-py3-none-any.whl/memex/translators/igor/packed.py
|
packed.py
|
"Read IGOR Packed Experiment files files into records."
from . import LOG as _LOG
from .struct import Structure as _Structure
from .struct import Field as _Field
from .util import byte_order as _byte_order
from .util import need_to_reorder_bytes as _need_to_reorder_bytes
from .util import _bytes
from .record import RECORD_TYPE as _RECORD_TYPE
from .record.base import UnknownRecord as _UnknownRecord
from .record.base import UnusedRecord as _UnusedRecord
from .record.folder import FolderStartRecord as _FolderStartRecord
from .record.folder import FolderEndRecord as _FolderEndRecord
from .record.variables import VariablesRecord as _VariablesRecord
from .record.wave import WaveRecord as _WaveRecord
# From PTN003:
# Igor writes other kinds of records in a packed experiment file, for
# storing things like pictures, page setup records, and miscellaneous
# settings. The format for these records is quite complex and is not
# described in PTN003. If you are writing a program to read packed
# files, you must skip any record with a record type that is not
# listed above.
PackedFileRecordHeader = _Structure(
name='PackedFileRecordHeader',
fields=[
_Field('H', 'recordType', help='Record type plus superceded flag.'),
_Field('h', 'version', help='Version information depends on the type of record.'),
_Field('l', 'numDataBytes', help='Number of data bytes in the record following this record header.'),
])
#CR_STR = '\x15' (\r)
PACKEDRECTYPE_MASK = 0x7FFF # Record type = (recordType & PACKEDREC_TYPE_MASK)
SUPERCEDED_MASK = 0x8000 # Bit is set if the record is superceded by
# a later record in the packed file.
def load(filename, strict=True, ignore_unknown=True):
_LOG.debug('loading a packed experiment file from {}'.format(filename))
records = []
if hasattr(filename, 'read'):
f = filename # filename is actually a stream object
else:
f = open(filename, 'rb')
byte_order = None
initial_byte_order = '='
try:
while True:
PackedFileRecordHeader.byte_order = initial_byte_order
PackedFileRecordHeader.setup()
b = bytes(f.read(PackedFileRecordHeader.size))
if not b:
break
if len(b) < PackedFileRecordHeader.size:
raise ValueError(
('not enough data for the next record header ({} < {})'
).format(len(b), PackedFileRecordHeader.size))
_LOG.debug('reading a new packed experiment file record')
header = PackedFileRecordHeader.unpack_from(b)
if header['version'] and not byte_order:
need_to_reorder = _need_to_reorder_bytes(header['version'])
byte_order = initial_byte_order = _byte_order(need_to_reorder)
_LOG.debug(
'get byte order from version: {} (reorder? {})'.format(
byte_order, need_to_reorder))
if need_to_reorder:
PackedFileRecordHeader.byte_order = byte_order
PackedFileRecordHeader.setup()
header = PackedFileRecordHeader.unpack_from(b)
_LOG.debug(
'reordered version: {}'.format(header['version']))
data = bytes(f.read(header['numDataBytes']))
if len(data) < header['numDataBytes']:
raise ValueError(
('not enough data for the next record ({} < {})'
).format(len(b), header['numDataBytes']))
record_type = _RECORD_TYPE.get(
header['recordType'] & PACKEDRECTYPE_MASK, _UnknownRecord)
_LOG.debug('the new record has type {} ({}).'.format(
record_type, header['recordType']))
if record_type in [_UnknownRecord, _UnusedRecord
] and not ignore_unknown:
raise KeyError('unkown record type {}'.format(
header['recordType']))
records.append(record_type(header, data, byte_order=byte_order))
finally:
_LOG.debug('finished loading {} records from {}'.format(
len(records), filename))
if not hasattr(filename, 'read'):
f.close()
filesystem = _build_filesystem(records)
return (records, filesystem)
def _build_filesystem(records):
# From PTN003:
"""The name must be a valid Igor data folder name. See Object
Names in the Igor Reference help file for name rules.
When Igor Pro reads the data folder start record, it creates a new
data folder with the specified name. Any subsequent variable, wave
or data folder start records cause Igor to create data objects in
this new data folder, until Igor Pro reads a corresponding data
folder end record."""
# From the Igor Manual, chapter 2, section 8, page II-123
# http://www.wavemetrics.net/doc/igorman/II-08%20Data%20Folders.pdf
"""Like the Macintosh file system, Igor Pro's data folders use the
colon character (:) to separate components of a path to an
object. This is analogous to Unix which uses / and Windows which
uses \. (Reminder: Igor's data folders exist wholly in memory
while an experiment is open. It is not a disk file system!)
A data folder named "root" always exists and contains all other
data folders.
"""
# From the Igor Manual, chapter 4, page IV-2
# http://www.wavemetrics.net/doc/igorman/IV-01%20Commands.pdf
"""For waves and data folders only, you can also use "liberal"
names. Liberal names can include almost any character, including
spaces and dots (see Liberal Object Names on page III-415 for
details).
"""
# From the Igor Manual, chapter 3, section 16, page III-416
# http://www.wavemetrics.net/doc/igorman/III-16%20Miscellany.pdf
"""Liberal names have the same rules as standard names except you
may use any character except control characters and the following:
" ' : ;
"""
filesystem = {'root': {}}
dir_stack = [('root', filesystem['root'])]
for record in records:
cwd = dir_stack[-1][-1]
if isinstance(record, _FolderStartRecord):
name = record.null_terminated_text
cwd[name] = {}
dir_stack.append((name, cwd[name]))
elif isinstance(record, _FolderEndRecord):
dir_stack.pop()
elif isinstance(record, (_VariablesRecord, _WaveRecord)):
if isinstance(record, _VariablesRecord):
sys_vars = record.variables['variables']['sysVars'].keys()
for filename,value in record.namespace.items():
if len(dir_stack) > 1 and filename in sys_vars:
# From PTN003:
"""When reading a packed file, any system
variables encountered while the current data
folder is not the root should be ignored.
"""
continue
_check_filename(dir_stack, filename)
cwd[filename] = value
else: # WaveRecord
filename = record.wave['wave']['wave_header']['bname']
_check_filename(dir_stack, filename)
cwd[filename] = record
return filesystem
def _check_filename(dir_stack, filename):
cwd = dir_stack[-1][-1]
if filename in cwd:
raise ValueError('collision on name {} in {}'.format(
filename, ':'.join(d for d,cwd in dir_stack)))
def walk(filesystem, callback, dirpath=None):
"""Walk a packed experiment filesystem, operating on each key,value pair.
"""
if dirpath is None:
dirpath = []
for key,value in sorted((_bytes(k),v) for k,v in filesystem.items()):
callback(dirpath, key, value)
if isinstance(value, dict):
walk(filesystem=value, callback=callback, dirpath=dirpath+[key])
| 0.421433 | 0.260648 |
from __future__ import absolute_import
import io as _io
import locale as _locale
import re as _re
import sys as _sys
import numpy as _numpy
from .binarywave import MAXDIMS as _MAXDIMS
from .packed import load as _load
from .record.base import UnknownRecord as _UnknownRecord
from .record.folder import FolderStartRecord as _FolderStartRecord
from .record.folder import FolderEndRecord as _FolderEndRecord
from .record.history import HistoryRecord as _HistoryRecord
from .record.history import GetHistoryRecord as _GetHistoryRecord
from .record.history import RecreationRecord as _RecreationRecord
from .record.packedfile import PackedFileRecord as _PackedFileRecord
from .record.procedure import ProcedureRecord as _ProcedureRecord
from .record.wave import WaveRecord as _WaveRecord
from .record.variables import VariablesRecord as _VariablesRecord
__version__='0.10'
ENCODING = _locale.getpreferredencoding() or _sys.getdefaultencoding()
PYKEYWORDS = set(('and','as','assert','break','class','continue',
'def','elif','else','except','exec','finally',
'for','global','if','import','in','is','lambda',
'or','pass','print','raise','return','try','with',
'yield'))
PYID = _re.compile(r"^[^\d\W]\w*$", _re.UNICODE)
def valid_identifier(s):
"""Check if a name is a valid identifier"""
return PYID.match(s) and s not in PYKEYWORDS
class IgorObject(object):
""" Parent class for all objects the parser can return """
pass
class Variables(IgorObject):
"""
Contains system numeric variables (e.g., K0) and user numeric and string variables.
"""
def __init__(self, record):
self.sysvar = record.variables['variables']['sysVars']
self.uservar = record.variables['variables']['userVars']
self.userstr = record.variables['variables']['userStrs']
self.depvar = record.variables['variables'].get('dependentVars', {})
self.depstr = record.variables['variables'].get('dependentStrs', {})
def format(self, indent=0):
return " "*indent+"<Variables: system %d, user %d, dependent %s>"\
%(len(self.sysvar),
len(self.uservar)+len(self.userstr),
len(self.depvar)+len(self.depstr))
class History(IgorObject):
"""
Contains the experiment's history as plain text.
"""
def __init__(self, data):
self.data = data
def format(self, indent=0):
return " "*indent+"<History>"
class Wave(IgorObject):
"""
Contains the data for a wave
"""
def __init__(self, record):
d = record.wave['wave']
self.name = d['wave_header']['bname'].decode(ENCODING)
self.data = d['wData']
self.fs = d['wave_header']['fsValid']
self.fstop = d['wave_header']['topFullScale']
self.fsbottom = d['wave_header']['botFullScale']
if record.wave['version'] in [1,2,3]:
dims = [d['wave_header']['npnts']] + [0]*(_MAXDIMS-1)
sfA = [d['wave_header']['hsA']] + [0]*(_MAXDIMS-1)
sfB = [d['wave_header']['hsB']] + [0]*(_MAXDIMS-1)
self.data_units = [d['wave_header']['dataUnits']]
self.axis_units = [d['wave_header']['xUnits']]
else:
dims = d['wave_header']['nDim']
sfA = d['wave_header']['sfA']
sfB = d['wave_header']['sfB']
# TODO find example with multiple data units
self.data_units = [d['data_units'].decode(ENCODING)]
self.axis_units = [d['dimension_units'].decode(ENCODING)]
self.data_units.extend(['']*(_MAXDIMS-len(self.data_units)))
self.data_units = tuple(self.data_units)
self.axis_units.extend(['']*(_MAXDIMS-len(self.axis_units)))
self.axis_units = tuple(self.axis_units)
self.axis = [_numpy.linspace(a,b,c) for a,b,c in zip(sfA, sfB, dims)]
self.formula = d.get('formula', '')
self.notes = d.get('note', '')
def format(self, indent=0):
if isinstance(self.data, list):
type,size = "text", "%d"%len(self.data)
else:
type,size = "data", "x".join(str(d) for d in self.data.shape)
return " "*indent+"%s %s (%s)"%(self.name, type, size)
def __array__(self):
return self.data
__repr__ = __str__ = lambda s: "<igor.Wave %s>" % s.format()
class Recreation(IgorObject):
"""
Contains the experiment's recreation procedures as plain text.
"""
def __init__(self, data):
self.data = data
def format(self, indent=0):
return " "*indent + "<Recreation>"
class Procedure(IgorObject):
"""
Contains the experiment's main procedure window text as plain text.
"""
def __init__(self, data):
self.data = data
def format(self, indent=0):
return " "*indent + "<Procedure>"
class GetHistory(IgorObject):
"""
    Not a real record but rather a message to go back and read the history text.
The reason for GetHistory is that IGOR runs Recreation when it loads the
datafile. This puts entries in the history that shouldn't be there. The
GetHistory entry simply says that the Recreation has run, and the History
can be restored from the previously saved value.
"""
def __init__(self, data):
self.data = data
def format(self, indent=0):
return " "*indent + "<GetHistory>"
class PackedFile(IgorObject):
"""
Contains the data for a procedure file or notebook in packed form.
"""
def __init__(self, data):
self.data = data
def format(self, indent=0):
return " "*indent + "<PackedFile>"
class Unknown(IgorObject):
"""
Record type not documented in PTN003/TN003.
"""
def __init__(self, data, type):
self.data = data
self.type = type
def format(self, indent=0):
return " "*indent + "<Unknown type %s>"%self.type
class Folder(IgorObject):
"""
Hierarchical record container.
"""
def __init__(self, path):
self.name = path[-1]
self.path = path
self.children = []
def __getitem__(self, key):
if isinstance(key, int):
return self.children[key]
else:
for r in self.children:
if isinstance(r, (Folder,Wave)) and r.name == key:
return r
raise KeyError("Folder %s does not exist"%key)
def __str__(self):
return "<igor.Folder %s>" % "/".join(self.path)
__repr__ = __str__
def append(self, record):
"""
Add a record to the folder.
"""
self.children.append(record)
try:
# Record may not have a name, the name may be invalid, or it
# may already be in use. The noname case will be covered by
# record.name raising an attribute error. The others we need
# to test for explicitly.
if valid_identifier(record.name) and not hasattr(self, record.name):
setattr(self, record.name, record)
except AttributeError:
pass
def format(self, indent=0):
parent = " "*indent+self.name
children = [r.format(indent=indent+2) for r in self.children]
return "\n".join([parent]+children)
def loads(s, **kwargs):
"""Load an igor file from string"""
stream = _io.BytesIO(s)
return load(stream, **kwargs)
def load(filename, **kwargs):
"""Load an igor file"""
try:
packed_experiment = _load(filename)
except ValueError as e:
if e.args[0].startswith('not enough data for the next record header'):
raise IOError('invalid record header; bad pxp file?')
elif e.args[0].startswith('not enough data for the next record'):
raise IOError('final record too long; bad pxp file?')
raise
return _convert(packed_experiment, **kwargs)
def _convert(packed_experiment, ignore_unknown=True):
records, filesystem = packed_experiment
stack = [Folder(path=['root'])]
for record in records:
if isinstance(record, _UnknownRecord):
if ignore_unknown:
continue
else:
r = Unknown(record.data, type=record.header['recordType'])
elif isinstance(record, _GetHistoryRecord):
r = GetHistory(record.text)
elif isinstance(record, _HistoryRecord):
r = History(record.text)
elif isinstance(record, _PackedFileRecord):
r = PackedFile(record.text)
elif isinstance(record, _ProcedureRecord):
r = Procedure(record.text)
elif isinstance(record, _RecreationRecord):
r = Recreation(record.text)
elif isinstance(record, _VariablesRecord):
r = Variables(record)
elif isinstance(record, _WaveRecord):
r = Wave(record)
else:
r = None
if isinstance(record, _FolderStartRecord):
path = stack[-1].path + [
record.null_terminated_text.decode(ENCODING)]
folder = Folder(path)
stack[-1].append(folder)
stack.append(folder)
elif isinstance(record, _FolderEndRecord):
stack.pop()
elif r is None:
raise NotImplementedError(record)
else:
stack[-1].append(r)
if len(stack) != 1:
raise IOError("FolderStart records do not match FolderEnd records")
return stack[0]
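# Illustrative usage sketch of the high-level API: load() returns the root
# Folder, whose format() method prints an indented listing and whose children
# hold Wave, Variables, History, etc. 'example.pxp' is a hypothetical
# placeholder path.
def _example_summarize(path='example.pxp'):
    root = load(path)
    print(root.format())
    return [child for child in root.children if isinstance(child, Wave)]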
|
sci-memex
|
/sci_memex-0.0.3-py3-none-any.whl/memex/translators/igor/igorpy.py
|
igorpy.py
|
from __future__ import absolute_import
import io as _io
import locale as _locale
import re as _re
import sys as _sys
import numpy as _numpy
from .binarywave import MAXDIMS as _MAXDIMS
from .packed import load as _load
from .record.base import UnknownRecord as _UnknownRecord
from .record.folder import FolderStartRecord as _FolderStartRecord
from .record.folder import FolderEndRecord as _FolderEndRecord
from .record.history import HistoryRecord as _HistoryRecord
from .record.history import GetHistoryRecord as _GetHistoryRecord
from .record.history import RecreationRecord as _RecreationRecord
from .record.packedfile import PackedFileRecord as _PackedFileRecord
from .record.procedure import ProcedureRecord as _ProcedureRecord
from .record.wave import WaveRecord as _WaveRecord
from .record.variables import VariablesRecord as _VariablesRecord
__version__='0.10'
ENCODING = _locale.getpreferredencoding() or _sys.getdefaultencoding()
PYKEYWORDS = set(('and','as','assert','break','class','continue',
'def','elif','else','except','exec','finally',
'for','global','if','import','in','is','lambda',
'or','pass','print','raise','return','try','with',
'yield'))
PYID = _re.compile(r"^[^\d\W]\w*$", _re.UNICODE)
def valid_identifier(s):
"""Check if a name is a valid identifier"""
return PYID.match(s) and s not in PYKEYWORDS
class IgorObject(object):
""" Parent class for all objects the parser can return """
pass
class Variables(IgorObject):
"""
Contains system numeric variables (e.g., K0) and user numeric and string variables.
"""
def __init__(self, record):
self.sysvar = record.variables['variables']['sysVars']
self.uservar = record.variables['variables']['userVars']
self.userstr = record.variables['variables']['userStrs']
self.depvar = record.variables['variables'].get('dependentVars', {})
self.depstr = record.variables['variables'].get('dependentStrs', {})
def format(self, indent=0):
return " "*indent+"<Variables: system %d, user %d, dependent %s>"\
%(len(self.sysvar),
len(self.uservar)+len(self.userstr),
len(self.depvar)+len(self.depstr))
class History(IgorObject):
"""
Contains the experiment's history as plain text.
"""
def __init__(self, data):
self.data = data
def format(self, indent=0):
return " "*indent+"<History>"
class Wave(IgorObject):
"""
Contains the data for a wave
"""
def __init__(self, record):
d = record.wave['wave']
self.name = d['wave_header']['bname'].decode(ENCODING)
self.data = d['wData']
self.fs = d['wave_header']['fsValid']
self.fstop = d['wave_header']['topFullScale']
self.fsbottom = d['wave_header']['botFullScale']
if record.wave['version'] in [1,2,3]:
dims = [d['wave_header']['npnts']] + [0]*(_MAXDIMS-1)
sfA = [d['wave_header']['hsA']] + [0]*(_MAXDIMS-1)
sfB = [d['wave_header']['hsB']] + [0]*(_MAXDIMS-1)
self.data_units = [d['wave_header']['dataUnits']]
self.axis_units = [d['wave_header']['xUnits']]
else:
dims = d['wave_header']['nDim']
sfA = d['wave_header']['sfA']
sfB = d['wave_header']['sfB']
# TODO find example with multiple data units
self.data_units = [d['data_units'].decode(ENCODING)]
self.axis_units = [d['dimension_units'].decode(ENCODING)]
self.data_units.extend(['']*(_MAXDIMS-len(self.data_units)))
self.data_units = tuple(self.data_units)
self.axis_units.extend(['']*(_MAXDIMS-len(self.axis_units)))
self.axis_units = tuple(self.axis_units)
self.axis = [_numpy.linspace(a,b,c) for a,b,c in zip(sfA, sfB, dims)]
self.formula = d.get('formula', '')
self.notes = d.get('note', '')
def format(self, indent=0):
if isinstance(self.data, list):
type,size = "text", "%d"%len(self.data)
else:
type,size = "data", "x".join(str(d) for d in self.data.shape)
return " "*indent+"%s %s (%s)"%(self.name, type, size)
def __array__(self):
return self.data
__repr__ = __str__ = lambda s: "<igor.Wave %s>" % s.format()
class Recreation(IgorObject):
"""
Contains the experiment's recreation procedures as plain text.
"""
def __init__(self, data):
self.data = data
def format(self, indent=0):
return " "*indent + "<Recreation>"
class Procedure(IgorObject):
"""
Contains the experiment's main procedure window text as plain text.
"""
def __init__(self, data):
self.data = data
def format(self, indent=0):
return " "*indent + "<Procedure>"
class GetHistory(IgorObject):
"""
    Not a real record but rather a message to go back and read the history text.
The reason for GetHistory is that IGOR runs Recreation when it loads the
datafile. This puts entries in the history that shouldn't be there. The
GetHistory entry simply says that the Recreation has run, and the History
can be restored from the previously saved value.
"""
def __init__(self, data):
self.data = data
def format(self, indent=0):
return " "*indent + "<GetHistory>"
class PackedFile(IgorObject):
"""
Contains the data for a procedure file or notebook in packed form.
"""
def __init__(self, data):
self.data = data
def format(self, indent=0):
return " "*indent + "<PackedFile>"
class Unknown(IgorObject):
"""
Record type not documented in PTN003/TN003.
"""
def __init__(self, data, type):
self.data = data
self.type = type
def format(self, indent=0):
return " "*indent + "<Unknown type %s>"%self.type
class Folder(IgorObject):
"""
Hierarchical record container.
"""
def __init__(self, path):
self.name = path[-1]
self.path = path
self.children = []
def __getitem__(self, key):
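        # Integer keys index children positionally; string keys look up a
        # child Folder or Wave by name.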
if isinstance(key, int):
return self.children[key]
else:
for r in self.children:
if isinstance(r, (Folder,Wave)) and r.name == key:
return r
raise KeyError("Folder %s does not exist"%key)
def __str__(self):
return "<igor.Folder %s>" % "/".join(self.path)
__repr__ = __str__
def append(self, record):
"""
Add a record to the folder.
"""
self.children.append(record)
try:
# Record may not have a name, the name may be invalid, or it
# may already be in use. The noname case will be covered by
# record.name raising an attribute error. The others we need
# to test for explicitly.
if valid_identifier(record.name) and not hasattr(self, record.name):
setattr(self, record.name, record)
except AttributeError:
pass
def format(self, indent=0):
parent = " "*indent+self.name
children = [r.format(indent=indent+2) for r in self.children]
return "\n".join([parent]+children)
def loads(s, **kwargs):
"""Load an igor file from string"""
stream = _io.BytesIO(s)
return load(stream, **kwargs)
def load(filename, **kwargs):
"""Load an igor file"""
try:
packed_experiment = _load(filename)
except ValueError as e:
if e.args[0].startswith('not enough data for the next record header'):
raise IOError('invalid record header; bad pxp file?')
elif e.args[0].startswith('not enough data for the next record'):
raise IOError('final record too long; bad pxp file?')
raise
return _convert(packed_experiment, **kwargs)
def _convert(packed_experiment, ignore_unknown=True):
records, filesystem = packed_experiment
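    # Wrap each low-level record in its high-level counterpart and nest the
    # results into Folder objects: a FolderStart record pushes a new Folder
    # onto the stack, a FolderEnd record pops it.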
stack = [Folder(path=['root'])]
for record in records:
if isinstance(record, _UnknownRecord):
if ignore_unknown:
continue
else:
r = Unknown(record.data, type=record.header['recordType'])
elif isinstance(record, _GetHistoryRecord):
r = GetHistory(record.text)
elif isinstance(record, _HistoryRecord):
r = History(record.text)
elif isinstance(record, _PackedFileRecord):
r = PackedFile(record.text)
elif isinstance(record, _ProcedureRecord):
r = Procedure(record.text)
elif isinstance(record, _RecreationRecord):
r = Recreation(record.text)
elif isinstance(record, _VariablesRecord):
r = Variables(record)
elif isinstance(record, _WaveRecord):
r = Wave(record)
else:
r = None
if isinstance(record, _FolderStartRecord):
path = stack[-1].path + [
record.null_terminated_text.decode(ENCODING)]
folder = Folder(path)
stack[-1].append(folder)
stack.append(folder)
elif isinstance(record, _FolderEndRecord):
stack.pop()
elif r is None:
raise NotImplementedError(record)
else:
stack[-1].append(r)
if len(stack) != 1:
raise IOError("FolderStart records do not match FolderEnd records")
return stack[0]
| 0.237487 | 0.199542 |
import io as _io
from .. import LOG as _LOG
from ..binarywave import TYPE_TABLE as _TYPE_TABLE
from ..binarywave import NullStaticStringField as _NullStaticStringField
from ..binarywave import DynamicStringField as _DynamicStringField
from ..struct import Structure as _Structure
from ..struct import DynamicStructure as _DynamicStructure
from ..struct import Field as _Field
from ..struct import DynamicField as _DynamicField
from ..util import byte_order as _byte_order
from ..util import need_to_reorder_bytes as _need_to_reorder_bytes
from .base import Record
class ListedStaticStringField (_NullStaticStringField):
"""Handle string conversions for multi-count dynamic parents.
If a field belongs to a multi-count dynamic parent, the parent is
called multiple times to parse each count, and the field's
post-unpack hook gets called after the field is unpacked during
each iteration. This requires alternative logic for getting and
setting the string data. The actual string formatting code is not
affected.
"""
def post_unpack(self, parents, data):
parent_structure = parents[-1]
parent_data = self._get_structure_data(parents, data, parent_structure)
d = self._normalize_string(parent_data[-1][self.name])
parent_data[-1][self.name] = d
class ListedDynamicStrDataField (_DynamicStringField, ListedStaticStringField):
_size_field = 'strLen'
_null_terminated = False
def _get_size_data(self, parents, data):
parent_structure = parents[-1]
parent_data = self._get_structure_data(parents, data, parent_structure)
return parent_data[-1][self._size_field]
class DynamicVarDataField (_DynamicField):
def __init__(self, *args, **kwargs):
if 'array' not in kwargs:
kwargs['array'] = True
super(DynamicVarDataField, self).__init__(*args, **kwargs)
def pre_pack(self, parents, data):
raise NotImplementedError()
def post_unpack(self, parents, data):
var_structure = parents[-1]
var_data = self._get_structure_data(parents, data, var_structure)
data = var_data[self.name]
d = {}
for i,value in enumerate(data):
key,value = self._normalize_item(i, value)
d[key] = value
var_data[self.name] = d
def _normalize_item(self, index, value):
raise NotImplementedError()
class DynamicSysVarField (DynamicVarDataField):
def _normalize_item(self, index, value):
name = 'K{}'.format(index)
return (name, value)
class DynamicUserVarField (DynamicVarDataField):
def _normalize_item(self, index, value):
name = value['name']
value = value['num']
return (name, value)
class DynamicUserStrField (DynamicVarDataField):
def _normalize_item(self, index, value):
name = value['name']
value = value['data']
return (name, value)
class DynamicVarNumField (_DynamicField):
def post_unpack(self, parents, data):
parent_structure = parents[-1]
parent_data = self._get_structure_data(parents, data, parent_structure)
d = self._normalize_numeric_variable(parent_data[-1][self.name])
parent_data[-1][self.name] = d
def _normalize_numeric_variable(self, num_var):
t = _TYPE_TABLE[num_var['numType']]
if num_var['numType'] % 2: # complex number
return t(complex(num_var['realPart'], num_var['imagPart']))
else:
return t(num_var['realPart'])
class DynamicFormulaField (_DynamicStringField):
_size_field = 'formulaLen'
_null_terminated = True
# From Variables.h
VarHeader1 = _Structure( # `version` field pulled out into VariablesRecord
name='VarHeader1',
fields=[
_Field('h', 'numSysVars', help='Number of system variables (K0, K1, ...).'),
_Field('h', 'numUserVars', help='Number of user numeric variables -- may be zero.'),
_Field('h', 'numUserStrs', help='Number of user string variables -- may be zero.'),
])
# From Variables.h
VarHeader2 = _Structure( # `version` field pulled out into VariablesRecord
name='VarHeader2',
fields=[
_Field('h', 'numSysVars', help='Number of system variables (K0, K1, ...).'),
_Field('h', 'numUserVars', help='Number of user numeric variables -- may be zero.'),
_Field('h', 'numUserStrs', help='Number of user string variables -- may be zero.'),
_Field('h', 'numDependentVars', help='Number of dependent numeric variables -- may be zero.'),
_Field('h', 'numDependentStrs', help='Number of dependent string variables -- may be zero.'),
])
# From Variables.h
UserStrVarRec1 = _DynamicStructure(
name='UserStrVarRec1',
fields=[
ListedStaticStringField('c', 'name', help='Name of the string variable.', count=32),
_Field('h', 'strLen', help='The real size of the following array.'),
ListedDynamicStrDataField('c', 'data'),
])
# From Variables.h
UserStrVarRec2 = _DynamicStructure(
name='UserStrVarRec2',
fields=[
ListedStaticStringField('c', 'name', help='Name of the string variable.', count=32),
_Field('l', 'strLen', help='The real size of the following array.'),
_Field('c', 'data'),
])
# From Variables.h
VarNumRec = _Structure(
name='VarNumRec',
fields=[
_Field('h', 'numType', help='Type from binarywave.TYPE_TABLE'),
_Field('d', 'realPart', help='The real part of the number.'),
_Field('d', 'imagPart', help='The imag part if the number is complex.'),
_Field('l', 'reserved', help='Reserved - set to zero.'),
])
# From Variables.h
UserNumVarRec = _DynamicStructure(
name='UserNumVarRec',
fields=[
ListedStaticStringField('c', 'name', help='Name of the string variable.', count=32),
_Field('h', 'type', help='0 = string, 1 = numeric.'),
DynamicVarNumField(VarNumRec, 'num', help='Type and value of the variable if it is numeric. Not used for string.'),
])
# From Variables.h
UserDependentVarRec = _DynamicStructure(
name='UserDependentVarRec',
fields=[
ListedStaticStringField('c', 'name', help='Name of the string variable.', count=32),
_Field('h', 'type', help='0 = string, 1 = numeric.'),
_Field(VarNumRec, 'num', help='Type and value of the variable if it is numeric. Not used for string.'),
_Field('h', 'formulaLen', help='The length of the dependency formula.'),
DynamicFormulaField('c', 'formula', help='Start of the dependency formula. A C string including null terminator.'),
])
class DynamicVarHeaderField (_DynamicField):
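    # Once the count header is unpacked, resize the sysVars/userVars/userStrs
    # (and, for version 2 headers, dependentVars/dependentStrs) array fields
    # so the remainder of the record is parsed with the correct counts.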
def pre_pack(self, parents, data):
raise NotImplementedError()
def post_unpack(self, parents, data):
var_structure = parents[-1]
var_data = self._get_structure_data(
parents, data, var_structure)
var_header_structure = self.format
data = var_data['var_header']
sys_vars_field = var_structure.get_field('sysVars')
sys_vars_field.count = data['numSysVars']
sys_vars_field.setup()
user_vars_field = var_structure.get_field('userVars')
user_vars_field.count = data['numUserVars']
user_vars_field.setup()
user_strs_field = var_structure.get_field('userStrs')
user_strs_field.count = data['numUserStrs']
user_strs_field.setup()
if 'numDependentVars' in data:
dependent_vars_field = var_structure.get_field('dependentVars')
dependent_vars_field.count = data['numDependentVars']
dependent_vars_field.setup()
dependent_strs_field = var_structure.get_field('dependentStrs')
dependent_strs_field.count = data['numDependentStrs']
dependent_strs_field.setup()
var_structure.setup()
Variables1 = _DynamicStructure(
name='Variables1',
fields=[
DynamicVarHeaderField(VarHeader1, 'var_header', help='Variables header'),
DynamicSysVarField('f', 'sysVars', help='System variables', count=0),
DynamicUserVarField(UserNumVarRec, 'userVars', help='User numeric variables', count=0),
DynamicUserStrField(UserStrVarRec1, 'userStrs', help='User string variables', count=0),
])
Variables2 = _DynamicStructure(
name='Variables2',
fields=[
DynamicVarHeaderField(VarHeader2, 'var_header', help='Variables header'),
DynamicSysVarField('f', 'sysVars', help='System variables', count=0),
DynamicUserVarField(UserNumVarRec, 'userVars', help='User numeric variables', count=0),
DynamicUserStrField(UserStrVarRec2, 'userStrs', help='User string variables', count=0),
_Field(UserDependentVarRec, 'dependentVars', help='Dependent numeric variables.', count=0, array=True),
_Field(UserDependentVarRec, 'dependentStrs', help='Dependent string variables.', count=0, array=True),
])
class DynamicVersionField (_DynamicField):
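    # The leading version field doubles as a byte-order probe (via
    # _need_to_reorder_bytes); once the order is settled, the matching
    # Variables1/Variables2 layout is substituted for the rest of the record.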
def pre_pack(self, parents, byte_order):
raise NotImplementedError()
def post_unpack(self, parents, data):
variables_structure = parents[-1]
variables_data = self._get_structure_data(
parents, data, variables_structure)
version = variables_data['version']
if variables_structure.byte_order in '@=':
need_to_reorder_bytes = _need_to_reorder_bytes(version)
variables_structure.byte_order = _byte_order(need_to_reorder_bytes)
_LOG.debug(
'get byte order from version: {} (reorder? {})'.format(
variables_structure.byte_order, need_to_reorder_bytes))
else:
need_to_reorder_bytes = False
old_format = variables_structure.fields[-1].format
if version == 1:
variables_structure.fields[-1].format = Variables1
elif version == 2:
variables_structure.fields[-1].format = Variables2
elif not need_to_reorder_bytes:
raise ValueError(
'invalid variables record version: {}'.format(version))
if variables_structure.fields[-1].format != old_format:
_LOG.debug('change variables record from {} to {}'.format(
old_format, variables_structure.fields[-1].format))
variables_structure.setup()
elif need_to_reorder_bytes:
variables_structure.setup()
# we might need to unpack again with the new byte order
return need_to_reorder_bytes
VariablesRecordStructure = _DynamicStructure(
name='VariablesRecord',
fields=[
DynamicVersionField('h', 'version', help='Version number for this header.'),
_Field(Variables1, 'variables', help='The rest of the variables data.'),
])
class VariablesRecord (Record):
def __init__(self, *args, **kwargs):
super(VariablesRecord, self).__init__(*args, **kwargs)
# self.header['version'] # record version always 0?
VariablesRecordStructure.byte_order = '='
VariablesRecordStructure.setup()
stream = _io.BytesIO(bytes(self.data))
self.variables = VariablesRecordStructure.unpack_stream(stream)
self.namespace = {}
for key,value in self.variables['variables'].items():
if key not in ['var_header']:
_LOG.debug('update namespace {} with {} for {}'.format(
self.namespace, value, key))
self.namespace.update(value)
|
sci-memex
|
/sci_memex-0.0.3-py3-none-any.whl/memex/translators/igor/record/variables.py
|
variables.py
|
import io as _io
from .. import LOG as _LOG
from ..binarywave import TYPE_TABLE as _TYPE_TABLE
from ..binarywave import NullStaticStringField as _NullStaticStringField
from ..binarywave import DynamicStringField as _DynamicStringField
from ..struct import Structure as _Structure
from ..struct import DynamicStructure as _DynamicStructure
from ..struct import Field as _Field
from ..struct import DynamicField as _DynamicField
from ..util import byte_order as _byte_order
from ..util import need_to_reorder_bytes as _need_to_reorder_bytes
from .base import Record
class ListedStaticStringField (_NullStaticStringField):
"""Handle string conversions for multi-count dynamic parents.
If a field belongs to a multi-count dynamic parent, the parent is
called multiple times to parse each count, and the field's
post-unpack hook gets called after the field is unpacked during
each iteration. This requires alternative logic for getting and
setting the string data. The actual string formatting code is not
affected.
"""
def post_unpack(self, parents, data):
parent_structure = parents[-1]
parent_data = self._get_structure_data(parents, data, parent_structure)
d = self._normalize_string(parent_data[-1][self.name])
parent_data[-1][self.name] = d
class ListedDynamicStrDataField (_DynamicStringField, ListedStaticStringField):
_size_field = 'strLen'
_null_terminated = False
def _get_size_data(self, parents, data):
parent_structure = parents[-1]
parent_data = self._get_structure_data(parents, data, parent_structure)
return parent_data[-1][self._size_field]
class DynamicVarDataField (_DynamicField):
def __init__(self, *args, **kwargs):
if 'array' not in kwargs:
kwargs['array'] = True
super(DynamicVarDataField, self).__init__(*args, **kwargs)
def pre_pack(self, parents, data):
raise NotImplementedError()
def post_unpack(self, parents, data):
var_structure = parents[-1]
var_data = self._get_structure_data(parents, data, var_structure)
data = var_data[self.name]
d = {}
for i,value in enumerate(data):
key,value = self._normalize_item(i, value)
d[key] = value
var_data[self.name] = d
def _normalize_item(self, index, value):
raise NotImplementedError()
class DynamicSysVarField (DynamicVarDataField):
def _normalize_item(self, index, value):
name = 'K{}'.format(index)
return (name, value)
class DynamicUserVarField (DynamicVarDataField):
def _normalize_item(self, index, value):
name = value['name']
value = value['num']
return (name, value)
class DynamicUserStrField (DynamicVarDataField):
def _normalize_item(self, index, value):
name = value['name']
value = value['data']
return (name, value)
class DynamicVarNumField (_DynamicField):
def post_unpack(self, parents, data):
parent_structure = parents[-1]
parent_data = self._get_structure_data(parents, data, parent_structure)
d = self._normalize_numeric_variable(parent_data[-1][self.name])
parent_data[-1][self.name] = d
def _normalize_numeric_variable(self, num_var):
t = _TYPE_TABLE[num_var['numType']]
if num_var['numType'] % 2: # complex number
return t(complex(num_var['realPart'], num_var['imagPart']))
else:
return t(num_var['realPart'])
class DynamicFormulaField (_DynamicStringField):
_size_field = 'formulaLen'
_null_terminated = True
# From Variables.h
VarHeader1 = _Structure( # `version` field pulled out into VariablesRecord
name='VarHeader1',
fields=[
_Field('h', 'numSysVars', help='Number of system variables (K0, K1, ...).'),
_Field('h', 'numUserVars', help='Number of user numeric variables -- may be zero.'),
_Field('h', 'numUserStrs', help='Number of user string variables -- may be zero.'),
])
# From Variables.h
VarHeader2 = _Structure( # `version` field pulled out into VariablesRecord
name='VarHeader2',
fields=[
_Field('h', 'numSysVars', help='Number of system variables (K0, K1, ...).'),
_Field('h', 'numUserVars', help='Number of user numeric variables -- may be zero.'),
_Field('h', 'numUserStrs', help='Number of user string variables -- may be zero.'),
_Field('h', 'numDependentVars', help='Number of dependent numeric variables -- may be zero.'),
_Field('h', 'numDependentStrs', help='Number of dependent string variables -- may be zero.'),
])
# From Variables.h
UserStrVarRec1 = _DynamicStructure(
name='UserStrVarRec1',
fields=[
ListedStaticStringField('c', 'name', help='Name of the string variable.', count=32),
_Field('h', 'strLen', help='The real size of the following array.'),
ListedDynamicStrDataField('c', 'data'),
])
# From Variables.h
UserStrVarRec2 = _DynamicStructure(
name='UserStrVarRec2',
fields=[
ListedStaticStringField('c', 'name', help='Name of the string variable.', count=32),
_Field('l', 'strLen', help='The real size of the following array.'),
_Field('c', 'data'),
])
# From Variables.h
VarNumRec = _Structure(
name='VarNumRec',
fields=[
_Field('h', 'numType', help='Type from binarywave.TYPE_TABLE'),
_Field('d', 'realPart', help='The real part of the number.'),
_Field('d', 'imagPart', help='The imag part if the number is complex.'),
_Field('l', 'reserved', help='Reserved - set to zero.'),
])
# From Variables.h
UserNumVarRec = _DynamicStructure(
name='UserNumVarRec',
fields=[
ListedStaticStringField('c', 'name', help='Name of the string variable.', count=32),
_Field('h', 'type', help='0 = string, 1 = numeric.'),
DynamicVarNumField(VarNumRec, 'num', help='Type and value of the variable if it is numeric. Not used for string.'),
])
# From Variables.h
UserDependentVarRec = _DynamicStructure(
name='UserDependentVarRec',
fields=[
ListedStaticStringField('c', 'name', help='Name of the string variable.', count=32),
_Field('h', 'type', help='0 = string, 1 = numeric.'),
_Field(VarNumRec, 'num', help='Type and value of the variable if it is numeric. Not used for string.'),
_Field('h', 'formulaLen', help='The length of the dependency formula.'),
DynamicFormulaField('c', 'formula', help='Start of the dependency formula. A C string including null terminator.'),
])
class DynamicVarHeaderField (_DynamicField):
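    # Once the count header is unpacked, resize the sysVars/userVars/userStrs
    # (and, for version 2 headers, dependentVars/dependentStrs) array fields
    # so the remainder of the record is parsed with the correct counts.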
def pre_pack(self, parents, data):
raise NotImplementedError()
def post_unpack(self, parents, data):
var_structure = parents[-1]
var_data = self._get_structure_data(
parents, data, var_structure)
var_header_structure = self.format
data = var_data['var_header']
sys_vars_field = var_structure.get_field('sysVars')
sys_vars_field.count = data['numSysVars']
sys_vars_field.setup()
user_vars_field = var_structure.get_field('userVars')
user_vars_field.count = data['numUserVars']
user_vars_field.setup()
user_strs_field = var_structure.get_field('userStrs')
user_strs_field.count = data['numUserStrs']
user_strs_field.setup()
if 'numDependentVars' in data:
dependent_vars_field = var_structure.get_field('dependentVars')
dependent_vars_field.count = data['numDependentVars']
dependent_vars_field.setup()
dependent_strs_field = var_structure.get_field('dependentStrs')
dependent_strs_field.count = data['numDependentStrs']
dependent_strs_field.setup()
var_structure.setup()
Variables1 = _DynamicStructure(
name='Variables1',
fields=[
DynamicVarHeaderField(VarHeader1, 'var_header', help='Variables header'),
DynamicSysVarField('f', 'sysVars', help='System variables', count=0),
DynamicUserVarField(UserNumVarRec, 'userVars', help='User numeric variables', count=0),
DynamicUserStrField(UserStrVarRec1, 'userStrs', help='User string variables', count=0),
])
Variables2 = _DynamicStructure(
name='Variables2',
fields=[
DynamicVarHeaderField(VarHeader2, 'var_header', help='Variables header'),
DynamicSysVarField('f', 'sysVars', help='System variables', count=0),
DynamicUserVarField(UserNumVarRec, 'userVars', help='User numeric variables', count=0),
DynamicUserStrField(UserStrVarRec2, 'userStrs', help='User string variables', count=0),
_Field(UserDependentVarRec, 'dependentVars', help='Dependent numeric variables.', count=0, array=True),
_Field(UserDependentVarRec, 'dependentStrs', help='Dependent string variables.', count=0, array=True),
])
class DynamicVersionField (_DynamicField):
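    # The leading version field doubles as a byte-order probe (via
    # _need_to_reorder_bytes); once the order is settled, the matching
    # Variables1/Variables2 layout is substituted for the rest of the record.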
def pre_pack(self, parents, byte_order):
raise NotImplementedError()
def post_unpack(self, parents, data):
variables_structure = parents[-1]
variables_data = self._get_structure_data(
parents, data, variables_structure)
version = variables_data['version']
if variables_structure.byte_order in '@=':
need_to_reorder_bytes = _need_to_reorder_bytes(version)
variables_structure.byte_order = _byte_order(need_to_reorder_bytes)
_LOG.debug(
'get byte order from version: {} (reorder? {})'.format(
variables_structure.byte_order, need_to_reorder_bytes))
else:
need_to_reorder_bytes = False
old_format = variables_structure.fields[-1].format
if version == 1:
variables_structure.fields[-1].format = Variables1
elif version == 2:
variables_structure.fields[-1].format = Variables2
elif not need_to_reorder_bytes:
raise ValueError(
'invalid variables record version: {}'.format(version))
if variables_structure.fields[-1].format != old_format:
_LOG.debug('change variables record from {} to {}'.format(
old_format, variables_structure.fields[-1].format))
variables_structure.setup()
elif need_to_reorder_bytes:
variables_structure.setup()
# we might need to unpack again with the new byte order
return need_to_reorder_bytes
VariablesRecordStructure = _DynamicStructure(
name='VariablesRecord',
fields=[
DynamicVersionField('h', 'version', help='Version number for this header.'),
_Field(Variables1, 'variables', help='The rest of the variables data.'),
])
class VariablesRecord (Record):
def __init__(self, *args, **kwargs):
super(VariablesRecord, self).__init__(*args, **kwargs)
# self.header['version'] # record version always 0?
VariablesRecordStructure.byte_order = '='
VariablesRecordStructure.setup()
stream = _io.BytesIO(bytes(self.data))
self.variables = VariablesRecordStructure.unpack_stream(stream)
self.namespace = {}
for key,value in self.variables['variables'].items():
if key not in ['var_header']:
_LOG.debug('update namespace {} with {} for {}'.format(
self.namespace, value, key))
self.namespace.update(value)
| 0.602646 | 0.30005 |
<img src="https://raw.githubusercontent.com/altar31/altar31/7b6b6fbca83da051934d326feaa64db12ba8ad15/public/sciml-logo.png" alt="Image Title" width="800" height="auto">
## ☕️ About
**sci-ml** is an attempt to provide a **high-level** and **human** friendly [API](https://en.wikipedia.org/wiki/API) for **Scientific Machine Learning** algorithms such as [PINN](https://arxiv.org/abs/2201.05624), [LSTM-RNN](https://arxiv.org/abs/1909.09586), [RC](https://arxiv.org/abs/2307.15092)... but with **applications** in mind.\
In the spirit of [scikit-learn](https://scikit-learn.org/stable/index.html), the user will find **extensive documentation** of the implemented algorithms as well as some **practical** use cases in **science** and **engineering**.
Although some implementations and packages already exist, the **Python Scientific Machine Learning Community** is **sparse**... Thus, the **long-term** goal of the project is to provide a **unified** implementation of such algorithms under the same **banner**.
## 🎯 Goals
At first, the motivations of this project are purely educational and practical... As a researcher using machine/deep learning on a daily basis, I would like to dive deep into these methods and implement some algorithms in such a way that they are **easily reusable** and **useful** for **others**.
## 🚀 Features
- **Simple** and **efficient** tools for solving **science** and **engineering** problems using **Machine Learning**
- **Practical** and **expressive** API
- Stand on the **shoulders of giants** -> on top of [Pandas](https://pandas.pydata.org/), [Keras](https://keras.io/), [scikit-learn](https://scikit-learn.org/stable/) and [seaborn](https://seaborn.pydata.org/).
## ⚠️ Warnings
**For the moment:**
- The development of the project takes place in a **private** GitHub repository
- The project is at a very **early stage** -> **Nothing** is implemented in this **published version**
- The **documentation** is **missing**
The **GitHub repository**, as well as a **usable** Python package, will be **available** in the **upcoming months**, once the project is **more advanced**.
## 🤝 Community-driven
**sci-ml** is foremost a **community-driven** project! It will be **highly collaborative**, and everyone is welcome to join! 🤗
Don't hesitate to contact me if you want to know more or are interested! 😃
**Stay tuned !** 🗓️
|
sci-mls
|
/sci_mls-0.0.0.tar.gz/sci_mls-0.0.0/README.md
|
README.md
|
<img src="https://raw.githubusercontent.com/altar31/altar31/7b6b6fbca83da051934d326feaa64db12ba8ad15/public/sciml-logo.png" alt="Image Title" width="800" height="auto">
## ☕️ About
**sci-ml** is an attempt to provide a **high-level** and **human** friendly [API](https://en.wikipedia.org/wiki/API) for **Scientific Machine Learning** algorithms such as [PINN](https://arxiv.org/abs/2201.05624), [LSTM-RNN](https://arxiv.org/abs/1909.09586), [RC](https://arxiv.org/abs/2307.15092)... but with **applications** in mind.\
In the spirit of [scikit-learn](https://scikit-learn.org/stable/index.html), the user will find **extensive documentation** of the implemented algorithms as well as some **practical** use cases in **science** and **engineering**.
Although some implementations and packages already exist, the **Python Scientific Machine Learning Community** is **sparse**... Thus, the **long-term** goal of the project is to provide a **unified** implementation of such algorithms under the same **banner**.
## 🎯 Goals
At first, the motivations of this project are purely educational and practical... As a researcher using machine/deep learning on a daily basis, I would like to dive deep into these methods and implement some algorithms in such a way that they are **easily reusable** and **useful** for **others**.
## 🚀 Features
- **Simple** and **efficient** tools for solving **science** and **engineering** problems using **Machine Learning**
- **Practical** and **expressive** API
- Stand on the **shoulders of giants** -> on top of [Pandas](https://pandas.pydata.org/), [Keras](https://keras.io/), [scikit-learn](https://scikit-learn.org/stable/) and [seaborn](https://seaborn.pydata.org/).
## ⚠️ Warnings
**For the moment:**
- The development of the project takes place in a **private** GitHub repository
- The project is at a very **early stage** -> **Nothing** is implemented in this **published version**
- The **documentation** is **missing**
The **GitHub repository**, as well as a **usable** Python package, will be **available** in the **upcoming months**, once the project is **more advanced**.
## 🤝 Community-driven
**sci-ml** is foremost a **community-driven** project! It will be **highly collaborative**, and everyone is welcome to join! 🤗
Don't hesitate to contact me if you want to know more or are interested! 😃
**Stay tuned !** 🗓️
| 0.779154 | 0.881462 |
from django.core.context_processors import csrf
from django.template import Template, Context
from django.conf.global_settings import TIME_FORMAT
from django.shortcuts import redirect, render
from django.core.urlresolvers import reverse
from django.http import HttpResponse
from models import *
from django.contrib.auth.decorators import permission_required
import json
from forms import MailForm, JobForm, NewsletterSettingsForm, CKEditorForm
import datetime
from django.core.mail import send_mail
# TODO Restrict access to the newsletter pages
def saveForms(job_pk, formsData):
"""
    Saves the form data from the page into the job with primary key job_pk.
    The page must contain two forms: mail and job.
    formsData - a filled-in tuple (fJob, fMail)
"""
job = Job.objects.get(pk=job_pk)
job.send_date = formsData[0].cleaned_data['send_date']
job.state = formsData[0].cleaned_data['state']
job.recievers = formsData[0].cleaned_data['recievers']
job.save()
job.mail.text = formsData[1].cleaned_data['text']
job.mail.subject = formsData[1].cleaned_data['subject']
job.mail.save()
return
# TODO: the template files live in bib/templates/newsletter...
def job(request, job_pk):
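    # Newsletter job editor: on POST, dispatch on the pressed button
    # ('send', 'create', 'save', 'preview', 'update', 'delete'); on GET,
    # render the pre-filled job/mail forms.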
ctx = {}
    # Fetch the job itself
ctx.update({'job_pk': job_pk})
job = Job.objects.get(pk=job_pk)
newsletterSettings = NewsletterSettings.objects.all()[0]
ctx.update(csrf(request))
    # If a form was submitted:
if request.method == 'POST':
fJob = JobForm(request.POST)
fMail = MailForm(request.POST)
fSettings = NewsletterSettingsForm(request.POST)
formsData = (fJob, fMail)
        # Save the newsletter settings if they were submitted
if fSettings.is_valid():
if 'save' in request.POST:
newsletterSettings.newsletter_type = fSettings.cleaned_data['newsletter_type']
newsletterSettings.day = fSettings.cleaned_data['day']
newsletterSettings.save()
print "Настройки рассылки сохранены..."
if fJob.is_valid() and fMail.is_valid():
if 'send' in request.POST:
                # TODO the mail should be saved first and only sent afterwards
saveForms(job_pk, formsData)
if (job.send()):
return redirect( reverse('job_list') )
else:
                    # No feedback that the mail was not sent, or why
return redirect( reverse('job', args=[job_pk]) )
if 'create' in request.POST:
mail_new = Mail(subject=fMail.cleaned_data['subject'], text=fMail.cleaned_data['text'], template=fMail.cleaned_data['template'])
mail_new.save()
job_new = Job(send_date=fJob.cleaned_data['send_date'], state=fJob.cleaned_data['state'], mail=mail_new, recievers=fJob.cleaned_data['recievers'])
job_new.save()
job_pk = job_new.id
return redirect( reverse('job', args=[job_pk]) )
if 'save' in request.POST:
saveForms(job_pk, formsData)
return redirect( reverse('job', args=[job_pk]) )
if 'preview' in request.POST:
saveForms(job_pk, formsData)
return redirect( reverse('preview', args=[job_pk]) )
        # These actions do not require form validation:
if 'update' in request.POST:
job.make()
return redirect( reverse('job', args=[job_pk]) )
if 'delete' in request.POST:
job.delete()
return redirect( reverse('job_list') )
    # Otherwise render the initial form
else:
        # If an unsent monthly newsletter is being loaded, refresh it and add the newsletter settings:
if (job.mail.template.name == u'Ежемесячная рассылка' and not job.state == MAIL_STATE_SENT):
fSettings = NewsletterSettingsForm(initial={'newsletter_type': newsletterSettings.newsletter_type, 'day': newsletterSettings.day })
ctx.update({'formSettings': fSettings})
job.make()
fJob = JobForm(initial={'send_date': job.send_date.strftime(settings.TIME_FORMAT), 'state': job.state, 'mail': job.mail, 'recievers': job.recievers })
fMail = MailForm(initial={'subject': job.mail.subject, 'text': job.mail.text, 'template': job.mail.template })
if job.state == MAIL_STATE_SENT:
fJob.fields['state'].choices = SENT_MAIL_STATES
else:
fJob.fields['state'].choices = NOT_SENT_MAIL_STATES
ctx.update({'formJob': fJob})
ctx.update({'formMail': fMail})
ctx.update({'emails': ", ".join( job.get_recievers_list(job.recievers) )})
if (job.mail.template.name == u'Ежемесячная рассылка'):
ctx.update({'is_feed': True })
else:
ctx.update({'is_feed': False })
return render(request, 'newsletter/main.html', ctx)
def job_preview(request, job_pk):
job = Job.objects.get(pk=job_pk)
ctx = {}
ctx.update({'previewText': job.get_html()})
ctx.update({'job_pk': job_pk })
ctx.update({'recievers': ", ".join( job.get_recievers_list(job.recievers) )})
ctx.update({'send_date': job.send_date.strftime(settings.TIME_FORMAT)})
ctx.update(csrf(request))
    # not_sent - whether to show the "edit" button and the recipient list in the preview.
if (job.state == MAIL_STATE_SENT):
ctx.update({'not_sent': False })
else:
ctx.update({'not_sent': True })
ckeForm = CKEditorForm(initial={'text': ctx['previewText'] })
ctx.update({'ckeForm': ckeForm})
return render(request, 'newsletter/preview.html', ctx)
def job_preview_last(request):
job_sent_list = Job.objects.all().filter(state=MAIL_STATE_SENT).order_by('-send_date')
if len(job_sent_list) >= 1:
job = job_sent_list[0]
else:
return redirect(reverse('job_list'))
return redirect( reverse('preview', args=[job.pk]) )
def job_list(request):
ctx = {}
    # Which filter lookup corresponds to "!=" (?)
jobs = Job.objects.all().filter(state__lt=MAIL_STATE_SENT).order_by('send_date')
ctx.update({'jobs': jobs})
return render(request, 'newsletter/jobs.html', ctx)
def new_job(request):
template = MailTemplate.objects.get(name=u"Пользовательское письмо")
mail_new = Mail(subject="", text="", template=template)
mail_new.save()
job_new = Job(send_date=datetime.date.today(), state=MAIL_STATE_DRAFT, mail=mail_new, recievers=RECIEVERS_SUBSCRIBED)
job_new.save()
job_pk = job_new.id
return redirect(reverse('job', args=[job_pk]) )
def feed(request):
"""Направляет на страницу текущей ежемесячной рассылки. Если рассылок нет - создает ее"""
feed = Job.objects.filter(mail__template__name__startswith='Ежемесячная', state__lt=MAIL_STATE_SENT)
if (len(feed) == 0):
job_pk = Job.create_feed()
return redirect( reverse('job', args=[job_pk]) )
else:
job_pk = feed[0].id
return redirect( reverse('job', args=[job_pk]) )
def archive(request):
ctx = {}
jobs = Job.objects.all().filter(state=MAIL_STATE_SENT).order_by('-send_date')
ctx.update({'jobs': jobs })
ctx.update({'archive': True })
return render(request, 'newsletter/jobs.html', ctx)
# Ajax:
def get_recievers(request):
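    # AJAX helper: return the e-mail list for the requested recipient
    # group ('rec') as JSON.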
rec = request.GET['rec']
response = Job.get_recievers_list(rec)
content = json.dumps(response)
return HttpResponse(content, mimetype='application/javascript; charset=utf8')
|
sci-newsletter
|
/sci-newsletter-0.50.tar.gz/sci-newsletter-0.50/newsletter/views.py
|
views.py
|
from django.core.context_processors import csrf
from django.template import Template, Context
from django.conf.global_settings import TIME_FORMAT
from django.shortcuts import redirect, render
from django.core.urlresolvers import reverse
from django.http import HttpResponse
from models import *
from django.contrib.auth.decorators import permission_required
import json
from forms import MailForm, JobForm, NewsletterSettingsForm, CKEditorForm
import datetime
from django.core.mail import send_mail
# TODO Restrict access to the newsletter pages
def saveForms(job_pk, formsData):
"""
    Saves the form data from the page into the job with primary key job_pk.
    The page must contain two forms: mail and job.
    formsData - a filled-in tuple (fJob, fMail)
"""
job = Job.objects.get(pk=job_pk)
job.send_date = formsData[0].cleaned_data['send_date']
job.state = formsData[0].cleaned_data['state']
job.recievers = formsData[0].cleaned_data['recievers']
job.save()
job.mail.text = formsData[1].cleaned_data['text']
job.mail.subject = formsData[1].cleaned_data['subject']
job.mail.save()
return
# TODO: the template files live in bib/templates/newsletter...
def job(request, job_pk):
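    # Newsletter job editor: on POST, dispatch on the pressed button
    # ('send', 'create', 'save', 'preview', 'update', 'delete'); on GET,
    # render the pre-filled job/mail forms.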
ctx = {}
    # Fetch the job itself
ctx.update({'job_pk': job_pk})
job = Job.objects.get(pk=job_pk)
newsletterSettings = NewsletterSettings.objects.all()[0]
ctx.update(csrf(request))
    # If a form was submitted:
if request.method == 'POST':
fJob = JobForm(request.POST)
fMail = MailForm(request.POST)
fSettings = NewsletterSettingsForm(request.POST)
formsData = (fJob, fMail)
        # Save the newsletter settings if they were submitted
if fSettings.is_valid():
if 'save' in request.POST:
newsletterSettings.newsletter_type = fSettings.cleaned_data['newsletter_type']
newsletterSettings.day = fSettings.cleaned_data['day']
newsletterSettings.save()
print "Настройки рассылки сохранены..."
if fJob.is_valid() and fMail.is_valid():
if 'send' in request.POST:
                # TODO the mail should be saved first and only sent afterwards
saveForms(job_pk, formsData)
if (job.send()):
return redirect( reverse('job_list') )
else:
                    # No feedback that the mail was not sent, or why
return redirect( reverse('job', args=[job_pk]) )
if 'create' in request.POST:
mail_new = Mail(subject=fMail.cleaned_data['subject'], text=fMail.cleaned_data['text'], template=fMail.cleaned_data['template'])
mail_new.save()
job_new = Job(send_date=fJob.cleaned_data['send_date'], state=fJob.cleaned_data['state'], mail=mail_new, recievers=fJob.cleaned_data['recievers'])
job_new.save()
job_pk = job_new.id
return redirect( reverse('job', args=[job_pk]) )
if 'save' in request.POST:
saveForms(job_pk, formsData)
return redirect( reverse('job', args=[job_pk]) )
if 'preview' in request.POST:
saveForms(job_pk, formsData)
return redirect( reverse('preview', args=[job_pk]) )
        # These actions do not require form validation:
if 'update' in request.POST:
job.make()
return redirect( reverse('job', args=[job_pk]) )
if 'delete' in request.POST:
job.delete()
return redirect( reverse('job_list') )
    # Otherwise render the initial form
else:
        # If an unsent monthly newsletter is being loaded, refresh it and add the newsletter settings:
if (job.mail.template.name == u'Ежемесячная рассылка' and not job.state == MAIL_STATE_SENT):
fSettings = NewsletterSettingsForm(initial={'newsletter_type': newsletterSettings.newsletter_type, 'day': newsletterSettings.day })
ctx.update({'formSettings': fSettings})
job.make()
fJob = JobForm(initial={'send_date': job.send_date.strftime(settings.TIME_FORMAT), 'state': job.state, 'mail': job.mail, 'recievers': job.recievers })
fMail = MailForm(initial={'subject': job.mail.subject, 'text': job.mail.text, 'template': job.mail.template })
if job.state == MAIL_STATE_SENT:
fJob.fields['state'].choices = SENT_MAIL_STATES
else:
fJob.fields['state'].choices = NOT_SENT_MAIL_STATES
ctx.update({'formJob': fJob})
ctx.update({'formMail': fMail})
ctx.update({'emails': ", ".join( job.get_recievers_list(job.recievers) )})
if (job.mail.template.name == u'Ежемесячная рассылка'):
ctx.update({'is_feed': True })
else:
ctx.update({'is_feed': False })
return render(request, 'newsletter/main.html', ctx)
def job_preview(request, job_pk):
job = Job.objects.get(pk=job_pk)
ctx = {}
ctx.update({'previewText': job.get_html()})
ctx.update({'job_pk': job_pk })
ctx.update({'recievers': ", ".join( job.get_recievers_list(job.recievers) )})
ctx.update({'send_date': job.send_date.strftime(settings.TIME_FORMAT)})
ctx.update(csrf(request))
    # not_sent - whether to show the "edit" button and the recipient list in the preview.
if (job.state == MAIL_STATE_SENT):
ctx.update({'not_sent': False })
else:
ctx.update({'not_sent': True })
ckeForm = CKEditorForm(initial={'text': ctx['previewText'] })
ctx.update({'ckeForm': ckeForm})
return render(request, 'newsletter/preview.html', ctx)
def job_preview_last(request):
job_sent_list = Job.objects.all().filter(state=MAIL_STATE_SENT).order_by('-send_date')
if len(job_sent_list) >= 1:
job = job_sent_list[0]
else:
return redirect(reverse('job_list'))
return redirect( reverse('preview', args=[job.pk]) )
def job_list(request):
ctx = {}
    # Which filter lookup corresponds to "!=" (?)
jobs = Job.objects.all().filter(state__lt=MAIL_STATE_SENT).order_by('send_date')
ctx.update({'jobs': jobs})
return render(request, 'newsletter/jobs.html', ctx)
def new_job(request):
template = MailTemplate.objects.get(name=u"Пользовательское письмо")
mail_new = Mail(subject="", text="", template=template)
mail_new.save()
job_new = Job(send_date=datetime.date.today(), state=MAIL_STATE_DRAFT, mail=mail_new, recievers=RECIEVERS_SUBSCRIBED)
job_new.save()
job_pk = job_new.id
return redirect(reverse('job', args=[job_pk]) )
def feed(request):
"""Направляет на страницу текущей ежемесячной рассылки. Если рассылок нет - создает ее"""
feed = Job.objects.filter(mail__template__name__startswith='Ежемесячная', state__lt=MAIL_STATE_SENT)
if (len(feed) == 0):
job_pk = Job.create_feed()
return redirect( reverse('job', args=[job_pk]) )
else:
job_pk = feed[0].id
return redirect( reverse('job', args=[job_pk]) )
def archive(request):
ctx = {}
jobs = Job.objects.all().filter(state=MAIL_STATE_SENT).order_by('-send_date')
ctx.update({'jobs': jobs })
ctx.update({'archive': True })
return render(request, 'newsletter/jobs.html', ctx)
# Ajax:
def get_recievers(request):
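    # AJAX helper: return the e-mail list for the requested recipient
    # group ('rec') as JSON.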
rec = request.GET['rec']
response = Job.get_recievers_list(rec)
content = json.dumps(response)
return HttpResponse(content, mimetype='application/javascript; charset=utf8')
| 0.109801 | 0.072604 |
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import DataMigration
from django.db import models
from newsletter.models import NEWSLETTER_MONTLY
class Migration(DataMigration):
def forwards(self, orm):
"Write your forwards methods here."
templateText="""
<b>Доброго времени суток!</b>
<br>
<i>На нашем сайте произошли следующие изменения:</i>
<br>
{{text}}
<br>
<i>С уважением, администрация сайта SciBib</i>
"""
feedTemplate = orm['newsletter.mailtemplate'].objects.create(name='Ежемесячная рассылка', template=templateText)
feedTemplate.save()
templateText="""
<b>Доброго времени суток!</b>
<br>
<i>У нас есть для вас следующая информация:</i>
<br>
{{text}}
<br>
<i>С уважением, администрация сайта SciBib</i>
"""
mailTemplate = orm['newsletter.mailtemplate'].objects.create(name='Пользовательское письмо', template=templateText)
mailTemplate.save()
feedSettings = orm['newsletter.newslettersettings'].objects.create(newsletter_type=NEWSLETTER_MONTLY, day = 12)
feedSettings.save()
def backwards(self, orm):
"Write your backwards methods here."
orm['newsletter.mailtemplate'].objects.get(name="Ежемесячная рассылка").delete()
orm['newsletter.mailtemplate'].objects.get(name="Пользовательское письмо").delete()
orm['newsletter.newslettersettings'].objects.all()[0].delete()
models = {
'newsletter.job': {
'Meta': {'object_name': 'Job'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'mail': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'mails'", 'to': "orm['newsletter.Mail']"}),
'recievers': ('django.db.models.fields.CharField', [], {'default': "'0'", 'max_length': '125'}),
'send_date': ('django.db.models.fields.DateField', [], {}),
'state': ('django.db.models.fields.DecimalField', [], {'default': '0', 'max_digits': '1', 'decimal_places': '0'})
},
'newsletter.mail': {
'Meta': {'object_name': 'Mail'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'subject': ('django.db.models.fields.CharField', [], {'max_length': '125'}),
'template': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'templates'", 'to': "orm['newsletter.MailTemplate']"}),
'text': ('ckeditor.fields.RichTextField', [], {})
},
'newsletter.mailtemplate': {
'Meta': {'object_name': 'MailTemplate'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '125'}),
'template': ('django.db.models.fields.TextField', [], {})
},
'newsletter.newslettersettings': {
'Meta': {'object_name': 'NewsletterSettings'},
'day': ('django.db.models.fields.IntegerField', [], {'default': '7'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'newsletter_type': ('django.db.models.fields.CharField', [], {'default': "'0'", 'max_length': '125'})
}
}
complete_apps = ['newsletter']
symmetrical = True
|
sci-newsletter
|
/sci-newsletter-0.50.tar.gz/sci-newsletter-0.50/newsletter/migrations/0002_adding_initial_data.py
|
0002_adding_initial_data.py
|
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import DataMigration
from django.db import models
from newsletter.models import NEWSLETTER_MONTLY
class Migration(DataMigration):
def forwards(self, orm):
"Write your forwards methods here."
templateText="""
<b>Доброго времени суток!</b>
<br>
<i>На нашем сайте произошли следующие изменения:</i>
<br>
{{text}}
<br>
<i>С уважением, администрация сайта SciBib</i>
"""
feedTemplate = orm['newsletter.mailtemplate'].objects.create(name='Ежемесячная рассылка', template=templateText)
feedTemplate.save()
templateText="""
<b>Доброго времени суток!</b>
<br>
<i>У нас есть для вас следующая информация:</i>
<br>
{{text}}
<br>
<i>С уважением, администрация сайта SciBib</i>
"""
mailTemplate = orm['newsletter.mailtemplate'].objects.create(name='Пользовательское письмо', template=templateText)
mailTemplate.save()
feedSettings = orm['newsletter.newslettersettings'].objects.create(newsletter_type=NEWSLETTER_MONTLY, day = 12)
feedSettings.save()
def backwards(self, orm):
"Write your backwards methods here."
orm['newsletter.mailtemplate'].objects.get(name="Ежемесячная рассылка").delete()
orm['newsletter.mailtemplate'].objects.get(name="Пользовательское письмо").delete()
orm['newsletter.newslettersettings'].objects.all()[0].delete()
models = {
'newsletter.job': {
'Meta': {'object_name': 'Job'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'mail': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'mails'", 'to': "orm['newsletter.Mail']"}),
'recievers': ('django.db.models.fields.CharField', [], {'default': "'0'", 'max_length': '125'}),
'send_date': ('django.db.models.fields.DateField', [], {}),
'state': ('django.db.models.fields.DecimalField', [], {'default': '0', 'max_digits': '1', 'decimal_places': '0'})
},
'newsletter.mail': {
'Meta': {'object_name': 'Mail'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'subject': ('django.db.models.fields.CharField', [], {'max_length': '125'}),
'template': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'templates'", 'to': "orm['newsletter.MailTemplate']"}),
'text': ('ckeditor.fields.RichTextField', [], {})
},
'newsletter.mailtemplate': {
'Meta': {'object_name': 'MailTemplate'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '125'}),
'template': ('django.db.models.fields.TextField', [], {})
},
'newsletter.newslettersettings': {
'Meta': {'object_name': 'NewsletterSettings'},
'day': ('django.db.models.fields.IntegerField', [], {'default': '7'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'newsletter_type': ('django.db.models.fields.CharField', [], {'default': "'0'", 'max_length': '125'})
}
}
complete_apps = ['newsletter']
symmetrical = True
| 0.316264 | 0.151404 |
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'NewsletterSettings'
db.create_table('newsletter_newslettersettings', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('newsletter_type', self.gf('django.db.models.fields.CharField')(default='0', max_length=125)),
('day', self.gf('django.db.models.fields.IntegerField')(default=7)),
))
db.send_create_signal('newsletter', ['NewsletterSettings'])
# Adding model 'Job'
db.create_table('newsletter_job', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('send_date', self.gf('django.db.models.fields.DateField')()),
('state', self.gf('django.db.models.fields.DecimalField')(default=0, max_digits=1, decimal_places=0)),
('mail', self.gf('django.db.models.fields.related.ForeignKey')(related_name='mails', to=orm['newsletter.Mail'])),
('recievers', self.gf('django.db.models.fields.CharField')(default='0', max_length=125)),
))
db.send_create_signal('newsletter', ['Job'])
# Adding model 'Mail'
db.create_table('newsletter_mail', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('subject', self.gf('django.db.models.fields.CharField')(max_length=125)),
('text', self.gf('ckeditor.fields.RichTextField')()),
('template', self.gf('django.db.models.fields.related.ForeignKey')(related_name='templates', to=orm['newsletter.MailTemplate'])),
))
db.send_create_signal('newsletter', ['Mail'])
# Adding model 'MailTemplate'
db.create_table('newsletter_mailtemplate', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(max_length=125)),
('template', self.gf('django.db.models.fields.TextField')()),
))
db.send_create_signal('newsletter', ['MailTemplate'])
def backwards(self, orm):
# Deleting model 'NewsletterSettings'
db.delete_table('newsletter_newslettersettings')
# Deleting model 'Job'
db.delete_table('newsletter_job')
# Deleting model 'Mail'
db.delete_table('newsletter_mail')
# Deleting model 'MailTemplate'
db.delete_table('newsletter_mailtemplate')
models = {
'newsletter.job': {
'Meta': {'object_name': 'Job'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'mail': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'mails'", 'to': "orm['newsletter.Mail']"}),
'recievers': ('django.db.models.fields.CharField', [], {'default': "'0'", 'max_length': '125'}),
'send_date': ('django.db.models.fields.DateField', [], {}),
'state': ('django.db.models.fields.DecimalField', [], {'default': '0', 'max_digits': '1', 'decimal_places': '0'})
},
'newsletter.mail': {
'Meta': {'object_name': 'Mail'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'subject': ('django.db.models.fields.CharField', [], {'max_length': '125'}),
'template': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'templates'", 'to': "orm['newsletter.MailTemplate']"}),
'text': ('ckeditor.fields.RichTextField', [], {})
},
'newsletter.mailtemplate': {
'Meta': {'object_name': 'MailTemplate'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '125'}),
'template': ('django.db.models.fields.TextField', [], {})
},
'newsletter.newslettersettings': {
'Meta': {'object_name': 'NewsletterSettings'},
'day': ('django.db.models.fields.IntegerField', [], {'default': '7'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'newsletter_type': ('django.db.models.fields.CharField', [], {'default': "'0'", 'max_length': '125'})
}
}
complete_apps = ['newsletter']
|
sci-newsletter
|
/sci-newsletter-0.50.tar.gz/sci-newsletter-0.50/newsletter/migrations/0001_initial.py
|
0001_initial.py
|
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'NewsletterSettings'
db.create_table('newsletter_newslettersettings', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('newsletter_type', self.gf('django.db.models.fields.CharField')(default='0', max_length=125)),
('day', self.gf('django.db.models.fields.IntegerField')(default=7)),
))
db.send_create_signal('newsletter', ['NewsletterSettings'])
# Adding model 'Job'
db.create_table('newsletter_job', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('send_date', self.gf('django.db.models.fields.DateField')()),
('state', self.gf('django.db.models.fields.DecimalField')(default=0, max_digits=1, decimal_places=0)),
('mail', self.gf('django.db.models.fields.related.ForeignKey')(related_name='mails', to=orm['newsletter.Mail'])),
('recievers', self.gf('django.db.models.fields.CharField')(default='0', max_length=125)),
))
db.send_create_signal('newsletter', ['Job'])
# Adding model 'Mail'
db.create_table('newsletter_mail', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('subject', self.gf('django.db.models.fields.CharField')(max_length=125)),
('text', self.gf('ckeditor.fields.RichTextField')()),
('template', self.gf('django.db.models.fields.related.ForeignKey')(related_name='templates', to=orm['newsletter.MailTemplate'])),
))
db.send_create_signal('newsletter', ['Mail'])
# Adding model 'MailTemplate'
db.create_table('newsletter_mailtemplate', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(max_length=125)),
('template', self.gf('django.db.models.fields.TextField')()),
))
db.send_create_signal('newsletter', ['MailTemplate'])
def backwards(self, orm):
# Deleting model 'NewsletterSettings'
db.delete_table('newsletter_newslettersettings')
# Deleting model 'Job'
db.delete_table('newsletter_job')
# Deleting model 'Mail'
db.delete_table('newsletter_mail')
# Deleting model 'MailTemplate'
db.delete_table('newsletter_mailtemplate')
models = {
'newsletter.job': {
'Meta': {'object_name': 'Job'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'mail': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'mails'", 'to': "orm['newsletter.Mail']"}),
'recievers': ('django.db.models.fields.CharField', [], {'default': "'0'", 'max_length': '125'}),
'send_date': ('django.db.models.fields.DateField', [], {}),
'state': ('django.db.models.fields.DecimalField', [], {'default': '0', 'max_digits': '1', 'decimal_places': '0'})
},
'newsletter.mail': {
'Meta': {'object_name': 'Mail'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'subject': ('django.db.models.fields.CharField', [], {'max_length': '125'}),
'template': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'templates'", 'to': "orm['newsletter.MailTemplate']"}),
'text': ('ckeditor.fields.RichTextField', [], {})
},
'newsletter.mailtemplate': {
'Meta': {'object_name': 'MailTemplate'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '125'}),
'template': ('django.db.models.fields.TextField', [], {})
},
'newsletter.newslettersettings': {
'Meta': {'object_name': 'NewsletterSettings'},
'day': ('django.db.models.fields.IntegerField', [], {'default': '7'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'newsletter_type': ('django.db.models.fields.CharField', [], {'default': "'0'", 'max_length': '125'})
}
}
complete_apps = ['newsletter']
| 0.507324 | 0.085709 |
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import DataMigration
from django.db import models
class Migration(DataMigration):
def forwards(self, orm):
"Write your forwards methods here."
# Note: Don't use "from appname.models import ModelName".
# Use orm.ModelName to refer to models in this application,
# and orm['appname.ModelName'] for models in other applications.
template_text = orm['newsletter.mailtemplate'].objects.get(name="Ежемесячная рассылка")
template_text.template="""
<b>Доброго времени суток!</b>
<br>
<i>На нашем сайте произошли следующие изменения:</i>
<br>
{{text}}
<br>
<i>С уважением, администрация сайта "{{res_name}}"</i>
"""
template_text.save()
template_text = orm['newsletter.mailtemplate'].objects.get(name="Пользовательское письмо")
template_text.template="""
<b>Доброго времени суток!</b>
<br>
<i>У нас есть для вас следующая информация:</i>
<br>
{{text}}
<br>
<i>С уважением, администрация сайта "{{res_name}}"</i>
"""
template_text.save()
def backwards(self, orm):
"Write your backwards methods here."
template_text = orm['newsletter.mailtemplate'].objects.get(name="Ежемесячная рассылка")
template_text.template="""
<b>Доброго времени суток!</b>
<br>
<i>На нашем сайте произошли следующие изменения:</i>
<br>
{{text}}
<br>
<i>С уважением, администрация сайта SciBib</i>
"""
template_text.save()
template_text = orm['newsletter.mailtemplate'].objects.get(name="Пользовательское письмо")
template_text.template="""
<b>Доброго времени суток!</b>
<br>
<i>У нас есть для вас следующая информация:</i>
<br>
{{text}}
<br>
<i>С уважением, администрация сайта SciBib</i>
"""
template_text.save()
models = {
'newsletter.job': {
'Meta': {'object_name': 'Job'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'mail': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'mails'", 'to': "orm['newsletter.Mail']"}),
'recievers': ('django.db.models.fields.CharField', [], {'default': "'0'", 'max_length': '125'}),
'send_date': ('django.db.models.fields.DateField', [], {}),
'state': ('django.db.models.fields.DecimalField', [], {'default': '0', 'max_digits': '1', 'decimal_places': '0'})
},
'newsletter.mail': {
'Meta': {'object_name': 'Mail'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'subject': ('django.db.models.fields.CharField', [], {'max_length': '125'}),
'template': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'templates'", 'to': "orm['newsletter.MailTemplate']"}),
'text': ('ckeditor.fields.RichTextField', [], {})
},
'newsletter.mailtemplate': {
'Meta': {'object_name': 'MailTemplate'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '125'}),
'template': ('django.db.models.fields.TextField', [], {})
},
'newsletter.newslettersettings': {
'Meta': {'object_name': 'NewsletterSettings'},
'day': ('django.db.models.fields.IntegerField', [], {'default': '7'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'newsletter_type': ('django.db.models.fields.CharField', [], {'default': "'0'", 'max_length': '125'})
}
}
complete_apps = ['newsletter']
symmetrical = True
|
sci-newsletter
|
/sci-newsletter-0.50.tar.gz/sci-newsletter-0.50/newsletter/migrations/0003_changing_mail_templates.py
|
0003_changing_mail_templates.py
|
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import DataMigration
from django.db import models
class Migration(DataMigration):
def forwards(self, orm):
"Write your forwards methods here."
# Note: Don't use "from appname.models import ModelName".
# Use orm.ModelName to refer to models in this application,
# and orm['appname.ModelName'] for models in other applications.
template_text = orm['newsletter.mailtemplate'].objects.get(name="Ежемесячная рассылка")
template_text.template="""
<b>Доброго времени суток!</b>
<br>
<i>На нашем сайте произошли следующие изменения:</i>
<br>
{{text}}
<br>
<i>С уважением, администрация сайта "{{res_name}}"</i>
"""
template_text.save()
template_text = orm['newsletter.mailtemplate'].objects.get(name="Пользовательское письмо")
template_text.template="""
<b>Доброго времени суток!</b>
<br>
<i>У нас есть для вас следующая информация:</i>
<br>
{{text}}
<br>
<i>С уважением, администрация сайта "{{res_name}}"</i>
"""
template_text.save()
def backwards(self, orm):
"Write your backwards methods here."
template_text = orm['newsletter.mailtemplate'].objects.get(name="Ежемесячная рассылка")
template_text.template="""
<b>Доброго времени суток!</b>
<br>
<i>На нашем сайте произошли следующие изменения:</i>
<br>
{{text}}
<br>
<i>С уважением, администрация сайта SciBib</i>
"""
template_text.save()
template_text = orm['newsletter.mailtemplate'].objects.get(name="Пользовательское письмо")
template_text.template="""
<b>Доброго времени суток!</b>
<br>
<i>У нас есть для вас следующая информация:</i>
<br>
{{text}}
<br>
<i>С уважением, администрация сайта SciBib</i>
"""
template_text.save()
models = {
'newsletter.job': {
'Meta': {'object_name': 'Job'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'mail': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'mails'", 'to': "orm['newsletter.Mail']"}),
'recievers': ('django.db.models.fields.CharField', [], {'default': "'0'", 'max_length': '125'}),
'send_date': ('django.db.models.fields.DateField', [], {}),
'state': ('django.db.models.fields.DecimalField', [], {'default': '0', 'max_digits': '1', 'decimal_places': '0'})
},
'newsletter.mail': {
'Meta': {'object_name': 'Mail'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'subject': ('django.db.models.fields.CharField', [], {'max_length': '125'}),
'template': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'templates'", 'to': "orm['newsletter.MailTemplate']"}),
'text': ('ckeditor.fields.RichTextField', [], {})
},
'newsletter.mailtemplate': {
'Meta': {'object_name': 'MailTemplate'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '125'}),
'template': ('django.db.models.fields.TextField', [], {})
},
'newsletter.newslettersettings': {
'Meta': {'object_name': 'NewsletterSettings'},
'day': ('django.db.models.fields.IntegerField', [], {'default': '7'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'newsletter_type': ('django.db.models.fields.CharField', [], {'default': "'0'", 'max_length': '125'})
}
}
complete_apps = ['newsletter']
symmetrical = True
| 0.36727 | 0.18717 |
from typing import Container
import docker
import logging
import os
import typer
import subprocess
import re
from collections.abc import Mapping
import sys
_LOGGER = logging.getLogger(__name__)
def port_mapping(mapping: str, public: bool) -> Mapping:
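# A mapping spec has the form "containerPort[/tcp|udp]:hostPort"; a host port of 0
# lets Docker publish a random free port, and `public` binds the port on 0.0.0.0
# instead of 127.0.0.1.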
m = re.fullmatch("^(([0-9]{1,5})(?:/(?:tcp|udp))?):([0-9]{1,5})$", mapping)
if not m:
typer.secho(
f"Invalid port specification '{mapping}'",
fg=typer.colors.RED,
)
raise typer.Exit(code=1)
container = m.group(1)
srcPort = int(m.group(2))
hostPort = int(m.group(3))
if srcPort < 0 or srcPort > 65535:
typer.secho(
f"Invalid port number '{srcPort}'",
fg=typer.colors.RED,
)
raise typer.Exit(code=1)
if hostPort < 0 or hostPort > 65535:
typer.secho(
f"Invalid port number '{srcPort}'",
fg=typer.colors.RED,
)
raise typer.Exit(code=1)
host = "0.0.0.0" if public else "127.0.0.1"
return {container: (host, hostPort) if hostPort != 0 else None}
def port_map(portList: list, public: bool) -> dict:
portMapping = {}
for p in portList:
portMapping.update(port_mapping(p, public))
return portMapping
def port_env_mapping(mapping: str) -> str:
m = re.fullmatch("^(([0-9]{1,5})(?:/(?:tcp|udp))?):([0-9]{1,5})$", mapping)
if not m:
typer.secho(
f"Invalid port specification '{mapping}'",
fg=typer.colors.RED,
)
raise typer.Exit(code=1)
srcPort = int(m.group(2))
hostPort = int(m.group(3))
if srcPort < 0 or srcPort > 65535:
typer.secho(
f"Invalid port number '{srcPort}'",
fg=typer.colors.RED,
)
raise typer.Exit(code=1)
if hostPort < 0 or hostPort > 65535:
typer.secho(
f"Invalid port number '{srcPort}'",
fg=typer.colors.RED,
)
raise typer.Exit(code=1)
return f"PORT_{srcPort}={hostPort}"
def port_env_map(portList: list) -> list:
return [port_env_mapping(p) for p in portList]
def fetch_latest(client: docker.client, repository, **kwargs):
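# Pull the newest image for the repository; if an older local copy exists and its id
# differs from the freshly pulled image, remove the old one so stale images do not pile up.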
_LOGGER.info(
f'pulling latest version of the "{repository}" docker image, this may take a while...'
)
oldImage = None
try:
oldImage = client.images.get(repository)
except:
pass
image = client.images.pull(repository)
_LOGGER.info("Done pulling the latest docker image")
if oldImage and oldImage.id != image.id:
oldImage.remove()
return image
def create_container(client: docker.client, course: dict, **kwargs):
try:
_LOGGER.info(f"checking if image {course['image']} exists locally...")
i = client.images.get(course["image"])
_LOGGER.info("Image exists locally.")
except docker.errors.ImageNotFound as e:
_LOGGER.info("Image is not found, start will take a while to pull first.")
typer.secho(
f"Course image needs to be downloaded, this may take a while...",
fg=typer.colors.YELLOW,
)
_LOGGER.info(f"starting `{course['image']}` container as `{course['name']}`...")
try:
container = client.containers.run(
course["image"],
ports=port_map(course["ports"], course.get("public", False)),
environment=port_env_map(course["ports"]),
name=f'scioer_{course["name"]}',
hostname=course["name"],
tty=True,
detach=True,
volumes=[f"{course['volume']}:/course"],
)
except docker.errors.ImageNotFound as e:
_LOGGER.error("Image not found.", e)
typer.secho(
f"Course image not found, check the config file that the image name is correct.",
fg=typer.colors.RED,
)
sys.exit(1)
except docker.errors.APIError as e:
_LOGGER.debug(f"Failed to start the container: {e}")
if e.status_code == 409:
typer.secho(
f"Container name already in use. Please delete the container with the name `scioer_{course['name']}` before trying again.",
fg=typer.colors.RED,
)
sys.exit(2)
elif e.status_code == 404:
typer.secho(
f"Course image not found, check the config file that the image name is correct.",
fg=typer.colors.RED,
)
sys.exit(3)
typer.secho(
f"Unknown error: {e.explanation}, aborting...",
fg=typer.colors.RED,
)
sys.exit(4)
return container
def start_container(client: docker.client, course: dict, **kwargs):
container = None
try:
container = client.containers.get(f'scioer_{course["name"]}')
_LOGGER.info(f'Container for `scioer_{course["name"]}` already exists.')
if container.status == "running":
_LOGGER.info("Container is already running, not restarting.")
else:
_LOGGER.info("Restarting container")
container.start()
_LOGGER.info("Successfully started")
except:
_LOGGER.info(f'Container `scioer_{course["name"]}` does not exist, starting...')
container = create_container(client, course)
return container
def stop_container(client: docker.client, courseName: str, keep: bool, **kwargs):
_LOGGER.info("stopping docker container...")
try:
container = client.containers.get(f"scioer_{courseName}")
except:
typer.secho(
f"Container for course '{courseName}' is not running",
fg=typer.colors.YELLOW,
)
return
container.stop()
typer.secho(
f"Container for course '{courseName}' is has been stopped",
fg=typer.colors.GREEN,
)
if not keep:
delete_container(container)
def attach(client: docker.client, courseName: str, **kwargs):
_LOGGER.info("attaching to docker container...")
try:
container = client.containers.get(f"scioer_{courseName}")
except:
typer.secho(
f"Container for course '{courseName}' is not running",
fg=typer.colors.YELLOW,
)
return
os.system(f"docker exec -it scioer_{courseName} cat /scripts/motd.txt")
typer.echo("Starting interactive shell in the container, type `exit` to quit.")
os.system(f"docker exec -it scioer_{courseName} bash --login")
def delete_container(container, **kwargs):
_LOGGER.info("Deleting container...")
container.remove()
def setup():
client = None
try:
client = docker.from_env()
except:
typer.secho(
"failed to connect to docker, check that Docker is running on the host.",
fg=typer.colors.RED,
)
raise typer.Exit(code=1)
return client
|
sci-oer
|
/sci_oer-1.3.0-py3-none-any.whl/scioer/docker.py
|
docker.py
|
from typing import Container
import docker
import logging
import os
import typer
import subprocess
import re
from collections.abc import Mapping
import sys
_LOGGER = logging.getLogger(__name__)
def port_mapping(mapping: str, public: bool) -> Mapping:
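# A mapping spec has the form "containerPort[/tcp|udp]:hostPort"; a host port of 0
# lets Docker publish a random free port, and `public` binds the port on 0.0.0.0
# instead of 127.0.0.1.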
m = re.fullmatch("^(([0-9]{1,5})(?:/(?:tcp|udp))?):([0-9]{1,5})$", mapping)
if not m:
typer.secho(
f"Invalid port specification '{mapping}'",
fg=typer.colors.RED,
)
raise typer.Exit(code=1)
container = m.group(1)
srcPort = int(m.group(2))
hostPort = int(m.group(3))
if srcPort < 0 or srcPort > 65535:
typer.secho(
f"Invalid port number '{srcPort}'",
fg=typer.colors.RED,
)
raise typer.Exit(code=1)
if hostPort < 0 or hostPort > 65535:
typer.secho(
f"Invalid port number '{srcPort}'",
fg=typer.colors.RED,
)
raise typer.Exit(code=1)
host = "0.0.0.0" if public else "127.0.0.1"
return {container: (host, hostPort) if hostPort != 0 else None}
def port_map(portList: list, public: bool) -> dict:
portMapping = {}
for p in portList:
portMapping.update(port_mapping(p, public))
return portMapping
def port_env_mapping(mapping: str) -> str:
m = re.fullmatch("^(([0-9]{1,5})(?:/(?:tcp|udp))?):([0-9]{1,5})$", mapping)
if not m:
typer.secho(
f"Invalid port specification '{mapping}'",
fg=typer.colors.RED,
)
raise typer.Exit(code=1)
srcPort = int(m.group(2))
hostPort = int(m.group(3))
if srcPort < 0 or srcPort > 65535:
typer.secho(
f"Invalid port number '{srcPort}'",
fg=typer.colors.RED,
)
raise typer.Exit(code=1)
if hostPort < 0 or hostPort > 65535:
typer.secho(
f"Invalid port number '{srcPort}'",
fg=typer.colors.RED,
)
raise typer.Exit(code=1)
return f"PORT_{srcPort}={hostPort}"
def port_env_map(portList: list) -> list:
return [port_env_mapping(p) for p in portList]
def fetch_latest(client: docker.client, repository, **kwargs):
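# Pull the newest image for the repository; if an older local copy exists and its id
# differs from the freshly pulled image, remove the old one so stale images do not pile up.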
_LOGGER.info(
f'pulling latest version of the "{repository}" docker image, this may take a while...'
)
oldImage = None
try:
oldImage = client.images.get(repository)
except:
pass
image = client.images.pull(repository)
_LOGGER.info("Done pulling the latest docker image")
if oldImage and oldImage.id != image.id:
oldImage.remove()
return image
def create_container(client: docker.client, course: dict, **kwargs):
try:
_LOGGER.info(f"checking if image {course['image']} exists locally...")
i = client.images.get(course["image"])
_LOGGER.info("Image exists locally.")
except docker.errors.ImageNotFound as e:
_LOGGER.info("Image is not found, start will take a while to pull first.")
typer.secho(
f"Course image needs to be downloaded, this may take a while...",
fg=typer.colors.YELLOW,
)
_LOGGER.info(f"starting `{course['image']}` container as `{course['name']}`...")
try:
container = client.containers.run(
course["image"],
ports=port_map(course["ports"], course.get("public", False)),
environment=port_env_map(course["ports"]),
name=f'scioer_{course["name"]}',
hostname=course["name"],
tty=True,
detach=True,
volumes=[f"{course['volume']}:/course"],
)
except docker.errors.ImageNotFound as e:
_LOGGER.error("Image not found.", e)
typer.secho(
f"Course image not found, check the config file that the image name is correct.",
fg=typer.colors.RED,
)
sys.exit(1)
except docker.errors.APIError as e:
_LOGGER.debug(f"Failed to start the container: {e}")
if e.status_code == 409:
typer.secho(
f"Container name already in use. Please delete the container with the name `scioer_{course['name']}` before trying again.",
fg=typer.colors.RED,
)
sys.exit(2)
elif e.status_code == 404:
typer.secho(
f"Course image not found, check the config file that the image name is correct.",
fg=typer.colors.RED,
)
sys.exit(3)
typer.secho(
f"Unknown error: {e.explanation}, aborting...",
fg=typer.colors.RED,
)
sys.exit(4)
return container
def start_container(client: docker.client, course: dict, **kwargs):
container = None
try:
container = client.containers.get(f'scioer_{course["name"]}')
_LOGGER.info(f'Container for `scioer_{course["name"]}` already exists.')
if container.status == "running":
_LOGGER.info("Container is already running, not restarting.")
else:
_LOGGER.info("Restarting container")
container.start()
_LOGGER.info("Successfully started")
except:
_LOGGER.info(f'Container `scioer_{course["name"]}` does not exist, starting...')
container = create_container(client, course)
return container
def stop_container(client: docker.client, courseName: str, keep: bool, **kwargs):
_LOGGER.info("stopping docker container...")
try:
container = client.containers.get(f"scioer_{courseName}")
except:
typer.secho(
f"Container for course '{courseName}' is not running",
fg=typer.colors.YELLOW,
)
return
container.stop()
typer.secho(
f"Container for course '{courseName}' is has been stopped",
fg=typer.colors.GREEN,
)
if not keep:
delete_container(container)
def attach(client: docker.client, courseName: str, **kwargs):
_LOGGER.info("attaching to docker container...")
try:
container = client.containers.get(f"scioer_{courseName}")
except:
typer.secho(
f"Container for course '{courseName}' is not running",
fg=typer.colors.YELLOW,
)
return
os.system(f"docker exec -it scioer_{courseName} cat /scripts/motd.txt")
typer.echo("Starting interactive shell in the container, type `exit` to quit.")
os.system(f"docker exec -it scioer_{courseName} bash --login")
def delete_container(container, **kwargs):
_LOGGER.info("Deleting container...")
container.remove()
def setup():
client = None
try:
client = docker.from_env()
except:
typer.secho(
"failed to connect to docker, check that Docker is running on the host.",
fg=typer.colors.RED,
)
raise typer.Exit(code=1)
return client
| 0.540924 | 0.181191 |
import typer
from collections.abc import Mapping
import click
import scioer.config.load as load
import scioer.config.parse as parser
import os
import re
from typing import Optional
from pathlib import Path
import logging
try:
import readline
except:
import sys
if sys.platform == "win32" or sys.platform == "cygwin":
try:
from pyreadline3 import Readline
except:
pass
__version__ = "UNKNOWN"
try:
from scioer.__version__ import __version__
except:
pass
import scioer.docker as docker
_LOGGER = logging.getLogger(__name__)
app = typer.Typer(
name="Self Contained Interactive Open Educational Resource Helper",
help=""" A CLI tool to help configure, start, stop course resources.
\b
Common usage commands:
1. `scioer config`
.... fill in the form
2. `scioer start <course>`
3. `scioer shell <course>`
4. `scioer stop <course>`
\f
""",
no_args_is_help=True,
context_settings={"help_option_names": ["-h", "--help"]},
)
def conf_callback(ctx: typer.Context, param: typer.CallbackParam, value: Path):
if value:
value = os.path.realpath(os.path.expanduser(str(value)))
configFiles = load.get_config_files(value)
config = load.filter_config_files(configFiles)
if not value and not config:
config = configFiles[0]
_LOGGER.info(f"No config file found, using default: {config}")
elif value and value != config:
config = value
_LOGGER.info(f"Config file does not exist yet, using anyway: {config}")
if config:
_LOGGER.info(f"Loading from config file: {config}")
data = parser.load_config_file(config)
ctx.default_map = ctx.default_map or {} # Initialize the default map
ctx.default_map.update(data) # Merge the config dict into default_map
ctx.obj = ctx.obj or {} # Initialize the object
ctx.obj.update(
{"config_file": config, "config": data}
) # Merge the config dict into object
return config
def version_callback(value: bool):
if value:
typer.echo(f"scioer CLI Version: {__version__}")
raise typer.Exit()
configOption = typer.Option(
None,
"--config",
"-c",
metavar="FILE",
dir_okay=False,
resolve_path=False,
readable=True,
writable=True,
callback=conf_callback,
is_eager=True,
help="Path to the yaml config file",
)
courseNameArgument = typer.Argument(
None, metavar="COURSE_NAME", help="The name of the course"
)
def load_course(config: Mapping, courseName: str, ask: bool = True):
course = config.get(courseName, {})
while ask and not course:
if courseName:
typer.secho(
f'Course "{courseName} is not found. use `scioer config` if you want to create it.',
fg=typer.colors.YELLOW,
)
courses = [k for k in config.keys() if isinstance(config[k], dict)]
courseName = click.prompt(
"Course not found, did you mean one of:", type=click.Choice(courses)
)
course = config.get(courseName, {})
if course:
course["name"] = courseName
_LOGGER.info(f"course content: {course}")
return course
def print_post_start_help(courseName):
typer.secho(f"Started the {courseName} course resource", fg=typer.colors.GREEN)
typer.secho(
f"Login for the wiki: '[email protected]' 'password'", fg=typer.colors.GREEN
)
typer.echo("-----")
typer.secho(f"To stop the course: scioer stop {courseName}", fg=typer.colors.YELLOW)
typer.secho(
f"To get a shell for the course: scioer shell {courseName}",
fg=typer.colors.YELLOW,
)
typer.secho(
f"To re-start the course: scioer start {courseName}", fg=typer.colors.YELLOW
)
typer.secho(
f"To get information on the courses: scioer status", fg=typer.colors.YELLOW
)
@app.command()
def start(
ctx: typer.Context,
name: Optional[str] = courseNameArgument,
pull: bool = False,
configFile: Optional[Path] = configOption,
):
"""Start a oer container"""
client = docker.setup()
config = ctx.default_map
if not name and len(config.keys()) == 1:
name = list(config.keys())[0]
course = load_course(config, name)
if course.get("auto_pull", False) or pull:
typer.secho(
f"Pulling the latest version of the {course['name']}",
fg=typer.colors.GREEN,
)
typer.secho(
f"This may take a while...",
fg=typer.colors.YELLOW,
)
docker.fetch_latest(client, course["image"])
typer.secho("Starting...", fg=typer.colors.GREEN)
docker.start_container(client, course)
print_post_start_help(course["name"])
@app.command()
def stop(
ctx: typer.Context,
name: Optional[str] = courseNameArgument,
keep: Optional[bool] = typer.Option(False, "--no-remove", "-k"),
configFile: Optional[Path] = configOption,
):
"""Stop a running course container"""
client = docker.setup()
config = ctx.default_map
if not name and len(config.keys()) == 1:
name = list(config.keys())[0]
course = load_course(config, name, ask=False)
if course:
typer.secho(
f"Stopping course container, this make take a couple seconds...",
fg=typer.colors.RED,
)
docker.stop_container(client, course["name"], keep)
else:
typer.secho(f'Course "{name}" is not running.', fg=typer.colors.YELLOW)
@app.command()
def shell(
ctx: typer.Context,
name: str = courseNameArgument,
configFile: Optional[Path] = configOption,
):
"""Start a shell in a running course container"""
client = docker.setup()
config = ctx.default_map
if not name and len(config.keys()) == 1:
name = list(config.keys())[0]
course = load_course(config, name, ask=False)
if course:
docker.attach(client, course["name"])
else:
typer.secho(f'Course "{name}" is not running.', fg=typer.colors.YELLOW)
def print_container(container):
indent = 2
print(f"Course: {container.name[7:]} ({container.status})")
print(f'{" " * indent }Ports:')
ports = list(container.ports.items())
if ports:
for port in ports[:-1]:
print(f'{" " * indent }├── {port[1][0]["HostPort"]} -> {port[0][:-4]}')
port = ports[-1]
print(f'{" " * indent }└── {port[1][0]["HostPort"]} -> {port[0][:-4]}')
print(f'{" " * indent }Volumes:')
volumes = [v for v in container.attrs["Mounts"] if v["Type"] == "bind"]
home = os.path.expanduser("~")
if volumes:
for volume in volumes[:-1]:
hostPath = volume["Source"].replace(home, "~")
print(f'{" " * indent }├── {hostPath} as {volume["Destination"]}')
volume = volumes[-1]
hostPath = volume["Source"].replace(home, "~")
print(f'{" " * indent }└── {hostPath} as {volume["Destination"]}')
@app.command()
def status(
ctx: typer.Context,
configFile: Optional[Path] = configOption,
):
"""Prints all the information about the running container"""
client = docker.setup()
config = ctx.default_map
config_file = ctx.obj["config_file"]
_LOGGER.info(f"Config contents: {config}")
names = list(config.keys())
home = os.path.expanduser("~")
configPath = config_file.replace(home, "~")
print(f"Config file: {configPath}")
# containers = client.containers.list(all=True)
# containers = [c for c in containers if c.name.replace('scioer_', '') in names ]
filtered = []
notRunning = []
for n in names:
try:
c = client.containers.get(f"scioer_{n}")
_LOGGER.debug(f"Container information for course {n}: {c.attrs}")
filtered.append(c)
except:
notRunning.append(n)
for c in filtered:
print_container(c)
for n in notRunning:
print(f"Course: {n} (Not Running)")
def prompt_port(message: str, default: int) -> int:
value = typer.prompt(
message,
default=default,
type=int,
)
while value < 0 or value > 65535:
typer.secho(
f"`{value}` is not a valid port number.",
fg=typer.colors.RED,
)
value = typer.prompt(
message,
default=default,
type=int,
)
return value
def port_mapping(mapping: str) -> Mapping:
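# Parse a "containerPort[/tcp|udp][:hostPort]" spec; the host port defaults to the
# container port, and an empty dict is returned on invalid input so callers can re-prompt.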
m = re.fullmatch("^(([0-9]{1,5})(?:/(?:tcp|udp))?)(?::([0-9]{1,5}))?$", mapping)
if not m:
return {}
container = m.group(1)
srcPort = int(m.group(2))
hostPort = int(m.group(3) or m.group(2))
if srcPort < 0 or srcPort > 65535 or hostPort < 0 or hostPort > 65535:
typer.secho(
"Invalid port number.",
fg=typer.colors.RED,
)
return {}
return {container: hostPort}
def prompt_custom_ports() -> Mapping:
value = typer.prompt(
"Custom ports to expose, in the form of 'container[:host]', or no input to skip ",
default="",
value_proc=lambda v: v.strip(),
type=str,
)
mapping = port_mapping(value)
if value != "" and not mapping:
typer.secho(
"Invalid port specification, please try again.",
fg=typer.colors.RED,
)
mappings = mapping
while value != "":
value = typer.prompt(
"Custom ports to expose, in the form of 'container:host', or no input to skip ",
default="",
value_proc=lambda v: v.strip(),
type=str,
)
mapping = port_mapping(value)
if value != "" and not mapping:
typer.secho(
"Invalid port specification, please try again.",
fg=typer.colors.RED,
)
continue
if mapping:
mappings = {**mappings, **mapping}
return mappings
@app.command()
def config(
ctx: typer.Context,
configFile: Optional[Path] = configOption,
):
"""Setup a new course resource, or edit an existing one"""
if not configFile:
typer.echo(
f"Config file not found, or not specified. Make sure that file exists or use `--config=FILE` to specify the file"
)
raise typer.Exit(1)
config = ctx.default_map
_LOGGER.info(f"config contents: {config}")
course_name = typer.prompt("What's the name of the course?")
safe_course_name = "".join(
x for x in course_name.replace(" ", "_") if x.isalnum() or x == "_"
)
default_image = "scioer/oo-java:W23"
default_volume = os.path.join(os.path.expanduser("~/Desktop"), safe_course_name)
course = config.get(safe_course_name, {})
docker_image = typer.prompt(
"What docker image does the course use?",
default=course.get("image", default_image),
)
auto_pull = typer.confirm(
"Automatically fetch new versions", default=course.get("auto_pull", False)
)
course_storage = typer.prompt(
"Where should the files for the course be stored?",
default=course.get("volume", default_volume),
)
useDefaults = typer.confirm("Use the default ports", default=True)
mappings = {
"3000": 3000,
"8888": 8888,
"8000": 8000,
"22": 2222,
}
if not useDefaults:
wiki_port = prompt_port(
"Wiki port to expose, (0 to publish a random port)",
3000,
)
jupyter_port = prompt_port(
"Jupyter notebooks port to expose, (0 to publish a random port)",
8888,
)
lectures_port = prompt_port(
"Lectures port to expose, (0 to publish a random port)",
8000,
)
ssh_port = prompt_port(
"ssh port to expose, (0 to publish a random port)",
2222,
)
customPorts = prompt_custom_ports()
mappings = {
"3000": wiki_port,
"8888": jupyter_port,
"8000": lectures_port,
"22": ssh_port,
**customPorts,
}
ports = [f"{k}:{v}" for k, v in mappings.items()]
config[safe_course_name] = {
"image": docker_image,
"volume": os.path.realpath(os.path.expanduser(course_storage)),
"ports": ports,
"auto_pull": auto_pull,
"public": False,
}
parser.save_config_file(configFile, config)
@app.callback()
def setup(
verbose: Optional[bool] = typer.Option(False, "--verbose", "-v"),
debug: Optional[bool] = typer.Option(False, "--debug"),
version: Optional[bool] = typer.Option(
None, "--version", "-V", callback=version_callback, is_eager=True
),
):
level = logging.WARNING
if debug:
level = logging.DEBUG
elif verbose:
level = logging.INFO
logging.basicConfig(level=level)
if __name__ == "__main__":
app()
|
sci-oer
|
/sci_oer-1.3.0-py3-none-any.whl/scioer/cli.py
|
cli.py
|
import typer
from collections.abc import Mapping
import click
import scioer.config.load as load
import scioer.config.parse as parser
import os
import re
from typing import Optional
from pathlib import Path
import logging
try:
import readline
except:
import sys
if sys.platform == "win32" or sys.platform == "cygwin":
try:
from pyreadline3 import Readline
except:
pass
__version__ = "UNKNOWN"
try:
from scioer.__version__ import __version__
except:
pass
import scioer.docker as docker
_LOGGER = logging.getLogger(__name__)
app = typer.Typer(
name="Self Contained Interactive Open Educational Resource Helper",
help=""" A CLI tool to help configure, start, stop course resources.
\b
Common usage commands:
1. `scioer config`
.... fill in the form
2. `scioer start <course>`
3. `scioer shell <course>`
4. `scioer stop <course>`
\f
""",
no_args_is_help=True,
context_settings={"help_option_names": ["-h", "--help"]},
)
def conf_callback(ctx: typer.Context, param: typer.CallbackParam, value: Path):
if value:
value = os.path.realpath(os.path.expanduser(str(value)))
configFiles = load.get_config_files(value)
config = load.filter_config_files(configFiles)
if not value and not config:
config = configFiles[0]
_LOGGER.info(f"No config file found, using default: {config}")
elif value and value != config:
config = value
_LOGGER.info(f"Config file does not exist yet, using anyway: {config}")
if config:
_LOGGER.info(f"Loading from config file: {config}")
data = parser.load_config_file(config)
ctx.default_map = ctx.default_map or {} # Initialize the default map
ctx.default_map.update(data) # Merge the config dict into default_map
ctx.obj = ctx.obj or {} # Initialize the object
ctx.obj.update(
{"config_file": config, "config": data}
) # Merge the config dict into object
return config
def version_callback(value: bool):
if value:
typer.echo(f"scioer CLI Version: {__version__}")
raise typer.Exit()
configOption = typer.Option(
None,
"--config",
"-c",
metavar="FILE",
dir_okay=False,
resolve_path=False,
readable=True,
writable=True,
callback=conf_callback,
is_eager=True,
help="Path to the yaml config file",
)
courseNameArgument = typer.Argument(
None, metavar="COURSE_NAME", help="The name of the course"
)
def load_course(config: Mapping, courseName: str, ask: bool = True):
course = config.get(courseName, {})
while ask and not course:
if courseName:
typer.secho(
f'Course "{courseName} is not found. use `scioer config` if you want to create it.',
fg=typer.colors.YELLOW,
)
courses = [k for k in config.keys() if isinstance(config[k], dict)]
courseName = click.prompt(
"Course not found, did you mean one of:", type=click.Choice(courses)
)
course = config.get(courseName, {})
if course:
course["name"] = courseName
_LOGGER.info(f"course content: {course}")
return course
def print_post_start_help(courseName):
typer.secho(f"Started the {courseName} course resource", fg=typer.colors.GREEN)
typer.secho(
f"Login for the wiki: '[email protected]' 'password'", fg=typer.colors.GREEN
)
typer.echo("-----")
typer.secho(f"To stop the course: scioer stop {courseName}", fg=typer.colors.YELLOW)
typer.secho(
f"To get a shell for the course: scioer shell {courseName}",
fg=typer.colors.YELLOW,
)
typer.secho(
f"To re-start the course: scioer start {courseName}", fg=typer.colors.YELLOW
)
typer.secho(
f"To get information on the courses: scioer status", fg=typer.colors.YELLOW
)
@app.command()
def start(
ctx: typer.Context,
name: Optional[str] = courseNameArgument,
pull: bool = False,
configFile: Optional[Path] = configOption,
):
"""Start a oer container"""
client = docker.setup()
config = ctx.default_map
if not name and len(config.keys()) == 1:
name = list(config.keys())[0]
course = load_course(config, name)
if course.get("auto_pull", False) or pull:
typer.secho(
f"Pulling the latest version of the {course['name']}",
fg=typer.colors.GREEN,
)
typer.secho(
f"This may take a while...",
fg=typer.colors.YELLOW,
)
docker.fetch_latest(client, course["image"])
typer.secho("Starting...", fg=typer.colors.GREEN)
docker.start_container(client, course)
print_post_start_help(course["name"])
@app.command()
def stop(
ctx: typer.Context,
name: Optional[str] = courseNameArgument,
keep: Optional[bool] = typer.Option(False, "--no-remove", "-k"),
configFile: Optional[Path] = configOption,
):
"""Stop a running course container"""
client = docker.setup()
config = ctx.default_map
if not name and len(config.keys()) == 1:
name = list(config.keys())[0]
course = load_course(config, name, ask=False)
if course:
typer.secho(
f"Stopping course container, this make take a couple seconds...",
fg=typer.colors.RED,
)
docker.stop_container(client, course["name"], keep)
else:
typer.secho(f'Course "{name}" is not running.', fg=typer.colors.YELLOW)
@app.command()
def shell(
ctx: typer.Context,
name: str = courseNameArgument,
configFile: Optional[Path] = configOption,
):
"""Start a shell in a running course container"""
client = docker.setup()
config = ctx.default_map
if not name and len(config.keys()) == 1:
name = list(config.keys())[0]
course = load_course(config, name, ask=False)
if course:
docker.attach(client, course["name"])
else:
typer.secho(f'Course "{name}" is not running.', fg=typer.colors.YELLOW)
def print_container(container):
indent = 2
print(f"Course: {container.name[7:]} ({container.status})")
print(f'{" " * indent }Ports:')
ports = list(container.ports.items())
if ports:
for port in ports[:-1]:
print(f'{" " * indent }├── {port[1][0]["HostPort"]} -> {port[0][:-4]}')
port = ports[-1]
print(f'{" " * indent }└── {port[1][0]["HostPort"]} -> {port[0][:-4]}')
print(f'{" " * indent }Volumes:')
volumes = [v for v in container.attrs["Mounts"] if v["Type"] == "bind"]
home = os.path.expanduser("~")
if volumes:
for volume in volumes[:-1]:
hostPath = volume["Source"].replace(home, "~")
print(f'{" " * indent }├── {hostPath} as {volume["Destination"]}')
volume = volumes[-1]
hostPath = volume["Source"].replace(home, "~")
print(f'{" " * indent }└── {hostPath} as {volume["Destination"]}')
@app.command()
def status(
ctx: typer.Context,
configFile: Optional[Path] = configOption,
):
"""Prints all the information about the running container"""
client = docker.setup()
config = ctx.default_map
config_file = ctx.obj["config_file"]
_LOGGER.info(f"Config contents: {config}")
names = list(config.keys())
home = os.path.expanduser("~")
configPath = config_file.replace(home, "~")
print(f"Config file: {configPath}")
# containers = client.containers.list(all=True)
# containers = [c for c in containers if c.name.replace('scioer_', '') in names ]
filtered = []
notRunning = []
for n in names:
try:
c = client.containers.get(f"scioer_{n}")
_LOGGER.debug(f"Container information for course {n}: {c.attrs}")
filtered.append(c)
except:
notRunning.append(n)
for c in filtered:
print_container(c)
for n in notRunning:
print(f"Course: {n} (Not Running)")
def prompt_port(message: str, default: int) -> int:
value = typer.prompt(
message,
default=default,
type=int,
)
while value < 0 or value > 65535:
typer.secho(
f"`{value}` is not a valid port number.",
fg=typer.colors.RED,
)
value = typer.prompt(
message,
default=default,
type=int,
)
return value
def port_mapping(mapping: str) -> Mapping:
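# Parse a "containerPort[/tcp|udp][:hostPort]" spec; the host port defaults to the
# container port, and an empty dict is returned on invalid input so callers can re-prompt.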
m = re.fullmatch("^(([0-9]{1,5})(?:/(?:tcp|udp))?)(?::([0-9]{1,5}))?$", mapping)
if not m:
return {}
container = m.group(1)
srcPort = int(m.group(2))
hostPort = int(m.group(3) or m.group(2))
if srcPort < 0 or srcPort > 65535 or hostPort < 0 or hostPort > 65535:
typer.secho(
"Invalid port number.",
fg=typer.colors.RED,
)
return {}
return {container: hostPort}
def prompt_custom_ports() -> Mapping:
value = typer.prompt(
"Custom ports to expose, in the form of 'container[:host]', or no input to skip ",
default="",
value_proc=lambda v: v.strip(),
type=str,
)
mapping = port_mapping(value)
if value != "" and not mapping:
typer.secho(
"Invalid port specification, please try again.",
fg=typer.colors.RED,
)
mappings = mapping
while value != "":
value = typer.prompt(
"Custom ports to expose, in the form of 'container:host', or no input to skip ",
default="",
value_proc=lambda v: v.strip(),
type=str,
)
mapping = port_mapping(value)
if value != "" and not mapping:
typer.secho(
"Invalid port specification, please try again.",
fg=typer.colors.RED,
)
continue
if mapping:
mappings = {**mappings, **mapping}
return mappings
@app.command()
def config(
ctx: typer.Context,
configFile: Optional[Path] = configOption,
):
"""Setup a new course resource, or edit an existing one"""
if not configFile:
typer.echo(
f"Config file not found, or not specified. Make sure that file exists or use `--config=FILE` to specify the file"
)
raise typer.Exit(1)
config = ctx.default_map
_LOGGER.info(f"config contents: {config}")
course_name = typer.prompt("What's the name of the course?")
safe_course_name = "".join(
x for x in course_name.replace(" ", "_") if x.isalnum() or x == "_"
)
default_image = "scioer/oo-java:W23"
default_volume = os.path.join(os.path.expanduser("~/Desktop"), safe_course_name)
course = config.get(safe_course_name, {})
docker_image = typer.prompt(
"What docker image does the course use?",
default=course.get("image", default_image),
)
auto_pull = typer.confirm(
"Automatically fetch new versions", default=course.get("auto_pull", False)
)
course_storage = typer.prompt(
"Where should the files for the course be stored?",
default=course.get("volume", default_volume),
)
useDefaults = typer.confirm("Use the default ports", default=True)
mappings = {
"3000": 3000,
"8888": 8888,
"8000": 8000,
"22": 2222,
}
if not useDefaults:
wiki_port = prompt_port(
"Wiki port to expose, (0 to publish a random port)",
3000,
)
jupyter_port = prompt_port(
"Jupyter notebooks port to expose, (0 to publish a random port)",
8888,
)
lectures_port = prompt_port(
"Lectures port to expose, (0 to publish a random port)",
8000,
)
ssh_port = prompt_port(
"ssh port to expose, (0 to publish a random port)",
2222,
)
customPorts = prompt_custom_ports()
mappings = {
"3000": wiki_port,
"8888": jupyter_port,
"8000": lectures_port,
"22": ssh_port,
**customPorts,
}
ports = [f"{k}:{v}" for k, v in mappings.items()]
config[safe_course_name] = {
"image": docker_image,
"volume": os.path.realpath(os.path.expanduser(course_storage)),
"ports": ports,
"auto_pull": auto_pull,
"public": False,
}
parser.save_config_file(configFile, config)
@app.callback()
def setup(
verbose: Optional[bool] = typer.Option(False, "--verbose", "-v"),
debug: Optional[bool] = typer.Option(False, "--debug"),
version: Optional[bool] = typer.Option(
None, "--version", "-V", callback=version_callback, is_eager=True
),
):
level = logging.WARNING
if debug:
level = logging.DEBUG
elif verbose:
level = logging.INFO
logging.basicConfig(level=level)
if __name__ == "__main__":
app()
| 0.496582 | 0.158435 |
# sci palettes for matplotlib/seaborn
## Installation
```bash
python3 -m pip install sci-palettes
```
## Usage
```python
import seaborn as sns
import matplotlib.pyplot as plt
import sci_palettes
print(sci_palettes.PALETTES.keys())
sci_palettes.register_cmap() # register all palettes
sci_palettes.register_cmap('aaas') # register a specific palette
# methods for setting palette
plt.set_cmap('aaas')
plt.style.use('aaas')
sns.set_theme(palette='aaas')
sns.set_palette('aaas')
sns.scatterplot(...)
# set palette when plotting
sns.scatterplot(..., palette='aaas')
```
> Full examples in [examples](https://suqingdong.github.io/sci_palettes/examples/test.py)
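For a complete, copy-pasteable example, the snippet below registers a single palette and uses it for a seaborn scatter plot (the `npg_nrc` name comes from `sci_palettes.PALETTES`; seaborn's bundled `tips` dataset is used only for illustration):

```python
import seaborn as sns
import matplotlib.pyplot as plt
import sci_palettes

sci_palettes.register_cmap('npg_nrc')   # register one palette by name
sns.set_palette('npg_nrc')              # make it the default seaborn palette

tips = sns.load_dataset('tips')         # sample data, downloaded on first use
ax = sns.scatterplot(data=tips, x='total_bill', y='tip', hue='day')
ax.figure.savefig('npg_nrc_example.png', dpi=150)
```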
## Gallery
<details>
<summary>Click to expand</summary>
<div>
<h3>AAAS</h3>
<img src="https://suqingdong.github.io/sci_palettes/examples/aaas.png" />
</div>
<div>
<h3>JAMA</h3>
<img src="https://suqingdong.github.io/sci_palettes/examples/jama.png" />
</div>
<div>
<h3>NPG</h3>
<img src="https://suqingdong.github.io/sci_palettes/examples/npg_nrc.png" />
</div>
<div>
<h3>JCO</h3>
<img src="https://suqingdong.github.io/sci_palettes/examples/jco.png" />
</div>
<div>
<h3>LANCET</h3>
<img src="https://suqingdong.github.io/sci_palettes/examples/lancet_lanonc.png" />
</div>
</details>
### Inspired by the R Package [ggsci](https://github.com/nanxstats/ggsci)
|
sci-palettes
|
/sci-palettes-1.0.1.tar.gz/sci-palettes-1.0.1/README.md
|
README.md
|
python3 -m pip install sci-palettes
import seaborn as sns
import matplotlib.pyplot as plt
import sci_palettes
print(sci_palettes.PALETTES.keys())
sci_palettes.register_cmap() # register all palettes
sci_palettes.register_cmap('aaas') # register a specific palette
# methods for setting palette
plt.set_cmap('aaas')
plt.style.use('aaas')
sns.set_theme(palette='aaas')
sns.set_palette('aaas')
sns.scatterplot(...)
# set palette when plotting
sns.scatterplot(..., palette='aaas')
| 0.46393 | 0.665913 |
PALETTES = {
"npg_nrc": {
"Cinnabar": "#E64B35",
"Shakespeare": "#4DBBD5",
"PersianGreen": "#00A087",
"Chambray": "#3C5488",
"Apricot": "#F39B7F",
"WildBlueYonder": "#8491B4",
"MonteCarlo": "#91D1C2",
"Monza": "#DC0000",
"RomanCoffee": "#7E6148",
"Sandrift": "#B09C85"
},
"aaas": {
"Chambray": "#3B4992",
"Red": "#EE0000",
"FunGreen": "#008B45",
"HoneyFlower": "#631879",
"Teal": "#008280",
"Monza": "#BB0021",
"ButterflyBush": "#5F559B",
"FreshEggplant": "#A20056",
"Stack": "#808180",
"CodGray": "#1B1919"
},
"nejm": {
"TallPoppy": "#BC3C29",
"DeepCerulean": "#0072B5",
"Zest": "#E18727",
"Eucalyptus": "#20854E",
"WildBlueYonder": "#7876B1",
"Gothic": "#6F99AD",
"Salomie": "#FFDC91",
"FrenchRose": "#EE4C97"
},
"lancet_lanonc": {
"CongressBlue": "#00468B",
"Red": "#ED0000",
"Apple": "#42B540",
"BondiBlue": "#0099B4",
"TrendyPink": "#925E9F",
"MonaLisa": "#FDAF91",
"Carmine": "#AD002A",
"Edward": "#ADB6B6",
"CodGray": "#1B1919"
},
"jama": {
"Limed Spruce": "#374E55",
"Anzac": "#DF8F44",
"Cerulean": "#00A1D5",
"Apple Blossom": "#B24745",
"Acapulco": "#79AF97",
"Kimberly": "#6A6599",
"Makara": "#80796B"
},
"jco": {
"Lochmara": "#0073C2",
"Corn": "#EFC000",
"Gray": "#868686",
"ChestnutRose": "#CD534C",
"Danube": "#7AA6DC",
"RegalBlue": "#003C67",
"Olive": "#8F7700",
"MineShaft": "#3B3B3B",
"WellRead": "#A73030",
"KashmirBlue": "#4A6990"
},
"ucscgb": {
"chr5": "#FF0000",
"chr8": "#FF9900",
"chr9": "#FFCC00",
"chr12": "#00FF00",
"chr15": "#6699FF",
"chr20": "#CC33FF",
"chr3": "#99991E",
"chrX": "#999999",
"chr6": "#FF00CC",
"chr4": "#CC0000",
"chr7": "#FFCCCC",
"chr10": "#FFFF00",
"chr11": "#CCFF00",
"chr13": "#358000",
"chr14": "#0000CC",
"chr16": "#99CCFF",
"chr17": "#00FFFF",
"chr18": "#CCFFFF",
"chr19": "#9900CC",
"chr21": "#CC99FF",
"chr1": "#996600",
"chr2": "#666600",
"chr22": "#666666",
"chrY": "#CCCCCC",
"chrUn": "#79CC3D",
"chrM": "#CCCC99"
},
"d3_category10": {
"Matisse": "#1F77B4",
"Flamenco": "#FF7F0E",
"ForestGreen": "#2CA02C",
"Punch": "#D62728",
"Wisteria": "#9467BD",
"SpicyMix": "#8C564B",
"Orchid": "#E377C2",
"Gray": "#7F7F7F",
"KeyLimePie": "#BCBD22",
"Java": "#17BECF"
},
"d3_category20": {
"Matisse": "#1F77B4",
"Flamenco": "#FF7F0E",
"ForestGreen": "#2CA02C",
"Punch": "#D62728",
"Wisteria": "#9467BD",
"SpicyMix": "#8C564B",
"Orchid": "#E377C2",
"Gray": "#7F7F7F",
"KeyLimePie": "#BCBD22",
"Java": "#17BECF",
"Spindle": "#AEC7E8",
"MaC": "#FFBB78",
"Feijoa": "#98DF8A",
"MonaLisa": "#FF9896",
"LavenderGray": "#C5B0D5",
"Quicksand": "#C49C94",
"Chantilly": "#F7B6D2",
"Silver": "#C7C7C7",
"Deco": "#DBDB8D",
"RegentStBlue": "#9EDAE5"
},
"d3_category20b": {
"EastBay": "#393B79",
"ChaletGreen": "#637939",
"Pesto": "#8C6D31",
"Lotus": "#843C39",
"CannonPink": "#7B4173",
"ButterflyBush": "#5254A3",
"ChelseaCucumber": "#8CA252",
"Tussock": "#BD9E39",
"AppleBlossom": "#AD494A",
"Tapestry": "#A55194",
"MoodyBlue": "#6B6ECF",
"WildWillow": "#B5CF6B",
"Ronchi": "#E7BA52",
"ChestnutRose": "#D6616B",
"Hopbush": "#CE6DBD",
"ColdPurple": "#9C9EDE",
"Deco": "#CEDB9C",
"Putty": "#E7CB94",
"TonysPink": "#E7969C",
"LightOrchid": "#DE9ED6"
},
"d3_category20c": {
"BostonBlue": "#3182BD",
"Christine": "#E6550D",
"SeaGreen": "#31A354",
"Deluge": "#756BB1",
"DoveGray": "#636363",
"Danube": "#6BAED6",
"NeonCarrot": "#FD8D3C",
"DeYork": "#74C476",
"BlueBell": "#9E9AC8",
"DustyGray": "#969696",
"RegentStBlue": "#9ECAE1",
"Koromiko": "#FDAE6B",
"MossGreen": "#A1D99B",
"LavenderGray": "#BCBDDC",
"Silver": "#BDBDBD",
"Spindle": "#C6DBEF",
"Flesh": "#FDD0A2",
"Celadon": "#C7E9C0",
"Snuff": "#DADAEB",
"Alto": "#D9D9D9"
},
"igv": {
"chr1": "#5050FF",
"chr2": "#CE3D32",
"chr3": "#749B58",
"chr4": "#F0E685",
"chr5": "#466983",
"chr6": "#BA6338",
"chr7": "#5DB1DD",
"chr8": "#802268",
"chr9": "#6BD76B",
"chr10": "#D595A7",
"chr11": "#924822",
"chr12": "#837B8D",
"chr13": "#C75127",
"chr14": "#D58F5C",
"chr15": "#7A65A5",
"chr16": "#E4AF69",
"chr17": "#3B1B53",
"chr18": "#CDDEB7",
"chr19": "#612A79",
"chr20": "#AE1F63",
"chr21": "#E7C76F",
"chr22": "#5A655E",
"chrX": "#CC9900",
"chrY": "#99CC00",
"chrUn": "#A9A9A9",
"chr23": "#CC9900",
"chr24": "#99CC00",
"chr25": "#33CC00",
"chr26": "#00CC33",
"chr27": "#00CC99",
"chr28": "#0099CC",
"chr29": "#0A47FF",
"chr30": "#4775FF",
"chr31": "#FFC20A",
"chr32": "#FFD147",
"chr33": "#990033",
"chr34": "#991A00",
"chr35": "#996600",
"chr36": "#809900",
"chr37": "#339900",
"chr38": "#00991A",
"chr39": "#009966",
"chr40": "#008099",
"chr41": "#003399",
"chr42": "#1A0099",
"chr43": "#660099",
"chr44": "#990080",
"chr45": "#D60047",
"chr46": "#FF1463",
"chr47": "#00D68F",
"chr48": "#14FFB1"
},
"igv_alternating": {
"Indigo": "#5773CC",
"SelectiveYellow": "#FFB900"
},
"locuszoom": {
"0.8to1.0": "#D43F3A",
"0.6to0.8": "#EEA236",
"0.4to0.6": "#5CB85C",
"0.2to0.4": "#46B8DA",
"0.0to0.2": "#357EBD",
"LDRefVar": "#9632B8",
"nodata": "#B8B8B8"
},
"uchicago": {
"Maroon": "#800000",
"DarkGray": "#767676",
"Yellow": "#FFA319",
"LightGreen": "#8A9045",
"Blue": "#155F83",
"Orange": "#C16622",
"Red": "#8F3931",
"DarkGreen": "#58593F",
"Violet": "#350E20"
},
"uchicago_light": {
"Maroon": "#800000",
"LightGray": "#D6D6CE",
"Yellow": "#FFB547",
"LightGreen": "#ADB17D",
"Blue": "#5B8FA8",
"Orange": "#D49464",
"Red": "#B1746F",
"DarkGreen": "#8A8B79",
"Violet": "#725663"
},
"uchicago_dark": {
"Maroon": "#800000",
"DarkGray": "#767676",
"Yellow": "#CC8214",
"LightGreen": "#616530",
"Blue": "#0F425C",
"Orange": "#9A5324",
"Red": "#642822",
"DarkGreen": "#3E3E23",
"Violet": "#350E20"
},
"cosmic_hallmarks_dark": {
"Invasion and Metastasis": "#171717",
"Escaping Immunic Response to Cancer": "#7D0226",
"Change of Cellular Energetics": "#300049",
"Cell Replicative Immortality": "#165459",
"Suppression of Growth": "#3F2327",
"Genome Instability and Mutations": "#0B1948",
"Angiogenesis": "#E71012",
"Escaping Programmed Cell Death": "#555555",
"Proliferative Signaling": "#193006",
"Tumour Promoting Inflammation": "#A8450C"
},
"cosmic_hallmarks_light": {
"Invasion and Metastasis": "#2E2A2B",
"Escaping Immunic Response to Cancer": "#CF4E9C",
"Change of Cellular Energetics": "#8C57A2",
"Cell Replicative Immortality": "#358DB9",
"Suppression of Growth": "#82581F",
"Genome Instability and Mutations": "#2F509E",
"Angiogenesis": "#E5614C",
"Escaping Programmed Cell Death": "#97A1A7",
"Proliferative Signaling": "#3DA873",
"Tumour Promoting Inflammation": "#DC9445"
},
"cosmic_signature_substitutions": {
"C>A": "#5ABCEB",
"C>G": "#050708",
"C>T": "#D33C32",
"T>A": "#CBCACB",
"T>C": "#ABCD72",
"T>G": "#E7C9C6"
},
"simpsons_springfield": {
"HomerYellow": "#FED439",
"HomerBlue": "#709AE1",
"HomerGrey": "#8A9197",
"HomerBrown": "#D2AF81",
"BartOrange": "#FD7446",
"MargeGreen": "#D5E4A2",
"MargeBlue": "#197EC0",
"LisaOrange": "#F05C3B",
"NedGreen": "#46732E",
"MaggieBlue": "#71D0F5",
"BurnsPurple": "#370335",
"BurnsGreen": "#075149",
"DuffRed": "#C80813",
"KentRed": "#91331F",
"BobGreen": "#1A9993",
"FrinkPink": "#FD8CC1"
},
"futurama_planetexpress": {
"FryOrange": "#FF6F00",
"FryRed": "#C71000",
"FryBlue": "#008EA0",
"LeelaPurple": "#8A4198",
"BenderIron": "#5A9599",
"ZoidbergRed": "#FF6348",
"ZoidbergBlue": "#84D7E1",
"AmyPink": "#FF95A8",
"HermesGreen": "#3D3B25",
"ProfessorBlue": "#ADE2D0",
"ScruffyGreen": "#1A5354",
"LeelaGrey": "#3F4041"
},
"rickandmorty_schwifty": {
"MortyYellow": "#FAFD7C",
"MortyBrown": "#82491E",
"MortyBlue": "#24325F",
"RickBlue": "#B7E4F9",
"BethRed": "#FB6467",
"JerryGreen": "#526E2D",
"SummerPink": "#E762D7",
"SummerOrange": "#E89242",
"BethYellow": "#FAE48B",
"RickGreen": "#A6EEE6",
"RickBrown": "#917C5D",
"MeeseeksBlue": "#69C8EC"
},
"startrek_uniform": {
"Engineering": "#CC0C00",
"Sciences": "#5C88DA",
"Senior": "#84BD00",
"Command": "#FFCD00",
"Teal": "#7C878E",
"Cerulean": "#00B5E2",
"Jade": "#00AF66"
},
"tron_legacy": {
"BlackGuard": "#FF410D",
"Sam": "#6EE2FF",
"Clu": "#F7C530",
"Underclass": "#95CC5E",
"KevinFlynn": "#D0DFE6",
"CluFollower": "#F79D1E",
"Underclass2": "#748AA6"
},
"gsea": {
"Purple": "#4500AD",
"DarkBlue": "#2700D1",
"RoyalBlue": "#6B58EF",
"Malibu": "#8888FF",
"Melrose": "#C7C1FF",
"Fog": "#D5D5FF",
"CottonCandy": "#FFC0E5",
"VividTangerine": "#FF8989",
"BrinkPink": "#FF7080",
"Persimmon": "#FF5A5A",
"Flamingo": "#EF4040",
"GuardsmanRed": "#D60C00"
},
"material_red": {
"Red50": "#FFEBEE",
"Red100": "#FFCDD2",
"Red200": "#EF9A9A",
"Red300": "#E57373",
"Red400": "#EF5350",
"Red500": "#F44336",
"Red600": "#E53935",
"Red700": "#D32F2F",
"Red800": "#C62828",
"Red900": "#B71C1C"
},
"material_pink": {
"Pink50": "#FCE4EC",
"Pink100": "#F8BBD0",
"Pink200": "#F48FB1",
"Pink300": "#F06292",
"Pink400": "#EC407A",
"Pink500": "#E91E63",
"Pink600": "#D81B60",
"Pink700": "#C2185B",
"Pink800": "#AD1457",
"Pink900": "#880E4F"
},
"material_purple": {
"Purple50": "#F3E5F5",
"Purple100": "#E1BEE7",
"Purple200": "#CE93D8",
"Purple300": "#BA68C8",
"Purple400": "#AB47BC",
"Purple500": "#9C27B0",
"Purple600": "#8E24AA",
"Purple700": "#7B1FA2",
"Purple800": "#6A1B9A",
"Purple900": "#4A148C"
},
"material_indigo": {
"Indigo50": "#E8EAF6",
"Indigo100": "#C5CAE9",
"Indigo200": "#9FA8DA",
"Indigo300": "#7986CB",
"Indigo400": "#5C6BC0",
"Indigo500": "#3F51B5",
"Indigo600": "#3949AB",
"Indigo700": "#303F9F",
"Indigo800": "#283593",
"Indigo900": "#1A237E"
},
"material_blue": {
"Blue50": "#E3F2FD",
"Blue100": "#BBDEFB",
"Blue200": "#90CAF9",
"Blue300": "#64B5F6",
"Blue400": "#42A5F5",
"Blue500": "#2196F3",
"Blue600": "#1E88E5",
"Blue700": "#1976D2",
"Blue800": "#1565C0",
"Blue900": "#0D47A1"
},
"material_cyan": {
"Cyan50": "#E0F7FA",
"Cyan100": "#B2EBF2",
"Cyan200": "#80DEEA",
"Cyan300": "#4DD0E1",
"Cyan400": "#26C6DA",
"Cyan500": "#00BCD4",
"Cyan600": "#00ACC1",
"Cyan700": "#0097A7",
"Cyan800": "#00838F",
"Cyan900": "#006064"
},
"material_teal": {
"Teal50": "#E0F2F1",
"Teal100": "#B2DFDB",
"Teal200": "#80CBC4",
"Teal300": "#4DB6AC",
"Teal400": "#26A69A",
"Teal500": "#009688",
"Teal600": "#00897B",
"Teal700": "#00796B",
"Teal800": "#00695C",
"Teal900": "#004D40"
},
"material_green": {
"Green50": "#E8F5E9",
"Green100": "#C8E6C9",
"Green200": "#A5D6A7",
"Green300": "#81C784",
"Green400": "#66BB6A",
"Green500": "#4CAF50",
"Green600": "#43A047",
"Green700": "#388E3C",
"Green800": "#2E7D32",
"Green900": "#1B5E20"
},
"material_lime": {
"Lime50": "#F9FBE7",
"Lime100": "#F0F4C3",
"Lime200": "#E6EE9C",
"Lime300": "#DCE775",
"Lime400": "#D4E157",
"Lime500": "#CDDC39",
"Lime600": "#C0CA33",
"Lime700": "#AFB42B",
"Lime800": "#9E9D24",
"Lime900": "#827717"
},
"material_yellow": {
"Yellow50": "#FFFDE7",
"Yellow100": "#FFF9C4",
"Yellow200": "#FFF59D",
"Yellow300": "#FFF176",
"Yellow400": "#FFEE58",
"Yellow500": "#FFEB3B",
"Yellow600": "#FDD835",
"Yellow700": "#FBC02D",
"Yellow800": "#F9A825",
"Yellow900": "#F57F17"
},
"material_amber": {
"Amber50": "#FFF8E1",
"Amber100": "#FFECB3",
"Amber200": "#FFE082",
"Amber300": "#FFD54F",
"Amber400": "#FFCA28",
"Amber500": "#FFC107",
"Amber600": "#FFB300",
"Amber700": "#FFA000",
"Amber800": "#FF8F00",
"Amber900": "#FF6F00"
},
"material_orange": {
"Orange50": "#FFF3E0",
"Orange100": "#FFE0B2",
"Orange200": "#FFCC80",
"Orange300": "#FFB74D",
"Orange400": "#FFA726",
"Orange500": "#FF9800",
"Orange600": "#FB8C00",
"Orange700": "#F57C00",
"Orange800": "#EF6C00",
"Orange900": "#E65100"
},
"material_brown": {
"Brown50": "#EFEBE9",
"Brown100": "#D7CCC8",
"Brown200": "#BCAAA4",
"Brown300": "#A1887F",
"Brown400": "#8D6E63",
"Brown500": "#795548",
"Brown600": "#6D4C41",
"Brown700": "#5D4037",
"Brown800": "#4E342E",
"Brown900": "#3E2723"
},
"material_grey": {
"Grey50": "#FAFAFA",
"Grey100": "#F5F5F5",
"Grey200": "#EEEEEE",
"Grey300": "#E0E0E0",
"Grey400": "#BDBDBD",
"Grey500": "#9E9E9E",
"Grey600": "#757575",
"Grey700": "#616161",
"Grey800": "#424242",
"Grey900": "#212121"
}
}
|
sci-palettes
|
/sci-palettes-1.0.1.tar.gz/sci-palettes-1.0.1/sci_palettes/palettes.py
|
palettes.py
|
PALETTES = {
"npg_nrc": {
"Cinnabar": "#E64B35",
"Shakespeare": "#4DBBD5",
"PersianGreen": "#00A087",
"Chambray": "#3C5488",
"Apricot": "#F39B7F",
"WildBlueYonder": "#8491B4",
"MonteCarlo": "#91D1C2",
"Monza": "#DC0000",
"RomanCoffee": "#7E6148",
"Sandrift": "#B09C85"
},
"aaas": {
"Chambray": "#3B4992",
"Red": "#EE0000",
"FunGreen": "#008B45",
"HoneyFlower": "#631879",
"Teal": "#008280",
"Monza": "#BB0021",
"ButterflyBush": "#5F559B",
"FreshEggplant": "#A20056",
"Stack": "#808180",
"CodGray": "#1B1919"
},
"nejm": {
"TallPoppy": "#BC3C29",
"DeepCerulean": "#0072B5",
"Zest": "#E18727",
"Eucalyptus": "#20854E",
"WildBlueYonder": "#7876B1",
"Gothic": "#6F99AD",
"Salomie": "#FFDC91",
"FrenchRose": "#EE4C97"
},
"lancet_lanonc": {
"CongressBlue": "#00468B",
"Red": "#ED0000",
"Apple": "#42B540",
"BondiBlue": "#0099B4",
"TrendyPink": "#925E9F",
"MonaLisa": "#FDAF91",
"Carmine": "#AD002A",
"Edward": "#ADB6B6",
"CodGray": "#1B1919"
},
"jama": {
"Limed Spruce": "#374E55",
"Anzac": "#DF8F44",
"Cerulean": "#00A1D5",
"Apple Blossom": "#B24745",
"Acapulco": "#79AF97",
"Kimberly": "#6A6599",
"Makara": "#80796B"
},
"jco": {
"Lochmara": "#0073C2",
"Corn": "#EFC000",
"Gray": "#868686",
"ChestnutRose": "#CD534C",
"Danube": "#7AA6DC",
"RegalBlue": "#003C67",
"Olive": "#8F7700",
"MineShaft": "#3B3B3B",
"WellRead": "#A73030",
"KashmirBlue": "#4A6990"
},
"ucscgb": {
"chr5": "#FF0000",
"chr8": "#FF9900",
"chr9": "#FFCC00",
"chr12": "#00FF00",
"chr15": "#6699FF",
"chr20": "#CC33FF",
"chr3": "#99991E",
"chrX": "#999999",
"chr6": "#FF00CC",
"chr4": "#CC0000",
"chr7": "#FFCCCC",
"chr10": "#FFFF00",
"chr11": "#CCFF00",
"chr13": "#358000",
"chr14": "#0000CC",
"chr16": "#99CCFF",
"chr17": "#00FFFF",
"chr18": "#CCFFFF",
"chr19": "#9900CC",
"chr21": "#CC99FF",
"chr1": "#996600",
"chr2": "#666600",
"chr22": "#666666",
"chrY": "#CCCCCC",
"chrUn": "#79CC3D",
"chrM": "#CCCC99"
},
"d3_category10": {
"Matisse": "#1F77B4",
"Flamenco": "#FF7F0E",
"ForestGreen": "#2CA02C",
"Punch": "#D62728",
"Wisteria": "#9467BD",
"SpicyMix": "#8C564B",
"Orchid": "#E377C2",
"Gray": "#7F7F7F",
"KeyLimePie": "#BCBD22",
"Java": "#17BECF"
},
"d3_category20": {
"Matisse": "#1F77B4",
"Flamenco": "#FF7F0E",
"ForestGreen": "#2CA02C",
"Punch": "#D62728",
"Wisteria": "#9467BD",
"SpicyMix": "#8C564B",
"Orchid": "#E377C2",
"Gray": "#7F7F7F",
"KeyLimePie": "#BCBD22",
"Java": "#17BECF",
"Spindle": "#AEC7E8",
"MaC": "#FFBB78",
"Feijoa": "#98DF8A",
"MonaLisa": "#FF9896",
"LavenderGray": "#C5B0D5",
"Quicksand": "#C49C94",
"Chantilly": "#F7B6D2",
"Silver": "#C7C7C7",
"Deco": "#DBDB8D",
"RegentStBlue": "#9EDAE5"
},
"d3_category20b": {
"EastBay": "#393B79",
"ChaletGreen": "#637939",
"Pesto": "#8C6D31",
"Lotus": "#843C39",
"CannonPink": "#7B4173",
"ButterflyBush": "#5254A3",
"ChelseaCucumber": "#8CA252",
"Tussock": "#BD9E39",
"AppleBlossom": "#AD494A",
"Tapestry": "#A55194",
"MoodyBlue": "#6B6ECF",
"WildWillow": "#B5CF6B",
"Ronchi": "#E7BA52",
"ChestnutRose": "#D6616B",
"Hopbush": "#CE6DBD",
"ColdPurple": "#9C9EDE",
"Deco": "#CEDB9C",
"Putty": "#E7CB94",
"TonysPink": "#E7969C",
"LightOrchid": "#DE9ED6"
},
"d3_category20c": {
"BostonBlue": "#3182BD",
"Christine": "#E6550D",
"SeaGreen": "#31A354",
"Deluge": "#756BB1",
"DoveGray": "#636363",
"Danube": "#6BAED6",
"NeonCarrot": "#FD8D3C",
"DeYork": "#74C476",
"BlueBell": "#9E9AC8",
"DustyGray": "#969696",
"RegentStBlue": "#9ECAE1",
"Koromiko": "#FDAE6B",
"MossGreen": "#A1D99B",
"LavenderGray": "#BCBDDC",
"Silver": "#BDBDBD",
"Spindle": "#C6DBEF",
"Flesh": "#FDD0A2",
"Celadon": "#C7E9C0",
"Snuff": "#DADAEB",
"Alto": "#D9D9D9"
},
"igv": {
"chr1": "#5050FF",
"chr2": "#CE3D32",
"chr3": "#749B58",
"chr4": "#F0E685",
"chr5": "#466983",
"chr6": "#BA6338",
"chr7": "#5DB1DD",
"chr8": "#802268",
"chr9": "#6BD76B",
"chr10": "#D595A7",
"chr11": "#924822",
"chr12": "#837B8D",
"chr13": "#C75127",
"chr14": "#D58F5C",
"chr15": "#7A65A5",
"chr16": "#E4AF69",
"chr17": "#3B1B53",
"chr18": "#CDDEB7",
"chr19": "#612A79",
"chr20": "#AE1F63",
"chr21": "#E7C76F",
"chr22": "#5A655E",
"chrX": "#CC9900",
"chrY": "#99CC00",
"chrUn": "#A9A9A9",
"chr23": "#CC9900",
"chr24": "#99CC00",
"chr25": "#33CC00",
"chr26": "#00CC33",
"chr27": "#00CC99",
"chr28": "#0099CC",
"chr29": "#0A47FF",
"chr30": "#4775FF",
"chr31": "#FFC20A",
"chr32": "#FFD147",
"chr33": "#990033",
"chr34": "#991A00",
"chr35": "#996600",
"chr36": "#809900",
"chr37": "#339900",
"chr38": "#00991A",
"chr39": "#009966",
"chr40": "#008099",
"chr41": "#003399",
"chr42": "#1A0099",
"chr43": "#660099",
"chr44": "#990080",
"chr45": "#D60047",
"chr46": "#FF1463",
"chr47": "#00D68F",
"chr48": "#14FFB1"
},
"igv_alternating": {
"Indigo": "#5773CC",
"SelectiveYellow": "#FFB900"
},
"locuszoom": {
"0.8to1.0": "#D43F3A",
"0.6to0.8": "#EEA236",
"0.4to0.6": "#5CB85C",
"0.2to0.4": "#46B8DA",
"0.0to0.2": "#357EBD",
"LDRefVar": "#9632B8",
"nodata": "#B8B8B8"
},
"uchicago": {
"Maroon": "#800000",
"DarkGray": "#767676",
"Yellow": "#FFA319",
"LightGreen": "#8A9045",
"Blue": "#155F83",
"Orange": "#C16622",
"Red": "#8F3931",
"DarkGreen": "#58593F",
"Violet": "#350E20"
},
"uchicago_light": {
"Maroon": "#800000",
"LightGray": "#D6D6CE",
"Yellow": "#FFB547",
"LightGreen": "#ADB17D",
"Blue": "#5B8FA8",
"Orange": "#D49464",
"Red": "#B1746F",
"DarkGreen": "#8A8B79",
"Violet": "#725663"
},
"uchicago_dark": {
"Maroon": "#800000",
"DarkGray": "#767676",
"Yellow": "#CC8214",
"LightGreen": "#616530",
"Blue": "#0F425C",
"Orange": "#9A5324",
"Red": "#642822",
"DarkGreen": "#3E3E23",
"Violet": "#350E20"
},
"cosmic_hallmarks_dark": {
"Invasion and Metastasis": "#171717",
"Escaping Immunic Response to Cancer": "#7D0226",
"Change of Cellular Energetics": "#300049",
"Cell Replicative Immortality": "#165459",
"Suppression of Growth": "#3F2327",
"Genome Instability and Mutations": "#0B1948",
"Angiogenesis": "#E71012",
"Escaping Programmed Cell Death": "#555555",
"Proliferative Signaling": "#193006",
"Tumour Promoting Inflammation": "#A8450C"
},
"cosmic_hallmarks_light": {
"Invasion and Metastasis": "#2E2A2B",
"Escaping Immunic Response to Cancer": "#CF4E9C",
"Change of Cellular Energetics": "#8C57A2",
"Cell Replicative Immortality": "#358DB9",
"Suppression of Growth": "#82581F",
"Genome Instability and Mutations": "#2F509E",
"Angiogenesis": "#E5614C",
"Escaping Programmed Cell Death": "#97A1A7",
"Proliferative Signaling": "#3DA873",
"Tumour Promoting Inflammation": "#DC9445"
},
"cosmic_signature_substitutions": {
"C>A": "#5ABCEB",
"C>G": "#050708",
"C>T": "#D33C32",
"T>A": "#CBCACB",
"T>C": "#ABCD72",
"T>G": "#E7C9C6"
},
"simpsons_springfield": {
"HomerYellow": "#FED439",
"HomerBlue": "#709AE1",
"HomerGrey": "#8A9197",
"HomerBrown": "#D2AF81",
"BartOrange": "#FD7446",
"MargeGreen": "#D5E4A2",
"MargeBlue": "#197EC0",
"LisaOrange": "#F05C3B",
"NedGreen": "#46732E",
"MaggieBlue": "#71D0F5",
"BurnsPurple": "#370335",
"BurnsGreen": "#075149",
"DuffRed": "#C80813",
"KentRed": "#91331F",
"BobGreen": "#1A9993",
"FrinkPink": "#FD8CC1"
},
"futurama_planetexpress": {
"FryOrange": "#FF6F00",
"FryRed": "#C71000",
"FryBlue": "#008EA0",
"LeelaPurple": "#8A4198",
"BenderIron": "#5A9599",
"ZoidbergRed": "#FF6348",
"ZoidbergBlue": "#84D7E1",
"AmyPink": "#FF95A8",
"HermesGreen": "#3D3B25",
"ProfessorBlue": "#ADE2D0",
"ScruffyGreen": "#1A5354",
"LeelaGrey": "#3F4041"
},
"rickandmorty_schwifty": {
"MortyYellow": "#FAFD7C",
"MortyBrown": "#82491E",
"MortyBlue": "#24325F",
"RickBlue": "#B7E4F9",
"BethRed": "#FB6467",
"JerryGreen": "#526E2D",
"SummerPink": "#E762D7",
"SummerOrange": "#E89242",
"BethYellow": "#FAE48B",
"RickGreen": "#A6EEE6",
"RickBrown": "#917C5D",
"MeeseeksBlue": "#69C8EC"
},
"startrek_uniform": {
"Engineering": "#CC0C00",
"Sciences": "#5C88DA",
"Senior": "#84BD00",
"Command": "#FFCD00",
"Teal": "#7C878E",
"Cerulean": "#00B5E2",
"Jade": "#00AF66"
},
"tron_legacy": {
"BlackGuard": "#FF410D",
"Sam": "#6EE2FF",
"Clu": "#F7C530",
"Underclass": "#95CC5E",
"KevinFlynn": "#D0DFE6",
"CluFollower": "#F79D1E",
"Underclass2": "#748AA6"
},
"gsea": {
"Purple": "#4500AD",
"DarkBlue": "#2700D1",
"RoyalBlue": "#6B58EF",
"Malibu": "#8888FF",
"Melrose": "#C7C1FF",
"Fog": "#D5D5FF",
"CottonCandy": "#FFC0E5",
"VividTangerine": "#FF8989",
"BrinkPink": "#FF7080",
"Persimmon": "#FF5A5A",
"Flamingo": "#EF4040",
"GuardsmanRed": "#D60C00"
},
"material_red": {
"Red50": "#FFEBEE",
"Red100": "#FFCDD2",
"Red200": "#EF9A9A",
"Red300": "#E57373",
"Red400": "#EF5350",
"Red500": "#F44336",
"Red600": "#E53935",
"Red700": "#D32F2F",
"Red800": "#C62828",
"Red900": "#B71C1C"
},
"material_pink": {
"Pink50": "#FCE4EC",
"Pink100": "#F8BBD0",
"Pink200": "#F48FB1",
"Pink300": "#F06292",
"Pink400": "#EC407A",
"Pink500": "#E91E63",
"Pink600": "#D81B60",
"Pink700": "#C2185B",
"Pink800": "#AD1457",
"Pink900": "#880E4F"
},
"material_purple": {
"Purple50": "#F3E5F5",
"Purple100": "#E1BEE7",
"Purple200": "#CE93D8",
"Purple300": "#BA68C8",
"Purple400": "#AB47BC",
"Purple500": "#9C27B0",
"Purple600": "#8E24AA",
"Purple700": "#7B1FA2",
"Purple800": "#6A1B9A",
"Purple900": "#4A148C"
},
"material_indigo": {
"Indigo50": "#E8EAF6",
"Indigo100": "#C5CAE9",
"Indigo200": "#9FA8DA",
"Indigo300": "#7986CB",
"Indigo400": "#5C6BC0",
"Indigo500": "#3F51B5",
"Indigo600": "#3949AB",
"Indigo700": "#303F9F",
"Indigo800": "#283593",
"Indigo900": "#1A237E"
},
"material_blue": {
"Blue50": "#E3F2FD",
"Blue100": "#BBDEFB",
"Blue200": "#90CAF9",
"Blue300": "#64B5F6",
"Blue400": "#42A5F5",
"Blue500": "#2196F3",
"Blue600": "#1E88E5",
"Blue700": "#1976D2",
"Blue800": "#1565C0",
"Blue900": "#0D47A1"
},
"material_cyan": {
"Cyan50": "#E0F7FA",
"Cyan100": "#B2EBF2",
"Cyan200": "#80DEEA",
"Cyan300": "#4DD0E1",
"Cyan400": "#26C6DA",
"Cyan500": "#00BCD4",
"Cyan600": "#00ACC1",
"Cyan700": "#0097A7",
"Cyan800": "#00838F",
"Cyan900": "#006064"
},
"material_teal": {
"Teal50": "#E0F2F1",
"Teal100": "#B2DFDB",
"Teal200": "#80CBC4",
"Teal300": "#4DB6AC",
"Teal400": "#26A69A",
"Teal500": "#009688",
"Teal600": "#00897B",
"Teal700": "#00796B",
"Teal800": "#00695C",
"Teal900": "#004D40"
},
"material_green": {
"Green50": "#E8F5E9",
"Green100": "#C8E6C9",
"Green200": "#A5D6A7",
"Green300": "#81C784",
"Green400": "#66BB6A",
"Green500": "#4CAF50",
"Green600": "#43A047",
"Green700": "#388E3C",
"Green800": "#2E7D32",
"Green900": "#1B5E20"
},
"material_lime": {
"Lime50": "#F9FBE7",
"Lime100": "#F0F4C3",
"Lime200": "#E6EE9C",
"Lime300": "#DCE775",
"Lime400": "#D4E157",
"Lime500": "#CDDC39",
"Lime600": "#C0CA33",
"Lime700": "#AFB42B",
"Lime800": "#9E9D24",
"Lime900": "#827717"
},
"material_yellow": {
"Yellow50": "#FFFDE7",
"Yellow100": "#FFF9C4",
"Yellow200": "#FFF59D",
"Yellow300": "#FFF176",
"Yellow400": "#FFEE58",
"Yellow500": "#FFEB3B",
"Yellow600": "#FDD835",
"Yellow700": "#FBC02D",
"Yellow800": "#F9A825",
"Yellow900": "#F57F17"
},
"material_amber": {
"Amber50": "#FFF8E1",
"Amber100": "#FFECB3",
"Amber200": "#FFE082",
"Amber300": "#FFD54F",
"Amber400": "#FFCA28",
"Amber500": "#FFC107",
"Amber600": "#FFB300",
"Amber700": "#FFA000",
"Amber800": "#FF8F00",
"Amber900": "#FF6F00"
},
"material_orange": {
"Orange50": "#FFF3E0",
"Orange100": "#FFE0B2",
"Orange200": "#FFCC80",
"Orange300": "#FFB74D",
"Orange400": "#FFA726",
"Orange500": "#FF9800",
"Orange600": "#FB8C00",
"Orange700": "#F57C00",
"Orange800": "#EF6C00",
"Orange900": "#E65100"
},
"material_brown": {
"Brown50": "#EFEBE9",
"Brown100": "#D7CCC8",
"Brown200": "#BCAAA4",
"Brown300": "#A1887F",
"Brown400": "#8D6E63",
"Brown500": "#795548",
"Brown600": "#6D4C41",
"Brown700": "#5D4037",
"Brown800": "#4E342E",
"Brown900": "#3E2723"
},
"material_grey": {
"Grey50": "#FAFAFA",
"Grey100": "#F5F5F5",
"Grey200": "#EEEEEE",
"Grey300": "#E0E0E0",
"Grey400": "#BDBDBD",
"Grey500": "#9E9E9E",
"Grey600": "#757575",
"Grey700": "#616161",
"Grey800": "#424242",
"Grey900": "#212121"
}
}
| 0.432902 | 0.561696 |
import time
class MessageBuilder:
DEVIDER = {"type": "divider"}
def __init__(self, expr_name):
self.expr_name = expr_name
def create_message(self, message=None):
raise NotImplementedError
def _create_section(self, text, text_type='mrkdwn'):
return {
'type': 'section',
'text': {
'type': text_type,
'text': text,
}
}
def _create_mrkdwn_fields(self, *attributes):
fields = [{
'type': 'mrkdwn',
'text': '*{}:*\n{}'.format(k, v),
} for k, v in attributes]
return {
'type': 'section',
'fields': fields,
}
def _quote(self, content):
return self._create_section('\n'.join(
['>' + line for line in content.split('\n')]))
class CompleteMessageBuilder(MessageBuilder):
def __init__(self, expr_name):
super().__init__(expr_name)
self.start_at = None
self.end_at = None
def set_start_time(self, timestamp):
self.start_at = timestamp
def set_end_time(self, timestamp):
self.end_at = timestamp
def __create_header(self):
return self._create_section('* Experiment Completed:* _{}_'.format(
self.expr_name))
def __create_time_info(self):
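# Note (editor annotation): the '<!date^...>' string built below is Slack's
# date-formatting token, <!date^unix_timestamp^format_tokens|fallback_text>,
# which Slack renders in each viewer's local timezone and falls back to the
# plain text after '|' when it cannot.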
time_format = '<!date^{}^{{date_num}} {{time_secs}}|{}>'
start = time_format.format(
int(self.start_at),
time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(self.start_at)))
end = time_format.format(
int(self.end_at),
time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(self.end_at)))
elapsed_time = int(self.end_at - self.start_at)
seconds = elapsed_time % 60
minutes = (elapsed_time // 60) % 60
hours = (elapsed_time // 3600) % 24
days = elapsed_time // (24 * 3600)
duration = '{:02d}:{:02d}:{:02d}:{:02d}'.format(
days, hours, minutes, seconds)
return self._create_mrkdwn_fields(('Start', start), ('End', end),
('Duration', duration))
def create_message(self, message=None):
blocks = [
self.__create_header(),
self.DEVIDER,
self.__create_time_info(),
]
if message is not None:
blocks += [
self.DEVIDER,
self._quote(message),
]
return blocks
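# --- Usage sketch (editor annotation, illustrative only; not part of the package).
# It shows how the blocks payload could be posted with a Slack client; the
# `client` object and channel name are assumptions, e.g. slack_sdk's WebClient.
# builder = CompleteMessageBuilder('my-experiment')
# builder.set_start_time(time.time() - 3600)
# builder.set_end_time(time.time())
# blocks = builder.create_message('val accuracy: 0.93')
# client.chat_postMessage(channel='#experiments', blocks=blocks)  # hypothetical client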
|
sci-slacker
|
/sci_slacker-0.0.1-py3-none-any.whl/slacker/message_builder.py
|
message_builder.py
|
import time
class MessageBuilder:
DEVIDER = {"type": "divider"}
def __init__(self, expr_name):
self.expr_name = expr_name
def create_message(self, message=None):
raise NotImplementedError
def _create_section(self, text, text_type='mrkdwn'):
return {
'type': 'section',
'text': {
'type': text_type,
'text': text,
}
}
def _create_mrkdwn_fields(self, *attributes):
fields = [{
'type': 'mrkdwn',
'text': '*{}:*\n{}'.format(k, v),
} for k, v in attributes]
return {
'type': 'section',
'fields': fields,
}
def _quote(self, content):
return self._create_section('\n'.join(
['>' + line for line in content.split('\n')]))
class CompleteMessageBuilder(MessageBuilder):
def __init__(self, expr_name):
super().__init__(expr_name)
self.start_at = None
self.end_at = None
def set_start_time(self, timestamp):
self.start_at = timestamp
def set_end_time(self, timestamp):
self.end_at = timestamp
def __create_header(self):
return self._create_section('* Experiment Completed:* _{}_'.format(
self.expr_name))
def __create_time_info(self):
time_format = '<!date^{}^{{date_num}} {{time_secs}}|{}>'
start = time_format.format(
int(self.start_at),
time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(self.start_at)))
end = time_format.format(
int(self.end_at),
time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(self.end_at)))
elapsed_time = int(self.end_at - self.start_at)
seconds = elapsed_time % 60
minutes = (elapsed_time // 60) % 60
hours = (elapsed_time // 3600) % 24
days = elapsed_time // (24 * 3600)
duration = '{:02d}:{:02d}:{:02d}:{:02d}'.format(
days, hours, minutes, seconds)
return self._create_mrkdwn_fields(('Start', start), ('End', end),
('Duration', duration))
def create_message(self, message=None):
blocks = [
self.__create_header(),
self.DEVIDER,
self.__create_time_info(),
]
if message is not None:
blocks += [
self.DEVIDER,
self._quote(message),
]
return blocks
| 0.446736 | 0.133585 |
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Gaussian(Distribution):
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu = 0, sigma = 1):
Distribution.__init__(self,mu,sigma)
def calculate_mean(self):
"""Method to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
self.mean = sum(self.data)/len(self.data)
return self.mean
def calculate_stdev(self, sample=True):
"""Method to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
data = [(x-self.mean)**2 for x in self.data]
summation = sum(data)
n = len(self.data)
if sample:
self.stdev=math.sqrt(summation/(n -1))
else:
self.stdev = math.sqrt(summation/n)
return self.stdev
def read_data_file(self, file_name, sample=True):
"""Method to read in data from a txt file. The txt file should have
one number (float) per line. The numbers are stored in the data attribute.
After reading in the file, the mean and standard deviation are calculated
Args:
file_name (string): name of a file to read from
Returns:
None
"""
Distribution.read_data_file(self,file_name,sample)
self.mean = self.calculate_mean()
self.stdev = self.calculate_stdev(sample)
def plot_histogram(self):
"""Method to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.hist(self.data)
plt.xlabel("data")
plt.ylabel("count")
plt.title("data distribution")
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
coefficient = 1/(self.stdev * math.sqrt(2*math.pi))
euler_exponent = -0.5 * ((x-self.mean)/self.stdev)**2
return coefficient*math.exp(euler_exponent)
def plot_histogram_pdf(self, n_spaces = 50):
"""Method to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
axes[1].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Magic method to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
result = Gaussian()
result.mean = self.mean+other.mean
result.stdev = math.sqrt(self.stdev**2 + other.stdev**2)
return result
def __repr__(self):
"""Magic method to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}".format(self.mean,self.stdev)
|
sci-stats-dist
|
/sci_stats_dist-0.0.2.tar.gz/sci_stats_dist-0.0.2/sci_stats_dist/Gaussiandistribution.py
|
Gaussiandistribution.py
|
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Gaussian(Distribution):
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu = 0, sigma = 1):
Distribution.__init__(self,mu,sigma)
def calculate_mean(self):
"""Method to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
self.mean = sum(self.data)/len(self.data)
return self.mean
def calculate_stdev(self, sample=True):
"""Method to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
data = [(x-self.mean)**2 for x in self.data]
summation = sum(data)
n = len(self.data)
if sample:
self.stdev=math.sqrt(summation/(n -1))
else:
self.stdev = math.sqrt(summation/n)
return self.stdev
def read_data_file(self, file_name, sample=True):
"""Method to read in data from a txt file. The txt file should have
one number (float) per line. The numbers are stored in the data attribute.
After reading in the file, the mean and standard deviation are calculated
Args:
file_name (string): name of a file to read from
Returns:
None
"""
Distribution.read_data_file(self,file_name,sample)
self.mean = self.calculate_mean()
self.stdev = self.calculate_stdev(sample)
def plot_histogram(self):
"""Method to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.hist(self.data)
plt.xlabel("data")
plt.ylabel("count")
plt.title("data distribution")
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
coefficient = 1/(self.stdev * math.sqrt(2*math.pi))
euler_exponent = -0.5 * ((x-self.mean)/self.stdev)**2
return coefficient*math.exp(euler_exponent)
def plot_histogram_pdf(self, n_spaces = 50):
"""Method to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
axes[1].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Magic method to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
result = Gaussian()
result.mean = self.mean+other.mean
result.stdev = math.sqrt(self.stdev**2 + other.stdev**2)
return result
def __repr__(self):
"""Magic method to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}".format(self.mean,self.stdev)
| 0.807916 | 0.804598 |
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Binomial(Distribution):
""" Binomial distribution class for calculating and
visualizing a Binomial distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats to be extracted from the data file
p (float) representing the probability of an event occurring
n (int) the total number of trials
"""
def __init__(self, prob=.5, size=20):
self.p = prob
self.n=size
Distribution.__init__(self,self.calculate_mean() , self.calculate_stdev())
def calculate_mean(self):
"""Function to calculate the mean from p and n
Args:
None
Returns:
float: mean of the data set
"""
self.mean = self.n * self.p
return self.mean
def calculate_stdev(self):
"""Function to calculate the standard deviation from p and n.
Args:
None
Returns:
float: standard deviation of the data set
"""
self.stdev=math.sqrt(self.n *self.p*(1-self.p))
return self.stdev
def replace_stats_with_data(self):
"""Function to calculate p and n from the data set
Args:
None
Returns:
float: the p value
float: the n value
"""
self.n = len(self.data)
self.p = self.data.count(1)/self.n
self.mean = self.calculate_mean()
self.stdev = self.calculate_stdev()
return self.p,self.n
def plot_bar(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.bar(["0","1"] , [self.n * (1-self.p) , self.n*self.p])
plt.xlabel("data")
plt.ylabel("counts")
plt.show()
def __nCk(self,n,k):
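# Multiplicative form of the binomial coefficient:
# C(n, k) = prod_{i=1..k} (n - i + 1) / i, computed with the smaller of k and
# n - k to keep the loop short. Equivalent (up to float rounding) to
# math.comb(n, k) on Python 3.8+.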
k = min(k,n-k)
nchoosek=1
for i in range(1,k+1):
nchoosek*=(n-i+1)
nchoosek /=i
return nchoosek
def pdf(self, k):
"""Probability density function calculator for the gaussian distribution.
Args:
k (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
return self.__nCk(self.n,k) * self.p**k * (1-self.p)**(self.n-k)
def plot_bar_pdf(self):
"""Function to plot the pdf of the binomial distribution
Args:
None
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
x= list(range(0,self.n+1))
y = [self.pdf(k) for k in x]
plt.bar(x, y)
plt.title('Distribution of Outcomes')
plt.ylabel('Probability')
plt.xlabel('Outcome')
plt.show()
return x,y
def __add__(self, other):
"""Function to add together two Binomial distributions with equal p
Args:
other (Binomial): Binomial instance
Returns:
Binomial: Binomial distribution
"""
try:
assert self.p == other.p, 'p values are not equal'
except AssertionError as error:
raise
return Binomial(self.p,self.n+other.n)
def __repr__(self):
"""Function to output the characteristics of the Binomial instance
Args:
None
Returns:
string: characteristics of the Binomial
"""
return f"mean {self.mean}, standard deviation {self.stdev}, p {self.p}, n {self.n}"
|
sci-stats-dist
|
/sci_stats_dist-0.0.2.tar.gz/sci_stats_dist-0.0.2/sci_stats_dist/Binomialdistribution.py
|
Binomialdistribution.py
|
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Binomial(Distribution):
""" Binomial distribution class for calculating and
visualizing a Binomial distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats to be extracted from the data file
p (float) representing the probability of an event occurring
n (int) the total number of trials
"""
def __init__(self, prob=.5, size=20):
self.p = prob
self.n=size
Distribution.__init__(self,self.calculate_mean() , self.calculate_stdev())
def calculate_mean(self):
"""Function to calculate the mean from p and n
Args:
None
Returns:
float: mean of the data set
"""
self.mean = self.n * self.p
return self.mean
def calculate_stdev(self):
"""Function to calculate the standard deviation from p and n.
Args:
None
Returns:
float: standard deviation of the data set
"""
self.stdev=math.sqrt(self.n *self.p*(1-self.p))
return self.stdev
def replace_stats_with_data(self):
"""Function to calculate p and n from the data set
Args:
None
Returns:
float: the p value
float: the n value
"""
self.n = len(self.data)
self.p = self.data.count(1)/self.n
self.mean = self.calculate_mean()
self.stdev = self.calculate_stdev()
return self.p,self.n
def plot_bar(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.bar(["0","1"] , [self.n * (1-self.p) , self.n*self.p])
plt.xlabel("data")
plt.ylabel("counts")
plt.show()
def __nCk(self,n,k):
k = min(k,n-k)
nchoosek=1
for i in range(1,k+1):
nchoosek*=(n-i+1)
nchoosek /=i
return nchoosek
def pdf(self, k):
"""Probability density function calculator for the gaussian distribution.
Args:
k (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
return self.__nCk(self.n,k) * self.p**k * (1-self.p)**(self.n-k)
def plot_bar_pdf(self):
"""Function to plot the pdf of the binomial distribution
Args:
None
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
x= list(range(0,self.n+1))
y = [self.pdf(k) for k in x]
plt.bar(x, y)
plt.title('Distribution of Outcomes')
plt.ylabel('Probability')
plt.xlabel('Outcome')
plt.show()
return x,y
def __add__(self, other):
"""Function to add together two Binomial distributions with equal p
Args:
other (Binomial): Binomial instance
Returns:
Binomial: Binomial distribution
"""
try:
assert self.p == other.p, 'p values are not equal'
except AssertionError as error:
raise
return Binomial(self.p,self.n+other.n)
def __repr__(self):
"""Function to output the characteristics of the Binomial instance
Args:
None
Returns:
string: characteristics of the Binomial
"""
return f"mean {self.mean}, standard deviation {self.stdev}, p {self.p}, n {self.n}"
| 0.830044 | 0.804598 |
from PySide6 import QtCore, QtWidgets, QtGui
import pyqtgraph as pg
import numpy as np
import sys
import os
import ctypes
import json
import glob
import h5py
from scipy import interpolate
from scipy.signal import decimate
from functools import partial
import time as timing
myappid = 'sci.streak' # arbitrary string
ctypes.windll.shell32.SetCurrentProcessExplicitAppUserModelID(myappid)
path = f'{os.path.dirname(sys.argv[0])}/gui'
with open('data/experiment.json', 'r', encoding='utf-8') as f:
experiment = json.load(f)
experiment_list = list(experiment.keys())
experiment_list.insert(0, '#')
with open(f'{path}/ui/stylesheet.qss', 'r', encoding='utf-8') as file:
stylesheet = file.read()
if len(glob.glob('data/*.hdf5'))==0:
raise Exception('You should use the bksub script to create the hdf5 file.')
class MainWindow(QtWidgets.QMainWindow):
def __init__(self, *args, **kwargs):
super(MainWindow, self).__init__(*args, **kwargs)
self.setStyleSheet(stylesheet)
self.initUI()
def initUI(self):
self.menuBarOutline()
self.statusBar().showMessage('Ready')
self.statusBar().setStyleSheet(stylesheet)
self.setWindowIcon(QtGui.QIcon(f'{path}/icons/icon.png'))
self.setWindowTitle('sci-streak')
self.resize(800, 600) # Fix for not starting maximized.
self.showMaximized()
# Window Layout
self.mainWidget = QtWidgets.QWidget()
self.hbox = QtWidgets.QHBoxLayout()
self.splitter1 = QtWidgets.QSplitter(QtCore.Qt.Horizontal)
self.splitter1.setStyleSheet(stylesheet)
self.setCentralWidget(self.mainWidget)
# Populate Layout
self.plotLayout()
self.treeWidget()
self.controlWidgets()
self.rightLayout()
self.splitter1.addWidget(self.leftwidget)
self.splitter1.addWidget(self.rightwidget)
self.hbox.addWidget(self.splitter1)
self.mainWidget.setLayout(self.hbox)
def menuBarOutline(self):
exitAction = QtGui.QAction(QtGui.QIcon(f'{path}/icons/exit.png'), '&Exit', self)
exitAction.setShortcut('Ctrl+Q')
exitAction.setStatusTip('Exit application')
exitAction.triggered.connect(self.close)
self.fileMenu = self.menuBar().addMenu('&File')
self.fileMenu.addAction(exitAction)
self.analyzeMenu = self.menuBar().addMenu('&Analysis')
self.settingsMenu = self.menuBar().addMenu('&Settings')
self.colorMenu = self.settingsMenu.addMenu('Choose Colormap')
self.inferno = QtGui.QAction('inferno', self)
self.colorMenu.addAction(self.inferno)
self.inferno.triggered.connect(partial(self.changeColormap, 'inferno'))
self.turbo = QtGui.QAction('turbo', self)
self.colorMenu.addAction(self.turbo)
self.turbo.triggered.connect(partial(self.changeColormap, 'turbo'))
self.viridis = QtGui.QAction('viridis', self)
self.colorMenu.addAction(self.viridis)
self.viridis.triggered.connect(partial(self.changeColormap, 'viridis'))
self.spectral = QtGui.QAction('spectral', self)
self.colorMenu.addAction(self.spectral)
self.spectral.triggered.connect(partial(self.changeColormap, 'nipy_spectral'))
self.menuBar().setStyleSheet(stylesheet)
def plotLayout(self):
self.leftwidget = pg.GraphicsLayoutWidget()
self.downsampleFactor()
wavel, time, self.inten = self.openhdf5(0)
self.plot(wavel, time, self.inten)
self.hist()
self.roiRow = self.leftwidget.addLayout(row=1, col=0, colspan=6)
self.decay_plot = self.roiRow.addPlot(row=1, col=0)
self.decay_plot.setMaximumHeight(250)
self.decay_plot.showAxes(True)
self.spectrum_plot = self.roiRow.addPlot(row=1, col=3)
self.spectrum_plot.setMaximumHeight(250)
self.spectrum_plot.showAxes(True)
self.leftwidget.show()
self.roiWidget(wavel, time)
self.roi.sigRegionChanged.connect(self.updateDecayROI)
self.roi.sigRegionChanged.connect(self.updateSpectrumROI)
self.updateDecayROI()
self.updateSpectrumROI()
def rightLayout(self):
self.rightwidget = QtWidgets.QWidget()
self.optionsLayout = QtWidgets.QVBoxLayout()
self.optionsLayout.addWidget(self.log_widget)
self.optionsLayout.addLayout(self.controlsLayout)
self.rightwidget.setLayout(self.optionsLayout)
def plot(self, x, y, z):
self.ax2D = self.leftwidget.addPlot(row=0, col=0, colspan=5)
self.img = pg.ImageItem()
# print(z.shape)
self.img.setImage(z)
self.ax2D.addItem(self.img)
# Move the image by half a pixel so that the center
# of the pixels are located at the coordinate values
dx = x[1] - x[0]
dy = y[1] - y[0]
print('pixel size x: {}, pixel size y: {}'.format(dx, dy))
rect = QtCore.QRectF(x[0] - dx / 2, y[0] - dy / 2, x[-1] - x[0], y[-1] - y[0])
print(rect)
self.img.setRect(rect)
self.ax2D.setLabels(left='Time (ps)', bottom='Energy (eV)')
def updatePlot(self, x, y, z):
self.img.setImage(z)
# print(z.shape)
# Move the image by half a pixel so that the center of the pixels are
# located at the coordinate values
dx = x[1] - x[0]
dy = y[1] - y[0]
print('pixel size x: {}, pixel size y: {}'.format(dx, dy))
rect = QtCore.QRectF(x[0] - dx / 2, y[0] - dy / 2, x[-1] - x[0], y[-1] - y[0])
print(rect)
self.img.setRect(rect)
def hist(self):
# Contrast/color control
self.histItem = pg.HistogramLUTItem()
maxi = np.max(self.inten) / 2
mini = np.average(self.inten) + 0.2
self.histItem.setImageItem(self.img)
cmap = pg.colormap.get('inferno', source='matplotlib', skipCache=False)
self.histItem.gradient.setColorMap(cmap)
self.histItem.gradient.showTicks(show=False)
self.histItem.setLevels(mini, maxi)
self.leftwidget.addItem(self.histItem, row=0, col=5, colspan=1)
def roiWidget(self, wavel, time):
# Custom ROI for selecting an image region
self.roi = pg.ROI([wavel[0], time[0]],
[np.abs(wavel[0] - wavel[-1]) / 10, np.abs(time[0] - time[-1]) / 10],
rotatable=False)
self.roi.handleSize = 7
self.roi.addScaleHandle([1, 1], [0, 0])
self.roi.addScaleHandle([1, 0.5], [0, 0.5])
self.roi.addScaleHandle([0.5, 1], [0.5, 0])
self.ax2D.addItem(self.roi)
self.roi.setZValue(10) # make sure ROI is drawn above image
self.roi.getArrayRegion(self.inten, self.img, returnMappedCoords=True)
def updateDecayROI(self):
selected = self.roi.getArrayRegion(self.inten, self.img, returnMappedCoords=True)
axis_select = 1
xaxis = selected[1][0][:, 0]
self.decay_plot.plot(xaxis, selected[0].mean(axis=axis_select), clear=True)
def updateSpectrumROI(self):
selected = self.roi.getArrayRegion(self.inten, self.img, returnMappedCoords=True)
axis_select = 0
xaxis = selected[1][1][0]
self.spectrum_plot.plot(xaxis, selected[0].mean(axis=axis_select), clear=True)
def treeWidget(self):
self.log_widget = QtWidgets.QTreeWidget()
self.log_widget.setHeaderItem(QtWidgets.QTreeWidgetItem(experiment_list))
self.treeParents = {}
self.buttons = {}
for i in range(len(experiment['names'])):
self.treeParents[i] = QtWidgets.QTreeWidgetItem([f'{i:02d}'])
self.log_widget.addTopLevelItem(self.treeParents[i])
self.buttons[i] = QtWidgets.QPushButton(experiment['sample'])
self.log_widget.setItemWidget(self.treeParents[i], 1, self.buttons[i])
self.buttons[i].clicked.connect(partial(self.button, i))
for x in range(len(experiment_list) - 2): # -2 then +2 to account for # and sample cols.
x += 2
self.treeParents[i].setText(x, str(experiment[experiment_list[x]][i]))
self.log_widget.setStyleSheet(stylesheet)
def openhdf5(self, idx):
"""
Method to import data for an .hdf5 file.
Note that there should be only one hdf5 file in the working directory.
"""
with h5py.File(glob.glob('data/*.hdf5')[0], 'r') as f:
if isinstance(idx, str):
pass
else:
idx = str(idx)
data = np.array(f.get(str(idx)))
wavel = data[1:, 0]
wavel += 0.008 # Correction for this specific dataset.
time = data[0, 1:]
inten = data[1:, 1:]
# print(inten.shape)
xlabels, ylabels, data = self.downsample(wavel, time, inten)
xlabels, ylabels, data = self.rebin(xlabels, ylabels, data, self.downsample_factor)
# print(data.shape)
return xlabels, ylabels, data
def downsampleFactor(self, value=1):
"""
TODO: add slider and button to choose downsample size.
Currently defaults to 1 and is set from the downsample spin box.
See the rebin method for how the factor is applied.
"""
self.downsample_factor = value
def rebin(self, wavel, time, inten, downsample):
"""
Method used to downsample or rebin data for speed reasons.
There is a trade off between speed and resolution.
Note that the energy and time resolution is much less than the pixel size.
e.g. time resolution of ~4 ps and pixel size of ~0.35ps per pixel.
If the downsample factor is not a factor of axis sizes:
- the x axis remainder sliced from the high energy side and
- the y axis remainder sliced from before time zero.
These areas are the most likely to not contain any important data.
"""
M, N = inten.shape
if M % downsample != 0:
remove = -1 * (M % downsample)
inten = inten[:remove, :]
wavel = wavel[:remove]
if N % downsample != 0:
remove = N % downsample
inten = inten[:, remove:]
time = time[remove:]
M, N = inten.shape
m, n = M // downsample, N // downsample
wavel = np.average(wavel.reshape(-1, downsample), axis=1)
time = np.average(time.reshape(-1, downsample), axis=1)
print(n, m, N, M)
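# Block-average trick: reshaping the (M, N) image to (m, downsample, n, downsample)
# and taking the mean over axes 3 and 1 averages each downsample x downsample
# block, i.e. a vectorized rebin with no explicit Python loops.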
return wavel, time, inten.reshape((m, downsample, n, downsample)).mean(3).mean(1)
def downsample(self, wavel, time, inten):
"""
Method used to even the spacing between each point on both axes.
For example the time axis ranges between ~0.25 to ~0.45ps spacing.
This method uses interpolation to even the spacing to the mean.
Currently RectBivariateSpline is used.
"""
# Currently using the mean window sizes for the final step size.
step_x = np.ediff1d(wavel).mean()
step_y = np.ediff1d(time).mean()
# Function to generate interpolated values from our irregular grid
t0 = timing.time()
f = interpolate.RectBivariateSpline(wavel, time, inten)
t1 = timing.time()
print(f'Interpolate time: {t1 - t0}')
# Generate new data on the regular grid
xlabels = np.arange(wavel[0], wavel[-1] + step_x, step_x)
ylabels = np.arange(time[0], time[-1] + step_y, step_y)
data = f(xlabels, ylabels)
# print(data.shape)
return xlabels, ylabels, data
def button(self, idx):
if type(idx) == int:
wavel, time, self.inten = self.openhdf5(idx)
self.updatePlot(wavel, time, self.inten)
maxi = np.max(self.inten) / 2
mini = np.average(self.inten) + 0.2
self.histItem.setLevels(mini, maxi)
self.updateSpectrumROI()
self.updateDecayROI()
def changeColormap(self, cmapstr):
cmap = pg.colormap.get(cmapstr, source='matplotlib', skipCache=False)
self.histItem.gradient.setColorMap(cmap)
self.histItem.gradient.showTicks(show=False)
def controlWidgets(self):
downsampleLabel = QtWidgets.QLabel('Choose the downsample value: ')
downsampleSpinBox = QtWidgets.QSpinBox()
downsampleSpinBox.setValue(1)
downsampleSpinBox.setRange(1, 5)
colormapLabel = QtWidgets.QLabel('Choose the colormap: ')
colormapCombo = QtWidgets.QComboBox()
colormapCombo.addItem('inferno')
colormapCombo.addItem('turbo')
colormapCombo.addItem('viridis')
colormapCombo.addItem('nipy_spectral')
downsampleLabel.setStyleSheet(stylesheet)
downsampleSpinBox.setStyleSheet(stylesheet)
colormapLabel.setStyleSheet(stylesheet)
colormapCombo.setStyleSheet(stylesheet)
downsampleSpinBox.valueChanged.connect(self.downsampleFactor)
colormapCombo.currentTextChanged.connect(self.changeColormap)
self.controlsLayout = QtWidgets.QGridLayout()
self.controlsLayout.addWidget(downsampleLabel, 0, 0)
self.controlsLayout.addWidget(downsampleSpinBox, 0, 1)
self.controlsLayout.addWidget(colormapLabel, 1, 0)
self.controlsLayout.addWidget(colormapCombo, 1, 1)
def application():
if not QtWidgets.QApplication.instance():
app = QtWidgets.QApplication(sys.argv)
else:
app = QtWidgets.QApplication.instance()
main = MainWindow()
main.show()
sys.exit(app.exec())
|
sci-streak
|
/sci_streak-0.3.0-py3-none-any.whl/streakgui/gui/gui.py
|
gui.py
|
from PySide6 import QtCore, QtWidgets, QtGui
import pyqtgraph as pg
import numpy as np
import sys
import os
import ctypes
import json
import glob
import h5py
from scipy import interpolate
from scipy.signal import decimate
from functools import partial
import time as timing
myappid = 'sci.streak' # arbitrary string
ctypes.windll.shell32.SetCurrentProcessExplicitAppUserModelID(myappid)
path = f'{os.path.dirname(sys.argv[0])}/gui'
with open('data/experiment.json', 'r', encoding='utf-8') as f:
experiment = json.load(f)
experiment_list = list(experiment.keys())
experiment_list.insert(0, '#')
with open(f'{path}/ui/stylesheet.qss', 'r', encoding='utf-8') as file:
stylesheet = file.read()
if len(glob.glob('data/*.hdf5'))==0:
raise Exception('You should use the bksub script to create the hdf5 file.')
class MainWindow(QtWidgets.QMainWindow):
def __init__(self, *args, **kwargs):
super(MainWindow, self).__init__(*args, **kwargs)
self.setStyleSheet(stylesheet)
self.initUI()
def initUI(self):
self.menuBarOutline()
self.statusBar().showMessage('Ready')
self.statusBar().setStyleSheet(stylesheet)
self.setWindowIcon(QtGui.QIcon(f'{path}/icons/icon.png'))
self.setWindowTitle('sci-streak')
self.resize(800, 600) # Fix for not starting maximized.
self.showMaximized()
# Window Layout
self.mainWidget = QtWidgets.QWidget()
self.hbox = QtWidgets.QHBoxLayout()
self.splitter1 = QtWidgets.QSplitter(QtCore.Qt.Horizontal)
self.splitter1.setStyleSheet(stylesheet)
self.setCentralWidget(self.mainWidget)
# Populate Layout
self.plotLayout()
self.treeWidget()
self.controlWidgets()
self.rightLayout()
self.splitter1.addWidget(self.leftwidget)
self.splitter1.addWidget(self.rightwidget)
self.hbox.addWidget(self.splitter1)
self.mainWidget.setLayout(self.hbox)
def menuBarOutline(self):
exitAction = QtGui.QAction(QtGui.QIcon(f'{path}/icons/exit.png'), '&Exit', self)
exitAction.setShortcut('Ctrl+Q')
exitAction.setStatusTip('Exit application')
exitAction.triggered.connect(self.close)
self.fileMenu = self.menuBar().addMenu('&File')
self.fileMenu.addAction(exitAction)
self.analyzeMenu = self.menuBar().addMenu('&Analysis')
self.settingsMenu = self.menuBar().addMenu('&Settings')
self.colorMenu = self.settingsMenu.addMenu('Choose Colormap')
self.inferno = QtGui.QAction('inferno', self)
self.colorMenu.addAction(self.inferno)
self.inferno.triggered.connect(partial(self.changeColormap, 'inferno'))
self.turbo = QtGui.QAction('turbo', self)
self.colorMenu.addAction(self.turbo)
self.turbo.triggered.connect(partial(self.changeColormap, 'turbo'))
self.viridis = QtGui.QAction('viridis', self)
self.colorMenu.addAction(self.viridis)
self.viridis.triggered.connect(partial(self.changeColormap, 'viridis'))
self.spectral = QtGui.QAction('spectral', self)
self.colorMenu.addAction(self.spectral)
self.spectral.triggered.connect(partial(self.changeColormap, 'nipy_spectral'))
self.menuBar().setStyleSheet(stylesheet)
def plotLayout(self):
self.leftwidget = pg.GraphicsLayoutWidget()
self.downsampleFactor()
wavel, time, self.inten = self.openhdf5(0)
self.plot(wavel, time, self.inten)
self.hist()
self.roiRow = self.leftwidget.addLayout(row=1, col=0, colspan=6)
self.decay_plot = self.roiRow.addPlot(row=1, col=0)
self.decay_plot.setMaximumHeight(250)
self.decay_plot.showAxes(True)
self.spectrum_plot = self.roiRow.addPlot(row=1, col=3)
self.spectrum_plot.setMaximumHeight(250)
self.spectrum_plot.showAxes(True)
self.leftwidget.show()
self.roiWidget(wavel, time)
self.roi.sigRegionChanged.connect(self.updateDecayROI)
self.roi.sigRegionChanged.connect(self.updateSpectrumROI)
self.updateDecayROI()
self.updateSpectrumROI()
def rightLayout(self):
self.rightwidget = QtWidgets.QWidget()
self.optionsLayout = QtWidgets.QVBoxLayout()
self.optionsLayout.addWidget(self.log_widget)
self.optionsLayout.addLayout(self.controlsLayout)
self.rightwidget.setLayout(self.optionsLayout)
def plot(self, x, y, z):
self.ax2D = self.leftwidget.addPlot(row=0, col=0, colspan=5)
self.img = pg.ImageItem()
# print(z.shape)
self.img.setImage(z)
self.ax2D.addItem(self.img)
# Move the image by half a pixel so that the center
# of the pixels are located at the coordinate values
dx = x[1] - x[0]
dy = y[1] - y[0]
print('pixel size x: {}, pixel size y: {}'.format(dx, dy))
rect = QtCore.QRectF(x[0] - dx / 2, y[0] - dy / 2, x[-1] - x[0], y[-1] - y[0])
print(rect)
self.img.setRect(rect)
self.ax2D.setLabels(left='Time (ps)', bottom='Energy (eV)')
def updatePlot(self, x, y, z):
self.img.setImage(z)
# print(z.shape)
# Move the image by half a pixel so that the center of the pixels are
# located at the coordinate values
dx = x[1] - x[0]
dy = y[1] - y[0]
print('pixel size x: {}, pixel size y: {}'.format(dx, dy))
rect = QtCore.QRectF(x[0] - dx / 2, y[0] - dy / 2, x[-1] - x[0], y[-1] - y[0])
print(rect)
self.img.setRect(rect)
def hist(self):
# Contrast/color control
self.histItem = pg.HistogramLUTItem()
maxi = np.max(self.inten) / 2
mini = np.average(self.inten) + 0.2
self.histItem.setImageItem(self.img)
cmap = pg.colormap.get('inferno', source='matplotlib', skipCache=False)
self.histItem.gradient.setColorMap(cmap)
self.histItem.gradient.showTicks(show=False)
self.histItem.setLevels(mini, maxi)
self.leftwidget.addItem(self.histItem, row=0, col=5, colspan=1)
def roiWidget(self, wavel, time):
# Custom ROI for selecting an image region
self.roi = pg.ROI([wavel[0], time[0]],
[np.abs(wavel[0] - wavel[-1]) / 10, np.abs(time[0] - time[-1]) / 10],
rotatable=False)
self.roi.handleSize = 7
self.roi.addScaleHandle([1, 1], [0, 0])
self.roi.addScaleHandle([1, 0.5], [0, 0.5])
self.roi.addScaleHandle([0.5, 1], [0.5, 0])
self.ax2D.addItem(self.roi)
self.roi.setZValue(10) # make sure ROI is drawn above image
self.roi.getArrayRegion(self.inten, self.img, returnMappedCoords=True)
def updateDecayROI(self):
selected = self.roi.getArrayRegion(self.inten, self.img, returnMappedCoords=True)
axis_select = 1
xaxis = selected[1][0][:, 0]
self.decay_plot.plot(xaxis, selected[0].mean(axis=axis_select), clear=True)
def updateSpectrumROI(self):
selected = self.roi.getArrayRegion(self.inten, self.img, returnMappedCoords=True)
axis_select = 0
xaxis = selected[1][1][0]
self.spectrum_plot.plot(xaxis, selected[0].mean(axis=axis_select), clear=True)
def treeWidget(self):
self.log_widget = QtWidgets.QTreeWidget()
self.log_widget.setHeaderItem(QtWidgets.QTreeWidgetItem(experiment_list))
self.treeParents = {}
self.buttons = {}
for i in range(len(experiment['names'])):
self.treeParents[i] = QtWidgets.QTreeWidgetItem([f'{i:02d}'])
self.log_widget.addTopLevelItem(self.treeParents[i])
self.buttons[i] = QtWidgets.QPushButton(experiment['sample'])
self.log_widget.setItemWidget(self.treeParents[i], 1, self.buttons[i])
self.buttons[i].clicked.connect(partial(self.button, i))
for x in range(len(experiment_list) - 2): # -2 then +2 to account for # and sample cols.
x += 2
self.treeParents[i].setText(x, str(experiment[experiment_list[x]][i]))
self.log_widget.setStyleSheet(stylesheet)
def openhdf5(self, idx):
"""
Method to import data for an .hdf5 file.
Note that there should be only one hdf5 file in the working directory.
"""
with h5py.File(glob.glob('data/*.hdf5')[0], 'r') as f:
if isinstance(idx, str):
pass
else:
idx = str(idx)
data = np.array(f.get(str(idx)))
wavel = data[1:, 0]
wavel += 0.008 # Correction for this specific dataset.
time = data[0, 1:]
inten = data[1:, 1:]
# print(inten.shape)
xlabels, ylabels, data = self.downsample(wavel, time, inten)
xlabels, ylabels, data = self.rebin(xlabels, ylabels, data, self.downsample_factor)
# print(data.shape)
return xlabels, ylabels, data
def downsampleFactor(self, value=1):
"""
TODO: add slider and button to choose downsample size.
Currently defaults to 1 and is set from the downsample spin box.
See the rebin method for how the factor is applied.
"""
self.downsample_factor = value
def rebin(self, wavel, time, inten, downsample):
"""
Method used to downsample or rebin data for speed reasons.
There is a trade off between speed and resolution.
Note that the energy and time resolution is much less than the pixel size.
e.g. time resolution of ~4 ps and pixel size of ~0.35ps per pixel.
If the downsample factor is not a factor of axis sizes:
- the x axis remainder sliced from the high energy side and
- the y axis remainder sliced from before time zero.
These areas are the most likely to not contain any important data.
"""
M, N = inten.shape
if M % downsample != 0:
remove = -1 * (M % downsample)
inten = inten[:remove, :]
wavel = wavel[:remove]
if N % downsample != 0:
remove = N % downsample
inten = inten[:, remove:]
time = time[remove:]
M, N = inten.shape
m, n = M // downsample, N // downsample
wavel = np.average(wavel.reshape(-1, downsample), axis=1)
time = np.average(time.reshape(-1, downsample), axis=1)
print(n, m, N, M)
return wavel, time, inten.reshape((m, downsample, n, downsample)).mean(3).mean(1)
def downsample(self, wavel, time, inten):
"""
Method used to even the spacing between each point on both axes.
For example the time axis ranges between ~0.25 to ~0.45ps spacing.
This method uses interpolation to even the spacing to the mean.
Currently RectBivariateSpline is used.
"""
# Currently using the mean window sizes for the final step size.
step_x = np.ediff1d(wavel).mean()
step_y = np.ediff1d(time).mean()
# Function to generate interpolated values from our irregular grid
t0 = timing.time()
f = interpolate.RectBivariateSpline(wavel, time, inten)
t1 = timing.time()
print(f'Interpolate time: {t1 - t0}')
# Generate new data on the regular grid
xlabels = np.arange(wavel[0], wavel[-1] + step_x, step_x)
ylabels = np.arange(time[0], time[-1] + step_y, step_y)
data = f(xlabels, ylabels)
# print(data.shape)
return xlabels, ylabels, data
def button(self, idx):
if type(idx) == int:
wavel, time, self.inten = self.openhdf5(idx)
self.updatePlot(wavel, time, self.inten)
maxi = np.max(self.inten) / 2
mini = np.average(self.inten) + 0.2
self.histItem.setLevels(mini, maxi)
self.updateSpectrumROI()
self.updateDecayROI()
def changeColormap(self, cmapstr):
cmap = pg.colormap.get(cmapstr, source='matplotlib', skipCache=False)
self.histItem.gradient.setColorMap(cmap)
self.histItem.gradient.showTicks(show=False)
def controlWidgets(self):
downsampleLabel = QtWidgets.QLabel('Choose the downsample value: ')
downsampleSpinBox = QtWidgets.QSpinBox()
downsampleSpinBox.setValue(1)
downsampleSpinBox.setRange(1, 5)
colormapLabel = QtWidgets.QLabel('Choose the colormap: ')
colormapCombo = QtWidgets.QComboBox()
colormapCombo.addItem('inferno')
colormapCombo.addItem('turbo')
colormapCombo.addItem('viridis')
colormapCombo.addItem('nipy_spectral')
downsampleLabel.setStyleSheet(stylesheet)
downsampleSpinBox.setStyleSheet(stylesheet)
colormapLabel.setStyleSheet(stylesheet)
colormapCombo.setStyleSheet(stylesheet)
downsampleSpinBox.valueChanged.connect(self.downsampleFactor)
colormapCombo.currentTextChanged.connect(self.changeColormap)
self.controlsLayout = QtWidgets.QGridLayout()
self.controlsLayout.addWidget(downsampleLabel, 0, 0)
self.controlsLayout.addWidget(downsampleSpinBox, 0, 1)
self.controlsLayout.addWidget(colormapLabel, 1, 0)
self.controlsLayout.addWidget(colormapCombo, 1, 1)
def application():
if not QtWidgets.QApplication.instance():
app = QtWidgets.QApplication(sys.argv)
else:
app = QtWidgets.QApplication.instance()
main = MainWindow()
main.show()
sys.exit(app.exec())
| 0.388618 | 0.083516 |
import traceback
from typing import Union
import pandas as pd
import numpy as np
def combine_csv_files(from_files: list, to_file: str, wanted_cols: Union[list, str, None] = None, *args, **kwargs) -> pd.DataFrame:
"""
Convert several csv files to ONE csv file with specified columns.
:param args: extra positional arguments forwarded to pd.read_csv. <br>
:param kwargs: extra keyword arguments forwarded to pd.read_csv (eg. sep, na_values). <br>
:param from_files: a list of csv file paths which represent the source files to combine, <br>
eg. ['path/to/source_file_1.csv', 'path/to/source_file_2.csv'] <br> <br>
:param to_file: the csv file path which designates the destination location to store the result, <br>
eg. 'path/to/save_file.csv' <br> <br>
:param wanted_cols: the filter columns, which will be the result csv columns, <br>
no data in this column and this column will be empty, <br>
no wanted_cols provided (None), all columns will be preserved. <br>
:return: pd.DataFrame which is the data content store in the "to_file" csv file.
"""
if from_files is None:
raise ValueError('from_files cannot be None')
elif type(from_files) is not list:
raise ValueError('from_files must be <type: list>')
elif len(from_files) == 0:
raise ValueError('from_files cannot be empty')
if to_file is None:
raise ValueError('to_file cannot be None')
elif type(to_file) is not str:
raise ValueError('to_file must be <type: str>')
elif len(to_file) == 0:
raise ValueError('to_file cannot be empty')
dfs = []
for _from_file in from_files:
try:
_df = pd.read_csv(_from_file, *args, **kwargs)
dfs.append(_df)
except:
print('*'*32)
print(f'- pd.read_csv error with input file: "{_from_file}"')
traceback.print_exc()
print('*'*32)
continue
# combine all dfs with concat 'outer' join,
# ignore_index will allow concat directly and add columns automatically,
# axis=0 means concat follow vertical direction.
final_combined_df = pd.concat(dfs, axis=0, ignore_index=True, sort=False)
if wanted_cols is None \
or (type(wanted_cols) is list and len(wanted_cols) == 0) \
or (type(wanted_cols) is not list and type(wanted_cols) is not str):
final_combined_df = final_combined_df
else:
current_cols = final_combined_df.columns.to_list()
if type(wanted_cols) is list:
for _col in wanted_cols:
if _col not in current_cols:
final_combined_df[_col] = np.nan
elif type(wanted_cols) is str:
if wanted_cols not in current_cols:
final_combined_df[wanted_cols] = np.nan
final_combined_df = final_combined_df[wanted_cols]
final_combined_df.to_csv(to_file, header=True)
return final_combined_df
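# --- Usage sketch (editor annotation, illustrative only; the file names are
# hypothetical). Extra keyword arguments such as sep and na_values are passed
# through to pd.read_csv:
# combined = combine_csv_files(['runs/a.csv', 'runs/b.csv'], 'runs/all.csv',
#                              wanted_cols=['id', 'score'], sep=',', na_values=['NA'])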
if __name__ == '__main__':
d1 = {'A': [2, 3, 4], 'B': ['a', 'b', 'c'], 'C': ['10002', 'sss', 'msc23d']}
d2 = {'A': [12, 13, 4, 15], 'B': ['1a', 'b', 'c', '1Z'], 'Z': ['333', '444', '555', 'ZZZ']}
df1 = pd.DataFrame(d1)
df2 = pd.DataFrame(d2)
df1_pth = 'df1_test.csv'
df2_pth = 'df2_test.csv'
df1.to_csv(df1_pth, index=False)
df2.to_csv(df2_pth, index=False)
# dfNone = combine_csv_files(from_files=[df1_pth, df2_pth], to_file='dfcombine_test_None.csv', wanted_cols=None)
# dfAZC = combine_csv_files(from_files=[df1_pth, df2_pth], to_file='dfcombine_test_AZC.csv', wanted_cols=['A', 'Z', 'C'])
dfNone = combine_csv_files([df1_pth, df2_pth], 'dfcombine_test_None.csv', None)
dfAZC = combine_csv_files([df1_pth, df2_pth], 'dfcombine_test_AZC.csv', ['A', 'Z', 'C'])
dfZ = combine_csv_files([df1_pth, df2_pth], 'dfcombine_test_Z.csv', 'Z')
dfZZZ = combine_csv_files([df1_pth, df2_pth], 'dfcombine_test_ZZZ.csv', 'ZZZ')
print('df1 === \n', df1)
print('df2 === \n', df2)
print('dfNone === \n', dfNone)
print('dfAZC === \n', dfAZC)
print('dfZ === \n', dfZ)
print('dfZZZ === \n', dfZZZ)
|
sci-util
|
/sci-util-1.2.7.tar.gz/sci-util-1.2.7/sci_util/pd/csv.py
|
csv.py
|
import traceback
from typing import Union
import pandas as pd
import numpy as np
def combine_csv_files(from_files: list, to_file: str, wanted_cols: Union[list, str, None] = None, *args, **kwargs) -> pd.DataFrame:
"""
Convert several csv files to ONE csv file with specified columns.
:param args: extra positional arguments forwarded to pd.read_csv. <br>
:param kwargs: extra keyword arguments forwarded to pd.read_csv (eg. sep, na_values). <br>
:param from_files: a list of csv file paths which represent the source files to combine, <br>
eg. ['path/to/source_file_1.csv', 'path/to/source_file_2.csv'] <br> <br>
:param to_file: the csv file path which designates the destination location to store the result, <br>
eg. 'path/to/save_file.csv' <br> <br>
:param wanted_cols: the filter columns, which will be the result csv columns, <br>
no data in this column and this column will be empty, <br>
no wanted_cols provided (None), all columns will be preserved. <br>
:return: pd.DataFrame which is the data content store in the "to_file" csv file.
"""
if from_files is None:
raise ValueError('from_files cannot be None')
elif type(from_files) is not list:
raise ValueError('from_files must be <type: list>')
elif len(from_files) == 0:
raise ValueError('from_files cannot be empty')
if to_file is None:
raise ValueError('to_file cannot be None')
elif type(to_file) is not str:
raise ValueError('to_file must be <type: str>')
elif len(to_file) == 0:
raise ValueError('to_file cannot be empty')
dfs = []
for _from_file in from_files:
try:
_df = pd.read_csv(_from_file, *args, **kwargs)
dfs.append(_df)
except:
print('*'*32)
print(f'- pd.read_csv error with input file: "{_from_file}"')
traceback.print_exc()
print('*'*32)
continue
# combine all dfs with concat 'outer' join,
# ignore_index will allow concat directly and add columns automatically,
# axis=0 means concat follow vertical direction.
final_combined_df = pd.concat(dfs, axis=0, ignore_index=True, sort=False)
if wanted_cols is None \
or (type(wanted_cols) is list and len(wanted_cols) == 0) \
or (type(wanted_cols) is not list and type(wanted_cols) is not str):
final_combined_df = final_combined_df
else:
current_cols = final_combined_df.columns.to_list()
if type(wanted_cols) is list:
for _col in wanted_cols:
if _col not in current_cols:
final_combined_df[_col] = np.nan
elif type(wanted_cols) is str:
if wanted_cols not in current_cols:
final_combined_df[wanted_cols] = np.nan
final_combined_df = final_combined_df[wanted_cols]
final_combined_df.to_csv(to_file, header=True)
return final_combined_df
if __name__ == '__main__':
d1 = {'A': [2, 3, 4], 'B': ['a', 'b', 'c'], 'C': ['10002', 'sss', 'msc23d']}
d2 = {'A': [12, 13, 4, 15], 'B': ['1a', 'b', 'c', '1Z'], 'Z': ['333', '444', '555', 'ZZZ']}
df1 = pd.DataFrame(d1)
df2 = pd.DataFrame(d2)
df1_pth = 'df1_test.csv'
df2_pth = 'df2_test.csv'
df1.to_csv(df1_pth, index=False)
df2.to_csv(df2_pth, index=False)
# dfNone = combine_csv_files(from_files=[df1_pth, df2_pth], to_file='dfcombine_test_None.csv', wanted_cols=None)
# dfAZC = combine_csv_files(from_files=[df1_pth, df2_pth], to_file='dfcombine_test_AZC.csv', wanted_cols=['A', 'Z', 'C'])
dfNone = combine_csv_files([df1_pth, df2_pth], 'dfcombine_test_None.csv', None)
dfAZC = combine_csv_files([df1_pth, df2_pth], 'dfcombine_test_AZC.csv', ['A', 'Z', 'C'])
dfZ = combine_csv_files([df1_pth, df2_pth], 'dfcombine_test_Z.csv', 'Z')
dfZZZ = combine_csv_files([df1_pth, df2_pth], 'dfcombine_test_ZZZ.csv', 'ZZZ')
print('df1 === \n', df1)
print('df2 === \n', df2)
print('dfNone === \n', dfNone)
print('dfAZC === \n', dfAZC)
print('dfZ === \n', dfZ)
print('dfZZZ === \n', dfZZZ)
| 0.702632 | 0.382459 |
def cnt_split(tar_list, cnt_per_slice):
"""
Yield successive n-sized(cnt_per_slice) chunks from l(tar_list).
>>> x = list(range(34))
>>> for i in cnt_split(x, 5):
>>> print(i)
<<< print result ...
<<< [0, 1, 2, 3, 4]
<<< [5, 6, 7, 8, 9]
<<< [10, 11, 12, 13, 14]
<<< [15, 16, 17, 18, 19]
<<< [20, 21, 22, 23, 24]
<<< [25, 26, 27, 28, 29]
<<< [30, 31, 32, 33]
The target list is split into slices with a MAX size of 'cnt_per_slice' ...
:param tar_list: target list to split
:param cnt_per_slice: slice per max size...
:return: yield one result.
"""
for i in range(0, len(tar_list), cnt_per_slice):
yield tar_list[i:i + cnt_per_slice]
def n_split(tar_list, n):
"""
Yield n successive chunks from l(tar_list), with chunk sizes differing by at most one.
>>> x = list(range(33))
>>> for i in n_split(x, 5):
>>> print(i)
<<< print result ...
<<< [0, 1, 2, 3, 4, 5, 6]
<<< [7, 8, 9, 10, 11, 12, 13]
<<< [14, 15, 16, 17, 18, 19, 20]
<<< [21, 22, 23, 24, 25, 26]
<<< [27, 28, 29, 30, 31, 32]
The target list is split into exactly n slices; the slices carrying the remainder come first ...
:param tar_list: target list to split
:param n: slice counts ...
:return: yield one result.
"""
slice_len = int(len(tar_list) / n)
slice_len_1 = slice_len + 1
slice_remain = int(len(tar_list) % n)
cur_idx = 0
for i in range(n):
# print(f'{i} < {slice_remain} : [{cur_idx}: {cur_idx+(slice_len_1 if i < slice_remain else slice_len)}]')
yield tar_list[cur_idx: cur_idx+(slice_len_1 if i < slice_remain else slice_len)]
cur_idx += slice_len_1 if i < slice_remain else slice_len
def n_split_idx(tar_list_len, n):
"""
    Compute the slice lengths for splitting a list of length tar_list_len into n chunks.
>>> x = list(range(33))
>>> n_split_idx(len(x), 3)
<<< [11, 11, 11]
>>> n_split_idx(len(x), 4)
<<< [9, 8, 8, 8]
>>> n_split_idx(len(x), 5)
<<< [7, 7, 7, 6, 6]
>>> n_split_idx(len(x), 6)
<<< [6, 6, 6, 5, 5, 5]
>>> n_split_idx(len(x), 7)
<<< [5, 5, 5, 5, 5, 4, 4]
    The list is split into exactly n slices; the returned lengths sum to tar_list_len.
:param tar_list_len: target list length to split
:param n: slice counts ...
:return: list of each slice length.
"""
slice_len = int(tar_list_len / n)
slice_remain = int(tar_list_len % n)
res = []
for i in range(n):
        if i < slice_remain:
res.append(slice_len+1)
else:
res.append(slice_len)
return res
|
sci-util
|
/sci-util-1.2.7.tar.gz/sci-util-1.2.7/sci_util/list_util/split_util.py
|
split_util.py
|
| 0.459561 | 0.480052 |
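A short sketch tying the helpers above together (assuming `n_split` and `n_split_idx` are importable from this module; `itertools.accumulate` is used only to turn the returned lengths into offsets):

```python
from itertools import accumulate

data = list(range(33))

# n_split yields the five chunks directly.
chunks = list(n_split(data, 5))          # chunk sizes are [7, 7, 7, 6, 6]

# n_split_idx only returns the chunk lengths; build the slices manually.
lengths = n_split_idx(len(data), 5)      # [7, 7, 7, 6, 6]
offsets = [0] + list(accumulate(lengths))
manual_chunks = [data[offsets[i]:offsets[i + 1]] for i in range(len(lengths))]

assert chunks == manual_chunks
```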
from sklearn.metrics import (
accuracy_score,
confusion_matrix,
classification_report,
roc_curve,
roc_auc_score,
)
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
def show_classification(y_test, y_pred):
r"""
Confusion matrix
- Binary:
- y_test: [1, 0, 1, 0, 1]
- y_pred: [0.1, 0.2, 0.9, 0, 0.8]
- Multi:
- y_test: [1, 2, 1, 0, 1]
- y_pred: [[0.1, 0.8, 0.1], [0.1, 0.2, 0.7], [0.1, 0.6, 0.3], [0.5, 0.3. 0.2], [0.1, 0.6, 0.4]]
"""
cm = confusion_matrix(y_test, y_pred)
TN = cm[0, 0]
TP = cm[1, 1]
FP = cm[0, 1]
FN = cm[1, 0]
print(sum(y_test), sum(y_pred))
print("Confusion matrix\n\n", cm)
print("\nTrue Negatives(TN) = ", TN)
print("\nTrue Positives(TP) = ", TP)
print("\nFalse Positives(FP) = ", FP)
print("\nFalse Negatives(FN) = ", FN)
classification_accuracy = (TP + TN) / float(TP + TN + FP + FN)
print("Classification accuracy : {0:0.4f}".format(classification_accuracy))
classification_error = (FP + FN) / float(TP + TN + FP + FN)
print("Classification error : {0:0.4f}".format(classification_error))
precision = TP / float(TP + FP)
print("Precision : {0:0.4f}".format(precision))
recall = TP / float(TP + FN)
print("Recall or Sensitivity : {0:0.4f}".format(recall))
true_positive_rate = TP / float(TP + FN)
print("True Positive Rate : {0:0.4f}".format(true_positive_rate))
false_positive_rate = FP / float(FP + TN)
print("False Positive Rate : {0:0.4f}".format(false_positive_rate))
specificity = TN / (TN + FP)
print("Specificity : {0:0.4f}".format(specificity))
cm_matrix = pd.DataFrame(
data=cm.T,
columns=["Actual Negative:0", "Actual Positive:1"],
index=["Predict Negative:0", "Predict Positive:1"],
)
sns.heatmap(cm_matrix, annot=True, fmt="d", cmap="YlGnBu")
fpr, tpr, thresholds = roc_curve(y_test, y_pred)
plt.figure(figsize=(6, 4))
plt.plot(fpr, tpr, linewidth=2)
plt.plot([0, 1], [0, 1], "k--")
plt.rcParams["font.size"] = 12
plt.title("ROC curve for Predicting a Pulsar Star classifier")
plt.xlabel("False Positive Rate (1 - Specificity)")
plt.ylabel("True Positive Rate (Sensitivity)")
plt.show()
ROC_AUC = roc_auc_score(y_test, y_pred)
print("ROC AUC : {:.4f}".format(ROC_AUC))
|
sci-ztools
|
/sci_ztools-0.1.4-py3-none-any.whl/z/metrics.py
|
metrics.py
|
| 0.805096 | 0.590012 |
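A minimal usage sketch for `show_classification` above (binary hard labels only, as noted in the docstring; the arrays here are made up):

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_hat = np.array([1, 0, 0, 1, 0, 1, 1, 0])  # hypothetical hard predictions from some model

# Prints the confusion-matrix statistics and shows the heatmap and ROC plots.
show_classification(y_true, y_hat)
```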
from pathlib import Path
import shutil
from typing import Optional, Union, List
try:
import gzip
import tarfile
except ImportError as exc:
    raise ImportError("gzip and tarfile are required") from exc
def get_path(path: Union[Path, str]) -> Path:
"""Transform to `Path`.
Args:
path (str): The path to be transformed.
Returns:
Path: the `pathlib.Path` class
"""
if isinstance(path, Path):
return path
else:
return Path(path)
def get_path_out(
path_in: Path, rename: str, path_out: Optional[Union[Path, str]] = None
):
r"""
Adaptor pathout to path_in
"""
if path_out is None:
return path_in.parent / rename
else:
_path_out = get_path(path_out)
if _path_out.is_dir():
return _path_out / rename
        else:
            # path_out names the output file explicitly (it may not exist yet)
            return _path_out
def zip(path_in: Union[Path, str], path_out: Optional[Union[Path, str]] = None):
r""" """
_path_in = get_path(path_in)
assert _path_in.is_file(), f"{path_in} is not a file"
rename = _path_in.name + ".gz"
_path_out = get_path_out(_path_in, rename, path_out)
with open(_path_in, "rb") as f_in:
with gzip.open(_path_out, "wb") as f_out:
f_out.write(f_in.read())
def unzip(path_in: Union[Path, str], path_out: Optional[Union[Path, str]] = None):
_path_in = get_path(path_in)
assert _path_in.is_file(), f"{path_in} is not a file"
    assert _path_in.suffix == ".gz", "not a .gz file name"
    rename = _path_in.name[:-len(".gz")]  # strip the suffix (str.rstrip would strip characters, not a suffix)
_path_out = get_path_out(_path_in, rename, path_out)
with gzip.open(_path_in, "rb") as f_in:
with open(_path_out, "wb") as f_out:
f_out.write(f_in.read())
def tar(
path: Union[Path, str], staffs: Union[List[Union[Path, str]], Union[Path, str]]
):
_path = get_path(path)
with tarfile.open(_path, "w:gz") as tar:
if isinstance(staffs, (str, Path)):
tar.add(staffs)
print(f"add {staffs}")
elif isinstance(staffs, List):
for staff in staffs:
tar.add(staff)
print(f"add {staff}")
def untar(path_in: Union[Path, str], path_out: Optional[Union[Path, str]] = None):
_path_in = get_path(path_in)
assert _path_in.is_file(), f"{path_in} is not a file"
    rename = _path_in.name[:-len(".tar.gz")]  # strip the suffix (str.rstrip would strip characters, not a suffix)
_path_out = get_path_out(_path_in, rename, path_out)
with tarfile.open(_path_in, "r:gz") as tar:
        tar.extractall(path=_path_out)
|
sci-ztools
|
/sci_ztools-0.1.4-py3-none-any.whl/z/sh.py
|
sh.py
|
| 0.900157 | 0.292709 |
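A brief usage sketch for the archive helpers above (assuming the module is importable as `z.sh`, per the wheel path shown; the file names are hypothetical):

```python
from pathlib import Path
from z.sh import zip, unzip, tar, untar  # note: this module's zip() shadows the builtin

zip("results.csv")                        # writes results.csv.gz next to the original
unzip("results.csv.gz", Path("out"))      # writes out/results.csv when out/ is an existing directory
tar("bundle.tar.gz", ["results.csv", "notes.txt"])
untar("bundle.tar.gz", Path("restored"))  # extracts the archive members under restored/
```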
import os
import random
from itertools import takewhile, repeat
from pathlib import Path
from typing import Union, List, Optional
import numpy as np
import pandas as pd
import torch
from rich import console
from rich.table import Table
from sklearn.model_selection import KFold # Kfold cross validation
import logging
from rich.logging import RichHandler
from logging import FileHandler
from typing import Optional
from sklearn.utils import shuffle
def get_logger(
name: Optional[str] = None, filename: Optional[str] = None, level: str = "INFO"
) -> logging.Logger:
"""Get glorified Rich Logger"""
name = name if name else __name__
handlers = [
RichHandler(
rich_tracebacks=True,
)
]
if filename:
handlers.append(FileHandler(filename))
logging.basicConfig(format="%(name)s: %(message)s", handlers=handlers)
log = logging.getLogger(name)
log.setLevel(level)
return log
log = get_logger()
def read_excel(
paths: Union[Path, List[Path]], drop_by: Optional[str] = None
) -> pd.DataFrame:
"""Read excel and get pandas.DataFrame"""
if isinstance(paths, List):
# use openpyxl for better excel
df = pd.concat([pd.read_excel(path, engine="openpyxl") for path in paths])
elif isinstance(paths, Path):
df = pd.read_excel(paths, engine="openpyxl")
else:
raise NotImplementedError
# remove blank lines in the tail of xlsx
# use drop to make sure the index order
if drop_by:
df.dropna(subset=[drop_by], inplace=True)
df.reset_index(drop=True, inplace=True)
return df
def get_device():
"get device (CPU or GPU)"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = torch.cuda.device_count()
print("%s (%d GPUs)" % (device, n_gpu))
return device
def iter_count(file_name):
"""Text lines counter"""
buffer = 1024 * 1024
with open(file_name) as f:
buf_gen = takewhile(lambda x: x, (f.read(buffer) for _ in repeat(None)))
return sum(buf.count("\n") for buf in buf_gen)
def df_to_table(
pandas_dataframe: pd.DataFrame,
rich_table: Table,
show_index: bool = True,
index_name: Optional[str] = None,
) -> Table:
"""Convert a pandas.DataFrame obj into a rich.Table obj.
Args:
pandas_dataframe (DataFrame): A Pandas DataFrame to be converted to a rich Table.
rich_table (Table): A rich Table that should be populated by the DataFrame values.
show_index (bool): Add a column with a row count to the table. Defaults to True.
index_name (str, optional): The column name to give to the index column. Defaults to None, showing no value.
Returns:
Table: The rich Table instance passed, populated with the DataFrame values."""
if show_index:
index_name = str(index_name) if index_name else ""
rich_table.add_column(index_name)
for column in pandas_dataframe.columns:
rich_table.add_column(str(column))
for index, value_list in enumerate(pandas_dataframe.values.tolist()):
row = [str(index)] if show_index else []
row += [str(x) for x in value_list]
rich_table.add_row(*row)
return rich_table
def print_df(
pandas_dataframe: pd.DataFrame,
    title: Optional[str] = None,
):
console.Console().print(df_to_table(pandas_dataframe, Table(title=title)))
def set_seeds(seed):
"set random seeds"
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
def show_ratio(df: pd.DataFrame, label="label", sort=None, n=5) -> None:
"""print the label proportion in pd.DataFrame
Args:
sort: 'value' or 'label'
"""
n_all = len(df)
if sort == "value":
n_classes = (
df[label]
.value_counts()
.reset_index()
.sort_values(by=label, ascending=False)
)
elif sort == "label":
n_classes = df[label].value_counts().reset_index().sort_values(by="index")
else:
n_classes = df[label].value_counts().reset_index()
n_classes = n_classes[:n]
for i in n_classes.index:
log.info(
f'Label {n_classes.at[i, "index"]} takes: {n_classes.at[i, label] / n_all * 100:.2f}%, Nums: {n_classes.at[i, label]}'
)
def split_df(df: pd.DataFrame, shuf=True, val=True, random_state=42):
"""Split df into train/val/test set and write into files
ratio: 8:1:1 or 9:1
Args:
- df (DataFrame): some data
- shuf (bool, default=True): shuffle the DataFrame
- val (bool, default=True): split into three set, train/val/test
"""
if shuf:
df = shuffle(df, random_state=random_state)
sep = int(len(df) * 0.1)
if val:
test_df = df.iloc[:sep]
val_df = df.iloc[sep : sep * 2]
train_df = df.iloc[sep * 2 :]
return train_df, val_df, test_df
else:
test_df = df.iloc[:sep]
train_df = df.iloc[sep:]
return train_df, test_df
def kfold(df: pd.DataFrame, n_splits=5, shuffle=True, random_state=42) -> pd.DataFrame:
"""
:param df: make sure the index correct
:param n_splits:
:param shuffle:
:param random_state:
:return:
"""
_df = df.copy()
if shuffle:
kf = KFold(n_splits=n_splits, shuffle=shuffle, random_state=random_state)
else:
kf = KFold(n_splits=n_splits)
for fold in range(n_splits):
_df[f"fold{fold}"] = False
fold = 0
for train_idxs, test_idxs in kf.split(_df):
print(train_idxs, test_idxs)
for i in test_idxs:
_df.loc[i, f"fold{fold}"] = True
fold += 1
return _df
def get_CV(
df: pd.DataFrame,
n_splits=5,
dir: Path = Path("CV"),
header=True,
index=True,
cols=None,
):
os.makedirs(dir, exist_ok=True)
for fold in range(n_splits):
_df = df.copy()
df_fold_test = _df[_df[f"fold{fold}"]]
df_fold_train = _df[~_df[f"fold{fold}"]]
if cols:
df_fold_test = df_fold_test[cols]
df_fold_train = df_fold_train[cols]
_df = _df[cols]
fold_dir = dir / f"fold{fold}"
os.makedirs(fold_dir, exist_ok=True)
df_fold_test.to_csv(fold_dir / "test.csv", header=header, index=index)
df_fold_train.to_csv(fold_dir / "train.csv", header=header, index=index)
_df.to_csv(fold_dir / "all.csv", header=header, index=index)
if __name__ == "__main__":
df = pd.DataFrame(
{"a": [1, 2, 3, 4, 5, 6, 7, 1, 1], "b": [4, 5, 6, 7, 8, 9, 10, 2, 1]}
)
print(kfold(df))
|
sci-ztools
|
/sci_ztools-0.1.4-py3-none-any.whl/z/utils.py
|
utils.py
|
| 0.826116 | 0.332581 |
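A short usage sketch for the cross-validation helpers above (assuming `kfold` and `get_CV` are importable from this module; the toy DataFrame mirrors the `__main__` block):

```python
from pathlib import Path
import pandas as pd

df = pd.DataFrame({"a": range(10), "b": range(10, 20)})

# kfold() adds boolean columns fold0..fold4 marking each row's test fold.
df_folds = kfold(df, n_splits=5)

# get_CV() then writes CV/fold{i}/train.csv, test.csv and all.csv for each fold.
get_CV(df_folds, n_splits=5, dir=Path("CV"), cols=["a", "b"])
```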
import pandas as pd
from sklearn.utils import shuffle
from sklearn.model_selection import (
StratifiedShuffleSplit,
StratifiedKFold,
KFold,
train_test_split,
)
from typing import Optional, Union, List, Tuple
from pathlib import Path
import copy
class DataFrame():
def __init__(
self, df: pd.DataFrame, random_state: int = 42, *args, **kargs
) -> None:
super(DataFrame, self).__init__()
self.df = df.copy(deep=True) # get a copy of original dataframe
self.random_state = random_state
def __repr__(self) -> str:
return repr(self.df)
def split_df(
self,
val: bool = False,
test_size: float = 0.1,
val_size: Optional[float] = None,
):
if val:
assert val_size is not None, "val size needed"
            val_num, test_num = len(self.df) * val_size, len(self.df) * test_size
train_df, val_df = train_test_split(
self.df, test_size=int(val_num), random_state=self.random_state
)
train_df, test_df = train_test_split(
train_df, test_size=int(test_num), random_state=self.random_state
)
return train_df, val_df, test_df
else:
train_df, test_df = train_test_split(
                self.df, test_size=test_size, random_state=self.random_state
)
return train_df, test_df
def stratified_split_df(
self, labels: Union[List[str], str], n_splits: int = 1, test_size: float = 0.1
) -> Union[List, Tuple]:
split = StratifiedShuffleSplit(
n_splits=n_splits, test_size=test_size, random_state=self.random_state
)
df_trains = []
df_tests = []
for train_index, test_index in split.split(self.df, self.df[labels]):
strat_train_set = self.df.loc[train_index]
strat_test_set = self.df.loc[test_index]
df_trains.append(strat_train_set)
df_tests.append(strat_test_set)
return (
(strat_train_set, strat_test_set)
if n_splits == 1
else (df_trains, df_tests)
)
def stratified_kfold_split_df(
self, labels: Union[List[str], str], n_splits: int = 2
) -> Tuple:
assert n_splits >= 2, "At least 2 fold"
skf = StratifiedKFold(
n_splits=n_splits, shuffle=True, random_state=self.random_state
)
for train_index, test_index in skf.split(self.df, self.df[labels]):
strat_train_set = self.df.loc[train_index]
strat_test_set = self.df.loc[test_index]
yield strat_train_set, strat_test_set
def kfold_split_df(self, labels: Union[List[str], str], n_splits: int = 2) -> Tuple:
assert n_splits >= 2, "At least 2 fold"
df_trains = []
df_tests = []
skf = StratifiedKFold(
n_splits=n_splits, shuffle=True, random_state=self.random_state
)
for train_index, test_index in skf.split(self.df, self.df[labels]):
strat_train_set = self.df.loc[train_index]
strat_test_set = self.df.loc[test_index]
df_trains.append(strat_train_set)
df_tests.append(strat_test_set)
return df_trains, df_tests
def show_ratio(self, label="label", sort=None, n: Optional[int] = None):
"""print the label proportion in pd.DataFrame
Args:
sort: 'value' or 'label'
"""
n_all = len(self.df)
if sort == "value":
n_classes = (
self.df[label]
.value_counts()
.reset_index()
.sort_values(by=label, ascending=False)
)
elif sort == "label":
n_classes = (
self.df[label].value_counts().reset_index().sort_values(by="index")
)
else:
n_classes = self.df[label].value_counts().reset_index()
if n:
n_classes = n_classes[:n]
for i in n_classes.index:
print(
                f'Label {n_classes.at[i, "index"]} takes: {n_classes.at[i, label] / n_all * 100:.2f}%, Nums: {n_classes.at[i, label]}'
)
def read_csv(path: str = "", random_state: int = 42, *args, **kargs):
_path = Path(path)
assert _path.is_file(), "not a file"
return DataFrame(df=pd.read_csv(path, *args, **kargs), random_state=random_state)
def split_df(df: pd.DataFrame, shuf=True, val=True, random_state=42):
"""Split df into train/val/test set and write into files
ratio: 8:1:1 or 9:1
Args:
- df (DataFrame): some data
- shuf (bool, default=True): shuffle the DataFrame
- val (bool, default=True): split into three set, train/val/test
"""
if shuf:
df = shuffle(df, random_state=random_state)
sep = int(len(df) * 0.1)
if val:
test_df = df.iloc[:sep]
val_df = df.iloc[sep : sep * 2]
train_df = df.iloc[sep * 2 :]
return train_df, val_df, test_df
else:
test_df = df.iloc[:sep]
train_df = df.iloc[sep:]
return train_df, test_df
def show_ratio(df: pd.DataFrame, label="label", sort=None, n=5) -> None:
"""print the label proportion in pd.DataFrame
Args:
sort: 'value' or 'label'
"""
n_all = len(df)
if sort == "value":
n_classes = (
df[label]
.value_counts()
.reset_index()
.sort_values(by=label, ascending=False)
)
elif sort == "label":
n_classes = df[label].value_counts().reset_index().sort_values(by="index")
else:
n_classes = df[label].value_counts().reset_index()
n_classes = n_classes[:n]
for i in n_classes.index:
print(
f'Label {n_classes.at[i, "index"]} takes: {n_classes.at[i, label] / n_all * 100:.2f}%, Nums: {n_classes.at[i, label]}'
)
|
sci-ztools
|
/sci_ztools-0.1.4-py3-none-any.whl/z/pandas.py
|
pandas.py
|
| 0.813979 | 0.419648 |
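A usage sketch for the `DataFrame` wrapper above (the CSV name and `label` column are hypothetical; `read_csv` here is the module-level helper defined above, not `pandas.read_csv`):

```python
zdf = read_csv("dataset.csv", random_state=42)   # returns the wrapper class, not a pandas DataFrame
zdf.show_ratio(label="label", sort="value")

# One stratified 90/10 split; with n_splits=1 a single (train, test) pair is returned.
train_df, test_df = zdf.stratified_split_df(labels="label", n_splits=1, test_size=0.1)
print(len(train_df), len(test_df))
```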
sci Package
===========
A Python 3 package or simply a collection of convenience and wrapper functions supporting tasks
frequently needed by scientists.
install
-------
| ``pip3 install sci`` or
| ``pip3 install --user --upgrade sci``
use the ``--user`` flag on shared computers where you do not have root access. It will install the
package in a place like ``~/.local/lib/python3.x/site-packages/sci`` . You can delete that folder
later to remove the package.
design goals
------------
1. Simplicity: wrap frequently used workflows into simple functions that make life easier for
scientists. Most import statements should move into functions (except for frequently used ones
like os, sys) to avoid confusion during autocomplete with VS Code and other IDEs.
2. Verbose but easily consumable docstrings: Docstrings are accessible via code autocomplete in IDEs
such as VS Code or Atom and through automated documentation environments such as sphinx. Each
   docstring should start with an example use case for the function (see the sketch below).
3. Functional programming paradigm: While using classes is permitted, we encourage writing fewer
   classes and more functions (perhaps with decorators).
4. Cross-platform: The code should work on Linux (RHEL and Debian based), Windows and Mac OS X.
5. Python 3 only compatibility: Python 2.x is legacy and we do not want to invest any extra time in
   it.
How to contribute your own code
-------------------------------
1. ``git clone [email protected]:FredHutch/sci-pkg.git``
2. create a new branch (e.g. sci-yourname)
3. paste your function into module sci/new.py.
4. Make sure you add an example call in the first line of the doc string.
5. Add your test case to sci-pkg/sci/tests
|
sci
|
/sci-0.1.7.tar.gz/sci-0.1.7/README.rst
|
README.rst
|
| 0.758063 | 0.598342 |
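Illustrating design goal 2 and contribution step 4 of the sci README above, a minimal sketch of the docstring convention (the function itself is hypothetical and not part of the package):

```python
def celsius_to_kelvin(temp_c):
    """Example: celsius_to_kelvin(25.0) -> 298.15

    Convert a temperature from degrees Celsius to Kelvin.
    """
    return temp_c + 273.15
```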
# Sci4All -- scientific toolbox for high school and higher education
Sci4All is a scientific toolbox for high school and higher education.
Sci4All builds on scipy with the aim to deliver simple, expressive and
intuitive interfaces for various operations, intended to be appropriate for
interactive use in tools such as Jupyter Notebook. The library aims to offer
interfaces that are effective and powerful, yet simple to use.
Key objectives of Sci4All are:
- It should be easy to use with a natural and intuitive API, in
a way that makes it suitable for non-programmers at the high school level.
- It should prepare students for using scientific tools in higher education.
Interfaces should be kept simple, but not at the expense of over-simplifying
or "dumbing down" the tools.
- It should build on and provide access to strong lower level scientific
tools, methodologies and algorithmic programming, within an environment that
is suitable for continued use by students in further higher education studies
and academic work.
- Though the main target audience for the toolbox is high school students,
when possible it should aim to design interfaces that are also appropriate
for use in research and higher education, offering more convenient, intuitive
or terse access to functionality in libraries that Sci4All builds on.
Sci4All provides friendly, expressive and intuitive interfaces to some
capabilities of modules such as SciPy and Matplotlib, thus making certain
scientific functionality of those modules accessible to non-programmers.
However, the full power of those modules as well as the Python programming are
still available, allowing students to gradually adopt more advanced use of
those libraries as their skills grow.
By offering Sci4All for scientific work at the high school level, our hope
is that students will have access to a set of tools that are not only
appropriate for their current educational level, but also provide experience
with a proper modern programming language (python), enable methodologies and
work habits appropriate for higher level scientific work, and offer hands-on
experience with an ecosystem of open source tools that can also serve their
future needs.
# Sci4All license
Sci4All is released under the [GNU Lesser General Public License
v3.0](https://www.gnu.org/licenses/lgpl-3.0-standalone.html) or later. License
details are included with the library source code.
# Installing
Once released, the library with source code can be installed from
[PyPI](https://pypi.org/), and can also be downloaded from the
[Sci4All website](https://www.sci4all.org).
# Getting information about Sci4All
Information will be available at the
[Sci4All website](https://www.sci4all.org).
# Project status
The project is in early development and will be released as a pre-alpha
in the near future. Stay tuned ...
|
sci4all
|
/sci4all-0.0.1a1.tar.gz/sci4all-0.0.1a1/README.md
|
README.md
|
| 0.552298 | 0.759091 |
## What's `sciPyFoam`
**sciPyFoam** is a python package for visualizing 2D OpenFOAM simulation results. Although ParaView, Tecplot, and similar tools provide a convenient way to visualize OpenFOAM results and do some post-processing, ParaView cannot generate publication-quality figures and Tecplot is sometimes inconvenient for making figures, so sciPyFoam helps generate very nice and complex figures. The main logic of sciPyFoam is to use python to read native OpenFOAM polyMesh and result files (e.g. points, faces, ..., T, p), then use VTK to convert polygonal cell data to triangle point data, and finally plot it with matplotlib.
**Here I provide a demo to illustrate how to use it.**
## Install
```bash
pip install sciPyFoam
```
## How to use ?
```py
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import sciPyFoam.polyMesh2d as mesh2d
# Read data
caseDir='cases/blockMesh' # case dir
fieldNames=['T','U'] # field name list
times,times_value=mesh2d.getTimes(caseDir) # get all times name and value
MeshData=mesh2d.getMesh(caseDir, 'frontAndBack')
# read field data dict contains point data and cell data
fieldData=mesh2d.readCellData_to_pointData(caseDir, times[-1], fieldNames,MeshData)
# Plot
fig=plt.figure(figsize=(14,6))
ax=plt.gca()
ax.tricontourf(MeshData['x'],MeshData['y'],MeshData['triangles'],fieldData['pointData']['T'],levels=50,cmap='rainbow')
# ax.invert_yaxis()
plt.tight_layout()
plt.savefig('test.pdf')
```
## Example
There are a [mini demo](example/MiniDemo.py) and a [complex demo](example/plotField.py) that can help you understand the usage.
1. Plot 2D regular mesh generated by blockMesh
```bash
cd example
python plotField.py cases/blockMesh latestTime T
```


2. Plot 2D unstructured mesh generated by Gmsh
```bash
cd example
python plotField.py cases/triMesh latestTime T
```


3. Plot 2D hybrid mesh generated by Gmsh
```bash
cd example
python plotField.py cases/hybridMesh latestTime T
```


4. Plot 2D mesh generated by snappyHexMesh
```bash
cd example
python plotField.py cases/hybridMesh latestTime p
```


```bash
cd example
python plotField.py cases/hybridMesh latestTime U
```

|
sciPyFoam
|
/sciPyFoam-0.4.1.tar.gz/sciPyFoam-0.4.1/readme.md
|
readme.md
|
| 0.628863 | 0.932453 |
.. sciPyFoam documentation master file, created by
sphinx-quickstart on Sat Apr 25 15:47:31 2020.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to sciPyFoam's documentation!
=====================================
.. toctree::
:maxdepth: 2
:caption: Contents:
sciPyFoam/modules.rst
sciPyFoam/postProcessing/modules.rst
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
|
sciPyFoam
|
/sciPyFoam-0.4.1.tar.gz/sciPyFoam-0.4.1/docs/source/index.rst
|
index.rst
|
| 0.59843 | 0.232419 |
Copy of the original Sphinx Read The Docs theme from
https://github.com/readthedocs/sphinx_rtd_theme version 0.4.3.
**************************
Read the Docs Sphinx Theme
**************************
.. image:: https://img.shields.io/pypi/v/sphinx_rtd_theme.svg
:target: https://pypi.python.org/pypi/sphinx_rtd_theme
:alt: Pypi Version
.. image:: https://travis-ci.org/readthedocs/sphinx_rtd_theme.svg?branch=master
:target: https://travis-ci.org/readthedocs/sphinx_rtd_theme
:alt: Build Status
.. image:: https://img.shields.io/pypi/l/sphinx_rtd_theme.svg
:target: https://pypi.python.org/pypi/sphinx_rtd_theme/
:alt: License
.. image:: https://readthedocs.org/projects/sphinx-rtd-theme/badge/?version=latest
:target: http://sphinx-rtd-theme.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
This Sphinx_ theme was designed to provide a great reader experience for
documentation users on both desktop and mobile devices. This theme is used
primarily on `Read the Docs`_ but can work with any Sphinx project. You can find
a working demo of the theme in the `theme documentation`_
.. _Sphinx: http://www.sphinx-doc.org
.. _Read the Docs: http://www.readthedocs.org
.. _theme documentation: https://sphinx-rtd-theme.readthedocs.io/en/latest/
Installation
============
This theme is distributed on PyPI_ and can be installed with ``pip``:
.. code:: console
pip install sphinx-rtd-theme
To use the theme in your Sphinx project, you will need to add the following to
your ``conf.py`` file:
.. code:: python
import sphinx_rtd_theme
extensions = [
...
"sphinx_rtd_theme",
]
html_theme = "sphinx_rtd_theme"
For more information read the full documentation on `installing the theme`_
.. _PyPI: https://pypi.python.org/pypi/sphinx_rtd_theme
.. _installing the theme: https://sphinx-rtd-theme.readthedocs.io/en/latest/installing.html
Configuration
=============
This theme is highly customizable on both the page level and on a global level.
To see all the possible configuration options, read the documentation on
`configuring the theme`_.
.. _configuring the theme: https://sphinx-rtd-theme.readthedocs.io/en/latest/configuring.html
Contributing
============
If you would like to help modify or translate the theme, you'll find more
information on contributing in our `contributing guide`_.
.. _contributing guide: https://sphinx-rtd-theme.readthedocs.io/en/latest/contributing.html
|
sciPyFoam
|
/sciPyFoam-0.4.1.tar.gz/sciPyFoam-0.4.1/docs/source/themes/rtd/README.md
|
README.md
|
| 0.781956 | 0.349255 |
!function(e){var t={};function n(r){if(t[r])return t[r].exports;var i=t[r]={i:r,l:!1,exports:{}};return e[r].call(i.exports,i,i.exports,n),i.l=!0,i.exports}n.m=e,n.c=t,n.d=function(e,t,r){n.o(e,t)||Object.defineProperty(e,t,{enumerable:!0,get:r})},n.r=function(e){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},n.t=function(e,t){if(1&t&&(e=n(e)),8&t)return e;if(4&t&&"object"==typeof e&&e&&e.__esModule)return e;var r=Object.create(null);if(n.r(r),Object.defineProperty(r,"default",{enumerable:!0,value:e}),2&t&&"string"!=typeof e)for(var i in e)n.d(r,i,function(t){return e[t]}.bind(null,i));return r},n.n=function(e){var t=e&&e.__esModule?function(){return e.default}:function(){return e};return n.d(t,"a",t),t},n.o=function(e,t){return Object.prototype.hasOwnProperty.call(e,t)},n.p="",n(n.s=0)}([function(e,t,n){"use strict";n.r(t);n(1),n(2),n(3)},function(e,t,n){},function(e,t,n){},function(e,t,n){(function(){var t="undefined"!=typeof window?window.jQuery:n(4);e.exports.ThemeNav={navBar:null,win:null,winScroll:!1,winResize:!1,linkScroll:!1,winPosition:0,winHeight:null,docHeight:null,isRunning:!1,enable:function(e){var n=this;void 0===e&&(e=!0),n.isRunning||(n.isRunning=!0,t(function(t){n.init(t),n.reset(),n.win.on("hashchange",n.reset),e&&n.win.on("scroll",function(){n.linkScroll||n.winScroll||(n.winScroll=!0,requestAnimationFrame(function(){n.onScroll()}))}),n.win.on("resize",function(){n.winResize||(n.winResize=!0,requestAnimationFrame(function(){n.onResize()}))}),n.onResize()}))},enableSticky:function(){this.enable(!0)},init:function(e){e(document);var t=this;this.navBar=e("div.wy-side-scroll:first"),this.win=e(window),e(document).on("click","[data-toggle='wy-nav-top']",function(){e("[data-toggle='wy-nav-shift']").toggleClass("shift"),e("[data-toggle='rst-versions']").toggleClass("shift")}).on("click",".wy-menu-vertical .current ul li a",function(){var n=e(this);e("[data-toggle='wy-nav-shift']").removeClass("shift"),e("[data-toggle='rst-versions']").toggleClass("shift"),t.toggleCurrent(n),t.hashChange()}).on("click","[data-toggle='rst-current-version']",function(){e("[data-toggle='rst-versions']").toggleClass("shift-up")}),e("table.docutils:not(.field-list,.footnote,.citation)").wrap("<div class='wy-table-responsive'></div>"),e("table.docutils.footnote").wrap("<div class='wy-table-responsive footnote'></div>"),e("table.docutils.citation").wrap("<div class='wy-table-responsive citation'></div>"),e(".wy-menu-vertical ul").not(".simple").siblings("a").each(function(){var n=e(this);expand=e('<span class="toctree-expand"></span>'),expand.on("click",function(e){return t.toggleCurrent(n),e.stopPropagation(),!1}),n.prepend(expand)})},reset:function(){var e=encodeURI(window.location.hash)||"#";try{var t=$(".wy-menu-vertical"),n=t.find('[href="'+e+'"]');if(0===n.length){var r=$('.document [id="'+e.substring(1)+'"]').closest("div.section");0===(n=t.find('[href="#'+r.attr("id")+'"]')).length&&(n=t.find('[href="#"]'))}n.length>0&&($(".wy-menu-vertical .current").removeClass("current"),n.addClass("current"),n.closest("li.toctree-l1").addClass("current"),n.closest("li.toctree-l1").parent().addClass("current"),n.closest("li.toctree-l1").addClass("current"),n.closest("li.toctree-l2").addClass("current"),n.closest("li.toctree-l3").addClass("current"),n.closest("li.toctree-l4").addClass("current"),n[0].scrollIntoView())}catch(e){console.log("Error expanding nav for 
anchor",e)}},onScroll:function(){this.winScroll=!1;var e=this.win.scrollTop(),t=e+this.winHeight,n=this.navBar.scrollTop()+(e-this.winPosition);e<0||t>this.docHeight||(this.navBar.scrollTop(n),this.winPosition=e)},onResize:function(){this.winResize=!1,this.winHeight=this.win.height(),this.docHeight=$(document).height()},hashChange:function(){this.linkScroll=!0,this.win.one("hashchange",function(){this.linkScroll=!1})},toggleCurrent:function(e){var t=e.closest("li");t.siblings("li.current").removeClass("current"),t.siblings().find("li.current").removeClass("current"),t.find("> ul li.current").removeClass("current"),t.toggleClass("current")}},"undefined"!=typeof window&&(window.SphinxRtdTheme={Navigation:e.exports.ThemeNav,StickyNav:e.exports.ThemeNav}),function(){for(var e=0,t=["ms","moz","webkit","o"],n=0;n<t.length&&!window.requestAnimationFrame;++n)window.requestAnimationFrame=window[t[n]+"RequestAnimationFrame"],window.cancelAnimationFrame=window[t[n]+"CancelAnimationFrame"]||window[t[n]+"CancelRequestAnimationFrame"];window.requestAnimationFrame||(window.requestAnimationFrame=function(t,n){var r=(new Date).getTime(),i=Math.max(0,16-(r-e)),o=window.setTimeout(function(){t(r+i)},i);return e=r+i,o}),window.cancelAnimationFrame||(window.cancelAnimationFrame=function(e){clearTimeout(e)})}()}).call(window)},function(e,t,n){var r;
/*!
* jQuery JavaScript Library v3.4.1
* https://jquery.com/
*
* Includes Sizzle.js
* https://sizzlejs.com/
*
* Copyright JS Foundation and other contributors
* Released under the MIT license
* https://jquery.org/license
*
* Date: 2019-05-01T21:04Z
*/
!function(t,n){"use strict";"object"==typeof e.exports?e.exports=t.document?n(t,!0):function(e){if(!e.document)throw new Error("jQuery requires a window with a document");return n(e)}:n(t)}("undefined"!=typeof window?window:this,function(n,i){"use strict";var o=[],a=n.document,s=Object.getPrototypeOf,u=o.slice,l=o.concat,c=o.push,f=o.indexOf,p={},d=p.toString,h=p.hasOwnProperty,g=h.toString,v=g.call(Object),m={},y=function(e){return"function"==typeof e&&"number"!=typeof e.nodeType},x=function(e){return null!=e&&e===e.window},b={type:!0,src:!0,nonce:!0,noModule:!0};function w(e,t,n){var r,i,o=(n=n||a).createElement("script");if(o.text=e,t)for(r in b)(i=t[r]||t.getAttribute&&t.getAttribute(r))&&o.setAttribute(r,i);n.head.appendChild(o).parentNode.removeChild(o)}function T(e){return null==e?e+"":"object"==typeof e||"function"==typeof e?p[d.call(e)]||"object":typeof e}var C=function(e,t){return new C.fn.init(e,t)},S=/^[\s\uFEFF\xA0]+|[\s\uFEFF\xA0]+$/g;function k(e){var t=!!e&&"length"in e&&e.length,n=T(e);return!y(e)&&!x(e)&&("array"===n||0===t||"number"==typeof t&&t>0&&t-1 in e)}C.fn=C.prototype={jquery:"3.4.1",constructor:C,length:0,toArray:function(){return u.call(this)},get:function(e){return null==e?u.call(this):e<0?this[e+this.length]:this[e]},pushStack:function(e){var t=C.merge(this.constructor(),e);return t.prevObject=this,t},each:function(e){return C.each(this,e)},map:function(e){return this.pushStack(C.map(this,function(t,n){return e.call(t,n,t)}))},slice:function(){return this.pushStack(u.apply(this,arguments))},first:function(){return this.eq(0)},last:function(){return this.eq(-1)},eq:function(e){var t=this.length,n=+e+(e<0?t:0);return this.pushStack(n>=0&&n<t?[this[n]]:[])},end:function(){return this.prevObject||this.constructor()},push:c,sort:o.sort,splice:o.splice},C.extend=C.fn.extend=function(){var e,t,n,r,i,o,a=arguments[0]||{},s=1,u=arguments.length,l=!1;for("boolean"==typeof a&&(l=a,a=arguments[s]||{},s++),"object"==typeof a||y(a)||(a={}),s===u&&(a=this,s--);s<u;s++)if(null!=(e=arguments[s]))for(t in e)r=e[t],"__proto__"!==t&&a!==r&&(l&&r&&(C.isPlainObject(r)||(i=Array.isArray(r)))?(n=a[t],o=i&&!Array.isArray(n)?[]:i||C.isPlainObject(n)?n:{},i=!1,a[t]=C.extend(l,o,r)):void 0!==r&&(a[t]=r));return a},C.extend({expando:"jQuery"+("3.4.1"+Math.random()).replace(/\D/g,""),isReady:!0,error:function(e){throw new Error(e)},noop:function(){},isPlainObject:function(e){var t,n;return!(!e||"[object Object]"!==d.call(e))&&(!(t=s(e))||"function"==typeof(n=h.call(t,"constructor")&&t.constructor)&&g.call(n)===v)},isEmptyObject:function(e){var t;for(t in e)return!1;return!0},globalEval:function(e,t){w(e,{nonce:t&&t.nonce})},each:function(e,t){var n,r=0;if(k(e))for(n=e.length;r<n&&!1!==t.call(e[r],r,e[r]);r++);else for(r in e)if(!1===t.call(e[r],r,e[r]))break;return e},trim:function(e){return null==e?"":(e+"").replace(S,"")},makeArray:function(e,t){var n=t||[];return null!=e&&(k(Object(e))?C.merge(n,"string"==typeof e?[e]:e):c.call(n,e)),n},inArray:function(e,t,n){return null==t?-1:f.call(t,e,n)},merge:function(e,t){for(var n=+t.length,r=0,i=e.length;r<n;r++)e[i++]=t[r];return e.length=i,e},grep:function(e,t,n){for(var r=[],i=0,o=e.length,a=!n;i<o;i++)!t(e[i],i)!==a&&r.push(e[i]);return r},map:function(e,t,n){var r,i,o=0,a=[];if(k(e))for(r=e.length;o<r;o++)null!=(i=t(e[o],o,n))&&a.push(i);else for(o in e)null!=(i=t(e[o],o,n))&&a.push(i);return l.apply([],a)},guid:1,support:m}),"function"==typeof Symbol&&(C.fn[Symbol.iterator]=o[Symbol.iterator]),C.each("Boolean Number String Function Array 
Date RegExp Object Error Symbol".split(" "),function(e,t){p["[object "+t+"]"]=t.toLowerCase()});var E=
/*!
* Sizzle CSS Selector Engine v2.3.4
* https://sizzlejs.com/
*
* Copyright JS Foundation and other contributors
* Released under the MIT license
* https://js.foundation/
*
* Date: 2019-04-08
*/
function(e){var t,n,r,i,o,a,s,u,l,c,f,p,d,h,g,v,m,y,x,b="sizzle"+1*new Date,w=e.document,T=0,C=0,S=ue(),k=ue(),E=ue(),A=ue(),N=function(e,t){return e===t&&(f=!0),0},j={}.hasOwnProperty,D=[],q=D.pop,L=D.push,H=D.push,O=D.slice,R=function(e,t){for(var n=0,r=e.length;n<r;n++)if(e[n]===t)return n;return-1},P="checked|selected|async|autofocus|autoplay|controls|defer|disabled|hidden|ismap|loop|multiple|open|readonly|required|scoped",M="[\\x20\\t\\r\\n\\f]",I="(?:\\\\.|[\\w-]|[^\0-\\xa0])+",F="\\["+M+"*("+I+")(?:"+M+"*([*^$|!~]?=)"+M+"*(?:'((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\"|("+I+"))|)"+M+"*\\]",$=":("+I+")(?:\\((('((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\")|((?:\\\\.|[^\\\\()[\\]]|"+F+")*)|.*)\\)|)",W=new RegExp(M+"+","g"),B=new RegExp("^"+M+"+|((?:^|[^\\\\])(?:\\\\.)*)"+M+"+$","g"),_=new RegExp("^"+M+"*,"+M+"*"),z=new RegExp("^"+M+"*([>+~]|"+M+")"+M+"*"),U=new RegExp(M+"|>"),X=new RegExp($),V=new RegExp("^"+I+"$"),G={ID:new RegExp("^#("+I+")"),CLASS:new RegExp("^\\.("+I+")"),TAG:new RegExp("^("+I+"|[*])"),ATTR:new RegExp("^"+F),PSEUDO:new RegExp("^"+$),CHILD:new RegExp("^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\("+M+"*(even|odd|(([+-]|)(\\d*)n|)"+M+"*(?:([+-]|)"+M+"*(\\d+)|))"+M+"*\\)|)","i"),bool:new RegExp("^(?:"+P+")$","i"),needsContext:new RegExp("^"+M+"*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\("+M+"*((?:-\\d)?\\d*)"+M+"*\\)|)(?=[^-]|$)","i")},Y=/HTML$/i,Q=/^(?:input|select|textarea|button)$/i,J=/^h\d$/i,K=/^[^{]+\{\s*\[native \w/,Z=/^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/,ee=/[+~]/,te=new RegExp("\\\\([\\da-f]{1,6}"+M+"?|("+M+")|.)","ig"),ne=function(e,t,n){var r="0x"+t-65536;return r!=r||n?t:r<0?String.fromCharCode(r+65536):String.fromCharCode(r>>10|55296,1023&r|56320)},re=/([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g,ie=function(e,t){return t?"\0"===e?"�":e.slice(0,-1)+"\\"+e.charCodeAt(e.length-1).toString(16)+" ":"\\"+e},oe=function(){p()},ae=be(function(e){return!0===e.disabled&&"fieldset"===e.nodeName.toLowerCase()},{dir:"parentNode",next:"legend"});try{H.apply(D=O.call(w.childNodes),w.childNodes),D[w.childNodes.length].nodeType}catch(e){H={apply:D.length?function(e,t){L.apply(e,O.call(t))}:function(e,t){for(var n=e.length,r=0;e[n++]=t[r++];);e.length=n-1}}}function se(e,t,r,i){var o,s,l,c,f,h,m,y=t&&t.ownerDocument,T=t?t.nodeType:9;if(r=r||[],"string"!=typeof e||!e||1!==T&&9!==T&&11!==T)return r;if(!i&&((t?t.ownerDocument||t:w)!==d&&p(t),t=t||d,g)){if(11!==T&&(f=Z.exec(e)))if(o=f[1]){if(9===T){if(!(l=t.getElementById(o)))return r;if(l.id===o)return r.push(l),r}else if(y&&(l=y.getElementById(o))&&x(t,l)&&l.id===o)return r.push(l),r}else{if(f[2])return H.apply(r,t.getElementsByTagName(e)),r;if((o=f[3])&&n.getElementsByClassName&&t.getElementsByClassName)return H.apply(r,t.getElementsByClassName(o)),r}if(n.qsa&&!A[e+" "]&&(!v||!v.test(e))&&(1!==T||"object"!==t.nodeName.toLowerCase())){if(m=e,y=t,1===T&&U.test(e)){for((c=t.getAttribute("id"))?c=c.replace(re,ie):t.setAttribute("id",c=b),s=(h=a(e)).length;s--;)h[s]="#"+c+" "+xe(h[s]);m=h.join(","),y=ee.test(e)&&me(t.parentNode)||t}try{return H.apply(r,y.querySelectorAll(m)),r}catch(t){A(e,!0)}finally{c===b&&t.removeAttribute("id")}}}return u(e.replace(B,"$1"),t,r,i)}function ue(){var e=[];return function t(n,i){return e.push(n+" ")>r.cacheLength&&delete t[e.shift()],t[n+" "]=i}}function le(e){return e[b]=!0,e}function ce(e){var t=d.createElement("fieldset");try{return!!e(t)}catch(e){return!1}finally{t.parentNode&&t.parentNode.removeChild(t),t=null}}function fe(e,t){for(var 
n=e.split("|"),i=n.length;i--;)r.attrHandle[n[i]]=t}function pe(e,t){var n=t&&e,r=n&&1===e.nodeType&&1===t.nodeType&&e.sourceIndex-t.sourceIndex;if(r)return r;if(n)for(;n=n.nextSibling;)if(n===t)return-1;return e?1:-1}function de(e){return function(t){return"input"===t.nodeName.toLowerCase()&&t.type===e}}function he(e){return function(t){var n=t.nodeName.toLowerCase();return("input"===n||"button"===n)&&t.type===e}}function ge(e){return function(t){return"form"in t?t.parentNode&&!1===t.disabled?"label"in t?"label"in t.parentNode?t.parentNode.disabled===e:t.disabled===e:t.isDisabled===e||t.isDisabled!==!e&&ae(t)===e:t.disabled===e:"label"in t&&t.disabled===e}}function ve(e){return le(function(t){return t=+t,le(function(n,r){for(var i,o=e([],n.length,t),a=o.length;a--;)n[i=o[a]]&&(n[i]=!(r[i]=n[i]))})})}function me(e){return e&&void 0!==e.getElementsByTagName&&e}for(t in n=se.support={},o=se.isXML=function(e){var t=e.namespaceURI,n=(e.ownerDocument||e).documentElement;return!Y.test(t||n&&n.nodeName||"HTML")},p=se.setDocument=function(e){var t,i,a=e?e.ownerDocument||e:w;return a!==d&&9===a.nodeType&&a.documentElement?(h=(d=a).documentElement,g=!o(d),w!==d&&(i=d.defaultView)&&i.top!==i&&(i.addEventListener?i.addEventListener("unload",oe,!1):i.attachEvent&&i.attachEvent("onunload",oe)),n.attributes=ce(function(e){return e.className="i",!e.getAttribute("className")}),n.getElementsByTagName=ce(function(e){return e.appendChild(d.createComment("")),!e.getElementsByTagName("*").length}),n.getElementsByClassName=K.test(d.getElementsByClassName),n.getById=ce(function(e){return h.appendChild(e).id=b,!d.getElementsByName||!d.getElementsByName(b).length}),n.getById?(r.filter.ID=function(e){var t=e.replace(te,ne);return function(e){return e.getAttribute("id")===t}},r.find.ID=function(e,t){if(void 0!==t.getElementById&&g){var n=t.getElementById(e);return n?[n]:[]}}):(r.filter.ID=function(e){var t=e.replace(te,ne);return function(e){var n=void 0!==e.getAttributeNode&&e.getAttributeNode("id");return n&&n.value===t}},r.find.ID=function(e,t){if(void 0!==t.getElementById&&g){var n,r,i,o=t.getElementById(e);if(o){if((n=o.getAttributeNode("id"))&&n.value===e)return[o];for(i=t.getElementsByName(e),r=0;o=i[r++];)if((n=o.getAttributeNode("id"))&&n.value===e)return[o]}return[]}}),r.find.TAG=n.getElementsByTagName?function(e,t){return void 0!==t.getElementsByTagName?t.getElementsByTagName(e):n.qsa?t.querySelectorAll(e):void 0}:function(e,t){var n,r=[],i=0,o=t.getElementsByTagName(e);if("*"===e){for(;n=o[i++];)1===n.nodeType&&r.push(n);return r}return o},r.find.CLASS=n.getElementsByClassName&&function(e,t){if(void 0!==t.getElementsByClassName&&g)return t.getElementsByClassName(e)},m=[],v=[],(n.qsa=K.test(d.querySelectorAll))&&(ce(function(e){h.appendChild(e).innerHTML="<a id='"+b+"'></a><select id='"+b+"-\r\\' msallowcapture=''><option selected=''></option></select>",e.querySelectorAll("[msallowcapture^='']").length&&v.push("[*^$]="+M+"*(?:''|\"\")"),e.querySelectorAll("[selected]").length||v.push("\\["+M+"*(?:value|"+P+")"),e.querySelectorAll("[id~="+b+"-]").length||v.push("~="),e.querySelectorAll(":checked").length||v.push(":checked"),e.querySelectorAll("a#"+b+"+*").length||v.push(".#.+[+~]")}),ce(function(e){e.innerHTML="<a href='' disabled='disabled'></a><select disabled='disabled'><option/></select>";var 
t=d.createElement("input");t.setAttribute("type","hidden"),e.appendChild(t).setAttribute("name","D"),e.querySelectorAll("[name=d]").length&&v.push("name"+M+"*[*^$|!~]?="),2!==e.querySelectorAll(":enabled").length&&v.push(":enabled",":disabled"),h.appendChild(e).disabled=!0,2!==e.querySelectorAll(":disabled").length&&v.push(":enabled",":disabled"),e.querySelectorAll("*,:x"),v.push(",.*:")})),(n.matchesSelector=K.test(y=h.matches||h.webkitMatchesSelector||h.mozMatchesSelector||h.oMatchesSelector||h.msMatchesSelector))&&ce(function(e){n.disconnectedMatch=y.call(e,"*"),y.call(e,"[s!='']:x"),m.push("!=",$)}),v=v.length&&new RegExp(v.join("|")),m=m.length&&new RegExp(m.join("|")),t=K.test(h.compareDocumentPosition),x=t||K.test(h.contains)?function(e,t){var n=9===e.nodeType?e.documentElement:e,r=t&&t.parentNode;return e===r||!(!r||1!==r.nodeType||!(n.contains?n.contains(r):e.compareDocumentPosition&&16&e.compareDocumentPosition(r)))}:function(e,t){if(t)for(;t=t.parentNode;)if(t===e)return!0;return!1},N=t?function(e,t){if(e===t)return f=!0,0;var r=!e.compareDocumentPosition-!t.compareDocumentPosition;return r||(1&(r=(e.ownerDocument||e)===(t.ownerDocument||t)?e.compareDocumentPosition(t):1)||!n.sortDetached&&t.compareDocumentPosition(e)===r?e===d||e.ownerDocument===w&&x(w,e)?-1:t===d||t.ownerDocument===w&&x(w,t)?1:c?R(c,e)-R(c,t):0:4&r?-1:1)}:function(e,t){if(e===t)return f=!0,0;var n,r=0,i=e.parentNode,o=t.parentNode,a=[e],s=[t];if(!i||!o)return e===d?-1:t===d?1:i?-1:o?1:c?R(c,e)-R(c,t):0;if(i===o)return pe(e,t);for(n=e;n=n.parentNode;)a.unshift(n);for(n=t;n=n.parentNode;)s.unshift(n);for(;a[r]===s[r];)r++;return r?pe(a[r],s[r]):a[r]===w?-1:s[r]===w?1:0},d):d},se.matches=function(e,t){return se(e,null,null,t)},se.matchesSelector=function(e,t){if((e.ownerDocument||e)!==d&&p(e),n.matchesSelector&&g&&!A[t+" "]&&(!m||!m.test(t))&&(!v||!v.test(t)))try{var r=y.call(e,t);if(r||n.disconnectedMatch||e.document&&11!==e.document.nodeType)return r}catch(e){A(t,!0)}return se(t,d,null,[e]).length>0},se.contains=function(e,t){return(e.ownerDocument||e)!==d&&p(e),x(e,t)},se.attr=function(e,t){(e.ownerDocument||e)!==d&&p(e);var i=r.attrHandle[t.toLowerCase()],o=i&&j.call(r.attrHandle,t.toLowerCase())?i(e,t,!g):void 0;return void 0!==o?o:n.attributes||!g?e.getAttribute(t):(o=e.getAttributeNode(t))&&o.specified?o.value:null},se.escape=function(e){return(e+"").replace(re,ie)},se.error=function(e){throw new Error("Syntax error, unrecognized expression: "+e)},se.uniqueSort=function(e){var t,r=[],i=0,o=0;if(f=!n.detectDuplicates,c=!n.sortStable&&e.slice(0),e.sort(N),f){for(;t=e[o++];)t===e[o]&&(i=r.push(o));for(;i--;)e.splice(r[i],1)}return c=null,e},i=se.getText=function(e){var t,n="",r=0,o=e.nodeType;if(o){if(1===o||9===o||11===o){if("string"==typeof e.textContent)return e.textContent;for(e=e.firstChild;e;e=e.nextSibling)n+=i(e)}else if(3===o||4===o)return e.nodeValue}else for(;t=e[r++];)n+=i(t);return n},(r=se.selectors={cacheLength:50,createPseudo:le,match:G,attrHandle:{},find:{},relative:{">":{dir:"parentNode",first:!0}," ":{dir:"parentNode"},"+":{dir:"previousSibling",first:!0},"~":{dir:"previousSibling"}},preFilter:{ATTR:function(e){return e[1]=e[1].replace(te,ne),e[3]=(e[3]||e[4]||e[5]||"").replace(te,ne),"~="===e[2]&&(e[3]=" "+e[3]+" "),e.slice(0,4)},CHILD:function(e){return e[1]=e[1].toLowerCase(),"nth"===e[1].slice(0,3)?(e[3]||se.error(e[0]),e[4]=+(e[4]?e[5]+(e[6]||1):2*("even"===e[3]||"odd"===e[3])),e[5]=+(e[7]+e[8]||"odd"===e[3])):e[3]&&se.error(e[0]),e},PSEUDO:function(e){var t,n=!e[6]&&e[2];return 
G.CHILD.test(e[0])?null:(e[3]?e[2]=e[4]||e[5]||"":n&&X.test(n)&&(t=a(n,!0))&&(t=n.indexOf(")",n.length-t)-n.length)&&(e[0]=e[0].slice(0,t),e[2]=n.slice(0,t)),e.slice(0,3))}},filter:{TAG:function(e){var t=e.replace(te,ne).toLowerCase();return"*"===e?function(){return!0}:function(e){return e.nodeName&&e.nodeName.toLowerCase()===t}},CLASS:function(e){var t=S[e+" "];return t||(t=new RegExp("(^|"+M+")"+e+"("+M+"|$)"))&&S(e,function(e){return t.test("string"==typeof e.className&&e.className||void 0!==e.getAttribute&&e.getAttribute("class")||"")})},ATTR:function(e,t,n){return function(r){var i=se.attr(r,e);return null==i?"!="===t:!t||(i+="","="===t?i===n:"!="===t?i!==n:"^="===t?n&&0===i.indexOf(n):"*="===t?n&&i.indexOf(n)>-1:"$="===t?n&&i.slice(-n.length)===n:"~="===t?(" "+i.replace(W," ")+" ").indexOf(n)>-1:"|="===t&&(i===n||i.slice(0,n.length+1)===n+"-"))}},CHILD:function(e,t,n,r,i){var o="nth"!==e.slice(0,3),a="last"!==e.slice(-4),s="of-type"===t;return 1===r&&0===i?function(e){return!!e.parentNode}:function(t,n,u){var l,c,f,p,d,h,g=o!==a?"nextSibling":"previousSibling",v=t.parentNode,m=s&&t.nodeName.toLowerCase(),y=!u&&!s,x=!1;if(v){if(o){for(;g;){for(p=t;p=p[g];)if(s?p.nodeName.toLowerCase()===m:1===p.nodeType)return!1;h=g="only"===e&&!h&&"nextSibling"}return!0}if(h=[a?v.firstChild:v.lastChild],a&&y){for(x=(d=(l=(c=(f=(p=v)[b]||(p[b]={}))[p.uniqueID]||(f[p.uniqueID]={}))[e]||[])[0]===T&&l[1])&&l[2],p=d&&v.childNodes[d];p=++d&&p&&p[g]||(x=d=0)||h.pop();)if(1===p.nodeType&&++x&&p===t){c[e]=[T,d,x];break}}else if(y&&(x=d=(l=(c=(f=(p=t)[b]||(p[b]={}))[p.uniqueID]||(f[p.uniqueID]={}))[e]||[])[0]===T&&l[1]),!1===x)for(;(p=++d&&p&&p[g]||(x=d=0)||h.pop())&&((s?p.nodeName.toLowerCase()!==m:1!==p.nodeType)||!++x||(y&&((c=(f=p[b]||(p[b]={}))[p.uniqueID]||(f[p.uniqueID]={}))[e]=[T,x]),p!==t)););return(x-=i)===r||x%r==0&&x/r>=0}}},PSEUDO:function(e,t){var n,i=r.pseudos[e]||r.setFilters[e.toLowerCase()]||se.error("unsupported pseudo: "+e);return i[b]?i(t):i.length>1?(n=[e,e,"",t],r.setFilters.hasOwnProperty(e.toLowerCase())?le(function(e,n){for(var r,o=i(e,t),a=o.length;a--;)e[r=R(e,o[a])]=!(n[r]=o[a])}):function(e){return i(e,0,n)}):i}},pseudos:{not:le(function(e){var t=[],n=[],r=s(e.replace(B,"$1"));return r[b]?le(function(e,t,n,i){for(var o,a=r(e,null,i,[]),s=e.length;s--;)(o=a[s])&&(e[s]=!(t[s]=o))}):function(e,i,o){return t[0]=e,r(t,null,o,n),t[0]=null,!n.pop()}}),has:le(function(e){return function(t){return se(e,t).length>0}}),contains:le(function(e){return e=e.replace(te,ne),function(t){return(t.textContent||i(t)).indexOf(e)>-1}}),lang:le(function(e){return V.test(e||"")||se.error("unsupported lang: "+e),e=e.replace(te,ne).toLowerCase(),function(t){var n;do{if(n=g?t.lang:t.getAttribute("xml:lang")||t.getAttribute("lang"))return(n=n.toLowerCase())===e||0===n.indexOf(e+"-")}while((t=t.parentNode)&&1===t.nodeType);return!1}}),target:function(t){var n=e.location&&e.location.hash;return n&&n.slice(1)===t.id},root:function(e){return e===h},focus:function(e){return e===d.activeElement&&(!d.hasFocus||d.hasFocus())&&!!(e.type||e.href||~e.tabIndex)},enabled:ge(!1),disabled:ge(!0),checked:function(e){var t=e.nodeName.toLowerCase();return"input"===t&&!!e.checked||"option"===t&&!!e.selected},selected:function(e){return e.parentNode&&e.parentNode.selectedIndex,!0===e.selected},empty:function(e){for(e=e.firstChild;e;e=e.nextSibling)if(e.nodeType<6)return!1;return!0},parent:function(e){return!r.pseudos.empty(e)},header:function(e){return J.test(e.nodeName)},input:function(e){return 
Q.test(e.nodeName)},button:function(e){var t=e.nodeName.toLowerCase();return"input"===t&&"button"===e.type||"button"===t},text:function(e){var t;return"input"===e.nodeName.toLowerCase()&&"text"===e.type&&(null==(t=e.getAttribute("type"))||"text"===t.toLowerCase())},first:ve(function(){return[0]}),last:ve(function(e,t){return[t-1]}),eq:ve(function(e,t,n){return[n<0?n+t:n]}),even:ve(function(e,t){for(var n=0;n<t;n+=2)e.push(n);return e}),odd:ve(function(e,t){for(var n=1;n<t;n+=2)e.push(n);return e}),lt:ve(function(e,t,n){for(var r=n<0?n+t:n>t?t:n;--r>=0;)e.push(r);return e}),gt:ve(function(e,t,n){for(var r=n<0?n+t:n;++r<t;)e.push(r);return e})}}).pseudos.nth=r.pseudos.eq,{radio:!0,checkbox:!0,file:!0,password:!0,image:!0})r.pseudos[t]=de(t);for(t in{submit:!0,reset:!0})r.pseudos[t]=he(t);function ye(){}function xe(e){for(var t=0,n=e.length,r="";t<n;t++)r+=e[t].value;return r}function be(e,t,n){var r=t.dir,i=t.next,o=i||r,a=n&&"parentNode"===o,s=C++;return t.first?function(t,n,i){for(;t=t[r];)if(1===t.nodeType||a)return e(t,n,i);return!1}:function(t,n,u){var l,c,f,p=[T,s];if(u){for(;t=t[r];)if((1===t.nodeType||a)&&e(t,n,u))return!0}else for(;t=t[r];)if(1===t.nodeType||a)if(c=(f=t[b]||(t[b]={}))[t.uniqueID]||(f[t.uniqueID]={}),i&&i===t.nodeName.toLowerCase())t=t[r]||t;else{if((l=c[o])&&l[0]===T&&l[1]===s)return p[2]=l[2];if(c[o]=p,p[2]=e(t,n,u))return!0}return!1}}function we(e){return e.length>1?function(t,n,r){for(var i=e.length;i--;)if(!e[i](t,n,r))return!1;return!0}:e[0]}function Te(e,t,n,r,i){for(var o,a=[],s=0,u=e.length,l=null!=t;s<u;s++)(o=e[s])&&(n&&!n(o,r,i)||(a.push(o),l&&t.push(s)));return a}function Ce(e,t,n,r,i,o){return r&&!r[b]&&(r=Ce(r)),i&&!i[b]&&(i=Ce(i,o)),le(function(o,a,s,u){var l,c,f,p=[],d=[],h=a.length,g=o||function(e,t,n){for(var r=0,i=t.length;r<i;r++)se(e,t[r],n);return n}(t||"*",s.nodeType?[s]:s,[]),v=!e||!o&&t?g:Te(g,p,e,s,u),m=n?i||(o?e:h||r)?[]:a:v;if(n&&n(v,m,s,u),r)for(l=Te(m,d),r(l,[],s,u),c=l.length;c--;)(f=l[c])&&(m[d[c]]=!(v[d[c]]=f));if(o){if(i||e){if(i){for(l=[],c=m.length;c--;)(f=m[c])&&l.push(v[c]=f);i(null,m=[],l,u)}for(c=m.length;c--;)(f=m[c])&&(l=i?R(o,f):p[c])>-1&&(o[l]=!(a[l]=f))}}else m=Te(m===a?m.splice(h,m.length):m),i?i(null,a,m,u):H.apply(a,m)})}function Se(e){for(var t,n,i,o=e.length,a=r.relative[e[0].type],s=a||r.relative[" "],u=a?1:0,c=be(function(e){return e===t},s,!0),f=be(function(e){return R(t,e)>-1},s,!0),p=[function(e,n,r){var i=!a&&(r||n!==l)||((t=n).nodeType?c(e,n,r):f(e,n,r));return t=null,i}];u<o;u++)if(n=r.relative[e[u].type])p=[be(we(p),n)];else{if((n=r.filter[e[u].type].apply(null,e[u].matches))[b]){for(i=++u;i<o&&!r.relative[e[i].type];i++);return Ce(u>1&&we(p),u>1&&xe(e.slice(0,u-1).concat({value:" "===e[u-2].type?"*":""})).replace(B,"$1"),n,u<i&&Se(e.slice(u,i)),i<o&&Se(e=e.slice(i)),i<o&&xe(e))}p.push(n)}return we(p)}return ye.prototype=r.filters=r.pseudos,r.setFilters=new ye,a=se.tokenize=function(e,t){var n,i,o,a,s,u,l,c=k[e+" "];if(c)return t?0:c.slice(0);for(s=e,u=[],l=r.preFilter;s;){for(a in n&&!(i=_.exec(s))||(i&&(s=s.slice(i[0].length)||s),u.push(o=[])),n=!1,(i=z.exec(s))&&(n=i.shift(),o.push({value:n,type:i[0].replace(B," ")}),s=s.slice(n.length)),r.filter)!(i=G[a].exec(s))||l[a]&&!(i=l[a](i))||(n=i.shift(),o.push({value:n,type:a,matches:i}),s=s.slice(n.length));if(!n)break}return t?s.length:s?se.error(e):k(e,u).slice(0)},s=se.compile=function(e,t){var n,i=[],o=[],s=E[e+" "];if(!s){for(t||(t=a(e)),n=t.length;n--;)(s=Se(t[n]))[b]?i.push(s):o.push(s);(s=E(e,function(e,t){var 
n=t.length>0,i=e.length>0,o=function(o,a,s,u,c){var f,h,v,m=0,y="0",x=o&&[],b=[],w=l,C=o||i&&r.find.TAG("*",c),S=T+=null==w?1:Math.random()||.1,k=C.length;for(c&&(l=a===d||a||c);y!==k&&null!=(f=C[y]);y++){if(i&&f){for(h=0,a||f.ownerDocument===d||(p(f),s=!g);v=e[h++];)if(v(f,a||d,s)){u.push(f);break}c&&(T=S)}n&&((f=!v&&f)&&m--,o&&x.push(f))}if(m+=y,n&&y!==m){for(h=0;v=t[h++];)v(x,b,a,s);if(o){if(m>0)for(;y--;)x[y]||b[y]||(b[y]=q.call(u));b=Te(b)}H.apply(u,b),c&&!o&&b.length>0&&m+t.length>1&&se.uniqueSort(u)}return c&&(T=S,l=w),x};return n?le(o):o}(o,i))).selector=e}return s},u=se.select=function(e,t,n,i){var o,u,l,c,f,p="function"==typeof e&&e,d=!i&&a(e=p.selector||e);if(n=n||[],1===d.length){if((u=d[0]=d[0].slice(0)).length>2&&"ID"===(l=u[0]).type&&9===t.nodeType&&g&&r.relative[u[1].type]){if(!(t=(r.find.ID(l.matches[0].replace(te,ne),t)||[])[0]))return n;p&&(t=t.parentNode),e=e.slice(u.shift().value.length)}for(o=G.needsContext.test(e)?0:u.length;o--&&(l=u[o],!r.relative[c=l.type]);)if((f=r.find[c])&&(i=f(l.matches[0].replace(te,ne),ee.test(u[0].type)&&me(t.parentNode)||t))){if(u.splice(o,1),!(e=i.length&&xe(u)))return H.apply(n,i),n;break}}return(p||s(e,d))(i,t,!g,n,!t||ee.test(e)&&me(t.parentNode)||t),n},n.sortStable=b.split("").sort(N).join("")===b,n.detectDuplicates=!!f,p(),n.sortDetached=ce(function(e){return 1&e.compareDocumentPosition(d.createElement("fieldset"))}),ce(function(e){return e.innerHTML="<a href='#'></a>","#"===e.firstChild.getAttribute("href")})||fe("type|href|height|width",function(e,t,n){if(!n)return e.getAttribute(t,"type"===t.toLowerCase()?1:2)}),n.attributes&&ce(function(e){return e.innerHTML="<input/>",e.firstChild.setAttribute("value",""),""===e.firstChild.getAttribute("value")})||fe("value",function(e,t,n){if(!n&&"input"===e.nodeName.toLowerCase())return e.defaultValue}),ce(function(e){return null==e.getAttribute("disabled")})||fe(P,function(e,t,n){var r;if(!n)return!0===e[t]?t.toLowerCase():(r=e.getAttributeNode(t))&&r.specified?r.value:null}),se}(n);C.find=E,C.expr=E.selectors,C.expr[":"]=C.expr.pseudos,C.uniqueSort=C.unique=E.uniqueSort,C.text=E.getText,C.isXMLDoc=E.isXML,C.contains=E.contains,C.escapeSelector=E.escape;var A=function(e,t,n){for(var r=[],i=void 0!==n;(e=e[t])&&9!==e.nodeType;)if(1===e.nodeType){if(i&&C(e).is(n))break;r.push(e)}return r},N=function(e,t){for(var n=[];e;e=e.nextSibling)1===e.nodeType&&e!==t&&n.push(e);return n},j=C.expr.match.needsContext;function D(e,t){return e.nodeName&&e.nodeName.toLowerCase()===t.toLowerCase()}var q=/^<([a-z][^\/\0>:\x20\t\r\n\f]*)[\x20\t\r\n\f]*\/?>(?:<\/\1>|)$/i;function L(e,t,n){return y(t)?C.grep(e,function(e,r){return!!t.call(e,r,e)!==n}):t.nodeType?C.grep(e,function(e){return e===t!==n}):"string"!=typeof t?C.grep(e,function(e){return f.call(t,e)>-1!==n}):C.filter(t,e,n)}C.filter=function(e,t,n){var r=t[0];return n&&(e=":not("+e+")"),1===t.length&&1===r.nodeType?C.find.matchesSelector(r,e)?[r]:[]:C.find.matches(e,C.grep(t,function(e){return 1===e.nodeType}))},C.fn.extend({find:function(e){var t,n,r=this.length,i=this;if("string"!=typeof e)return this.pushStack(C(e).filter(function(){for(t=0;t<r;t++)if(C.contains(i[t],this))return!0}));for(n=this.pushStack([]),t=0;t<r;t++)C.find(e,i[t],n);return r>1?C.uniqueSort(n):n},filter:function(e){return this.pushStack(L(this,e||[],!1))},not:function(e){return this.pushStack(L(this,e||[],!0))},is:function(e){return!!L(this,"string"==typeof e&&j.test(e)?C(e):e||[],!1).length}});var H,O=/^(?:\s*(<[\w\W]+>)[^>]*|#([\w-]+))$/;(C.fn.init=function(e,t,n){var 
r,i;if(!e)return this;if(n=n||H,"string"==typeof e){if(!(r="<"===e[0]&&">"===e[e.length-1]&&e.length>=3?[null,e,null]:O.exec(e))||!r[1]&&t)return!t||t.jquery?(t||n).find(e):this.constructor(t).find(e);if(r[1]){if(t=t instanceof C?t[0]:t,C.merge(this,C.parseHTML(r[1],t&&t.nodeType?t.ownerDocument||t:a,!0)),q.test(r[1])&&C.isPlainObject(t))for(r in t)y(this[r])?this[r](t[r]):this.attr(r,t[r]);return this}return(i=a.getElementById(r[2]))&&(this[0]=i,this.length=1),this}return e.nodeType?(this[0]=e,this.length=1,this):y(e)?void 0!==n.ready?n.ready(e):e(C):C.makeArray(e,this)}).prototype=C.fn,H=C(a);var R=/^(?:parents|prev(?:Until|All))/,P={children:!0,contents:!0,next:!0,prev:!0};function M(e,t){for(;(e=e[t])&&1!==e.nodeType;);return e}C.fn.extend({has:function(e){var t=C(e,this),n=t.length;return this.filter(function(){for(var e=0;e<n;e++)if(C.contains(this,t[e]))return!0})},closest:function(e,t){var n,r=0,i=this.length,o=[],a="string"!=typeof e&&C(e);if(!j.test(e))for(;r<i;r++)for(n=this[r];n&&n!==t;n=n.parentNode)if(n.nodeType<11&&(a?a.index(n)>-1:1===n.nodeType&&C.find.matchesSelector(n,e))){o.push(n);break}return this.pushStack(o.length>1?C.uniqueSort(o):o)},index:function(e){return e?"string"==typeof e?f.call(C(e),this[0]):f.call(this,e.jquery?e[0]:e):this[0]&&this[0].parentNode?this.first().prevAll().length:-1},add:function(e,t){return this.pushStack(C.uniqueSort(C.merge(this.get(),C(e,t))))},addBack:function(e){return this.add(null==e?this.prevObject:this.prevObject.filter(e))}}),C.each({parent:function(e){var t=e.parentNode;return t&&11!==t.nodeType?t:null},parents:function(e){return A(e,"parentNode")},parentsUntil:function(e,t,n){return A(e,"parentNode",n)},next:function(e){return M(e,"nextSibling")},prev:function(e){return M(e,"previousSibling")},nextAll:function(e){return A(e,"nextSibling")},prevAll:function(e){return A(e,"previousSibling")},nextUntil:function(e,t,n){return A(e,"nextSibling",n)},prevUntil:function(e,t,n){return A(e,"previousSibling",n)},siblings:function(e){return N((e.parentNode||{}).firstChild,e)},children:function(e){return N(e.firstChild)},contents:function(e){return void 0!==e.contentDocument?e.contentDocument:(D(e,"template")&&(e=e.content||e),C.merge([],e.childNodes))}},function(e,t){C.fn[e]=function(n,r){var i=C.map(this,t,n);return"Until"!==e.slice(-5)&&(r=n),r&&"string"==typeof r&&(i=C.filter(r,i)),this.length>1&&(P[e]||C.uniqueSort(i),R.test(e)&&i.reverse()),this.pushStack(i)}});var I=/[^\x20\t\r\n\f]+/g;function F(e){return e}function $(e){throw e}function W(e,t,n,r){var i;try{e&&y(i=e.promise)?i.call(e).done(t).fail(n):e&&y(i=e.then)?i.call(e,t,n):t.apply(void 0,[e].slice(r))}catch(e){n.apply(void 0,[e])}}C.Callbacks=function(e){e="string"==typeof e?function(e){var t={};return C.each(e.match(I)||[],function(e,n){t[n]=!0}),t}(e):C.extend({},e);var t,n,r,i,o=[],a=[],s=-1,u=function(){for(i=i||e.once,r=t=!0;a.length;s=-1)for(n=a.shift();++s<o.length;)!1===o[s].apply(n[0],n[1])&&e.stopOnFalse&&(s=o.length,n=!1);e.memory||(n=!1),t=!1,i&&(o=n?[]:"")},l={add:function(){return o&&(n&&!t&&(s=o.length-1,a.push(n)),function t(n){C.each(n,function(n,r){y(r)?e.unique&&l.has(r)||o.push(r):r&&r.length&&"string"!==T(r)&&t(r)})}(arguments),n&&!t&&u()),this},remove:function(){return C.each(arguments,function(e,t){for(var n;(n=C.inArray(t,o,n))>-1;)o.splice(n,1),n<=s&&s--}),this},has:function(e){return e?C.inArray(e,o)>-1:o.length>0},empty:function(){return o&&(o=[]),this},disable:function(){return i=a=[],o=n="",this},disabled:function(){return!o},lock:function(){return 
i=a=[],n||t||(o=n=""),this},locked:function(){return!!i},fireWith:function(e,n){return i||(n=[e,(n=n||[]).slice?n.slice():n],a.push(n),t||u()),this},fire:function(){return l.fireWith(this,arguments),this},fired:function(){return!!r}};return l},C.extend({Deferred:function(e){var t=[["notify","progress",C.Callbacks("memory"),C.Callbacks("memory"),2],["resolve","done",C.Callbacks("once memory"),C.Callbacks("once memory"),0,"resolved"],["reject","fail",C.Callbacks("once memory"),C.Callbacks("once memory"),1,"rejected"]],r="pending",i={state:function(){return r},always:function(){return o.done(arguments).fail(arguments),this},catch:function(e){return i.then(null,e)},pipe:function(){var e=arguments;return C.Deferred(function(n){C.each(t,function(t,r){var i=y(e[r[4]])&&e[r[4]];o[r[1]](function(){var e=i&&i.apply(this,arguments);e&&y(e.promise)?e.promise().progress(n.notify).done(n.resolve).fail(n.reject):n[r[0]+"With"](this,i?[e]:arguments)})}),e=null}).promise()},then:function(e,r,i){var o=0;function a(e,t,r,i){return function(){var s=this,u=arguments,l=function(){var n,l;if(!(e<o)){if((n=r.apply(s,u))===t.promise())throw new TypeError("Thenable self-resolution");l=n&&("object"==typeof n||"function"==typeof n)&&n.then,y(l)?i?l.call(n,a(o,t,F,i),a(o,t,$,i)):(o++,l.call(n,a(o,t,F,i),a(o,t,$,i),a(o,t,F,t.notifyWith))):(r!==F&&(s=void 0,u=[n]),(i||t.resolveWith)(s,u))}},c=i?l:function(){try{l()}catch(n){C.Deferred.exceptionHook&&C.Deferred.exceptionHook(n,c.stackTrace),e+1>=o&&(r!==$&&(s=void 0,u=[n]),t.rejectWith(s,u))}};e?c():(C.Deferred.getStackHook&&(c.stackTrace=C.Deferred.getStackHook()),n.setTimeout(c))}}return C.Deferred(function(n){t[0][3].add(a(0,n,y(i)?i:F,n.notifyWith)),t[1][3].add(a(0,n,y(e)?e:F)),t[2][3].add(a(0,n,y(r)?r:$))}).promise()},promise:function(e){return null!=e?C.extend(e,i):i}},o={};return C.each(t,function(e,n){var a=n[2],s=n[5];i[n[1]]=a.add,s&&a.add(function(){r=s},t[3-e][2].disable,t[3-e][3].disable,t[0][2].lock,t[0][3].lock),a.add(n[3].fire),o[n[0]]=function(){return o[n[0]+"With"](this===o?void 0:this,arguments),this},o[n[0]+"With"]=a.fireWith}),i.promise(o),e&&e.call(o,o),o},when:function(e){var t=arguments.length,n=t,r=Array(n),i=u.call(arguments),o=C.Deferred(),a=function(e){return function(n){r[e]=this,i[e]=arguments.length>1?u.call(arguments):n,--t||o.resolveWith(r,i)}};if(t<=1&&(W(e,o.done(a(n)).resolve,o.reject,!t),"pending"===o.state()||y(i[n]&&i[n].then)))return o.then();for(;n--;)W(i[n],a(n),o.reject);return o.promise()}});var B=/^(Eval|Internal|Range|Reference|Syntax|Type|URI)Error$/;C.Deferred.exceptionHook=function(e,t){n.console&&n.console.warn&&e&&B.test(e.name)&&n.console.warn("jQuery.Deferred exception: "+e.message,e.stack,t)},C.readyException=function(e){n.setTimeout(function(){throw e})};var _=C.Deferred();function z(){a.removeEventListener("DOMContentLoaded",z),n.removeEventListener("load",z),C.ready()}C.fn.ready=function(e){return _.then(e).catch(function(e){C.readyException(e)}),this},C.extend({isReady:!1,readyWait:1,ready:function(e){(!0===e?--C.readyWait:C.isReady)||(C.isReady=!0,!0!==e&&--C.readyWait>0||_.resolveWith(a,[C]))}}),C.ready.then=_.then,"complete"===a.readyState||"loading"!==a.readyState&&!a.documentElement.doScroll?n.setTimeout(C.ready):(a.addEventListener("DOMContentLoaded",z),n.addEventListener("load",z));var U=function(e,t,n,r,i,o,a){var s=0,u=e.length,l=null==n;if("object"===T(n))for(s in i=!0,n)U(e,t,s,n[s],!0,o,a);else if(void 0!==r&&(i=!0,y(r)||(a=!0),l&&(a?(t.call(e,r),t=null):(l=t,t=function(e,t,n){return 
l.call(C(e),n)})),t))for(;s<u;s++)t(e[s],n,a?r:r.call(e[s],s,t(e[s],n)));return i?e:l?t.call(e):u?t(e[0],n):o},X=/^-ms-/,V=/-([a-z])/g;function G(e,t){return t.toUpperCase()}function Y(e){return e.replace(X,"ms-").replace(V,G)}var Q=function(e){return 1===e.nodeType||9===e.nodeType||!+e.nodeType};function J(){this.expando=C.expando+J.uid++}J.uid=1,J.prototype={cache:function(e){var t=e[this.expando];return t||(t={},Q(e)&&(e.nodeType?e[this.expando]=t:Object.defineProperty(e,this.expando,{value:t,configurable:!0}))),t},set:function(e,t,n){var r,i=this.cache(e);if("string"==typeof t)i[Y(t)]=n;else for(r in t)i[Y(r)]=t[r];return i},get:function(e,t){return void 0===t?this.cache(e):e[this.expando]&&e[this.expando][Y(t)]},access:function(e,t,n){return void 0===t||t&&"string"==typeof t&&void 0===n?this.get(e,t):(this.set(e,t,n),void 0!==n?n:t)},remove:function(e,t){var n,r=e[this.expando];if(void 0!==r){if(void 0!==t){n=(t=Array.isArray(t)?t.map(Y):(t=Y(t))in r?[t]:t.match(I)||[]).length;for(;n--;)delete r[t[n]]}(void 0===t||C.isEmptyObject(r))&&(e.nodeType?e[this.expando]=void 0:delete e[this.expando])}},hasData:function(e){var t=e[this.expando];return void 0!==t&&!C.isEmptyObject(t)}};var K=new J,Z=new J,ee=/^(?:\{[\w\W]*\}|\[[\w\W]*\])$/,te=/[A-Z]/g;function ne(e,t,n){var r;if(void 0===n&&1===e.nodeType)if(r="data-"+t.replace(te,"-$&").toLowerCase(),"string"==typeof(n=e.getAttribute(r))){try{n=function(e){return"true"===e||"false"!==e&&("null"===e?null:e===+e+""?+e:ee.test(e)?JSON.parse(e):e)}(n)}catch(e){}Z.set(e,t,n)}else n=void 0;return n}C.extend({hasData:function(e){return Z.hasData(e)||K.hasData(e)},data:function(e,t,n){return Z.access(e,t,n)},removeData:function(e,t){Z.remove(e,t)},_data:function(e,t,n){return K.access(e,t,n)},_removeData:function(e,t){K.remove(e,t)}}),C.fn.extend({data:function(e,t){var n,r,i,o=this[0],a=o&&o.attributes;if(void 0===e){if(this.length&&(i=Z.get(o),1===o.nodeType&&!K.get(o,"hasDataAttrs"))){for(n=a.length;n--;)a[n]&&0===(r=a[n].name).indexOf("data-")&&(r=Y(r.slice(5)),ne(o,r,i[r]));K.set(o,"hasDataAttrs",!0)}return i}return"object"==typeof e?this.each(function(){Z.set(this,e)}):U(this,function(t){var n;if(o&&void 0===t)return void 0!==(n=Z.get(o,e))?n:void 0!==(n=ne(o,e))?n:void 0;this.each(function(){Z.set(this,e,t)})},null,t,arguments.length>1,null,!0)},removeData:function(e){return this.each(function(){Z.remove(this,e)})}}),C.extend({queue:function(e,t,n){var r;if(e)return t=(t||"fx")+"queue",r=K.get(e,t),n&&(!r||Array.isArray(n)?r=K.access(e,t,C.makeArray(n)):r.push(n)),r||[]},dequeue:function(e,t){t=t||"fx";var n=C.queue(e,t),r=n.length,i=n.shift(),o=C._queueHooks(e,t);"inprogress"===i&&(i=n.shift(),r--),i&&("fx"===t&&n.unshift("inprogress"),delete o.stop,i.call(e,function(){C.dequeue(e,t)},o)),!r&&o&&o.empty.fire()},_queueHooks:function(e,t){var n=t+"queueHooks";return K.get(e,n)||K.access(e,n,{empty:C.Callbacks("once memory").add(function(){K.remove(e,[t+"queue",n])})})}}),C.fn.extend({queue:function(e,t){var n=2;return"string"!=typeof e&&(t=e,e="fx",n--),arguments.length<n?C.queue(this[0],e):void 0===t?this:this.each(function(){var n=C.queue(this,e,t);C._queueHooks(this,e),"fx"===e&&"inprogress"!==n[0]&&C.dequeue(this,e)})},dequeue:function(e){return this.each(function(){C.dequeue(this,e)})},clearQueue:function(e){return this.queue(e||"fx",[])},promise:function(e,t){var n,r=1,i=C.Deferred(),o=this,a=this.length,s=function(){--r||i.resolveWith(o,[o])};for("string"!=typeof e&&(t=e,e=void 
0),e=e||"fx";a--;)(n=K.get(o[a],e+"queueHooks"))&&n.empty&&(r++,n.empty.add(s));return s(),i.promise(t)}});var re=/[+-]?(?:\d*\.|)\d+(?:[eE][+-]?\d+|)/.source,ie=new RegExp("^(?:([+-])=|)("+re+")([a-z%]*)$","i"),oe=["Top","Right","Bottom","Left"],ae=a.documentElement,se=function(e){return C.contains(e.ownerDocument,e)},ue={composed:!0};ae.getRootNode&&(se=function(e){return C.contains(e.ownerDocument,e)||e.getRootNode(ue)===e.ownerDocument});var le=function(e,t){return"none"===(e=t||e).style.display||""===e.style.display&&se(e)&&"none"===C.css(e,"display")},ce=function(e,t,n,r){var i,o,a={};for(o in t)a[o]=e.style[o],e.style[o]=t[o];for(o in i=n.apply(e,r||[]),t)e.style[o]=a[o];return i};function fe(e,t,n,r){var i,o,a=20,s=r?function(){return r.cur()}:function(){return C.css(e,t,"")},u=s(),l=n&&n[3]||(C.cssNumber[t]?"":"px"),c=e.nodeType&&(C.cssNumber[t]||"px"!==l&&+u)&&ie.exec(C.css(e,t));if(c&&c[3]!==l){for(u/=2,l=l||c[3],c=+u||1;a--;)C.style(e,t,c+l),(1-o)*(1-(o=s()/u||.5))<=0&&(a=0),c/=o;c*=2,C.style(e,t,c+l),n=n||[]}return n&&(c=+c||+u||0,i=n[1]?c+(n[1]+1)*n[2]:+n[2],r&&(r.unit=l,r.start=c,r.end=i)),i}var pe={};function de(e){var t,n=e.ownerDocument,r=e.nodeName,i=pe[r];return i||(t=n.body.appendChild(n.createElement(r)),i=C.css(t,"display"),t.parentNode.removeChild(t),"none"===i&&(i="block"),pe[r]=i,i)}function he(e,t){for(var n,r,i=[],o=0,a=e.length;o<a;o++)(r=e[o]).style&&(n=r.style.display,t?("none"===n&&(i[o]=K.get(r,"display")||null,i[o]||(r.style.display="")),""===r.style.display&&le(r)&&(i[o]=de(r))):"none"!==n&&(i[o]="none",K.set(r,"display",n)));for(o=0;o<a;o++)null!=i[o]&&(e[o].style.display=i[o]);return e}C.fn.extend({show:function(){return he(this,!0)},hide:function(){return he(this)},toggle:function(e){return"boolean"==typeof e?e?this.show():this.hide():this.each(function(){le(this)?C(this).show():C(this).hide()})}});var ge=/^(?:checkbox|radio)$/i,ve=/<([a-z][^\/\0>\x20\t\r\n\f]*)/i,me=/^$|^module$|\/(?:java|ecma)script/i,ye={option:[1,"<select multiple='multiple'>","</select>"],thead:[1,"<table>","</table>"],col:[2,"<table><colgroup>","</colgroup></table>"],tr:[2,"<table><tbody>","</tbody></table>"],td:[3,"<table><tbody><tr>","</tr></tbody></table>"],_default:[0,"",""]};function xe(e,t){var n;return n=void 0!==e.getElementsByTagName?e.getElementsByTagName(t||"*"):void 0!==e.querySelectorAll?e.querySelectorAll(t||"*"):[],void 0===t||t&&D(e,t)?C.merge([e],n):n}function be(e,t){for(var n=0,r=e.length;n<r;n++)K.set(e[n],"globalEval",!t||K.get(t[n],"globalEval"))}ye.optgroup=ye.option,ye.tbody=ye.tfoot=ye.colgroup=ye.caption=ye.thead,ye.th=ye.td;var we,Te,Ce=/<|&#?\w+;/;function Se(e,t,n,r,i){for(var o,a,s,u,l,c,f=t.createDocumentFragment(),p=[],d=0,h=e.length;d<h;d++)if((o=e[d])||0===o)if("object"===T(o))C.merge(p,o.nodeType?[o]:o);else if(Ce.test(o)){for(a=a||f.appendChild(t.createElement("div")),s=(ve.exec(o)||["",""])[1].toLowerCase(),u=ye[s]||ye._default,a.innerHTML=u[1]+C.htmlPrefilter(o)+u[2],c=u[0];c--;)a=a.lastChild;C.merge(p,a.childNodes),(a=f.firstChild).textContent=""}else p.push(t.createTextNode(o));for(f.textContent="",d=0;o=p[d++];)if(r&&C.inArray(o,r)>-1)i&&i.push(o);else if(l=se(o),a=xe(f.appendChild(o),"script"),l&&be(a),n)for(c=0;o=a[c++];)me.test(o.type||"")&&n.push(o);return 
f}we=a.createDocumentFragment().appendChild(a.createElement("div")),(Te=a.createElement("input")).setAttribute("type","radio"),Te.setAttribute("checked","checked"),Te.setAttribute("name","t"),we.appendChild(Te),m.checkClone=we.cloneNode(!0).cloneNode(!0).lastChild.checked,we.innerHTML="<textarea>x</textarea>",m.noCloneChecked=!!we.cloneNode(!0).lastChild.defaultValue;var ke=/^key/,Ee=/^(?:mouse|pointer|contextmenu|drag|drop)|click/,Ae=/^([^.]*)(?:\.(.+)|)/;function Ne(){return!0}function je(){return!1}function De(e,t){return e===function(){try{return a.activeElement}catch(e){}}()==("focus"===t)}function qe(e,t,n,r,i,o){var a,s;if("object"==typeof t){for(s in"string"!=typeof n&&(r=r||n,n=void 0),t)qe(e,s,n,r,t[s],o);return e}if(null==r&&null==i?(i=n,r=n=void 0):null==i&&("string"==typeof n?(i=r,r=void 0):(i=r,r=n,n=void 0)),!1===i)i=je;else if(!i)return e;return 1===o&&(a=i,(i=function(e){return C().off(e),a.apply(this,arguments)}).guid=a.guid||(a.guid=C.guid++)),e.each(function(){C.event.add(this,t,i,r,n)})}function Le(e,t,n){n?(K.set(e,t,!1),C.event.add(e,t,{namespace:!1,handler:function(e){var r,i,o=K.get(this,t);if(1&e.isTrigger&&this[t]){if(o.length)(C.event.special[t]||{}).delegateType&&e.stopPropagation();else if(o=u.call(arguments),K.set(this,t,o),r=n(this,t),this[t](),o!==(i=K.get(this,t))||r?K.set(this,t,!1):i={},o!==i)return e.stopImmediatePropagation(),e.preventDefault(),i.value}else o.length&&(K.set(this,t,{value:C.event.trigger(C.extend(o[0],C.Event.prototype),o.slice(1),this)}),e.stopImmediatePropagation())}})):void 0===K.get(e,t)&&C.event.add(e,t,Ne)}C.event={global:{},add:function(e,t,n,r,i){var o,a,s,u,l,c,f,p,d,h,g,v=K.get(e);if(v)for(n.handler&&(n=(o=n).handler,i=o.selector),i&&C.find.matchesSelector(ae,i),n.guid||(n.guid=C.guid++),(u=v.events)||(u=v.events={}),(a=v.handle)||(a=v.handle=function(t){return void 0!==C&&C.event.triggered!==t.type?C.event.dispatch.apply(e,arguments):void 0}),l=(t=(t||"").match(I)||[""]).length;l--;)d=g=(s=Ae.exec(t[l])||[])[1],h=(s[2]||"").split(".").sort(),d&&(f=C.event.special[d]||{},d=(i?f.delegateType:f.bindType)||d,f=C.event.special[d]||{},c=C.extend({type:d,origType:g,data:r,handler:n,guid:n.guid,selector:i,needsContext:i&&C.expr.match.needsContext.test(i),namespace:h.join(".")},o),(p=u[d])||((p=u[d]=[]).delegateCount=0,f.setup&&!1!==f.setup.call(e,r,h,a)||e.addEventListener&&e.addEventListener(d,a)),f.add&&(f.add.call(e,c),c.handler.guid||(c.handler.guid=n.guid)),i?p.splice(p.delegateCount++,0,c):p.push(c),C.event.global[d]=!0)},remove:function(e,t,n,r,i){var o,a,s,u,l,c,f,p,d,h,g,v=K.hasData(e)&&K.get(e);if(v&&(u=v.events)){for(l=(t=(t||"").match(I)||[""]).length;l--;)if(d=g=(s=Ae.exec(t[l])||[])[1],h=(s[2]||"").split(".").sort(),d){for(f=C.event.special[d]||{},p=u[d=(r?f.delegateType:f.bindType)||d]||[],s=s[2]&&new RegExp("(^|\\.)"+h.join("\\.(?:.*\\.|)")+"(\\.|$)"),a=o=p.length;o--;)c=p[o],!i&&g!==c.origType||n&&n.guid!==c.guid||s&&!s.test(c.namespace)||r&&r!==c.selector&&("**"!==r||!c.selector)||(p.splice(o,1),c.selector&&p.delegateCount--,f.remove&&f.remove.call(e,c));a&&!p.length&&(f.teardown&&!1!==f.teardown.call(e,h,v.handle)||C.removeEvent(e,d,v.handle),delete u[d])}else for(d in u)C.event.remove(e,d+t[l],n,r,!0);C.isEmptyObject(u)&&K.remove(e,"handle events")}},dispatch:function(e){var t,n,r,i,o,a,s=C.event.fix(e),u=new 
Array(arguments.length),l=(K.get(this,"events")||{})[s.type]||[],c=C.event.special[s.type]||{};for(u[0]=s,t=1;t<arguments.length;t++)u[t]=arguments[t];if(s.delegateTarget=this,!c.preDispatch||!1!==c.preDispatch.call(this,s)){for(a=C.event.handlers.call(this,s,l),t=0;(i=a[t++])&&!s.isPropagationStopped();)for(s.currentTarget=i.elem,n=0;(o=i.handlers[n++])&&!s.isImmediatePropagationStopped();)s.rnamespace&&!1!==o.namespace&&!s.rnamespace.test(o.namespace)||(s.handleObj=o,s.data=o.data,void 0!==(r=((C.event.special[o.origType]||{}).handle||o.handler).apply(i.elem,u))&&!1===(s.result=r)&&(s.preventDefault(),s.stopPropagation()));return c.postDispatch&&c.postDispatch.call(this,s),s.result}},handlers:function(e,t){var n,r,i,o,a,s=[],u=t.delegateCount,l=e.target;if(u&&l.nodeType&&!("click"===e.type&&e.button>=1))for(;l!==this;l=l.parentNode||this)if(1===l.nodeType&&("click"!==e.type||!0!==l.disabled)){for(o=[],a={},n=0;n<u;n++)void 0===a[i=(r=t[n]).selector+" "]&&(a[i]=r.needsContext?C(i,this).index(l)>-1:C.find(i,this,null,[l]).length),a[i]&&o.push(r);o.length&&s.push({elem:l,handlers:o})}return l=this,u<t.length&&s.push({elem:l,handlers:t.slice(u)}),s},addProp:function(e,t){Object.defineProperty(C.Event.prototype,e,{enumerable:!0,configurable:!0,get:y(t)?function(){if(this.originalEvent)return t(this.originalEvent)}:function(){if(this.originalEvent)return this.originalEvent[e]},set:function(t){Object.defineProperty(this,e,{enumerable:!0,configurable:!0,writable:!0,value:t})}})},fix:function(e){return e[C.expando]?e:new C.Event(e)},special:{load:{noBubble:!0},click:{setup:function(e){var t=this||e;return ge.test(t.type)&&t.click&&D(t,"input")&&Le(t,"click",Ne),!1},trigger:function(e){var t=this||e;return ge.test(t.type)&&t.click&&D(t,"input")&&Le(t,"click"),!0},_default:function(e){var t=e.target;return ge.test(t.type)&&t.click&&D(t,"input")&&K.get(t,"click")||D(t,"a")}},beforeunload:{postDispatch:function(e){void 0!==e.result&&e.originalEvent&&(e.originalEvent.returnValue=e.result)}}}},C.removeEvent=function(e,t,n){e.removeEventListener&&e.removeEventListener(t,n)},C.Event=function(e,t){if(!(this instanceof C.Event))return new C.Event(e,t);e&&e.type?(this.originalEvent=e,this.type=e.type,this.isDefaultPrevented=e.defaultPrevented||void 0===e.defaultPrevented&&!1===e.returnValue?Ne:je,this.target=e.target&&3===e.target.nodeType?e.target.parentNode:e.target,this.currentTarget=e.currentTarget,this.relatedTarget=e.relatedTarget):this.type=e,t&&C.extend(this,t),this.timeStamp=e&&e.timeStamp||Date.now(),this[C.expando]=!0},C.Event.prototype={constructor:C.Event,isDefaultPrevented:je,isPropagationStopped:je,isImmediatePropagationStopped:je,isSimulated:!1,preventDefault:function(){var e=this.originalEvent;this.isDefaultPrevented=Ne,e&&!this.isSimulated&&e.preventDefault()},stopPropagation:function(){var e=this.originalEvent;this.isPropagationStopped=Ne,e&&!this.isSimulated&&e.stopPropagation()},stopImmediatePropagation:function(){var e=this.originalEvent;this.isImmediatePropagationStopped=Ne,e&&!this.isSimulated&&e.stopImmediatePropagation(),this.stopPropagation()}},C.each({altKey:!0,bubbles:!0,cancelable:!0,changedTouches:!0,ctrlKey:!0,detail:!0,eventPhase:!0,metaKey:!0,pageX:!0,pageY:!0,shiftKey:!0,view:!0,char:!0,code:!0,charCode:!0,key:!0,keyCode:!0,button:!0,buttons:!0,clientX:!0,clientY:!0,offsetX:!0,offsetY:!0,pointerId:!0,pointerType:!0,screenX:!0,screenY:!0,targetTouches:!0,toElement:!0,touches:!0,which:function(e){var t=e.button;return 
null==e.which&&ke.test(e.type)?null!=e.charCode?e.charCode:e.keyCode:!e.which&&void 0!==t&&Ee.test(e.type)?1&t?1:2&t?3:4&t?2:0:e.which}},C.event.addProp),C.each({focus:"focusin",blur:"focusout"},function(e,t){C.event.special[e]={setup:function(){return Le(this,e,De),!1},trigger:function(){return Le(this,e),!0},delegateType:t}}),C.each({mouseenter:"mouseover",mouseleave:"mouseout",pointerenter:"pointerover",pointerleave:"pointerout"},function(e,t){C.event.special[e]={delegateType:t,bindType:t,handle:function(e){var n,r=this,i=e.relatedTarget,o=e.handleObj;return i&&(i===r||C.contains(r,i))||(e.type=o.origType,n=o.handler.apply(this,arguments),e.type=t),n}}}),C.fn.extend({on:function(e,t,n,r){return qe(this,e,t,n,r)},one:function(e,t,n,r){return qe(this,e,t,n,r,1)},off:function(e,t,n){var r,i;if(e&&e.preventDefault&&e.handleObj)return r=e.handleObj,C(e.delegateTarget).off(r.namespace?r.origType+"."+r.namespace:r.origType,r.selector,r.handler),this;if("object"==typeof e){for(i in e)this.off(i,t,e[i]);return this}return!1!==t&&"function"!=typeof t||(n=t,t=void 0),!1===n&&(n=je),this.each(function(){C.event.remove(this,e,n,t)})}});var He=/<(?!area|br|col|embed|hr|img|input|link|meta|param)(([a-z][^\/\0>\x20\t\r\n\f]*)[^>]*)\/>/gi,Oe=/<script|<style|<link/i,Re=/checked\s*(?:[^=]|=\s*.checked.)/i,Pe=/^\s*<!(?:\[CDATA\[|--)|(?:\]\]|--)>\s*$/g;function Me(e,t){return D(e,"table")&&D(11!==t.nodeType?t:t.firstChild,"tr")&&C(e).children("tbody")[0]||e}function Ie(e){return e.type=(null!==e.getAttribute("type"))+"/"+e.type,e}function Fe(e){return"true/"===(e.type||"").slice(0,5)?e.type=e.type.slice(5):e.removeAttribute("type"),e}function $e(e,t){var n,r,i,o,a,s,u,l;if(1===t.nodeType){if(K.hasData(e)&&(o=K.access(e),a=K.set(t,o),l=o.events))for(i in delete a.handle,a.events={},l)for(n=0,r=l[i].length;n<r;n++)C.event.add(t,i,l[i][n]);Z.hasData(e)&&(s=Z.access(e),u=C.extend({},s),Z.set(t,u))}}function We(e,t){var n=t.nodeName.toLowerCase();"input"===n&&ge.test(e.type)?t.checked=e.checked:"input"!==n&&"textarea"!==n||(t.defaultValue=e.defaultValue)}function Be(e,t,n,r){t=l.apply([],t);var i,o,a,s,u,c,f=0,p=e.length,d=p-1,h=t[0],g=y(h);if(g||p>1&&"string"==typeof h&&!m.checkClone&&Re.test(h))return e.each(function(i){var o=e.eq(i);g&&(t[0]=h.call(this,i,o.html())),Be(o,t,n,r)});if(p&&(o=(i=Se(t,e[0].ownerDocument,!1,e,r)).firstChild,1===i.childNodes.length&&(i=o),o||r)){for(s=(a=C.map(xe(i,"script"),Ie)).length;f<p;f++)u=i,f!==d&&(u=C.clone(u,!0,!0),s&&C.merge(a,xe(u,"script"))),n.call(e[f],u,f);if(s)for(c=a[a.length-1].ownerDocument,C.map(a,Fe),f=0;f<s;f++)u=a[f],me.test(u.type||"")&&!K.access(u,"globalEval")&&C.contains(c,u)&&(u.src&&"module"!==(u.type||"").toLowerCase()?C._evalUrl&&!u.noModule&&C._evalUrl(u.src,{nonce:u.nonce||u.getAttribute("nonce")}):w(u.textContent.replace(Pe,""),u,c))}return e}function _e(e,t,n){for(var r,i=t?C.filter(t,e):e,o=0;null!=(r=i[o]);o++)n||1!==r.nodeType||C.cleanData(xe(r)),r.parentNode&&(n&&se(r)&&be(xe(r,"script")),r.parentNode.removeChild(r));return e}C.extend({htmlPrefilter:function(e){return e.replace(He,"<$1></$2>")},clone:function(e,t,n){var r,i,o,a,s=e.cloneNode(!0),u=se(e);if(!(m.noCloneChecked||1!==e.nodeType&&11!==e.nodeType||C.isXMLDoc(e)))for(a=xe(s),r=0,i=(o=xe(e)).length;r<i;r++)We(o[r],a[r]);if(t)if(n)for(o=o||xe(e),a=a||xe(s),r=0,i=o.length;r<i;r++)$e(o[r],a[r]);else $e(e,s);return(a=xe(s,"script")).length>0&&be(a,!u&&xe(e,"script")),s},cleanData:function(e){for(var t,n,r,i=C.event.special,o=0;void 
0!==(n=e[o]);o++)if(Q(n)){if(t=n[K.expando]){if(t.events)for(r in t.events)i[r]?C.event.remove(n,r):C.removeEvent(n,r,t.handle);n[K.expando]=void 0}n[Z.expando]&&(n[Z.expando]=void 0)}}}),C.fn.extend({detach:function(e){return _e(this,e,!0)},remove:function(e){return _e(this,e)},text:function(e){return U(this,function(e){return void 0===e?C.text(this):this.empty().each(function(){1!==this.nodeType&&11!==this.nodeType&&9!==this.nodeType||(this.textContent=e)})},null,e,arguments.length)},append:function(){return Be(this,arguments,function(e){1!==this.nodeType&&11!==this.nodeType&&9!==this.nodeType||Me(this,e).appendChild(e)})},prepend:function(){return Be(this,arguments,function(e){if(1===this.nodeType||11===this.nodeType||9===this.nodeType){var t=Me(this,e);t.insertBefore(e,t.firstChild)}})},before:function(){return Be(this,arguments,function(e){this.parentNode&&this.parentNode.insertBefore(e,this)})},after:function(){return Be(this,arguments,function(e){this.parentNode&&this.parentNode.insertBefore(e,this.nextSibling)})},empty:function(){for(var e,t=0;null!=(e=this[t]);t++)1===e.nodeType&&(C.cleanData(xe(e,!1)),e.textContent="");return this},clone:function(e,t){return e=null!=e&&e,t=null==t?e:t,this.map(function(){return C.clone(this,e,t)})},html:function(e){return U(this,function(e){var t=this[0]||{},n=0,r=this.length;if(void 0===e&&1===t.nodeType)return t.innerHTML;if("string"==typeof e&&!Oe.test(e)&&!ye[(ve.exec(e)||["",""])[1].toLowerCase()]){e=C.htmlPrefilter(e);try{for(;n<r;n++)1===(t=this[n]||{}).nodeType&&(C.cleanData(xe(t,!1)),t.innerHTML=e);t=0}catch(e){}}t&&this.empty().append(e)},null,e,arguments.length)},replaceWith:function(){var e=[];return Be(this,arguments,function(t){var n=this.parentNode;C.inArray(this,e)<0&&(C.cleanData(xe(this)),n&&n.replaceChild(t,this))},e)}}),C.each({appendTo:"append",prependTo:"prepend",insertBefore:"before",insertAfter:"after",replaceAll:"replaceWith"},function(e,t){C.fn[e]=function(e){for(var n,r=[],i=C(e),o=i.length-1,a=0;a<=o;a++)n=a===o?this:this.clone(!0),C(i[a])[t](n),c.apply(r,n.get());return this.pushStack(r)}});var ze=new RegExp("^("+re+")(?!px)[a-z%]+$","i"),Ue=function(e){var t=e.ownerDocument.defaultView;return t&&t.opener||(t=n),t.getComputedStyle(e)},Xe=new RegExp(oe.join("|"),"i");function Ve(e,t,n){var r,i,o,a,s=e.style;return(n=n||Ue(e))&&(""!==(a=n.getPropertyValue(t)||n[t])||se(e)||(a=C.style(e,t)),!m.pixelBoxStyles()&&ze.test(a)&&Xe.test(t)&&(r=s.width,i=s.minWidth,o=s.maxWidth,s.minWidth=s.maxWidth=s.width=a,a=n.width,s.width=r,s.minWidth=i,s.maxWidth=o)),void 0!==a?a+"":a}function Ge(e,t){return{get:function(){if(!e())return(this.get=t).apply(this,arguments);delete this.get}}}!function(){function e(){if(c){l.style.cssText="position:absolute;left:-11111px;width:60px;margin-top:1px;padding:0;border:0",c.style.cssText="position:relative;display:block;box-sizing:border-box;overflow:scroll;margin:auto;border:1px;padding:1px;width:60%;top:1%",ae.appendChild(l).appendChild(c);var e=n.getComputedStyle(c);r="1%"!==e.top,u=12===t(e.marginLeft),c.style.right="60%",s=36===t(e.right),i=36===t(e.width),c.style.position="absolute",o=12===t(c.offsetWidth/3),ae.removeChild(l),c=null}}function t(e){return Math.round(parseFloat(e))}var r,i,o,s,u,l=a.createElement("div"),c=a.createElement("div");c.style&&(c.style.backgroundClip="content-box",c.cloneNode(!0).style.backgroundClip="",m.clearCloneStyle="content-box"===c.style.backgroundClip,C.extend(m,{boxSizingReliable:function(){return e(),i},pixelBoxStyles:function(){return 
e(),s},pixelPosition:function(){return e(),r},reliableMarginLeft:function(){return e(),u},scrollboxSize:function(){return e(),o}}))}();var Ye=["Webkit","Moz","ms"],Qe=a.createElement("div").style,Je={};function Ke(e){var t=C.cssProps[e]||Je[e];return t||(e in Qe?e:Je[e]=function(e){for(var t=e[0].toUpperCase()+e.slice(1),n=Ye.length;n--;)if((e=Ye[n]+t)in Qe)return e}(e)||e)}var Ze=/^(none|table(?!-c[ea]).+)/,et=/^--/,tt={position:"absolute",visibility:"hidden",display:"block"},nt={letterSpacing:"0",fontWeight:"400"};function rt(e,t,n){var r=ie.exec(t);return r?Math.max(0,r[2]-(n||0))+(r[3]||"px"):t}function it(e,t,n,r,i,o){var a="width"===t?1:0,s=0,u=0;if(n===(r?"border":"content"))return 0;for(;a<4;a+=2)"margin"===n&&(u+=C.css(e,n+oe[a],!0,i)),r?("content"===n&&(u-=C.css(e,"padding"+oe[a],!0,i)),"margin"!==n&&(u-=C.css(e,"border"+oe[a]+"Width",!0,i))):(u+=C.css(e,"padding"+oe[a],!0,i),"padding"!==n?u+=C.css(e,"border"+oe[a]+"Width",!0,i):s+=C.css(e,"border"+oe[a]+"Width",!0,i));return!r&&o>=0&&(u+=Math.max(0,Math.ceil(e["offset"+t[0].toUpperCase()+t.slice(1)]-o-u-s-.5))||0),u}function ot(e,t,n){var r=Ue(e),i=(!m.boxSizingReliable()||n)&&"border-box"===C.css(e,"boxSizing",!1,r),o=i,a=Ve(e,t,r),s="offset"+t[0].toUpperCase()+t.slice(1);if(ze.test(a)){if(!n)return a;a="auto"}return(!m.boxSizingReliable()&&i||"auto"===a||!parseFloat(a)&&"inline"===C.css(e,"display",!1,r))&&e.getClientRects().length&&(i="border-box"===C.css(e,"boxSizing",!1,r),(o=s in e)&&(a=e[s])),(a=parseFloat(a)||0)+it(e,t,n||(i?"border":"content"),o,r,a)+"px"}function at(e,t,n,r,i){return new at.prototype.init(e,t,n,r,i)}C.extend({cssHooks:{opacity:{get:function(e,t){if(t){var n=Ve(e,"opacity");return""===n?"1":n}}}},cssNumber:{animationIterationCount:!0,columnCount:!0,fillOpacity:!0,flexGrow:!0,flexShrink:!0,fontWeight:!0,gridArea:!0,gridColumn:!0,gridColumnEnd:!0,gridColumnStart:!0,gridRow:!0,gridRowEnd:!0,gridRowStart:!0,lineHeight:!0,opacity:!0,order:!0,orphans:!0,widows:!0,zIndex:!0,zoom:!0},cssProps:{},style:function(e,t,n,r){if(e&&3!==e.nodeType&&8!==e.nodeType&&e.style){var i,o,a,s=Y(t),u=et.test(t),l=e.style;if(u||(t=Ke(s)),a=C.cssHooks[t]||C.cssHooks[s],void 0===n)return a&&"get"in a&&void 0!==(i=a.get(e,!1,r))?i:l[t];"string"===(o=typeof n)&&(i=ie.exec(n))&&i[1]&&(n=fe(e,t,i),o="number"),null!=n&&n==n&&("number"!==o||u||(n+=i&&i[3]||(C.cssNumber[s]?"":"px")),m.clearCloneStyle||""!==n||0!==t.indexOf("background")||(l[t]="inherit"),a&&"set"in a&&void 0===(n=a.set(e,n,r))||(u?l.setProperty(t,n):l[t]=n))}},css:function(e,t,n,r){var i,o,a,s=Y(t);return et.test(t)||(t=Ke(s)),(a=C.cssHooks[t]||C.cssHooks[s])&&"get"in a&&(i=a.get(e,!0,n)),void 0===i&&(i=Ve(e,t,r)),"normal"===i&&t in nt&&(i=nt[t]),""===n||n?(o=parseFloat(i),!0===n||isFinite(o)?o||0:i):i}}),C.each(["height","width"],function(e,t){C.cssHooks[t]={get:function(e,n,r){if(n)return!Ze.test(C.css(e,"display"))||e.getClientRects().length&&e.getBoundingClientRect().width?ot(e,t,r):ce(e,tt,function(){return ot(e,t,r)})},set:function(e,n,r){var i,o=Ue(e),a=!m.scrollboxSize()&&"absolute"===o.position,s=(a||r)&&"border-box"===C.css(e,"boxSizing",!1,o),u=r?it(e,t,r,s,o):0;return s&&a&&(u-=Math.ceil(e["offset"+t[0].toUpperCase()+t.slice(1)]-parseFloat(o[t])-it(e,t,"border",!1,o)-.5)),u&&(i=ie.exec(n))&&"px"!==(i[3]||"px")&&(e.style[t]=n,n=C.css(e,t)),rt(0,n,u)}}}),C.cssHooks.marginLeft=Ge(m.reliableMarginLeft,function(e,t){if(t)return(parseFloat(Ve(e,"marginLeft"))||e.getBoundingClientRect().left-ce(e,{marginLeft:0},function(){return 
e.getBoundingClientRect().left}))+"px"}),C.each({margin:"",padding:"",border:"Width"},function(e,t){C.cssHooks[e+t]={expand:function(n){for(var r=0,i={},o="string"==typeof n?n.split(" "):[n];r<4;r++)i[e+oe[r]+t]=o[r]||o[r-2]||o[0];return i}},"margin"!==e&&(C.cssHooks[e+t].set=rt)}),C.fn.extend({css:function(e,t){return U(this,function(e,t,n){var r,i,o={},a=0;if(Array.isArray(t)){for(r=Ue(e),i=t.length;a<i;a++)o[t[a]]=C.css(e,t[a],!1,r);return o}return void 0!==n?C.style(e,t,n):C.css(e,t)},e,t,arguments.length>1)}}),C.Tween=at,at.prototype={constructor:at,init:function(e,t,n,r,i,o){this.elem=e,this.prop=n,this.easing=i||C.easing._default,this.options=t,this.start=this.now=this.cur(),this.end=r,this.unit=o||(C.cssNumber[n]?"":"px")},cur:function(){var e=at.propHooks[this.prop];return e&&e.get?e.get(this):at.propHooks._default.get(this)},run:function(e){var t,n=at.propHooks[this.prop];return this.options.duration?this.pos=t=C.easing[this.easing](e,this.options.duration*e,0,1,this.options.duration):this.pos=t=e,this.now=(this.end-this.start)*t+this.start,this.options.step&&this.options.step.call(this.elem,this.now,this),n&&n.set?n.set(this):at.propHooks._default.set(this),this}},at.prototype.init.prototype=at.prototype,at.propHooks={_default:{get:function(e){var t;return 1!==e.elem.nodeType||null!=e.elem[e.prop]&&null==e.elem.style[e.prop]?e.elem[e.prop]:(t=C.css(e.elem,e.prop,""))&&"auto"!==t?t:0},set:function(e){C.fx.step[e.prop]?C.fx.step[e.prop](e):1!==e.elem.nodeType||!C.cssHooks[e.prop]&&null==e.elem.style[Ke(e.prop)]?e.elem[e.prop]=e.now:C.style(e.elem,e.prop,e.now+e.unit)}}},at.propHooks.scrollTop=at.propHooks.scrollLeft={set:function(e){e.elem.nodeType&&e.elem.parentNode&&(e.elem[e.prop]=e.now)}},C.easing={linear:function(e){return e},swing:function(e){return.5-Math.cos(e*Math.PI)/2},_default:"swing"},C.fx=at.prototype.init,C.fx.step={};var st,ut,lt=/^(?:toggle|show|hide)$/,ct=/queueHooks$/;function ft(){ut&&(!1===a.hidden&&n.requestAnimationFrame?n.requestAnimationFrame(ft):n.setTimeout(ft,C.fx.interval),C.fx.tick())}function pt(){return n.setTimeout(function(){st=void 0}),st=Date.now()}function dt(e,t){var n,r=0,i={height:e};for(t=t?1:0;r<4;r+=2-t)i["margin"+(n=oe[r])]=i["padding"+n]=e;return t&&(i.opacity=i.width=e),i}function ht(e,t,n){for(var r,i=(gt.tweeners[t]||[]).concat(gt.tweeners["*"]),o=0,a=i.length;o<a;o++)if(r=i[o].call(n,t,e))return r}function gt(e,t,n){var r,i,o=0,a=gt.prefilters.length,s=C.Deferred().always(function(){delete u.elem}),u=function(){if(i)return!1;for(var t=st||pt(),n=Math.max(0,l.startTime+l.duration-t),r=1-(n/l.duration||0),o=0,a=l.tweens.length;o<a;o++)l.tweens[o].run(r);return s.notifyWith(e,[l,r,n]),r<1&&a?n:(a||s.notifyWith(e,[l,1,0]),s.resolveWith(e,[l]),!1)},l=s.promise({elem:e,props:C.extend({},t),opts:C.extend(!0,{specialEasing:{},easing:C.easing._default},n),originalProperties:t,originalOptions:n,startTime:st||pt(),duration:n.duration,tweens:[],createTween:function(t,n){var r=C.Tween(e,l.opts,t,n,l.opts.specialEasing[t]||l.opts.easing);return l.tweens.push(r),r},stop:function(t){var n=0,r=t?l.tweens.length:0;if(i)return this;for(i=!0;n<r;n++)l.tweens[n].run(1);return t?(s.notifyWith(e,[l,1,0]),s.resolveWith(e,[l,t])):s.rejectWith(e,[l,t]),this}}),c=l.props;for(!function(e,t){var n,r,i,o,a;for(n in e)if(i=t[r=Y(n)],o=e[n],Array.isArray(o)&&(i=o[1],o=e[n]=o[0]),n!==r&&(e[r]=o,delete e[n]),(a=C.cssHooks[r])&&"expand"in a)for(n in o=a.expand(o),delete e[r],o)n in e||(e[n]=o[n],t[n]=i);else 
t[r]=i}(c,l.opts.specialEasing);o<a;o++)if(r=gt.prefilters[o].call(l,e,c,l.opts))return y(r.stop)&&(C._queueHooks(l.elem,l.opts.queue).stop=r.stop.bind(r)),r;return C.map(c,ht,l),y(l.opts.start)&&l.opts.start.call(e,l),l.progress(l.opts.progress).done(l.opts.done,l.opts.complete).fail(l.opts.fail).always(l.opts.always),C.fx.timer(C.extend(u,{elem:e,anim:l,queue:l.opts.queue})),l}C.Animation=C.extend(gt,{tweeners:{"*":[function(e,t){var n=this.createTween(e,t);return fe(n.elem,e,ie.exec(t),n),n}]},tweener:function(e,t){y(e)?(t=e,e=["*"]):e=e.match(I);for(var n,r=0,i=e.length;r<i;r++)n=e[r],gt.tweeners[n]=gt.tweeners[n]||[],gt.tweeners[n].unshift(t)},prefilters:[function(e,t,n){var r,i,o,a,s,u,l,c,f="width"in t||"height"in t,p=this,d={},h=e.style,g=e.nodeType&&le(e),v=K.get(e,"fxshow");for(r in n.queue||(null==(a=C._queueHooks(e,"fx")).unqueued&&(a.unqueued=0,s=a.empty.fire,a.empty.fire=function(){a.unqueued||s()}),a.unqueued++,p.always(function(){p.always(function(){a.unqueued--,C.queue(e,"fx").length||a.empty.fire()})})),t)if(i=t[r],lt.test(i)){if(delete t[r],o=o||"toggle"===i,i===(g?"hide":"show")){if("show"!==i||!v||void 0===v[r])continue;g=!0}d[r]=v&&v[r]||C.style(e,r)}if((u=!C.isEmptyObject(t))||!C.isEmptyObject(d))for(r in f&&1===e.nodeType&&(n.overflow=[h.overflow,h.overflowX,h.overflowY],null==(l=v&&v.display)&&(l=K.get(e,"display")),"none"===(c=C.css(e,"display"))&&(l?c=l:(he([e],!0),l=e.style.display||l,c=C.css(e,"display"),he([e]))),("inline"===c||"inline-block"===c&&null!=l)&&"none"===C.css(e,"float")&&(u||(p.done(function(){h.display=l}),null==l&&(c=h.display,l="none"===c?"":c)),h.display="inline-block")),n.overflow&&(h.overflow="hidden",p.always(function(){h.overflow=n.overflow[0],h.overflowX=n.overflow[1],h.overflowY=n.overflow[2]})),u=!1,d)u||(v?"hidden"in v&&(g=v.hidden):v=K.access(e,"fxshow",{display:l}),o&&(v.hidden=!g),g&&he([e],!0),p.done(function(){for(r in g||he([e]),K.remove(e,"fxshow"),d)C.style(e,r,d[r])})),u=ht(g?v[r]:0,r,p),r in v||(v[r]=u.start,g&&(u.end=u.start,u.start=0))}],prefilter:function(e,t){t?gt.prefilters.unshift(e):gt.prefilters.push(e)}}),C.speed=function(e,t,n){var r=e&&"object"==typeof e?C.extend({},e):{complete:n||!n&&t||y(e)&&e,duration:e,easing:n&&t||t&&!y(t)&&t};return C.fx.off?r.duration=0:"number"!=typeof r.duration&&(r.duration in C.fx.speeds?r.duration=C.fx.speeds[r.duration]:r.duration=C.fx.speeds._default),null!=r.queue&&!0!==r.queue||(r.queue="fx"),r.old=r.complete,r.complete=function(){y(r.old)&&r.old.call(this),r.queue&&C.dequeue(this,r.queue)},r},C.fn.extend({fadeTo:function(e,t,n,r){return this.filter(le).css("opacity",0).show().end().animate({opacity:t},e,n,r)},animate:function(e,t,n,r){var i=C.isEmptyObject(e),o=C.speed(t,n,r),a=function(){var t=gt(this,C.extend({},e),o);(i||K.get(this,"finish"))&&t.stop(!0)};return a.finish=a,i||!1===o.queue?this.each(a):this.queue(o.queue,a)},stop:function(e,t,n){var r=function(e){var t=e.stop;delete e.stop,t(n)};return"string"!=typeof e&&(n=t,t=e,e=void 0),t&&!1!==e&&this.queue(e||"fx",[]),this.each(function(){var t=!0,i=null!=e&&e+"queueHooks",o=C.timers,a=K.get(this);if(i)a[i]&&a[i].stop&&r(a[i]);else for(i in a)a[i]&&a[i].stop&&ct.test(i)&&r(a[i]);for(i=o.length;i--;)o[i].elem!==this||null!=e&&o[i].queue!==e||(o[i].anim.stop(n),t=!1,o.splice(i,1));!t&&n||C.dequeue(this,e)})},finish:function(e){return!1!==e&&(e=e||"fx"),this.each(function(){var 
t,n=K.get(this),r=n[e+"queue"],i=n[e+"queueHooks"],o=C.timers,a=r?r.length:0;for(n.finish=!0,C.queue(this,e,[]),i&&i.stop&&i.stop.call(this,!0),t=o.length;t--;)o[t].elem===this&&o[t].queue===e&&(o[t].anim.stop(!0),o.splice(t,1));for(t=0;t<a;t++)r[t]&&r[t].finish&&r[t].finish.call(this);delete n.finish})}}),C.each(["toggle","show","hide"],function(e,t){var n=C.fn[t];C.fn[t]=function(e,r,i){return null==e||"boolean"==typeof e?n.apply(this,arguments):this.animate(dt(t,!0),e,r,i)}}),C.each({slideDown:dt("show"),slideUp:dt("hide"),slideToggle:dt("toggle"),fadeIn:{opacity:"show"},fadeOut:{opacity:"hide"},fadeToggle:{opacity:"toggle"}},function(e,t){C.fn[e]=function(e,n,r){return this.animate(t,e,n,r)}}),C.timers=[],C.fx.tick=function(){var e,t=0,n=C.timers;for(st=Date.now();t<n.length;t++)(e=n[t])()||n[t]!==e||n.splice(t--,1);n.length||C.fx.stop(),st=void 0},C.fx.timer=function(e){C.timers.push(e),C.fx.start()},C.fx.interval=13,C.fx.start=function(){ut||(ut=!0,ft())},C.fx.stop=function(){ut=null},C.fx.speeds={slow:600,fast:200,_default:400},C.fn.delay=function(e,t){return e=C.fx&&C.fx.speeds[e]||e,t=t||"fx",this.queue(t,function(t,r){var i=n.setTimeout(t,e);r.stop=function(){n.clearTimeout(i)}})},function(){var e=a.createElement("input"),t=a.createElement("select").appendChild(a.createElement("option"));e.type="checkbox",m.checkOn=""!==e.value,m.optSelected=t.selected,(e=a.createElement("input")).value="t",e.type="radio",m.radioValue="t"===e.value}();var vt,mt=C.expr.attrHandle;C.fn.extend({attr:function(e,t){return U(this,C.attr,e,t,arguments.length>1)},removeAttr:function(e){return this.each(function(){C.removeAttr(this,e)})}}),C.extend({attr:function(e,t,n){var r,i,o=e.nodeType;if(3!==o&&8!==o&&2!==o)return void 0===e.getAttribute?C.prop(e,t,n):(1===o&&C.isXMLDoc(e)||(i=C.attrHooks[t.toLowerCase()]||(C.expr.match.bool.test(t)?vt:void 0)),void 0!==n?null===n?void C.removeAttr(e,t):i&&"set"in i&&void 0!==(r=i.set(e,n,t))?r:(e.setAttribute(t,n+""),n):i&&"get"in i&&null!==(r=i.get(e,t))?r:null==(r=C.find.attr(e,t))?void 0:r)},attrHooks:{type:{set:function(e,t){if(!m.radioValue&&"radio"===t&&D(e,"input")){var n=e.value;return e.setAttribute("type",t),n&&(e.value=n),t}}}},removeAttr:function(e,t){var n,r=0,i=t&&t.match(I);if(i&&1===e.nodeType)for(;n=i[r++];)e.removeAttribute(n)}}),vt={set:function(e,t,n){return!1===t?C.removeAttr(e,n):e.setAttribute(n,n),n}},C.each(C.expr.match.bool.source.match(/\w+/g),function(e,t){var n=mt[t]||C.find.attr;mt[t]=function(e,t,r){var i,o,a=t.toLowerCase();return r||(o=mt[a],mt[a]=i,i=null!=n(e,t,r)?a:null,mt[a]=o),i}});var yt=/^(?:input|select|textarea|button)$/i,xt=/^(?:a|area)$/i;function bt(e){return(e.match(I)||[]).join(" ")}function wt(e){return e.getAttribute&&e.getAttribute("class")||""}function Tt(e){return Array.isArray(e)?e:"string"==typeof e&&e.match(I)||[]}C.fn.extend({prop:function(e,t){return U(this,C.prop,e,t,arguments.length>1)},removeProp:function(e){return this.each(function(){delete this[C.propFix[e]||e]})}}),C.extend({prop:function(e,t,n){var r,i,o=e.nodeType;if(3!==o&&8!==o&&2!==o)return 1===o&&C.isXMLDoc(e)||(t=C.propFix[t]||t,i=C.propHooks[t]),void 0!==n?i&&"set"in i&&void 0!==(r=i.set(e,n,t))?r:e[t]=n:i&&"get"in i&&null!==(r=i.get(e,t))?r:e[t]},propHooks:{tabIndex:{get:function(e){var t=C.find.attr(e,"tabindex");return t?parseInt(t,10):yt.test(e.nodeName)||xt.test(e.nodeName)&&e.href?0:-1}}},propFix:{for:"htmlFor",class:"className"}}),m.optSelected||(C.propHooks.selected={get:function(e){var t=e.parentNode;return 
t&&t.parentNode&&t.parentNode.selectedIndex,null},set:function(e){var t=e.parentNode;t&&(t.selectedIndex,t.parentNode&&t.parentNode.selectedIndex)}}),C.each(["tabIndex","readOnly","maxLength","cellSpacing","cellPadding","rowSpan","colSpan","useMap","frameBorder","contentEditable"],function(){C.propFix[this.toLowerCase()]=this}),C.fn.extend({addClass:function(e){var t,n,r,i,o,a,s,u=0;if(y(e))return this.each(function(t){C(this).addClass(e.call(this,t,wt(this)))});if((t=Tt(e)).length)for(;n=this[u++];)if(i=wt(n),r=1===n.nodeType&&" "+bt(i)+" "){for(a=0;o=t[a++];)r.indexOf(" "+o+" ")<0&&(r+=o+" ");i!==(s=bt(r))&&n.setAttribute("class",s)}return this},removeClass:function(e){var t,n,r,i,o,a,s,u=0;if(y(e))return this.each(function(t){C(this).removeClass(e.call(this,t,wt(this)))});if(!arguments.length)return this.attr("class","");if((t=Tt(e)).length)for(;n=this[u++];)if(i=wt(n),r=1===n.nodeType&&" "+bt(i)+" "){for(a=0;o=t[a++];)for(;r.indexOf(" "+o+" ")>-1;)r=r.replace(" "+o+" "," ");i!==(s=bt(r))&&n.setAttribute("class",s)}return this},toggleClass:function(e,t){var n=typeof e,r="string"===n||Array.isArray(e);return"boolean"==typeof t&&r?t?this.addClass(e):this.removeClass(e):y(e)?this.each(function(n){C(this).toggleClass(e.call(this,n,wt(this),t),t)}):this.each(function(){var t,i,o,a;if(r)for(i=0,o=C(this),a=Tt(e);t=a[i++];)o.hasClass(t)?o.removeClass(t):o.addClass(t);else void 0!==e&&"boolean"!==n||((t=wt(this))&&K.set(this,"__className__",t),this.setAttribute&&this.setAttribute("class",t||!1===e?"":K.get(this,"__className__")||""))})},hasClass:function(e){var t,n,r=0;for(t=" "+e+" ";n=this[r++];)if(1===n.nodeType&&(" "+bt(wt(n))+" ").indexOf(t)>-1)return!0;return!1}});var Ct=/\r/g;C.fn.extend({val:function(e){var t,n,r,i=this[0];return arguments.length?(r=y(e),this.each(function(n){var i;1===this.nodeType&&(null==(i=r?e.call(this,n,C(this).val()):e)?i="":"number"==typeof i?i+="":Array.isArray(i)&&(i=C.map(i,function(e){return null==e?"":e+""})),(t=C.valHooks[this.type]||C.valHooks[this.nodeName.toLowerCase()])&&"set"in t&&void 0!==t.set(this,i,"value")||(this.value=i))})):i?(t=C.valHooks[i.type]||C.valHooks[i.nodeName.toLowerCase()])&&"get"in t&&void 0!==(n=t.get(i,"value"))?n:"string"==typeof(n=i.value)?n.replace(Ct,""):null==n?"":n:void 0}}),C.extend({valHooks:{option:{get:function(e){var t=C.find.attr(e,"value");return null!=t?t:bt(C.text(e))}},select:{get:function(e){var t,n,r,i=e.options,o=e.selectedIndex,a="select-one"===e.type,s=a?null:[],u=a?o+1:i.length;for(r=o<0?u:a?o:0;r<u;r++)if(((n=i[r]).selected||r===o)&&!n.disabled&&(!n.parentNode.disabled||!D(n.parentNode,"optgroup"))){if(t=C(n).val(),a)return t;s.push(t)}return s},set:function(e,t){for(var n,r,i=e.options,o=C.makeArray(t),a=i.length;a--;)((r=i[a]).selected=C.inArray(C.valHooks.option.get(r),o)>-1)&&(n=!0);return n||(e.selectedIndex=-1),o}}}}),C.each(["radio","checkbox"],function(){C.valHooks[this]={set:function(e,t){if(Array.isArray(t))return e.checked=C.inArray(C(e).val(),t)>-1}},m.checkOn||(C.valHooks[this].get=function(e){return null===e.getAttribute("value")?"on":e.value})}),m.focusin="onfocusin"in n;var St=/^(?:focusinfocus|focusoutblur)$/,kt=function(e){e.stopPropagation()};C.extend(C.event,{trigger:function(e,t,r,i){var o,s,u,l,c,f,p,d,g=[r||a],v=h.call(e,"type")?e.type:e,m=h.call(e,"namespace")?e.namespace.split("."):[];if(s=d=u=r=r||a,3!==r.nodeType&&8!==r.nodeType&&!St.test(v+C.event.triggered)&&(v.indexOf(".")>-1&&(m=v.split("."),v=m.shift(),m.sort()),c=v.indexOf(":")<0&&"on"+v,(e=e[C.expando]?e:new 
C.Event(v,"object"==typeof e&&e)).isTrigger=i?2:3,e.namespace=m.join("."),e.rnamespace=e.namespace?new RegExp("(^|\\.)"+m.join("\\.(?:.*\\.|)")+"(\\.|$)"):null,e.result=void 0,e.target||(e.target=r),t=null==t?[e]:C.makeArray(t,[e]),p=C.event.special[v]||{},i||!p.trigger||!1!==p.trigger.apply(r,t))){if(!i&&!p.noBubble&&!x(r)){for(l=p.delegateType||v,St.test(l+v)||(s=s.parentNode);s;s=s.parentNode)g.push(s),u=s;u===(r.ownerDocument||a)&&g.push(u.defaultView||u.parentWindow||n)}for(o=0;(s=g[o++])&&!e.isPropagationStopped();)d=s,e.type=o>1?l:p.bindType||v,(f=(K.get(s,"events")||{})[e.type]&&K.get(s,"handle"))&&f.apply(s,t),(f=c&&s[c])&&f.apply&&Q(s)&&(e.result=f.apply(s,t),!1===e.result&&e.preventDefault());return e.type=v,i||e.isDefaultPrevented()||p._default&&!1!==p._default.apply(g.pop(),t)||!Q(r)||c&&y(r[v])&&!x(r)&&((u=r[c])&&(r[c]=null),C.event.triggered=v,e.isPropagationStopped()&&d.addEventListener(v,kt),r[v](),e.isPropagationStopped()&&d.removeEventListener(v,kt),C.event.triggered=void 0,u&&(r[c]=u)),e.result}},simulate:function(e,t,n){var r=C.extend(new C.Event,n,{type:e,isSimulated:!0});C.event.trigger(r,null,t)}}),C.fn.extend({trigger:function(e,t){return this.each(function(){C.event.trigger(e,t,this)})},triggerHandler:function(e,t){var n=this[0];if(n)return C.event.trigger(e,t,n,!0)}}),m.focusin||C.each({focus:"focusin",blur:"focusout"},function(e,t){var n=function(e){C.event.simulate(t,e.target,C.event.fix(e))};C.event.special[t]={setup:function(){var r=this.ownerDocument||this,i=K.access(r,t);i||r.addEventListener(e,n,!0),K.access(r,t,(i||0)+1)},teardown:function(){var r=this.ownerDocument||this,i=K.access(r,t)-1;i?K.access(r,t,i):(r.removeEventListener(e,n,!0),K.remove(r,t))}}});var Et=n.location,At=Date.now(),Nt=/\?/;C.parseXML=function(e){var t;if(!e||"string"!=typeof e)return null;try{t=(new n.DOMParser).parseFromString(e,"text/xml")}catch(e){t=void 0}return t&&!t.getElementsByTagName("parsererror").length||C.error("Invalid XML: "+e),t};var jt=/\[\]$/,Dt=/\r?\n/g,qt=/^(?:submit|button|image|reset|file)$/i,Lt=/^(?:input|select|textarea|keygen)/i;function Ht(e,t,n,r){var i;if(Array.isArray(t))C.each(t,function(t,i){n||jt.test(e)?r(e,i):Ht(e+"["+("object"==typeof i&&null!=i?t:"")+"]",i,n,r)});else if(n||"object"!==T(t))r(e,t);else for(i in t)Ht(e+"["+i+"]",t[i],n,r)}C.param=function(e,t){var n,r=[],i=function(e,t){var n=y(t)?t():t;r[r.length]=encodeURIComponent(e)+"="+encodeURIComponent(null==n?"":n)};if(null==e)return"";if(Array.isArray(e)||e.jquery&&!C.isPlainObject(e))C.each(e,function(){i(this.name,this.value)});else for(n in e)Ht(n,e[n],t,i);return r.join("&")},C.fn.extend({serialize:function(){return C.param(this.serializeArray())},serializeArray:function(){return this.map(function(){var e=C.prop(this,"elements");return e?C.makeArray(e):this}).filter(function(){var e=this.type;return this.name&&!C(this).is(":disabled")&&Lt.test(this.nodeName)&&!qt.test(e)&&(this.checked||!ge.test(e))}).map(function(e,t){var n=C(this).val();return null==n?null:Array.isArray(n)?C.map(n,function(e){return{name:t.name,value:e.replace(Dt,"\r\n")}}):{name:t.name,value:n.replace(Dt,"\r\n")}}).get()}});var Ot=/%20/g,Rt=/#.*$/,Pt=/([?&])_=[^&]*/,Mt=/^(.*?):[ \t]*([^\r\n]*)$/gm,It=/^(?:GET|HEAD)$/,Ft=/^\/\//,$t={},Wt={},Bt="*/".concat("*"),_t=a.createElement("a");function zt(e){return function(t,n){"string"!=typeof t&&(n=t,t="*");var r,i=0,o=t.toLowerCase().match(I)||[];if(y(n))for(;r=o[i++];)"+"===r[0]?(r=r.slice(1)||"*",(e[r]=e[r]||[]).unshift(n)):(e[r]=e[r]||[]).push(n)}}function Ut(e,t,n,r){var 
i={},o=e===Wt;function a(s){var u;return i[s]=!0,C.each(e[s]||[],function(e,s){var l=s(t,n,r);return"string"!=typeof l||o||i[l]?o?!(u=l):void 0:(t.dataTypes.unshift(l),a(l),!1)}),u}return a(t.dataTypes[0])||!i["*"]&&a("*")}function Xt(e,t){var n,r,i=C.ajaxSettings.flatOptions||{};for(n in t)void 0!==t[n]&&((i[n]?e:r||(r={}))[n]=t[n]);return r&&C.extend(!0,e,r),e}_t.href=Et.href,C.extend({active:0,lastModified:{},etag:{},ajaxSettings:{url:Et.href,type:"GET",isLocal:/^(?:about|app|app-storage|.+-extension|file|res|widget):$/.test(Et.protocol),global:!0,processData:!0,async:!0,contentType:"application/x-www-form-urlencoded; charset=UTF-8",accepts:{"*":Bt,text:"text/plain",html:"text/html",xml:"application/xml, text/xml",json:"application/json, text/javascript"},contents:{xml:/\bxml\b/,html:/\bhtml/,json:/\bjson\b/},responseFields:{xml:"responseXML",text:"responseText",json:"responseJSON"},converters:{"* text":String,"text html":!0,"text json":JSON.parse,"text xml":C.parseXML},flatOptions:{url:!0,context:!0}},ajaxSetup:function(e,t){return t?Xt(Xt(e,C.ajaxSettings),t):Xt(C.ajaxSettings,e)},ajaxPrefilter:zt($t),ajaxTransport:zt(Wt),ajax:function(e,t){"object"==typeof e&&(t=e,e=void 0),t=t||{};var r,i,o,s,u,l,c,f,p,d,h=C.ajaxSetup({},t),g=h.context||h,v=h.context&&(g.nodeType||g.jquery)?C(g):C.event,m=C.Deferred(),y=C.Callbacks("once memory"),x=h.statusCode||{},b={},w={},T="canceled",S={readyState:0,getResponseHeader:function(e){var t;if(c){if(!s)for(s={};t=Mt.exec(o);)s[t[1].toLowerCase()+" "]=(s[t[1].toLowerCase()+" "]||[]).concat(t[2]);t=s[e.toLowerCase()+" "]}return null==t?null:t.join(", ")},getAllResponseHeaders:function(){return c?o:null},setRequestHeader:function(e,t){return null==c&&(e=w[e.toLowerCase()]=w[e.toLowerCase()]||e,b[e]=t),this},overrideMimeType:function(e){return null==c&&(h.mimeType=e),this},statusCode:function(e){var t;if(e)if(c)S.always(e[S.status]);else for(t in e)x[t]=[x[t],e[t]];return this},abort:function(e){var t=e||T;return r&&r.abort(t),k(0,t),this}};if(m.promise(S),h.url=((e||h.url||Et.href)+"").replace(Ft,Et.protocol+"//"),h.type=t.method||t.type||h.method||h.type,h.dataTypes=(h.dataType||"*").toLowerCase().match(I)||[""],null==h.crossDomain){l=a.createElement("a");try{l.href=h.url,l.href=l.href,h.crossDomain=_t.protocol+"//"+_t.host!=l.protocol+"//"+l.host}catch(e){h.crossDomain=!0}}if(h.data&&h.processData&&"string"!=typeof h.data&&(h.data=C.param(h.data,h.traditional)),Ut($t,h,t,S),c)return S;for(p in(f=C.event&&h.global)&&0==C.active++&&C.event.trigger("ajaxStart"),h.type=h.type.toUpperCase(),h.hasContent=!It.test(h.type),i=h.url.replace(Rt,""),h.hasContent?h.data&&h.processData&&0===(h.contentType||"").indexOf("application/x-www-form-urlencoded")&&(h.data=h.data.replace(Ot,"+")):(d=h.url.slice(i.length),h.data&&(h.processData||"string"==typeof h.data)&&(i+=(Nt.test(i)?"&":"?")+h.data,delete h.data),!1===h.cache&&(i=i.replace(Pt,"$1"),d=(Nt.test(i)?"&":"?")+"_="+At+++d),h.url=i+d),h.ifModified&&(C.lastModified[i]&&S.setRequestHeader("If-Modified-Since",C.lastModified[i]),C.etag[i]&&S.setRequestHeader("If-None-Match",C.etag[i])),(h.data&&h.hasContent&&!1!==h.contentType||t.contentType)&&S.setRequestHeader("Content-Type",h.contentType),S.setRequestHeader("Accept",h.dataTypes[0]&&h.accepts[h.dataTypes[0]]?h.accepts[h.dataTypes[0]]+("*"!==h.dataTypes[0]?", "+Bt+"; q=0.01":""):h.accepts["*"]),h.headers)S.setRequestHeader(p,h.headers[p]);if(h.beforeSend&&(!1===h.beforeSend.call(g,S,h)||c))return 
S.abort();if(T="abort",y.add(h.complete),S.done(h.success),S.fail(h.error),r=Ut(Wt,h,t,S)){if(S.readyState=1,f&&v.trigger("ajaxSend",[S,h]),c)return S;h.async&&h.timeout>0&&(u=n.setTimeout(function(){S.abort("timeout")},h.timeout));try{c=!1,r.send(b,k)}catch(e){if(c)throw e;k(-1,e)}}else k(-1,"No Transport");function k(e,t,a,s){var l,p,d,b,w,T=t;c||(c=!0,u&&n.clearTimeout(u),r=void 0,o=s||"",S.readyState=e>0?4:0,l=e>=200&&e<300||304===e,a&&(b=function(e,t,n){for(var r,i,o,a,s=e.contents,u=e.dataTypes;"*"===u[0];)u.shift(),void 0===r&&(r=e.mimeType||t.getResponseHeader("Content-Type"));if(r)for(i in s)if(s[i]&&s[i].test(r)){u.unshift(i);break}if(u[0]in n)o=u[0];else{for(i in n){if(!u[0]||e.converters[i+" "+u[0]]){o=i;break}a||(a=i)}o=o||a}if(o)return o!==u[0]&&u.unshift(o),n[o]}(h,S,a)),b=function(e,t,n,r){var i,o,a,s,u,l={},c=e.dataTypes.slice();if(c[1])for(a in e.converters)l[a.toLowerCase()]=e.converters[a];for(o=c.shift();o;)if(e.responseFields[o]&&(n[e.responseFields[o]]=t),!u&&r&&e.dataFilter&&(t=e.dataFilter(t,e.dataType)),u=o,o=c.shift())if("*"===o)o=u;else if("*"!==u&&u!==o){if(!(a=l[u+" "+o]||l["* "+o]))for(i in l)if((s=i.split(" "))[1]===o&&(a=l[u+" "+s[0]]||l["* "+s[0]])){!0===a?a=l[i]:!0!==l[i]&&(o=s[0],c.unshift(s[1]));break}if(!0!==a)if(a&&e.throws)t=a(t);else try{t=a(t)}catch(e){return{state:"parsererror",error:a?e:"No conversion from "+u+" to "+o}}}return{state:"success",data:t}}(h,b,S,l),l?(h.ifModified&&((w=S.getResponseHeader("Last-Modified"))&&(C.lastModified[i]=w),(w=S.getResponseHeader("etag"))&&(C.etag[i]=w)),204===e||"HEAD"===h.type?T="nocontent":304===e?T="notmodified":(T=b.state,p=b.data,l=!(d=b.error))):(d=T,!e&&T||(T="error",e<0&&(e=0))),S.status=e,S.statusText=(t||T)+"",l?m.resolveWith(g,[p,T,S]):m.rejectWith(g,[S,T,d]),S.statusCode(x),x=void 0,f&&v.trigger(l?"ajaxSuccess":"ajaxError",[S,h,l?p:d]),y.fireWith(g,[S,T]),f&&(v.trigger("ajaxComplete",[S,h]),--C.active||C.event.trigger("ajaxStop")))}return S},getJSON:function(e,t,n){return C.get(e,t,n,"json")},getScript:function(e,t){return C.get(e,void 0,t,"script")}}),C.each(["get","post"],function(e,t){C[t]=function(e,n,r,i){return y(n)&&(i=i||r,r=n,n=void 0),C.ajax(C.extend({url:e,type:t,dataType:i,data:n,success:r},C.isPlainObject(e)&&e))}}),C._evalUrl=function(e,t){return C.ajax({url:e,type:"GET",dataType:"script",cache:!0,async:!1,global:!1,converters:{"text script":function(){}},dataFilter:function(e){C.globalEval(e,t)}})},C.fn.extend({wrapAll:function(e){var t;return this[0]&&(y(e)&&(e=e.call(this[0])),t=C(e,this[0].ownerDocument).eq(0).clone(!0),this[0].parentNode&&t.insertBefore(this[0]),t.map(function(){for(var e=this;e.firstElementChild;)e=e.firstElementChild;return e}).append(this)),this},wrapInner:function(e){return y(e)?this.each(function(t){C(this).wrapInner(e.call(this,t))}):this.each(function(){var t=C(this),n=t.contents();n.length?n.wrapAll(e):t.append(e)})},wrap:function(e){var t=y(e);return this.each(function(n){C(this).wrapAll(t?e.call(this,n):e)})},unwrap:function(e){return this.parent(e).not("body").each(function(){C(this).replaceWith(this.childNodes)}),this}}),C.expr.pseudos.hidden=function(e){return!C.expr.pseudos.visible(e)},C.expr.pseudos.visible=function(e){return!!(e.offsetWidth||e.offsetHeight||e.getClientRects().length)},C.ajaxSettings.xhr=function(){try{return new n.XMLHttpRequest}catch(e){}};var Vt={0:200,1223:204},Gt=C.ajaxSettings.xhr();m.cors=!!Gt&&"withCredentials"in Gt,m.ajax=Gt=!!Gt,C.ajaxTransport(function(e){var t,r;if(m.cors||Gt&&!e.crossDomain)return{send:function(i,o){var 
a,s=e.xhr();if(s.open(e.type,e.url,e.async,e.username,e.password),e.xhrFields)for(a in e.xhrFields)s[a]=e.xhrFields[a];for(a in e.mimeType&&s.overrideMimeType&&s.overrideMimeType(e.mimeType),e.crossDomain||i["X-Requested-With"]||(i["X-Requested-With"]="XMLHttpRequest"),i)s.setRequestHeader(a,i[a]);t=function(e){return function(){t&&(t=r=s.onload=s.onerror=s.onabort=s.ontimeout=s.onreadystatechange=null,"abort"===e?s.abort():"error"===e?"number"!=typeof s.status?o(0,"error"):o(s.status,s.statusText):o(Vt[s.status]||s.status,s.statusText,"text"!==(s.responseType||"text")||"string"!=typeof s.responseText?{binary:s.response}:{text:s.responseText},s.getAllResponseHeaders()))}},s.onload=t(),r=s.onerror=s.ontimeout=t("error"),void 0!==s.onabort?s.onabort=r:s.onreadystatechange=function(){4===s.readyState&&n.setTimeout(function(){t&&r()})},t=t("abort");try{s.send(e.hasContent&&e.data||null)}catch(e){if(t)throw e}},abort:function(){t&&t()}}}),C.ajaxPrefilter(function(e){e.crossDomain&&(e.contents.script=!1)}),C.ajaxSetup({accepts:{script:"text/javascript, application/javascript, application/ecmascript, application/x-ecmascript"},contents:{script:/\b(?:java|ecma)script\b/},converters:{"text script":function(e){return C.globalEval(e),e}}}),C.ajaxPrefilter("script",function(e){void 0===e.cache&&(e.cache=!1),e.crossDomain&&(e.type="GET")}),C.ajaxTransport("script",function(e){var t,n;if(e.crossDomain||e.scriptAttrs)return{send:function(r,i){t=C("<script>").attr(e.scriptAttrs||{}).prop({charset:e.scriptCharset,src:e.url}).on("load error",n=function(e){t.remove(),n=null,e&&i("error"===e.type?404:200,e.type)}),a.head.appendChild(t[0])},abort:function(){n&&n()}}});var Yt,Qt=[],Jt=/(=)\?(?=&|$)|\?\?/;C.ajaxSetup({jsonp:"callback",jsonpCallback:function(){var e=Qt.pop()||C.expando+"_"+At++;return this[e]=!0,e}}),C.ajaxPrefilter("json jsonp",function(e,t,r){var i,o,a,s=!1!==e.jsonp&&(Jt.test(e.url)?"url":"string"==typeof e.data&&0===(e.contentType||"").indexOf("application/x-www-form-urlencoded")&&Jt.test(e.data)&&"data");if(s||"jsonp"===e.dataTypes[0])return i=e.jsonpCallback=y(e.jsonpCallback)?e.jsonpCallback():e.jsonpCallback,s?e[s]=e[s].replace(Jt,"$1"+i):!1!==e.jsonp&&(e.url+=(Nt.test(e.url)?"&":"?")+e.jsonp+"="+i),e.converters["script json"]=function(){return a||C.error(i+" was not called"),a[0]},e.dataTypes[0]="json",o=n[i],n[i]=function(){a=arguments},r.always(function(){void 0===o?C(n).removeProp(i):n[i]=o,e[i]&&(e.jsonpCallback=t.jsonpCallback,Qt.push(i)),a&&y(o)&&o(a[0]),a=o=void 0}),"script"}),m.createHTMLDocument=((Yt=a.implementation.createHTMLDocument("").body).innerHTML="<form></form><form></form>",2===Yt.childNodes.length),C.parseHTML=function(e,t,n){return"string"!=typeof e?[]:("boolean"==typeof t&&(n=t,t=!1),t||(m.createHTMLDocument?((r=(t=a.implementation.createHTMLDocument("")).createElement("base")).href=a.location.href,t.head.appendChild(r)):t=a),o=!n&&[],(i=q.exec(e))?[t.createElement(i[1])]:(i=Se([e],t,o),o&&o.length&&C(o).remove(),C.merge([],i.childNodes)));var r,i,o},C.fn.load=function(e,t,n){var r,i,o,a=this,s=e.indexOf(" ");return s>-1&&(r=bt(e.slice(s)),e=e.slice(0,s)),y(t)?(n=t,t=void 0):t&&"object"==typeof t&&(i="POST"),a.length>0&&C.ajax({url:e,type:i||"GET",dataType:"html",data:t}).done(function(e){o=arguments,a.html(r?C("<div>").append(C.parseHTML(e)).find(r):e)}).always(n&&function(e,t){a.each(function(){n.apply(this,o||[e.responseText,t,e])})}),this},C.each(["ajaxStart","ajaxStop","ajaxComplete","ajaxError","ajaxSuccess","ajaxSend"],function(e,t){C.fn[t]=function(e){return 
this.on(t,e)}}),C.expr.pseudos.animated=function(e){return C.grep(C.timers,function(t){return e===t.elem}).length},C.offset={setOffset:function(e,t,n){var r,i,o,a,s,u,l=C.css(e,"position"),c=C(e),f={};"static"===l&&(e.style.position="relative"),s=c.offset(),o=C.css(e,"top"),u=C.css(e,"left"),("absolute"===l||"fixed"===l)&&(o+u).indexOf("auto")>-1?(a=(r=c.position()).top,i=r.left):(a=parseFloat(o)||0,i=parseFloat(u)||0),y(t)&&(t=t.call(e,n,C.extend({},s))),null!=t.top&&(f.top=t.top-s.top+a),null!=t.left&&(f.left=t.left-s.left+i),"using"in t?t.using.call(e,f):c.css(f)}},C.fn.extend({offset:function(e){if(arguments.length)return void 0===e?this:this.each(function(t){C.offset.setOffset(this,e,t)});var t,n,r=this[0];return r?r.getClientRects().length?(t=r.getBoundingClientRect(),n=r.ownerDocument.defaultView,{top:t.top+n.pageYOffset,left:t.left+n.pageXOffset}):{top:0,left:0}:void 0},position:function(){if(this[0]){var e,t,n,r=this[0],i={top:0,left:0};if("fixed"===C.css(r,"position"))t=r.getBoundingClientRect();else{for(t=this.offset(),n=r.ownerDocument,e=r.offsetParent||n.documentElement;e&&(e===n.body||e===n.documentElement)&&"static"===C.css(e,"position");)e=e.parentNode;e&&e!==r&&1===e.nodeType&&((i=C(e).offset()).top+=C.css(e,"borderTopWidth",!0),i.left+=C.css(e,"borderLeftWidth",!0))}return{top:t.top-i.top-C.css(r,"marginTop",!0),left:t.left-i.left-C.css(r,"marginLeft",!0)}}},offsetParent:function(){return this.map(function(){for(var e=this.offsetParent;e&&"static"===C.css(e,"position");)e=e.offsetParent;return e||ae})}}),C.each({scrollLeft:"pageXOffset",scrollTop:"pageYOffset"},function(e,t){var n="pageYOffset"===t;C.fn[e]=function(r){return U(this,function(e,r,i){var o;if(x(e)?o=e:9===e.nodeType&&(o=e.defaultView),void 0===i)return o?o[t]:e[r];o?o.scrollTo(n?o.pageXOffset:i,n?i:o.pageYOffset):e[r]=i},e,r,arguments.length)}}),C.each(["top","left"],function(e,t){C.cssHooks[t]=Ge(m.pixelPosition,function(e,n){if(n)return n=Ve(e,t),ze.test(n)?C(e).position()[t]+"px":n})}),C.each({Height:"height",Width:"width"},function(e,t){C.each({padding:"inner"+e,content:t,"":"outer"+e},function(n,r){C.fn[r]=function(i,o){var a=arguments.length&&(n||"boolean"!=typeof i),s=n||(!0===i||!0===o?"margin":"border");return U(this,function(t,n,i){var o;return x(t)?0===r.indexOf("outer")?t["inner"+e]:t.document.documentElement["client"+e]:9===t.nodeType?(o=t.documentElement,Math.max(t.body["scroll"+e],o["scroll"+e],t.body["offset"+e],o["offset"+e],o["client"+e])):void 0===i?C.css(t,n,s):C.style(t,n,i,s)},t,a?i:void 0,a)}})}),C.each("blur focus focusin focusout resize scroll click dblclick mousedown mouseup mousemove mouseover mouseout mouseenter mouseleave change select submit keydown keypress keyup contextmenu".split(" "),function(e,t){C.fn[t]=function(e,n){return arguments.length>0?this.on(t,null,e,n):this.trigger(t)}}),C.fn.extend({hover:function(e,t){return this.mouseenter(e).mouseleave(t||e)}}),C.fn.extend({bind:function(e,t,n){return this.on(e,null,t,n)},unbind:function(e,t){return this.off(e,null,t)},delegate:function(e,t,n,r){return this.on(t,e,n,r)},undelegate:function(e,t,n){return 1===arguments.length?this.off(e,"**"):this.off(t,e||"**",n)}}),C.proxy=function(e,t){var n,r,i;if("string"==typeof t&&(n=e[t],t=e,e=n),y(e))return r=u.call(arguments,2),(i=function(){return 
e.apply(t||this,r.concat(u.call(arguments)))}).guid=e.guid=e.guid||C.guid++,i},C.holdReady=function(e){e?C.readyWait++:C.ready(!0)},C.isArray=Array.isArray,C.parseJSON=JSON.parse,C.nodeName=D,C.isFunction=y,C.isWindow=x,C.camelCase=Y,C.type=T,C.now=Date.now,C.isNumeric=function(e){var t=C.type(e);return("number"===t||"string"===t)&&!isNaN(e-parseFloat(e))},void 0===(r=function(){return C}.apply(t,[]))||(e.exports=r);var Kt=n.jQuery,Zt=n.$;return C.noConflict=function(e){return n.$===C&&(n.$=Zt),e&&n.jQuery===C&&(n.jQuery=Kt),C},i||(n.jQuery=n.$=C),C})}]);
|
sciPyFoam
|
/sciPyFoam-0.4.1.tar.gz/sciPyFoam-0.4.1/docs/source/themes/rtd/static/js/theme.js
|
theme.js
|
!function(e){var t={};function n(r){if(t[r])return t[r].exports;var i=t[r]={i:r,l:!1,exports:{}};return e[r].call(i.exports,i,i.exports,n),i.l=!0,i.exports}n.m=e,n.c=t,n.d=function(e,t,r){n.o(e,t)||Object.defineProperty(e,t,{enumerable:!0,get:r})},n.r=function(e){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},n.t=function(e,t){if(1&t&&(e=n(e)),8&t)return e;if(4&t&&"object"==typeof e&&e&&e.__esModule)return e;var r=Object.create(null);if(n.r(r),Object.defineProperty(r,"default",{enumerable:!0,value:e}),2&t&&"string"!=typeof e)for(var i in e)n.d(r,i,function(t){return e[t]}.bind(null,i));return r},n.n=function(e){var t=e&&e.__esModule?function(){return e.default}:function(){return e};return n.d(t,"a",t),t},n.o=function(e,t){return Object.prototype.hasOwnProperty.call(e,t)},n.p="",n(n.s=0)}([function(e,t,n){"use strict";n.r(t);n(1),n(2),n(3)},function(e,t,n){},function(e,t,n){},function(e,t,n){(function(){var t="undefined"!=typeof window?window.jQuery:n(4);e.exports.ThemeNav={navBar:null,win:null,winScroll:!1,winResize:!1,linkScroll:!1,winPosition:0,winHeight:null,docHeight:null,isRunning:!1,enable:function(e){var n=this;void 0===e&&(e=!0),n.isRunning||(n.isRunning=!0,t(function(t){n.init(t),n.reset(),n.win.on("hashchange",n.reset),e&&n.win.on("scroll",function(){n.linkScroll||n.winScroll||(n.winScroll=!0,requestAnimationFrame(function(){n.onScroll()}))}),n.win.on("resize",function(){n.winResize||(n.winResize=!0,requestAnimationFrame(function(){n.onResize()}))}),n.onResize()}))},enableSticky:function(){this.enable(!0)},init:function(e){e(document);var t=this;this.navBar=e("div.wy-side-scroll:first"),this.win=e(window),e(document).on("click","[data-toggle='wy-nav-top']",function(){e("[data-toggle='wy-nav-shift']").toggleClass("shift"),e("[data-toggle='rst-versions']").toggleClass("shift")}).on("click",".wy-menu-vertical .current ul li a",function(){var n=e(this);e("[data-toggle='wy-nav-shift']").removeClass("shift"),e("[data-toggle='rst-versions']").toggleClass("shift"),t.toggleCurrent(n),t.hashChange()}).on("click","[data-toggle='rst-current-version']",function(){e("[data-toggle='rst-versions']").toggleClass("shift-up")}),e("table.docutils:not(.field-list,.footnote,.citation)").wrap("<div class='wy-table-responsive'></div>"),e("table.docutils.footnote").wrap("<div class='wy-table-responsive footnote'></div>"),e("table.docutils.citation").wrap("<div class='wy-table-responsive citation'></div>"),e(".wy-menu-vertical ul").not(".simple").siblings("a").each(function(){var n=e(this);expand=e('<span class="toctree-expand"></span>'),expand.on("click",function(e){return t.toggleCurrent(n),e.stopPropagation(),!1}),n.prepend(expand)})},reset:function(){var e=encodeURI(window.location.hash)||"#";try{var t=$(".wy-menu-vertical"),n=t.find('[href="'+e+'"]');if(0===n.length){var r=$('.document [id="'+e.substring(1)+'"]').closest("div.section");0===(n=t.find('[href="#'+r.attr("id")+'"]')).length&&(n=t.find('[href="#"]'))}n.length>0&&($(".wy-menu-vertical .current").removeClass("current"),n.addClass("current"),n.closest("li.toctree-l1").addClass("current"),n.closest("li.toctree-l1").parent().addClass("current"),n.closest("li.toctree-l1").addClass("current"),n.closest("li.toctree-l2").addClass("current"),n.closest("li.toctree-l3").addClass("current"),n.closest("li.toctree-l4").addClass("current"),n[0].scrollIntoView())}catch(e){console.log("Error expanding nav for 
anchor",e)}},onScroll:function(){this.winScroll=!1;var e=this.win.scrollTop(),t=e+this.winHeight,n=this.navBar.scrollTop()+(e-this.winPosition);e<0||t>this.docHeight||(this.navBar.scrollTop(n),this.winPosition=e)},onResize:function(){this.winResize=!1,this.winHeight=this.win.height(),this.docHeight=$(document).height()},hashChange:function(){this.linkScroll=!0,this.win.one("hashchange",function(){this.linkScroll=!1})},toggleCurrent:function(e){var t=e.closest("li");t.siblings("li.current").removeClass("current"),t.siblings().find("li.current").removeClass("current"),t.find("> ul li.current").removeClass("current"),t.toggleClass("current")}},"undefined"!=typeof window&&(window.SphinxRtdTheme={Navigation:e.exports.ThemeNav,StickyNav:e.exports.ThemeNav}),function(){for(var e=0,t=["ms","moz","webkit","o"],n=0;n<t.length&&!window.requestAnimationFrame;++n)window.requestAnimationFrame=window[t[n]+"RequestAnimationFrame"],window.cancelAnimationFrame=window[t[n]+"CancelAnimationFrame"]||window[t[n]+"CancelRequestAnimationFrame"];window.requestAnimationFrame||(window.requestAnimationFrame=function(t,n){var r=(new Date).getTime(),i=Math.max(0,16-(r-e)),o=window.setTimeout(function(){t(r+i)},i);return e=r+i,o}),window.cancelAnimationFrame||(window.cancelAnimationFrame=function(e){clearTimeout(e)})}()}).call(window)},function(e,t,n){var r;
/*!
* jQuery JavaScript Library v3.4.1
* https://jquery.com/
*
* Includes Sizzle.js
* https://sizzlejs.com/
*
* Copyright JS Foundation and other contributors
* Released under the MIT license
* https://jquery.org/license
*
* Date: 2019-05-01T21:04Z
*/
!function(t,n){"use strict";"object"==typeof e.exports?e.exports=t.document?n(t,!0):function(e){if(!e.document)throw new Error("jQuery requires a window with a document");return n(e)}:n(t)}("undefined"!=typeof window?window:this,function(n,i){"use strict";var o=[],a=n.document,s=Object.getPrototypeOf,u=o.slice,l=o.concat,c=o.push,f=o.indexOf,p={},d=p.toString,h=p.hasOwnProperty,g=h.toString,v=g.call(Object),m={},y=function(e){return"function"==typeof e&&"number"!=typeof e.nodeType},x=function(e){return null!=e&&e===e.window},b={type:!0,src:!0,nonce:!0,noModule:!0};function w(e,t,n){var r,i,o=(n=n||a).createElement("script");if(o.text=e,t)for(r in b)(i=t[r]||t.getAttribute&&t.getAttribute(r))&&o.setAttribute(r,i);n.head.appendChild(o).parentNode.removeChild(o)}function T(e){return null==e?e+"":"object"==typeof e||"function"==typeof e?p[d.call(e)]||"object":typeof e}var C=function(e,t){return new C.fn.init(e,t)},S=/^[\s\uFEFF\xA0]+|[\s\uFEFF\xA0]+$/g;function k(e){var t=!!e&&"length"in e&&e.length,n=T(e);return!y(e)&&!x(e)&&("array"===n||0===t||"number"==typeof t&&t>0&&t-1 in e)}C.fn=C.prototype={jquery:"3.4.1",constructor:C,length:0,toArray:function(){return u.call(this)},get:function(e){return null==e?u.call(this):e<0?this[e+this.length]:this[e]},pushStack:function(e){var t=C.merge(this.constructor(),e);return t.prevObject=this,t},each:function(e){return C.each(this,e)},map:function(e){return this.pushStack(C.map(this,function(t,n){return e.call(t,n,t)}))},slice:function(){return this.pushStack(u.apply(this,arguments))},first:function(){return this.eq(0)},last:function(){return this.eq(-1)},eq:function(e){var t=this.length,n=+e+(e<0?t:0);return this.pushStack(n>=0&&n<t?[this[n]]:[])},end:function(){return this.prevObject||this.constructor()},push:c,sort:o.sort,splice:o.splice},C.extend=C.fn.extend=function(){var e,t,n,r,i,o,a=arguments[0]||{},s=1,u=arguments.length,l=!1;for("boolean"==typeof a&&(l=a,a=arguments[s]||{},s++),"object"==typeof a||y(a)||(a={}),s===u&&(a=this,s--);s<u;s++)if(null!=(e=arguments[s]))for(t in e)r=e[t],"__proto__"!==t&&a!==r&&(l&&r&&(C.isPlainObject(r)||(i=Array.isArray(r)))?(n=a[t],o=i&&!Array.isArray(n)?[]:i||C.isPlainObject(n)?n:{},i=!1,a[t]=C.extend(l,o,r)):void 0!==r&&(a[t]=r));return a},C.extend({expando:"jQuery"+("3.4.1"+Math.random()).replace(/\D/g,""),isReady:!0,error:function(e){throw new Error(e)},noop:function(){},isPlainObject:function(e){var t,n;return!(!e||"[object Object]"!==d.call(e))&&(!(t=s(e))||"function"==typeof(n=h.call(t,"constructor")&&t.constructor)&&g.call(n)===v)},isEmptyObject:function(e){var t;for(t in e)return!1;return!0},globalEval:function(e,t){w(e,{nonce:t&&t.nonce})},each:function(e,t){var n,r=0;if(k(e))for(n=e.length;r<n&&!1!==t.call(e[r],r,e[r]);r++);else for(r in e)if(!1===t.call(e[r],r,e[r]))break;return e},trim:function(e){return null==e?"":(e+"").replace(S,"")},makeArray:function(e,t){var n=t||[];return null!=e&&(k(Object(e))?C.merge(n,"string"==typeof e?[e]:e):c.call(n,e)),n},inArray:function(e,t,n){return null==t?-1:f.call(t,e,n)},merge:function(e,t){for(var n=+t.length,r=0,i=e.length;r<n;r++)e[i++]=t[r];return e.length=i,e},grep:function(e,t,n){for(var r=[],i=0,o=e.length,a=!n;i<o;i++)!t(e[i],i)!==a&&r.push(e[i]);return r},map:function(e,t,n){var r,i,o=0,a=[];if(k(e))for(r=e.length;o<r;o++)null!=(i=t(e[o],o,n))&&a.push(i);else for(o in e)null!=(i=t(e[o],o,n))&&a.push(i);return l.apply([],a)},guid:1,support:m}),"function"==typeof Symbol&&(C.fn[Symbol.iterator]=o[Symbol.iterator]),C.each("Boolean Number String Function Array 
Date RegExp Object Error Symbol".split(" "),function(e,t){p["[object "+t+"]"]=t.toLowerCase()});var E=
/*!
* Sizzle CSS Selector Engine v2.3.4
* https://sizzlejs.com/
*
* Copyright JS Foundation and other contributors
* Released under the MIT license
* https://js.foundation/
*
* Date: 2019-04-08
*/
function(e){var t,n,r,i,o,a,s,u,l,c,f,p,d,h,g,v,m,y,x,b="sizzle"+1*new Date,w=e.document,T=0,C=0,S=ue(),k=ue(),E=ue(),A=ue(),N=function(e,t){return e===t&&(f=!0),0},j={}.hasOwnProperty,D=[],q=D.pop,L=D.push,H=D.push,O=D.slice,R=function(e,t){for(var n=0,r=e.length;n<r;n++)if(e[n]===t)return n;return-1},P="checked|selected|async|autofocus|autoplay|controls|defer|disabled|hidden|ismap|loop|multiple|open|readonly|required|scoped",M="[\\x20\\t\\r\\n\\f]",I="(?:\\\\.|[\\w-]|[^\0-\\xa0])+",F="\\["+M+"*("+I+")(?:"+M+"*([*^$|!~]?=)"+M+"*(?:'((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\"|("+I+"))|)"+M+"*\\]",$=":("+I+")(?:\\((('((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\")|((?:\\\\.|[^\\\\()[\\]]|"+F+")*)|.*)\\)|)",W=new RegExp(M+"+","g"),B=new RegExp("^"+M+"+|((?:^|[^\\\\])(?:\\\\.)*)"+M+"+$","g"),_=new RegExp("^"+M+"*,"+M+"*"),z=new RegExp("^"+M+"*([>+~]|"+M+")"+M+"*"),U=new RegExp(M+"|>"),X=new RegExp($),V=new RegExp("^"+I+"$"),G={ID:new RegExp("^#("+I+")"),CLASS:new RegExp("^\\.("+I+")"),TAG:new RegExp("^("+I+"|[*])"),ATTR:new RegExp("^"+F),PSEUDO:new RegExp("^"+$),CHILD:new RegExp("^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\("+M+"*(even|odd|(([+-]|)(\\d*)n|)"+M+"*(?:([+-]|)"+M+"*(\\d+)|))"+M+"*\\)|)","i"),bool:new RegExp("^(?:"+P+")$","i"),needsContext:new RegExp("^"+M+"*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\("+M+"*((?:-\\d)?\\d*)"+M+"*\\)|)(?=[^-]|$)","i")},Y=/HTML$/i,Q=/^(?:input|select|textarea|button)$/i,J=/^h\d$/i,K=/^[^{]+\{\s*\[native \w/,Z=/^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/,ee=/[+~]/,te=new RegExp("\\\\([\\da-f]{1,6}"+M+"?|("+M+")|.)","ig"),ne=function(e,t,n){var r="0x"+t-65536;return r!=r||n?t:r<0?String.fromCharCode(r+65536):String.fromCharCode(r>>10|55296,1023&r|56320)},re=/([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g,ie=function(e,t){return t?"\0"===e?"�":e.slice(0,-1)+"\\"+e.charCodeAt(e.length-1).toString(16)+" ":"\\"+e},oe=function(){p()},ae=be(function(e){return!0===e.disabled&&"fieldset"===e.nodeName.toLowerCase()},{dir:"parentNode",next:"legend"});try{H.apply(D=O.call(w.childNodes),w.childNodes),D[w.childNodes.length].nodeType}catch(e){H={apply:D.length?function(e,t){L.apply(e,O.call(t))}:function(e,t){for(var n=e.length,r=0;e[n++]=t[r++];);e.length=n-1}}}function se(e,t,r,i){var o,s,l,c,f,h,m,y=t&&t.ownerDocument,T=t?t.nodeType:9;if(r=r||[],"string"!=typeof e||!e||1!==T&&9!==T&&11!==T)return r;if(!i&&((t?t.ownerDocument||t:w)!==d&&p(t),t=t||d,g)){if(11!==T&&(f=Z.exec(e)))if(o=f[1]){if(9===T){if(!(l=t.getElementById(o)))return r;if(l.id===o)return r.push(l),r}else if(y&&(l=y.getElementById(o))&&x(t,l)&&l.id===o)return r.push(l),r}else{if(f[2])return H.apply(r,t.getElementsByTagName(e)),r;if((o=f[3])&&n.getElementsByClassName&&t.getElementsByClassName)return H.apply(r,t.getElementsByClassName(o)),r}if(n.qsa&&!A[e+" "]&&(!v||!v.test(e))&&(1!==T||"object"!==t.nodeName.toLowerCase())){if(m=e,y=t,1===T&&U.test(e)){for((c=t.getAttribute("id"))?c=c.replace(re,ie):t.setAttribute("id",c=b),s=(h=a(e)).length;s--;)h[s]="#"+c+" "+xe(h[s]);m=h.join(","),y=ee.test(e)&&me(t.parentNode)||t}try{return H.apply(r,y.querySelectorAll(m)),r}catch(t){A(e,!0)}finally{c===b&&t.removeAttribute("id")}}}return u(e.replace(B,"$1"),t,r,i)}function ue(){var e=[];return function t(n,i){return e.push(n+" ")>r.cacheLength&&delete t[e.shift()],t[n+" "]=i}}function le(e){return e[b]=!0,e}function ce(e){var t=d.createElement("fieldset");try{return!!e(t)}catch(e){return!1}finally{t.parentNode&&t.parentNode.removeChild(t),t=null}}function fe(e,t){for(var 
n=e.split("|"),i=n.length;i--;)r.attrHandle[n[i]]=t}function pe(e,t){var n=t&&e,r=n&&1===e.nodeType&&1===t.nodeType&&e.sourceIndex-t.sourceIndex;if(r)return r;if(n)for(;n=n.nextSibling;)if(n===t)return-1;return e?1:-1}function de(e){return function(t){return"input"===t.nodeName.toLowerCase()&&t.type===e}}function he(e){return function(t){var n=t.nodeName.toLowerCase();return("input"===n||"button"===n)&&t.type===e}}function ge(e){return function(t){return"form"in t?t.parentNode&&!1===t.disabled?"label"in t?"label"in t.parentNode?t.parentNode.disabled===e:t.disabled===e:t.isDisabled===e||t.isDisabled!==!e&&ae(t)===e:t.disabled===e:"label"in t&&t.disabled===e}}function ve(e){return le(function(t){return t=+t,le(function(n,r){for(var i,o=e([],n.length,t),a=o.length;a--;)n[i=o[a]]&&(n[i]=!(r[i]=n[i]))})})}function me(e){return e&&void 0!==e.getElementsByTagName&&e}for(t in n=se.support={},o=se.isXML=function(e){var t=e.namespaceURI,n=(e.ownerDocument||e).documentElement;return!Y.test(t||n&&n.nodeName||"HTML")},p=se.setDocument=function(e){var t,i,a=e?e.ownerDocument||e:w;return a!==d&&9===a.nodeType&&a.documentElement?(h=(d=a).documentElement,g=!o(d),w!==d&&(i=d.defaultView)&&i.top!==i&&(i.addEventListener?i.addEventListener("unload",oe,!1):i.attachEvent&&i.attachEvent("onunload",oe)),n.attributes=ce(function(e){return e.className="i",!e.getAttribute("className")}),n.getElementsByTagName=ce(function(e){return e.appendChild(d.createComment("")),!e.getElementsByTagName("*").length}),n.getElementsByClassName=K.test(d.getElementsByClassName),n.getById=ce(function(e){return h.appendChild(e).id=b,!d.getElementsByName||!d.getElementsByName(b).length}),n.getById?(r.filter.ID=function(e){var t=e.replace(te,ne);return function(e){return e.getAttribute("id")===t}},r.find.ID=function(e,t){if(void 0!==t.getElementById&&g){var n=t.getElementById(e);return n?[n]:[]}}):(r.filter.ID=function(e){var t=e.replace(te,ne);return function(e){var n=void 0!==e.getAttributeNode&&e.getAttributeNode("id");return n&&n.value===t}},r.find.ID=function(e,t){if(void 0!==t.getElementById&&g){var n,r,i,o=t.getElementById(e);if(o){if((n=o.getAttributeNode("id"))&&n.value===e)return[o];for(i=t.getElementsByName(e),r=0;o=i[r++];)if((n=o.getAttributeNode("id"))&&n.value===e)return[o]}return[]}}),r.find.TAG=n.getElementsByTagName?function(e,t){return void 0!==t.getElementsByTagName?t.getElementsByTagName(e):n.qsa?t.querySelectorAll(e):void 0}:function(e,t){var n,r=[],i=0,o=t.getElementsByTagName(e);if("*"===e){for(;n=o[i++];)1===n.nodeType&&r.push(n);return r}return o},r.find.CLASS=n.getElementsByClassName&&function(e,t){if(void 0!==t.getElementsByClassName&&g)return t.getElementsByClassName(e)},m=[],v=[],(n.qsa=K.test(d.querySelectorAll))&&(ce(function(e){h.appendChild(e).innerHTML="<a id='"+b+"'></a><select id='"+b+"-\r\\' msallowcapture=''><option selected=''></option></select>",e.querySelectorAll("[msallowcapture^='']").length&&v.push("[*^$]="+M+"*(?:''|\"\")"),e.querySelectorAll("[selected]").length||v.push("\\["+M+"*(?:value|"+P+")"),e.querySelectorAll("[id~="+b+"-]").length||v.push("~="),e.querySelectorAll(":checked").length||v.push(":checked"),e.querySelectorAll("a#"+b+"+*").length||v.push(".#.+[+~]")}),ce(function(e){e.innerHTML="<a href='' disabled='disabled'></a><select disabled='disabled'><option/></select>";var 
t=d.createElement("input");t.setAttribute("type","hidden"),e.appendChild(t).setAttribute("name","D"),e.querySelectorAll("[name=d]").length&&v.push("name"+M+"*[*^$|!~]?="),2!==e.querySelectorAll(":enabled").length&&v.push(":enabled",":disabled"),h.appendChild(e).disabled=!0,2!==e.querySelectorAll(":disabled").length&&v.push(":enabled",":disabled"),e.querySelectorAll("*,:x"),v.push(",.*:")})),(n.matchesSelector=K.test(y=h.matches||h.webkitMatchesSelector||h.mozMatchesSelector||h.oMatchesSelector||h.msMatchesSelector))&&ce(function(e){n.disconnectedMatch=y.call(e,"*"),y.call(e,"[s!='']:x"),m.push("!=",$)}),v=v.length&&new RegExp(v.join("|")),m=m.length&&new RegExp(m.join("|")),t=K.test(h.compareDocumentPosition),x=t||K.test(h.contains)?function(e,t){var n=9===e.nodeType?e.documentElement:e,r=t&&t.parentNode;return e===r||!(!r||1!==r.nodeType||!(n.contains?n.contains(r):e.compareDocumentPosition&&16&e.compareDocumentPosition(r)))}:function(e,t){if(t)for(;t=t.parentNode;)if(t===e)return!0;return!1},N=t?function(e,t){if(e===t)return f=!0,0;var r=!e.compareDocumentPosition-!t.compareDocumentPosition;return r||(1&(r=(e.ownerDocument||e)===(t.ownerDocument||t)?e.compareDocumentPosition(t):1)||!n.sortDetached&&t.compareDocumentPosition(e)===r?e===d||e.ownerDocument===w&&x(w,e)?-1:t===d||t.ownerDocument===w&&x(w,t)?1:c?R(c,e)-R(c,t):0:4&r?-1:1)}:function(e,t){if(e===t)return f=!0,0;var n,r=0,i=e.parentNode,o=t.parentNode,a=[e],s=[t];if(!i||!o)return e===d?-1:t===d?1:i?-1:o?1:c?R(c,e)-R(c,t):0;if(i===o)return pe(e,t);for(n=e;n=n.parentNode;)a.unshift(n);for(n=t;n=n.parentNode;)s.unshift(n);for(;a[r]===s[r];)r++;return r?pe(a[r],s[r]):a[r]===w?-1:s[r]===w?1:0},d):d},se.matches=function(e,t){return se(e,null,null,t)},se.matchesSelector=function(e,t){if((e.ownerDocument||e)!==d&&p(e),n.matchesSelector&&g&&!A[t+" "]&&(!m||!m.test(t))&&(!v||!v.test(t)))try{var r=y.call(e,t);if(r||n.disconnectedMatch||e.document&&11!==e.document.nodeType)return r}catch(e){A(t,!0)}return se(t,d,null,[e]).length>0},se.contains=function(e,t){return(e.ownerDocument||e)!==d&&p(e),x(e,t)},se.attr=function(e,t){(e.ownerDocument||e)!==d&&p(e);var i=r.attrHandle[t.toLowerCase()],o=i&&j.call(r.attrHandle,t.toLowerCase())?i(e,t,!g):void 0;return void 0!==o?o:n.attributes||!g?e.getAttribute(t):(o=e.getAttributeNode(t))&&o.specified?o.value:null},se.escape=function(e){return(e+"").replace(re,ie)},se.error=function(e){throw new Error("Syntax error, unrecognized expression: "+e)},se.uniqueSort=function(e){var t,r=[],i=0,o=0;if(f=!n.detectDuplicates,c=!n.sortStable&&e.slice(0),e.sort(N),f){for(;t=e[o++];)t===e[o]&&(i=r.push(o));for(;i--;)e.splice(r[i],1)}return c=null,e},i=se.getText=function(e){var t,n="",r=0,o=e.nodeType;if(o){if(1===o||9===o||11===o){if("string"==typeof e.textContent)return e.textContent;for(e=e.firstChild;e;e=e.nextSibling)n+=i(e)}else if(3===o||4===o)return e.nodeValue}else for(;t=e[r++];)n+=i(t);return n},(r=se.selectors={cacheLength:50,createPseudo:le,match:G,attrHandle:{},find:{},relative:{">":{dir:"parentNode",first:!0}," ":{dir:"parentNode"},"+":{dir:"previousSibling",first:!0},"~":{dir:"previousSibling"}},preFilter:{ATTR:function(e){return e[1]=e[1].replace(te,ne),e[3]=(e[3]||e[4]||e[5]||"").replace(te,ne),"~="===e[2]&&(e[3]=" "+e[3]+" "),e.slice(0,4)},CHILD:function(e){return e[1]=e[1].toLowerCase(),"nth"===e[1].slice(0,3)?(e[3]||se.error(e[0]),e[4]=+(e[4]?e[5]+(e[6]||1):2*("even"===e[3]||"odd"===e[3])),e[5]=+(e[7]+e[8]||"odd"===e[3])):e[3]&&se.error(e[0]),e},PSEUDO:function(e){var t,n=!e[6]&&e[2];return 
G.CHILD.test(e[0])?null:(e[3]?e[2]=e[4]||e[5]||"":n&&X.test(n)&&(t=a(n,!0))&&(t=n.indexOf(")",n.length-t)-n.length)&&(e[0]=e[0].slice(0,t),e[2]=n.slice(0,t)),e.slice(0,3))}},filter:{TAG:function(e){var t=e.replace(te,ne).toLowerCase();return"*"===e?function(){return!0}:function(e){return e.nodeName&&e.nodeName.toLowerCase()===t}},CLASS:function(e){var t=S[e+" "];return t||(t=new RegExp("(^|"+M+")"+e+"("+M+"|$)"))&&S(e,function(e){return t.test("string"==typeof e.className&&e.className||void 0!==e.getAttribute&&e.getAttribute("class")||"")})},ATTR:function(e,t,n){return function(r){var i=se.attr(r,e);return null==i?"!="===t:!t||(i+="","="===t?i===n:"!="===t?i!==n:"^="===t?n&&0===i.indexOf(n):"*="===t?n&&i.indexOf(n)>-1:"$="===t?n&&i.slice(-n.length)===n:"~="===t?(" "+i.replace(W," ")+" ").indexOf(n)>-1:"|="===t&&(i===n||i.slice(0,n.length+1)===n+"-"))}},CHILD:function(e,t,n,r,i){var o="nth"!==e.slice(0,3),a="last"!==e.slice(-4),s="of-type"===t;return 1===r&&0===i?function(e){return!!e.parentNode}:function(t,n,u){var l,c,f,p,d,h,g=o!==a?"nextSibling":"previousSibling",v=t.parentNode,m=s&&t.nodeName.toLowerCase(),y=!u&&!s,x=!1;if(v){if(o){for(;g;){for(p=t;p=p[g];)if(s?p.nodeName.toLowerCase()===m:1===p.nodeType)return!1;h=g="only"===e&&!h&&"nextSibling"}return!0}if(h=[a?v.firstChild:v.lastChild],a&&y){for(x=(d=(l=(c=(f=(p=v)[b]||(p[b]={}))[p.uniqueID]||(f[p.uniqueID]={}))[e]||[])[0]===T&&l[1])&&l[2],p=d&&v.childNodes[d];p=++d&&p&&p[g]||(x=d=0)||h.pop();)if(1===p.nodeType&&++x&&p===t){c[e]=[T,d,x];break}}else if(y&&(x=d=(l=(c=(f=(p=t)[b]||(p[b]={}))[p.uniqueID]||(f[p.uniqueID]={}))[e]||[])[0]===T&&l[1]),!1===x)for(;(p=++d&&p&&p[g]||(x=d=0)||h.pop())&&((s?p.nodeName.toLowerCase()!==m:1!==p.nodeType)||!++x||(y&&((c=(f=p[b]||(p[b]={}))[p.uniqueID]||(f[p.uniqueID]={}))[e]=[T,x]),p!==t)););return(x-=i)===r||x%r==0&&x/r>=0}}},PSEUDO:function(e,t){var n,i=r.pseudos[e]||r.setFilters[e.toLowerCase()]||se.error("unsupported pseudo: "+e);return i[b]?i(t):i.length>1?(n=[e,e,"",t],r.setFilters.hasOwnProperty(e.toLowerCase())?le(function(e,n){for(var r,o=i(e,t),a=o.length;a--;)e[r=R(e,o[a])]=!(n[r]=o[a])}):function(e){return i(e,0,n)}):i}},pseudos:{not:le(function(e){var t=[],n=[],r=s(e.replace(B,"$1"));return r[b]?le(function(e,t,n,i){for(var o,a=r(e,null,i,[]),s=e.length;s--;)(o=a[s])&&(e[s]=!(t[s]=o))}):function(e,i,o){return t[0]=e,r(t,null,o,n),t[0]=null,!n.pop()}}),has:le(function(e){return function(t){return se(e,t).length>0}}),contains:le(function(e){return e=e.replace(te,ne),function(t){return(t.textContent||i(t)).indexOf(e)>-1}}),lang:le(function(e){return V.test(e||"")||se.error("unsupported lang: "+e),e=e.replace(te,ne).toLowerCase(),function(t){var n;do{if(n=g?t.lang:t.getAttribute("xml:lang")||t.getAttribute("lang"))return(n=n.toLowerCase())===e||0===n.indexOf(e+"-")}while((t=t.parentNode)&&1===t.nodeType);return!1}}),target:function(t){var n=e.location&&e.location.hash;return n&&n.slice(1)===t.id},root:function(e){return e===h},focus:function(e){return e===d.activeElement&&(!d.hasFocus||d.hasFocus())&&!!(e.type||e.href||~e.tabIndex)},enabled:ge(!1),disabled:ge(!0),checked:function(e){var t=e.nodeName.toLowerCase();return"input"===t&&!!e.checked||"option"===t&&!!e.selected},selected:function(e){return e.parentNode&&e.parentNode.selectedIndex,!0===e.selected},empty:function(e){for(e=e.firstChild;e;e=e.nextSibling)if(e.nodeType<6)return!1;return!0},parent:function(e){return!r.pseudos.empty(e)},header:function(e){return J.test(e.nodeName)},input:function(e){return 
Q.test(e.nodeName)},button:function(e){var t=e.nodeName.toLowerCase();return"input"===t&&"button"===e.type||"button"===t},text:function(e){var t;return"input"===e.nodeName.toLowerCase()&&"text"===e.type&&(null==(t=e.getAttribute("type"))||"text"===t.toLowerCase())},first:ve(function(){return[0]}),last:ve(function(e,t){return[t-1]}),eq:ve(function(e,t,n){return[n<0?n+t:n]}),even:ve(function(e,t){for(var n=0;n<t;n+=2)e.push(n);return e}),odd:ve(function(e,t){for(var n=1;n<t;n+=2)e.push(n);return e}),lt:ve(function(e,t,n){for(var r=n<0?n+t:n>t?t:n;--r>=0;)e.push(r);return e}),gt:ve(function(e,t,n){for(var r=n<0?n+t:n;++r<t;)e.push(r);return e})}}).pseudos.nth=r.pseudos.eq,{radio:!0,checkbox:!0,file:!0,password:!0,image:!0})r.pseudos[t]=de(t);for(t in{submit:!0,reset:!0})r.pseudos[t]=he(t);function ye(){}function xe(e){for(var t=0,n=e.length,r="";t<n;t++)r+=e[t].value;return r}function be(e,t,n){var r=t.dir,i=t.next,o=i||r,a=n&&"parentNode"===o,s=C++;return t.first?function(t,n,i){for(;t=t[r];)if(1===t.nodeType||a)return e(t,n,i);return!1}:function(t,n,u){var l,c,f,p=[T,s];if(u){for(;t=t[r];)if((1===t.nodeType||a)&&e(t,n,u))return!0}else for(;t=t[r];)if(1===t.nodeType||a)if(c=(f=t[b]||(t[b]={}))[t.uniqueID]||(f[t.uniqueID]={}),i&&i===t.nodeName.toLowerCase())t=t[r]||t;else{if((l=c[o])&&l[0]===T&&l[1]===s)return p[2]=l[2];if(c[o]=p,p[2]=e(t,n,u))return!0}return!1}}function we(e){return e.length>1?function(t,n,r){for(var i=e.length;i--;)if(!e[i](t,n,r))return!1;return!0}:e[0]}function Te(e,t,n,r,i){for(var o,a=[],s=0,u=e.length,l=null!=t;s<u;s++)(o=e[s])&&(n&&!n(o,r,i)||(a.push(o),l&&t.push(s)));return a}function Ce(e,t,n,r,i,o){return r&&!r[b]&&(r=Ce(r)),i&&!i[b]&&(i=Ce(i,o)),le(function(o,a,s,u){var l,c,f,p=[],d=[],h=a.length,g=o||function(e,t,n){for(var r=0,i=t.length;r<i;r++)se(e,t[r],n);return n}(t||"*",s.nodeType?[s]:s,[]),v=!e||!o&&t?g:Te(g,p,e,s,u),m=n?i||(o?e:h||r)?[]:a:v;if(n&&n(v,m,s,u),r)for(l=Te(m,d),r(l,[],s,u),c=l.length;c--;)(f=l[c])&&(m[d[c]]=!(v[d[c]]=f));if(o){if(i||e){if(i){for(l=[],c=m.length;c--;)(f=m[c])&&l.push(v[c]=f);i(null,m=[],l,u)}for(c=m.length;c--;)(f=m[c])&&(l=i?R(o,f):p[c])>-1&&(o[l]=!(a[l]=f))}}else m=Te(m===a?m.splice(h,m.length):m),i?i(null,a,m,u):H.apply(a,m)})}function Se(e){for(var t,n,i,o=e.length,a=r.relative[e[0].type],s=a||r.relative[" "],u=a?1:0,c=be(function(e){return e===t},s,!0),f=be(function(e){return R(t,e)>-1},s,!0),p=[function(e,n,r){var i=!a&&(r||n!==l)||((t=n).nodeType?c(e,n,r):f(e,n,r));return t=null,i}];u<o;u++)if(n=r.relative[e[u].type])p=[be(we(p),n)];else{if((n=r.filter[e[u].type].apply(null,e[u].matches))[b]){for(i=++u;i<o&&!r.relative[e[i].type];i++);return Ce(u>1&&we(p),u>1&&xe(e.slice(0,u-1).concat({value:" "===e[u-2].type?"*":""})).replace(B,"$1"),n,u<i&&Se(e.slice(u,i)),i<o&&Se(e=e.slice(i)),i<o&&xe(e))}p.push(n)}return we(p)}return ye.prototype=r.filters=r.pseudos,r.setFilters=new ye,a=se.tokenize=function(e,t){var n,i,o,a,s,u,l,c=k[e+" "];if(c)return t?0:c.slice(0);for(s=e,u=[],l=r.preFilter;s;){for(a in n&&!(i=_.exec(s))||(i&&(s=s.slice(i[0].length)||s),u.push(o=[])),n=!1,(i=z.exec(s))&&(n=i.shift(),o.push({value:n,type:i[0].replace(B," ")}),s=s.slice(n.length)),r.filter)!(i=G[a].exec(s))||l[a]&&!(i=l[a](i))||(n=i.shift(),o.push({value:n,type:a,matches:i}),s=s.slice(n.length));if(!n)break}return t?s.length:s?se.error(e):k(e,u).slice(0)},s=se.compile=function(e,t){var n,i=[],o=[],s=E[e+" "];if(!s){for(t||(t=a(e)),n=t.length;n--;)(s=Se(t[n]))[b]?i.push(s):o.push(s);(s=E(e,function(e,t){var 
n=t.length>0,i=e.length>0,o=function(o,a,s,u,c){var f,h,v,m=0,y="0",x=o&&[],b=[],w=l,C=o||i&&r.find.TAG("*",c),S=T+=null==w?1:Math.random()||.1,k=C.length;for(c&&(l=a===d||a||c);y!==k&&null!=(f=C[y]);y++){if(i&&f){for(h=0,a||f.ownerDocument===d||(p(f),s=!g);v=e[h++];)if(v(f,a||d,s)){u.push(f);break}c&&(T=S)}n&&((f=!v&&f)&&m--,o&&x.push(f))}if(m+=y,n&&y!==m){for(h=0;v=t[h++];)v(x,b,a,s);if(o){if(m>0)for(;y--;)x[y]||b[y]||(b[y]=q.call(u));b=Te(b)}H.apply(u,b),c&&!o&&b.length>0&&m+t.length>1&&se.uniqueSort(u)}return c&&(T=S,l=w),x};return n?le(o):o}(o,i))).selector=e}return s},u=se.select=function(e,t,n,i){var o,u,l,c,f,p="function"==typeof e&&e,d=!i&&a(e=p.selector||e);if(n=n||[],1===d.length){if((u=d[0]=d[0].slice(0)).length>2&&"ID"===(l=u[0]).type&&9===t.nodeType&&g&&r.relative[u[1].type]){if(!(t=(r.find.ID(l.matches[0].replace(te,ne),t)||[])[0]))return n;p&&(t=t.parentNode),e=e.slice(u.shift().value.length)}for(o=G.needsContext.test(e)?0:u.length;o--&&(l=u[o],!r.relative[c=l.type]);)if((f=r.find[c])&&(i=f(l.matches[0].replace(te,ne),ee.test(u[0].type)&&me(t.parentNode)||t))){if(u.splice(o,1),!(e=i.length&&xe(u)))return H.apply(n,i),n;break}}return(p||s(e,d))(i,t,!g,n,!t||ee.test(e)&&me(t.parentNode)||t),n},n.sortStable=b.split("").sort(N).join("")===b,n.detectDuplicates=!!f,p(),n.sortDetached=ce(function(e){return 1&e.compareDocumentPosition(d.createElement("fieldset"))}),ce(function(e){return e.innerHTML="<a href='#'></a>","#"===e.firstChild.getAttribute("href")})||fe("type|href|height|width",function(e,t,n){if(!n)return e.getAttribute(t,"type"===t.toLowerCase()?1:2)}),n.attributes&&ce(function(e){return e.innerHTML="<input/>",e.firstChild.setAttribute("value",""),""===e.firstChild.getAttribute("value")})||fe("value",function(e,t,n){if(!n&&"input"===e.nodeName.toLowerCase())return e.defaultValue}),ce(function(e){return null==e.getAttribute("disabled")})||fe(P,function(e,t,n){var r;if(!n)return!0===e[t]?t.toLowerCase():(r=e.getAttributeNode(t))&&r.specified?r.value:null}),se}(n);C.find=E,C.expr=E.selectors,C.expr[":"]=C.expr.pseudos,C.uniqueSort=C.unique=E.uniqueSort,C.text=E.getText,C.isXMLDoc=E.isXML,C.contains=E.contains,C.escapeSelector=E.escape;var A=function(e,t,n){for(var r=[],i=void 0!==n;(e=e[t])&&9!==e.nodeType;)if(1===e.nodeType){if(i&&C(e).is(n))break;r.push(e)}return r},N=function(e,t){for(var n=[];e;e=e.nextSibling)1===e.nodeType&&e!==t&&n.push(e);return n},j=C.expr.match.needsContext;function D(e,t){return e.nodeName&&e.nodeName.toLowerCase()===t.toLowerCase()}var q=/^<([a-z][^\/\0>:\x20\t\r\n\f]*)[\x20\t\r\n\f]*\/?>(?:<\/\1>|)$/i;function L(e,t,n){return y(t)?C.grep(e,function(e,r){return!!t.call(e,r,e)!==n}):t.nodeType?C.grep(e,function(e){return e===t!==n}):"string"!=typeof t?C.grep(e,function(e){return f.call(t,e)>-1!==n}):C.filter(t,e,n)}C.filter=function(e,t,n){var r=t[0];return n&&(e=":not("+e+")"),1===t.length&&1===r.nodeType?C.find.matchesSelector(r,e)?[r]:[]:C.find.matches(e,C.grep(t,function(e){return 1===e.nodeType}))},C.fn.extend({find:function(e){var t,n,r=this.length,i=this;if("string"!=typeof e)return this.pushStack(C(e).filter(function(){for(t=0;t<r;t++)if(C.contains(i[t],this))return!0}));for(n=this.pushStack([]),t=0;t<r;t++)C.find(e,i[t],n);return r>1?C.uniqueSort(n):n},filter:function(e){return this.pushStack(L(this,e||[],!1))},not:function(e){return this.pushStack(L(this,e||[],!0))},is:function(e){return!!L(this,"string"==typeof e&&j.test(e)?C(e):e||[],!1).length}});var H,O=/^(?:\s*(<[\w\W]+>)[^>]*|#([\w-]+))$/;(C.fn.init=function(e,t,n){var 
r,i;if(!e)return this;if(n=n||H,"string"==typeof e){if(!(r="<"===e[0]&&">"===e[e.length-1]&&e.length>=3?[null,e,null]:O.exec(e))||!r[1]&&t)return!t||t.jquery?(t||n).find(e):this.constructor(t).find(e);if(r[1]){if(t=t instanceof C?t[0]:t,C.merge(this,C.parseHTML(r[1],t&&t.nodeType?t.ownerDocument||t:a,!0)),q.test(r[1])&&C.isPlainObject(t))for(r in t)y(this[r])?this[r](t[r]):this.attr(r,t[r]);return this}return(i=a.getElementById(r[2]))&&(this[0]=i,this.length=1),this}return e.nodeType?(this[0]=e,this.length=1,this):y(e)?void 0!==n.ready?n.ready(e):e(C):C.makeArray(e,this)}).prototype=C.fn,H=C(a);var R=/^(?:parents|prev(?:Until|All))/,P={children:!0,contents:!0,next:!0,prev:!0};function M(e,t){for(;(e=e[t])&&1!==e.nodeType;);return e}C.fn.extend({has:function(e){var t=C(e,this),n=t.length;return this.filter(function(){for(var e=0;e<n;e++)if(C.contains(this,t[e]))return!0})},closest:function(e,t){var n,r=0,i=this.length,o=[],a="string"!=typeof e&&C(e);if(!j.test(e))for(;r<i;r++)for(n=this[r];n&&n!==t;n=n.parentNode)if(n.nodeType<11&&(a?a.index(n)>-1:1===n.nodeType&&C.find.matchesSelector(n,e))){o.push(n);break}return this.pushStack(o.length>1?C.uniqueSort(o):o)},index:function(e){return e?"string"==typeof e?f.call(C(e),this[0]):f.call(this,e.jquery?e[0]:e):this[0]&&this[0].parentNode?this.first().prevAll().length:-1},add:function(e,t){return this.pushStack(C.uniqueSort(C.merge(this.get(),C(e,t))))},addBack:function(e){return this.add(null==e?this.prevObject:this.prevObject.filter(e))}}),C.each({parent:function(e){var t=e.parentNode;return t&&11!==t.nodeType?t:null},parents:function(e){return A(e,"parentNode")},parentsUntil:function(e,t,n){return A(e,"parentNode",n)},next:function(e){return M(e,"nextSibling")},prev:function(e){return M(e,"previousSibling")},nextAll:function(e){return A(e,"nextSibling")},prevAll:function(e){return A(e,"previousSibling")},nextUntil:function(e,t,n){return A(e,"nextSibling",n)},prevUntil:function(e,t,n){return A(e,"previousSibling",n)},siblings:function(e){return N((e.parentNode||{}).firstChild,e)},children:function(e){return N(e.firstChild)},contents:function(e){return void 0!==e.contentDocument?e.contentDocument:(D(e,"template")&&(e=e.content||e),C.merge([],e.childNodes))}},function(e,t){C.fn[e]=function(n,r){var i=C.map(this,t,n);return"Until"!==e.slice(-5)&&(r=n),r&&"string"==typeof r&&(i=C.filter(r,i)),this.length>1&&(P[e]||C.uniqueSort(i),R.test(e)&&i.reverse()),this.pushStack(i)}});var I=/[^\x20\t\r\n\f]+/g;function F(e){return e}function $(e){throw e}function W(e,t,n,r){var i;try{e&&y(i=e.promise)?i.call(e).done(t).fail(n):e&&y(i=e.then)?i.call(e,t,n):t.apply(void 0,[e].slice(r))}catch(e){n.apply(void 0,[e])}}C.Callbacks=function(e){e="string"==typeof e?function(e){var t={};return C.each(e.match(I)||[],function(e,n){t[n]=!0}),t}(e):C.extend({},e);var t,n,r,i,o=[],a=[],s=-1,u=function(){for(i=i||e.once,r=t=!0;a.length;s=-1)for(n=a.shift();++s<o.length;)!1===o[s].apply(n[0],n[1])&&e.stopOnFalse&&(s=o.length,n=!1);e.memory||(n=!1),t=!1,i&&(o=n?[]:"")},l={add:function(){return o&&(n&&!t&&(s=o.length-1,a.push(n)),function t(n){C.each(n,function(n,r){y(r)?e.unique&&l.has(r)||o.push(r):r&&r.length&&"string"!==T(r)&&t(r)})}(arguments),n&&!t&&u()),this},remove:function(){return C.each(arguments,function(e,t){for(var n;(n=C.inArray(t,o,n))>-1;)o.splice(n,1),n<=s&&s--}),this},has:function(e){return e?C.inArray(e,o)>-1:o.length>0},empty:function(){return o&&(o=[]),this},disable:function(){return i=a=[],o=n="",this},disabled:function(){return!o},lock:function(){return 
i=a=[],n||t||(o=n=""),this},locked:function(){return!!i},fireWith:function(e,n){return i||(n=[e,(n=n||[]).slice?n.slice():n],a.push(n),t||u()),this},fire:function(){return l.fireWith(this,arguments),this},fired:function(){return!!r}};return l},C.extend({Deferred:function(e){var t=[["notify","progress",C.Callbacks("memory"),C.Callbacks("memory"),2],["resolve","done",C.Callbacks("once memory"),C.Callbacks("once memory"),0,"resolved"],["reject","fail",C.Callbacks("once memory"),C.Callbacks("once memory"),1,"rejected"]],r="pending",i={state:function(){return r},always:function(){return o.done(arguments).fail(arguments),this},catch:function(e){return i.then(null,e)},pipe:function(){var e=arguments;return C.Deferred(function(n){C.each(t,function(t,r){var i=y(e[r[4]])&&e[r[4]];o[r[1]](function(){var e=i&&i.apply(this,arguments);e&&y(e.promise)?e.promise().progress(n.notify).done(n.resolve).fail(n.reject):n[r[0]+"With"](this,i?[e]:arguments)})}),e=null}).promise()},then:function(e,r,i){var o=0;function a(e,t,r,i){return function(){var s=this,u=arguments,l=function(){var n,l;if(!(e<o)){if((n=r.apply(s,u))===t.promise())throw new TypeError("Thenable self-resolution");l=n&&("object"==typeof n||"function"==typeof n)&&n.then,y(l)?i?l.call(n,a(o,t,F,i),a(o,t,$,i)):(o++,l.call(n,a(o,t,F,i),a(o,t,$,i),a(o,t,F,t.notifyWith))):(r!==F&&(s=void 0,u=[n]),(i||t.resolveWith)(s,u))}},c=i?l:function(){try{l()}catch(n){C.Deferred.exceptionHook&&C.Deferred.exceptionHook(n,c.stackTrace),e+1>=o&&(r!==$&&(s=void 0,u=[n]),t.rejectWith(s,u))}};e?c():(C.Deferred.getStackHook&&(c.stackTrace=C.Deferred.getStackHook()),n.setTimeout(c))}}return C.Deferred(function(n){t[0][3].add(a(0,n,y(i)?i:F,n.notifyWith)),t[1][3].add(a(0,n,y(e)?e:F)),t[2][3].add(a(0,n,y(r)?r:$))}).promise()},promise:function(e){return null!=e?C.extend(e,i):i}},o={};return C.each(t,function(e,n){var a=n[2],s=n[5];i[n[1]]=a.add,s&&a.add(function(){r=s},t[3-e][2].disable,t[3-e][3].disable,t[0][2].lock,t[0][3].lock),a.add(n[3].fire),o[n[0]]=function(){return o[n[0]+"With"](this===o?void 0:this,arguments),this},o[n[0]+"With"]=a.fireWith}),i.promise(o),e&&e.call(o,o),o},when:function(e){var t=arguments.length,n=t,r=Array(n),i=u.call(arguments),o=C.Deferred(),a=function(e){return function(n){r[e]=this,i[e]=arguments.length>1?u.call(arguments):n,--t||o.resolveWith(r,i)}};if(t<=1&&(W(e,o.done(a(n)).resolve,o.reject,!t),"pending"===o.state()||y(i[n]&&i[n].then)))return o.then();for(;n--;)W(i[n],a(n),o.reject);return o.promise()}});var B=/^(Eval|Internal|Range|Reference|Syntax|Type|URI)Error$/;C.Deferred.exceptionHook=function(e,t){n.console&&n.console.warn&&e&&B.test(e.name)&&n.console.warn("jQuery.Deferred exception: "+e.message,e.stack,t)},C.readyException=function(e){n.setTimeout(function(){throw e})};var _=C.Deferred();function z(){a.removeEventListener("DOMContentLoaded",z),n.removeEventListener("load",z),C.ready()}C.fn.ready=function(e){return _.then(e).catch(function(e){C.readyException(e)}),this},C.extend({isReady:!1,readyWait:1,ready:function(e){(!0===e?--C.readyWait:C.isReady)||(C.isReady=!0,!0!==e&&--C.readyWait>0||_.resolveWith(a,[C]))}}),C.ready.then=_.then,"complete"===a.readyState||"loading"!==a.readyState&&!a.documentElement.doScroll?n.setTimeout(C.ready):(a.addEventListener("DOMContentLoaded",z),n.addEventListener("load",z));var U=function(e,t,n,r,i,o,a){var s=0,u=e.length,l=null==n;if("object"===T(n))for(s in i=!0,n)U(e,t,s,n[s],!0,o,a);else if(void 0!==r&&(i=!0,y(r)||(a=!0),l&&(a?(t.call(e,r),t=null):(l=t,t=function(e,t,n){return 
l.call(C(e),n)})),t))for(;s<u;s++)t(e[s],n,a?r:r.call(e[s],s,t(e[s],n)));return i?e:l?t.call(e):u?t(e[0],n):o},X=/^-ms-/,V=/-([a-z])/g;function G(e,t){return t.toUpperCase()}function Y(e){return e.replace(X,"ms-").replace(V,G)}var Q=function(e){return 1===e.nodeType||9===e.nodeType||!+e.nodeType};function J(){this.expando=C.expando+J.uid++}J.uid=1,J.prototype={cache:function(e){var t=e[this.expando];return t||(t={},Q(e)&&(e.nodeType?e[this.expando]=t:Object.defineProperty(e,this.expando,{value:t,configurable:!0}))),t},set:function(e,t,n){var r,i=this.cache(e);if("string"==typeof t)i[Y(t)]=n;else for(r in t)i[Y(r)]=t[r];return i},get:function(e,t){return void 0===t?this.cache(e):e[this.expando]&&e[this.expando][Y(t)]},access:function(e,t,n){return void 0===t||t&&"string"==typeof t&&void 0===n?this.get(e,t):(this.set(e,t,n),void 0!==n?n:t)},remove:function(e,t){var n,r=e[this.expando];if(void 0!==r){if(void 0!==t){n=(t=Array.isArray(t)?t.map(Y):(t=Y(t))in r?[t]:t.match(I)||[]).length;for(;n--;)delete r[t[n]]}(void 0===t||C.isEmptyObject(r))&&(e.nodeType?e[this.expando]=void 0:delete e[this.expando])}},hasData:function(e){var t=e[this.expando];return void 0!==t&&!C.isEmptyObject(t)}};var K=new J,Z=new J,ee=/^(?:\{[\w\W]*\}|\[[\w\W]*\])$/,te=/[A-Z]/g;function ne(e,t,n){var r;if(void 0===n&&1===e.nodeType)if(r="data-"+t.replace(te,"-$&").toLowerCase(),"string"==typeof(n=e.getAttribute(r))){try{n=function(e){return"true"===e||"false"!==e&&("null"===e?null:e===+e+""?+e:ee.test(e)?JSON.parse(e):e)}(n)}catch(e){}Z.set(e,t,n)}else n=void 0;return n}C.extend({hasData:function(e){return Z.hasData(e)||K.hasData(e)},data:function(e,t,n){return Z.access(e,t,n)},removeData:function(e,t){Z.remove(e,t)},_data:function(e,t,n){return K.access(e,t,n)},_removeData:function(e,t){K.remove(e,t)}}),C.fn.extend({data:function(e,t){var n,r,i,o=this[0],a=o&&o.attributes;if(void 0===e){if(this.length&&(i=Z.get(o),1===o.nodeType&&!K.get(o,"hasDataAttrs"))){for(n=a.length;n--;)a[n]&&0===(r=a[n].name).indexOf("data-")&&(r=Y(r.slice(5)),ne(o,r,i[r]));K.set(o,"hasDataAttrs",!0)}return i}return"object"==typeof e?this.each(function(){Z.set(this,e)}):U(this,function(t){var n;if(o&&void 0===t)return void 0!==(n=Z.get(o,e))?n:void 0!==(n=ne(o,e))?n:void 0;this.each(function(){Z.set(this,e,t)})},null,t,arguments.length>1,null,!0)},removeData:function(e){return this.each(function(){Z.remove(this,e)})}}),C.extend({queue:function(e,t,n){var r;if(e)return t=(t||"fx")+"queue",r=K.get(e,t),n&&(!r||Array.isArray(n)?r=K.access(e,t,C.makeArray(n)):r.push(n)),r||[]},dequeue:function(e,t){t=t||"fx";var n=C.queue(e,t),r=n.length,i=n.shift(),o=C._queueHooks(e,t);"inprogress"===i&&(i=n.shift(),r--),i&&("fx"===t&&n.unshift("inprogress"),delete o.stop,i.call(e,function(){C.dequeue(e,t)},o)),!r&&o&&o.empty.fire()},_queueHooks:function(e,t){var n=t+"queueHooks";return K.get(e,n)||K.access(e,n,{empty:C.Callbacks("once memory").add(function(){K.remove(e,[t+"queue",n])})})}}),C.fn.extend({queue:function(e,t){var n=2;return"string"!=typeof e&&(t=e,e="fx",n--),arguments.length<n?C.queue(this[0],e):void 0===t?this:this.each(function(){var n=C.queue(this,e,t);C._queueHooks(this,e),"fx"===e&&"inprogress"!==n[0]&&C.dequeue(this,e)})},dequeue:function(e){return this.each(function(){C.dequeue(this,e)})},clearQueue:function(e){return this.queue(e||"fx",[])},promise:function(e,t){var n,r=1,i=C.Deferred(),o=this,a=this.length,s=function(){--r||i.resolveWith(o,[o])};for("string"!=typeof e&&(t=e,e=void 
0),e=e||"fx";a--;)(n=K.get(o[a],e+"queueHooks"))&&n.empty&&(r++,n.empty.add(s));return s(),i.promise(t)}});var re=/[+-]?(?:\d*\.|)\d+(?:[eE][+-]?\d+|)/.source,ie=new RegExp("^(?:([+-])=|)("+re+")([a-z%]*)$","i"),oe=["Top","Right","Bottom","Left"],ae=a.documentElement,se=function(e){return C.contains(e.ownerDocument,e)},ue={composed:!0};ae.getRootNode&&(se=function(e){return C.contains(e.ownerDocument,e)||e.getRootNode(ue)===e.ownerDocument});var le=function(e,t){return"none"===(e=t||e).style.display||""===e.style.display&&se(e)&&"none"===C.css(e,"display")},ce=function(e,t,n,r){var i,o,a={};for(o in t)a[o]=e.style[o],e.style[o]=t[o];for(o in i=n.apply(e,r||[]),t)e.style[o]=a[o];return i};function fe(e,t,n,r){var i,o,a=20,s=r?function(){return r.cur()}:function(){return C.css(e,t,"")},u=s(),l=n&&n[3]||(C.cssNumber[t]?"":"px"),c=e.nodeType&&(C.cssNumber[t]||"px"!==l&&+u)&&ie.exec(C.css(e,t));if(c&&c[3]!==l){for(u/=2,l=l||c[3],c=+u||1;a--;)C.style(e,t,c+l),(1-o)*(1-(o=s()/u||.5))<=0&&(a=0),c/=o;c*=2,C.style(e,t,c+l),n=n||[]}return n&&(c=+c||+u||0,i=n[1]?c+(n[1]+1)*n[2]:+n[2],r&&(r.unit=l,r.start=c,r.end=i)),i}var pe={};function de(e){var t,n=e.ownerDocument,r=e.nodeName,i=pe[r];return i||(t=n.body.appendChild(n.createElement(r)),i=C.css(t,"display"),t.parentNode.removeChild(t),"none"===i&&(i="block"),pe[r]=i,i)}function he(e,t){for(var n,r,i=[],o=0,a=e.length;o<a;o++)(r=e[o]).style&&(n=r.style.display,t?("none"===n&&(i[o]=K.get(r,"display")||null,i[o]||(r.style.display="")),""===r.style.display&&le(r)&&(i[o]=de(r))):"none"!==n&&(i[o]="none",K.set(r,"display",n)));for(o=0;o<a;o++)null!=i[o]&&(e[o].style.display=i[o]);return e}C.fn.extend({show:function(){return he(this,!0)},hide:function(){return he(this)},toggle:function(e){return"boolean"==typeof e?e?this.show():this.hide():this.each(function(){le(this)?C(this).show():C(this).hide()})}});var ge=/^(?:checkbox|radio)$/i,ve=/<([a-z][^\/\0>\x20\t\r\n\f]*)/i,me=/^$|^module$|\/(?:java|ecma)script/i,ye={option:[1,"<select multiple='multiple'>","</select>"],thead:[1,"<table>","</table>"],col:[2,"<table><colgroup>","</colgroup></table>"],tr:[2,"<table><tbody>","</tbody></table>"],td:[3,"<table><tbody><tr>","</tr></tbody></table>"],_default:[0,"",""]};function xe(e,t){var n;return n=void 0!==e.getElementsByTagName?e.getElementsByTagName(t||"*"):void 0!==e.querySelectorAll?e.querySelectorAll(t||"*"):[],void 0===t||t&&D(e,t)?C.merge([e],n):n}function be(e,t){for(var n=0,r=e.length;n<r;n++)K.set(e[n],"globalEval",!t||K.get(t[n],"globalEval"))}ye.optgroup=ye.option,ye.tbody=ye.tfoot=ye.colgroup=ye.caption=ye.thead,ye.th=ye.td;var we,Te,Ce=/<|&#?\w+;/;function Se(e,t,n,r,i){for(var o,a,s,u,l,c,f=t.createDocumentFragment(),p=[],d=0,h=e.length;d<h;d++)if((o=e[d])||0===o)if("object"===T(o))C.merge(p,o.nodeType?[o]:o);else if(Ce.test(o)){for(a=a||f.appendChild(t.createElement("div")),s=(ve.exec(o)||["",""])[1].toLowerCase(),u=ye[s]||ye._default,a.innerHTML=u[1]+C.htmlPrefilter(o)+u[2],c=u[0];c--;)a=a.lastChild;C.merge(p,a.childNodes),(a=f.firstChild).textContent=""}else p.push(t.createTextNode(o));for(f.textContent="",d=0;o=p[d++];)if(r&&C.inArray(o,r)>-1)i&&i.push(o);else if(l=se(o),a=xe(f.appendChild(o),"script"),l&&be(a),n)for(c=0;o=a[c++];)me.test(o.type||"")&&n.push(o);return 
f}we=a.createDocumentFragment().appendChild(a.createElement("div")),(Te=a.createElement("input")).setAttribute("type","radio"),Te.setAttribute("checked","checked"),Te.setAttribute("name","t"),we.appendChild(Te),m.checkClone=we.cloneNode(!0).cloneNode(!0).lastChild.checked,we.innerHTML="<textarea>x</textarea>",m.noCloneChecked=!!we.cloneNode(!0).lastChild.defaultValue;var ke=/^key/,Ee=/^(?:mouse|pointer|contextmenu|drag|drop)|click/,Ae=/^([^.]*)(?:\.(.+)|)/;function Ne(){return!0}function je(){return!1}function De(e,t){return e===function(){try{return a.activeElement}catch(e){}}()==("focus"===t)}function qe(e,t,n,r,i,o){var a,s;if("object"==typeof t){for(s in"string"!=typeof n&&(r=r||n,n=void 0),t)qe(e,s,n,r,t[s],o);return e}if(null==r&&null==i?(i=n,r=n=void 0):null==i&&("string"==typeof n?(i=r,r=void 0):(i=r,r=n,n=void 0)),!1===i)i=je;else if(!i)return e;return 1===o&&(a=i,(i=function(e){return C().off(e),a.apply(this,arguments)}).guid=a.guid||(a.guid=C.guid++)),e.each(function(){C.event.add(this,t,i,r,n)})}function Le(e,t,n){n?(K.set(e,t,!1),C.event.add(e,t,{namespace:!1,handler:function(e){var r,i,o=K.get(this,t);if(1&e.isTrigger&&this[t]){if(o.length)(C.event.special[t]||{}).delegateType&&e.stopPropagation();else if(o=u.call(arguments),K.set(this,t,o),r=n(this,t),this[t](),o!==(i=K.get(this,t))||r?K.set(this,t,!1):i={},o!==i)return e.stopImmediatePropagation(),e.preventDefault(),i.value}else o.length&&(K.set(this,t,{value:C.event.trigger(C.extend(o[0],C.Event.prototype),o.slice(1),this)}),e.stopImmediatePropagation())}})):void 0===K.get(e,t)&&C.event.add(e,t,Ne)}C.event={global:{},add:function(e,t,n,r,i){var o,a,s,u,l,c,f,p,d,h,g,v=K.get(e);if(v)for(n.handler&&(n=(o=n).handler,i=o.selector),i&&C.find.matchesSelector(ae,i),n.guid||(n.guid=C.guid++),(u=v.events)||(u=v.events={}),(a=v.handle)||(a=v.handle=function(t){return void 0!==C&&C.event.triggered!==t.type?C.event.dispatch.apply(e,arguments):void 0}),l=(t=(t||"").match(I)||[""]).length;l--;)d=g=(s=Ae.exec(t[l])||[])[1],h=(s[2]||"").split(".").sort(),d&&(f=C.event.special[d]||{},d=(i?f.delegateType:f.bindType)||d,f=C.event.special[d]||{},c=C.extend({type:d,origType:g,data:r,handler:n,guid:n.guid,selector:i,needsContext:i&&C.expr.match.needsContext.test(i),namespace:h.join(".")},o),(p=u[d])||((p=u[d]=[]).delegateCount=0,f.setup&&!1!==f.setup.call(e,r,h,a)||e.addEventListener&&e.addEventListener(d,a)),f.add&&(f.add.call(e,c),c.handler.guid||(c.handler.guid=n.guid)),i?p.splice(p.delegateCount++,0,c):p.push(c),C.event.global[d]=!0)},remove:function(e,t,n,r,i){var o,a,s,u,l,c,f,p,d,h,g,v=K.hasData(e)&&K.get(e);if(v&&(u=v.events)){for(l=(t=(t||"").match(I)||[""]).length;l--;)if(d=g=(s=Ae.exec(t[l])||[])[1],h=(s[2]||"").split(".").sort(),d){for(f=C.event.special[d]||{},p=u[d=(r?f.delegateType:f.bindType)||d]||[],s=s[2]&&new RegExp("(^|\\.)"+h.join("\\.(?:.*\\.|)")+"(\\.|$)"),a=o=p.length;o--;)c=p[o],!i&&g!==c.origType||n&&n.guid!==c.guid||s&&!s.test(c.namespace)||r&&r!==c.selector&&("**"!==r||!c.selector)||(p.splice(o,1),c.selector&&p.delegateCount--,f.remove&&f.remove.call(e,c));a&&!p.length&&(f.teardown&&!1!==f.teardown.call(e,h,v.handle)||C.removeEvent(e,d,v.handle),delete u[d])}else for(d in u)C.event.remove(e,d+t[l],n,r,!0);C.isEmptyObject(u)&&K.remove(e,"handle events")}},dispatch:function(e){var t,n,r,i,o,a,s=C.event.fix(e),u=new 
Array(arguments.length),l=(K.get(this,"events")||{})[s.type]||[],c=C.event.special[s.type]||{};for(u[0]=s,t=1;t<arguments.length;t++)u[t]=arguments[t];if(s.delegateTarget=this,!c.preDispatch||!1!==c.preDispatch.call(this,s)){for(a=C.event.handlers.call(this,s,l),t=0;(i=a[t++])&&!s.isPropagationStopped();)for(s.currentTarget=i.elem,n=0;(o=i.handlers[n++])&&!s.isImmediatePropagationStopped();)s.rnamespace&&!1!==o.namespace&&!s.rnamespace.test(o.namespace)||(s.handleObj=o,s.data=o.data,void 0!==(r=((C.event.special[o.origType]||{}).handle||o.handler).apply(i.elem,u))&&!1===(s.result=r)&&(s.preventDefault(),s.stopPropagation()));return c.postDispatch&&c.postDispatch.call(this,s),s.result}},handlers:function(e,t){var n,r,i,o,a,s=[],u=t.delegateCount,l=e.target;if(u&&l.nodeType&&!("click"===e.type&&e.button>=1))for(;l!==this;l=l.parentNode||this)if(1===l.nodeType&&("click"!==e.type||!0!==l.disabled)){for(o=[],a={},n=0;n<u;n++)void 0===a[i=(r=t[n]).selector+" "]&&(a[i]=r.needsContext?C(i,this).index(l)>-1:C.find(i,this,null,[l]).length),a[i]&&o.push(r);o.length&&s.push({elem:l,handlers:o})}return l=this,u<t.length&&s.push({elem:l,handlers:t.slice(u)}),s},addProp:function(e,t){Object.defineProperty(C.Event.prototype,e,{enumerable:!0,configurable:!0,get:y(t)?function(){if(this.originalEvent)return t(this.originalEvent)}:function(){if(this.originalEvent)return this.originalEvent[e]},set:function(t){Object.defineProperty(this,e,{enumerable:!0,configurable:!0,writable:!0,value:t})}})},fix:function(e){return e[C.expando]?e:new C.Event(e)},special:{load:{noBubble:!0},click:{setup:function(e){var t=this||e;return ge.test(t.type)&&t.click&&D(t,"input")&&Le(t,"click",Ne),!1},trigger:function(e){var t=this||e;return ge.test(t.type)&&t.click&&D(t,"input")&&Le(t,"click"),!0},_default:function(e){var t=e.target;return ge.test(t.type)&&t.click&&D(t,"input")&&K.get(t,"click")||D(t,"a")}},beforeunload:{postDispatch:function(e){void 0!==e.result&&e.originalEvent&&(e.originalEvent.returnValue=e.result)}}}},C.removeEvent=function(e,t,n){e.removeEventListener&&e.removeEventListener(t,n)},C.Event=function(e,t){if(!(this instanceof C.Event))return new C.Event(e,t);e&&e.type?(this.originalEvent=e,this.type=e.type,this.isDefaultPrevented=e.defaultPrevented||void 0===e.defaultPrevented&&!1===e.returnValue?Ne:je,this.target=e.target&&3===e.target.nodeType?e.target.parentNode:e.target,this.currentTarget=e.currentTarget,this.relatedTarget=e.relatedTarget):this.type=e,t&&C.extend(this,t),this.timeStamp=e&&e.timeStamp||Date.now(),this[C.expando]=!0},C.Event.prototype={constructor:C.Event,isDefaultPrevented:je,isPropagationStopped:je,isImmediatePropagationStopped:je,isSimulated:!1,preventDefault:function(){var e=this.originalEvent;this.isDefaultPrevented=Ne,e&&!this.isSimulated&&e.preventDefault()},stopPropagation:function(){var e=this.originalEvent;this.isPropagationStopped=Ne,e&&!this.isSimulated&&e.stopPropagation()},stopImmediatePropagation:function(){var e=this.originalEvent;this.isImmediatePropagationStopped=Ne,e&&!this.isSimulated&&e.stopImmediatePropagation(),this.stopPropagation()}},C.each({altKey:!0,bubbles:!0,cancelable:!0,changedTouches:!0,ctrlKey:!0,detail:!0,eventPhase:!0,metaKey:!0,pageX:!0,pageY:!0,shiftKey:!0,view:!0,char:!0,code:!0,charCode:!0,key:!0,keyCode:!0,button:!0,buttons:!0,clientX:!0,clientY:!0,offsetX:!0,offsetY:!0,pointerId:!0,pointerType:!0,screenX:!0,screenY:!0,targetTouches:!0,toElement:!0,touches:!0,which:function(e){var t=e.button;return 
null==e.which&&ke.test(e.type)?null!=e.charCode?e.charCode:e.keyCode:!e.which&&void 0!==t&&Ee.test(e.type)?1&t?1:2&t?3:4&t?2:0:e.which}},C.event.addProp),C.each({focus:"focusin",blur:"focusout"},function(e,t){C.event.special[e]={setup:function(){return Le(this,e,De),!1},trigger:function(){return Le(this,e),!0},delegateType:t}}),C.each({mouseenter:"mouseover",mouseleave:"mouseout",pointerenter:"pointerover",pointerleave:"pointerout"},function(e,t){C.event.special[e]={delegateType:t,bindType:t,handle:function(e){var n,r=this,i=e.relatedTarget,o=e.handleObj;return i&&(i===r||C.contains(r,i))||(e.type=o.origType,n=o.handler.apply(this,arguments),e.type=t),n}}}),C.fn.extend({on:function(e,t,n,r){return qe(this,e,t,n,r)},one:function(e,t,n,r){return qe(this,e,t,n,r,1)},off:function(e,t,n){var r,i;if(e&&e.preventDefault&&e.handleObj)return r=e.handleObj,C(e.delegateTarget).off(r.namespace?r.origType+"."+r.namespace:r.origType,r.selector,r.handler),this;if("object"==typeof e){for(i in e)this.off(i,t,e[i]);return this}return!1!==t&&"function"!=typeof t||(n=t,t=void 0),!1===n&&(n=je),this.each(function(){C.event.remove(this,e,n,t)})}});var He=/<(?!area|br|col|embed|hr|img|input|link|meta|param)(([a-z][^\/\0>\x20\t\r\n\f]*)[^>]*)\/>/gi,Oe=/<script|<style|<link/i,Re=/checked\s*(?:[^=]|=\s*.checked.)/i,Pe=/^\s*<!(?:\[CDATA\[|--)|(?:\]\]|--)>\s*$/g;function Me(e,t){return D(e,"table")&&D(11!==t.nodeType?t:t.firstChild,"tr")&&C(e).children("tbody")[0]||e}function Ie(e){return e.type=(null!==e.getAttribute("type"))+"/"+e.type,e}function Fe(e){return"true/"===(e.type||"").slice(0,5)?e.type=e.type.slice(5):e.removeAttribute("type"),e}function $e(e,t){var n,r,i,o,a,s,u,l;if(1===t.nodeType){if(K.hasData(e)&&(o=K.access(e),a=K.set(t,o),l=o.events))for(i in delete a.handle,a.events={},l)for(n=0,r=l[i].length;n<r;n++)C.event.add(t,i,l[i][n]);Z.hasData(e)&&(s=Z.access(e),u=C.extend({},s),Z.set(t,u))}}function We(e,t){var n=t.nodeName.toLowerCase();"input"===n&&ge.test(e.type)?t.checked=e.checked:"input"!==n&&"textarea"!==n||(t.defaultValue=e.defaultValue)}function Be(e,t,n,r){t=l.apply([],t);var i,o,a,s,u,c,f=0,p=e.length,d=p-1,h=t[0],g=y(h);if(g||p>1&&"string"==typeof h&&!m.checkClone&&Re.test(h))return e.each(function(i){var o=e.eq(i);g&&(t[0]=h.call(this,i,o.html())),Be(o,t,n,r)});if(p&&(o=(i=Se(t,e[0].ownerDocument,!1,e,r)).firstChild,1===i.childNodes.length&&(i=o),o||r)){for(s=(a=C.map(xe(i,"script"),Ie)).length;f<p;f++)u=i,f!==d&&(u=C.clone(u,!0,!0),s&&C.merge(a,xe(u,"script"))),n.call(e[f],u,f);if(s)for(c=a[a.length-1].ownerDocument,C.map(a,Fe),f=0;f<s;f++)u=a[f],me.test(u.type||"")&&!K.access(u,"globalEval")&&C.contains(c,u)&&(u.src&&"module"!==(u.type||"").toLowerCase()?C._evalUrl&&!u.noModule&&C._evalUrl(u.src,{nonce:u.nonce||u.getAttribute("nonce")}):w(u.textContent.replace(Pe,""),u,c))}return e}function _e(e,t,n){for(var r,i=t?C.filter(t,e):e,o=0;null!=(r=i[o]);o++)n||1!==r.nodeType||C.cleanData(xe(r)),r.parentNode&&(n&&se(r)&&be(xe(r,"script")),r.parentNode.removeChild(r));return e}C.extend({htmlPrefilter:function(e){return e.replace(He,"<$1></$2>")},clone:function(e,t,n){var r,i,o,a,s=e.cloneNode(!0),u=se(e);if(!(m.noCloneChecked||1!==e.nodeType&&11!==e.nodeType||C.isXMLDoc(e)))for(a=xe(s),r=0,i=(o=xe(e)).length;r<i;r++)We(o[r],a[r]);if(t)if(n)for(o=o||xe(e),a=a||xe(s),r=0,i=o.length;r<i;r++)$e(o[r],a[r]);else $e(e,s);return(a=xe(s,"script")).length>0&&be(a,!u&&xe(e,"script")),s},cleanData:function(e){for(var t,n,r,i=C.event.special,o=0;void 
0!==(n=e[o]);o++)if(Q(n)){if(t=n[K.expando]){if(t.events)for(r in t.events)i[r]?C.event.remove(n,r):C.removeEvent(n,r,t.handle);n[K.expando]=void 0}n[Z.expando]&&(n[Z.expando]=void 0)}}}),C.fn.extend({detach:function(e){return _e(this,e,!0)},remove:function(e){return _e(this,e)},text:function(e){return U(this,function(e){return void 0===e?C.text(this):this.empty().each(function(){1!==this.nodeType&&11!==this.nodeType&&9!==this.nodeType||(this.textContent=e)})},null,e,arguments.length)},append:function(){return Be(this,arguments,function(e){1!==this.nodeType&&11!==this.nodeType&&9!==this.nodeType||Me(this,e).appendChild(e)})},prepend:function(){return Be(this,arguments,function(e){if(1===this.nodeType||11===this.nodeType||9===this.nodeType){var t=Me(this,e);t.insertBefore(e,t.firstChild)}})},before:function(){return Be(this,arguments,function(e){this.parentNode&&this.parentNode.insertBefore(e,this)})},after:function(){return Be(this,arguments,function(e){this.parentNode&&this.parentNode.insertBefore(e,this.nextSibling)})},empty:function(){for(var e,t=0;null!=(e=this[t]);t++)1===e.nodeType&&(C.cleanData(xe(e,!1)),e.textContent="");return this},clone:function(e,t){return e=null!=e&&e,t=null==t?e:t,this.map(function(){return C.clone(this,e,t)})},html:function(e){return U(this,function(e){var t=this[0]||{},n=0,r=this.length;if(void 0===e&&1===t.nodeType)return t.innerHTML;if("string"==typeof e&&!Oe.test(e)&&!ye[(ve.exec(e)||["",""])[1].toLowerCase()]){e=C.htmlPrefilter(e);try{for(;n<r;n++)1===(t=this[n]||{}).nodeType&&(C.cleanData(xe(t,!1)),t.innerHTML=e);t=0}catch(e){}}t&&this.empty().append(e)},null,e,arguments.length)},replaceWith:function(){var e=[];return Be(this,arguments,function(t){var n=this.parentNode;C.inArray(this,e)<0&&(C.cleanData(xe(this)),n&&n.replaceChild(t,this))},e)}}),C.each({appendTo:"append",prependTo:"prepend",insertBefore:"before",insertAfter:"after",replaceAll:"replaceWith"},function(e,t){C.fn[e]=function(e){for(var n,r=[],i=C(e),o=i.length-1,a=0;a<=o;a++)n=a===o?this:this.clone(!0),C(i[a])[t](n),c.apply(r,n.get());return this.pushStack(r)}});var ze=new RegExp("^("+re+")(?!px)[a-z%]+$","i"),Ue=function(e){var t=e.ownerDocument.defaultView;return t&&t.opener||(t=n),t.getComputedStyle(e)},Xe=new RegExp(oe.join("|"),"i");function Ve(e,t,n){var r,i,o,a,s=e.style;return(n=n||Ue(e))&&(""!==(a=n.getPropertyValue(t)||n[t])||se(e)||(a=C.style(e,t)),!m.pixelBoxStyles()&&ze.test(a)&&Xe.test(t)&&(r=s.width,i=s.minWidth,o=s.maxWidth,s.minWidth=s.maxWidth=s.width=a,a=n.width,s.width=r,s.minWidth=i,s.maxWidth=o)),void 0!==a?a+"":a}function Ge(e,t){return{get:function(){if(!e())return(this.get=t).apply(this,arguments);delete this.get}}}!function(){function e(){if(c){l.style.cssText="position:absolute;left:-11111px;width:60px;margin-top:1px;padding:0;border:0",c.style.cssText="position:relative;display:block;box-sizing:border-box;overflow:scroll;margin:auto;border:1px;padding:1px;width:60%;top:1%",ae.appendChild(l).appendChild(c);var e=n.getComputedStyle(c);r="1%"!==e.top,u=12===t(e.marginLeft),c.style.right="60%",s=36===t(e.right),i=36===t(e.width),c.style.position="absolute",o=12===t(c.offsetWidth/3),ae.removeChild(l),c=null}}function t(e){return Math.round(parseFloat(e))}var r,i,o,s,u,l=a.createElement("div"),c=a.createElement("div");c.style&&(c.style.backgroundClip="content-box",c.cloneNode(!0).style.backgroundClip="",m.clearCloneStyle="content-box"===c.style.backgroundClip,C.extend(m,{boxSizingReliable:function(){return e(),i},pixelBoxStyles:function(){return 
e(),s},pixelPosition:function(){return e(),r},reliableMarginLeft:function(){return e(),u},scrollboxSize:function(){return e(),o}}))}();var Ye=["Webkit","Moz","ms"],Qe=a.createElement("div").style,Je={};function Ke(e){var t=C.cssProps[e]||Je[e];return t||(e in Qe?e:Je[e]=function(e){for(var t=e[0].toUpperCase()+e.slice(1),n=Ye.length;n--;)if((e=Ye[n]+t)in Qe)return e}(e)||e)}var Ze=/^(none|table(?!-c[ea]).+)/,et=/^--/,tt={position:"absolute",visibility:"hidden",display:"block"},nt={letterSpacing:"0",fontWeight:"400"};function rt(e,t,n){var r=ie.exec(t);return r?Math.max(0,r[2]-(n||0))+(r[3]||"px"):t}function it(e,t,n,r,i,o){var a="width"===t?1:0,s=0,u=0;if(n===(r?"border":"content"))return 0;for(;a<4;a+=2)"margin"===n&&(u+=C.css(e,n+oe[a],!0,i)),r?("content"===n&&(u-=C.css(e,"padding"+oe[a],!0,i)),"margin"!==n&&(u-=C.css(e,"border"+oe[a]+"Width",!0,i))):(u+=C.css(e,"padding"+oe[a],!0,i),"padding"!==n?u+=C.css(e,"border"+oe[a]+"Width",!0,i):s+=C.css(e,"border"+oe[a]+"Width",!0,i));return!r&&o>=0&&(u+=Math.max(0,Math.ceil(e["offset"+t[0].toUpperCase()+t.slice(1)]-o-u-s-.5))||0),u}function ot(e,t,n){var r=Ue(e),i=(!m.boxSizingReliable()||n)&&"border-box"===C.css(e,"boxSizing",!1,r),o=i,a=Ve(e,t,r),s="offset"+t[0].toUpperCase()+t.slice(1);if(ze.test(a)){if(!n)return a;a="auto"}return(!m.boxSizingReliable()&&i||"auto"===a||!parseFloat(a)&&"inline"===C.css(e,"display",!1,r))&&e.getClientRects().length&&(i="border-box"===C.css(e,"boxSizing",!1,r),(o=s in e)&&(a=e[s])),(a=parseFloat(a)||0)+it(e,t,n||(i?"border":"content"),o,r,a)+"px"}function at(e,t,n,r,i){return new at.prototype.init(e,t,n,r,i)}C.extend({cssHooks:{opacity:{get:function(e,t){if(t){var n=Ve(e,"opacity");return""===n?"1":n}}}},cssNumber:{animationIterationCount:!0,columnCount:!0,fillOpacity:!0,flexGrow:!0,flexShrink:!0,fontWeight:!0,gridArea:!0,gridColumn:!0,gridColumnEnd:!0,gridColumnStart:!0,gridRow:!0,gridRowEnd:!0,gridRowStart:!0,lineHeight:!0,opacity:!0,order:!0,orphans:!0,widows:!0,zIndex:!0,zoom:!0},cssProps:{},style:function(e,t,n,r){if(e&&3!==e.nodeType&&8!==e.nodeType&&e.style){var i,o,a,s=Y(t),u=et.test(t),l=e.style;if(u||(t=Ke(s)),a=C.cssHooks[t]||C.cssHooks[s],void 0===n)return a&&"get"in a&&void 0!==(i=a.get(e,!1,r))?i:l[t];"string"===(o=typeof n)&&(i=ie.exec(n))&&i[1]&&(n=fe(e,t,i),o="number"),null!=n&&n==n&&("number"!==o||u||(n+=i&&i[3]||(C.cssNumber[s]?"":"px")),m.clearCloneStyle||""!==n||0!==t.indexOf("background")||(l[t]="inherit"),a&&"set"in a&&void 0===(n=a.set(e,n,r))||(u?l.setProperty(t,n):l[t]=n))}},css:function(e,t,n,r){var i,o,a,s=Y(t);return et.test(t)||(t=Ke(s)),(a=C.cssHooks[t]||C.cssHooks[s])&&"get"in a&&(i=a.get(e,!0,n)),void 0===i&&(i=Ve(e,t,r)),"normal"===i&&t in nt&&(i=nt[t]),""===n||n?(o=parseFloat(i),!0===n||isFinite(o)?o||0:i):i}}),C.each(["height","width"],function(e,t){C.cssHooks[t]={get:function(e,n,r){if(n)return!Ze.test(C.css(e,"display"))||e.getClientRects().length&&e.getBoundingClientRect().width?ot(e,t,r):ce(e,tt,function(){return ot(e,t,r)})},set:function(e,n,r){var i,o=Ue(e),a=!m.scrollboxSize()&&"absolute"===o.position,s=(a||r)&&"border-box"===C.css(e,"boxSizing",!1,o),u=r?it(e,t,r,s,o):0;return s&&a&&(u-=Math.ceil(e["offset"+t[0].toUpperCase()+t.slice(1)]-parseFloat(o[t])-it(e,t,"border",!1,o)-.5)),u&&(i=ie.exec(n))&&"px"!==(i[3]||"px")&&(e.style[t]=n,n=C.css(e,t)),rt(0,n,u)}}}),C.cssHooks.marginLeft=Ge(m.reliableMarginLeft,function(e,t){if(t)return(parseFloat(Ve(e,"marginLeft"))||e.getBoundingClientRect().left-ce(e,{marginLeft:0},function(){return 
e.getBoundingClientRect().left}))+"px"}),C.each({margin:"",padding:"",border:"Width"},function(e,t){C.cssHooks[e+t]={expand:function(n){for(var r=0,i={},o="string"==typeof n?n.split(" "):[n];r<4;r++)i[e+oe[r]+t]=o[r]||o[r-2]||o[0];return i}},"margin"!==e&&(C.cssHooks[e+t].set=rt)}),C.fn.extend({css:function(e,t){return U(this,function(e,t,n){var r,i,o={},a=0;if(Array.isArray(t)){for(r=Ue(e),i=t.length;a<i;a++)o[t[a]]=C.css(e,t[a],!1,r);return o}return void 0!==n?C.style(e,t,n):C.css(e,t)},e,t,arguments.length>1)}}),C.Tween=at,at.prototype={constructor:at,init:function(e,t,n,r,i,o){this.elem=e,this.prop=n,this.easing=i||C.easing._default,this.options=t,this.start=this.now=this.cur(),this.end=r,this.unit=o||(C.cssNumber[n]?"":"px")},cur:function(){var e=at.propHooks[this.prop];return e&&e.get?e.get(this):at.propHooks._default.get(this)},run:function(e){var t,n=at.propHooks[this.prop];return this.options.duration?this.pos=t=C.easing[this.easing](e,this.options.duration*e,0,1,this.options.duration):this.pos=t=e,this.now=(this.end-this.start)*t+this.start,this.options.step&&this.options.step.call(this.elem,this.now,this),n&&n.set?n.set(this):at.propHooks._default.set(this),this}},at.prototype.init.prototype=at.prototype,at.propHooks={_default:{get:function(e){var t;return 1!==e.elem.nodeType||null!=e.elem[e.prop]&&null==e.elem.style[e.prop]?e.elem[e.prop]:(t=C.css(e.elem,e.prop,""))&&"auto"!==t?t:0},set:function(e){C.fx.step[e.prop]?C.fx.step[e.prop](e):1!==e.elem.nodeType||!C.cssHooks[e.prop]&&null==e.elem.style[Ke(e.prop)]?e.elem[e.prop]=e.now:C.style(e.elem,e.prop,e.now+e.unit)}}},at.propHooks.scrollTop=at.propHooks.scrollLeft={set:function(e){e.elem.nodeType&&e.elem.parentNode&&(e.elem[e.prop]=e.now)}},C.easing={linear:function(e){return e},swing:function(e){return.5-Math.cos(e*Math.PI)/2},_default:"swing"},C.fx=at.prototype.init,C.fx.step={};var st,ut,lt=/^(?:toggle|show|hide)$/,ct=/queueHooks$/;function ft(){ut&&(!1===a.hidden&&n.requestAnimationFrame?n.requestAnimationFrame(ft):n.setTimeout(ft,C.fx.interval),C.fx.tick())}function pt(){return n.setTimeout(function(){st=void 0}),st=Date.now()}function dt(e,t){var n,r=0,i={height:e};for(t=t?1:0;r<4;r+=2-t)i["margin"+(n=oe[r])]=i["padding"+n]=e;return t&&(i.opacity=i.width=e),i}function ht(e,t,n){for(var r,i=(gt.tweeners[t]||[]).concat(gt.tweeners["*"]),o=0,a=i.length;o<a;o++)if(r=i[o].call(n,t,e))return r}function gt(e,t,n){var r,i,o=0,a=gt.prefilters.length,s=C.Deferred().always(function(){delete u.elem}),u=function(){if(i)return!1;for(var t=st||pt(),n=Math.max(0,l.startTime+l.duration-t),r=1-(n/l.duration||0),o=0,a=l.tweens.length;o<a;o++)l.tweens[o].run(r);return s.notifyWith(e,[l,r,n]),r<1&&a?n:(a||s.notifyWith(e,[l,1,0]),s.resolveWith(e,[l]),!1)},l=s.promise({elem:e,props:C.extend({},t),opts:C.extend(!0,{specialEasing:{},easing:C.easing._default},n),originalProperties:t,originalOptions:n,startTime:st||pt(),duration:n.duration,tweens:[],createTween:function(t,n){var r=C.Tween(e,l.opts,t,n,l.opts.specialEasing[t]||l.opts.easing);return l.tweens.push(r),r},stop:function(t){var n=0,r=t?l.tweens.length:0;if(i)return this;for(i=!0;n<r;n++)l.tweens[n].run(1);return t?(s.notifyWith(e,[l,1,0]),s.resolveWith(e,[l,t])):s.rejectWith(e,[l,t]),this}}),c=l.props;for(!function(e,t){var n,r,i,o,a;for(n in e)if(i=t[r=Y(n)],o=e[n],Array.isArray(o)&&(i=o[1],o=e[n]=o[0]),n!==r&&(e[r]=o,delete e[n]),(a=C.cssHooks[r])&&"expand"in a)for(n in o=a.expand(o),delete e[r],o)n in e||(e[n]=o[n],t[n]=i);else 
t[r]=i}(c,l.opts.specialEasing);o<a;o++)if(r=gt.prefilters[o].call(l,e,c,l.opts))return y(r.stop)&&(C._queueHooks(l.elem,l.opts.queue).stop=r.stop.bind(r)),r;return C.map(c,ht,l),y(l.opts.start)&&l.opts.start.call(e,l),l.progress(l.opts.progress).done(l.opts.done,l.opts.complete).fail(l.opts.fail).always(l.opts.always),C.fx.timer(C.extend(u,{elem:e,anim:l,queue:l.opts.queue})),l}C.Animation=C.extend(gt,{tweeners:{"*":[function(e,t){var n=this.createTween(e,t);return fe(n.elem,e,ie.exec(t),n),n}]},tweener:function(e,t){y(e)?(t=e,e=["*"]):e=e.match(I);for(var n,r=0,i=e.length;r<i;r++)n=e[r],gt.tweeners[n]=gt.tweeners[n]||[],gt.tweeners[n].unshift(t)},prefilters:[function(e,t,n){var r,i,o,a,s,u,l,c,f="width"in t||"height"in t,p=this,d={},h=e.style,g=e.nodeType&&le(e),v=K.get(e,"fxshow");for(r in n.queue||(null==(a=C._queueHooks(e,"fx")).unqueued&&(a.unqueued=0,s=a.empty.fire,a.empty.fire=function(){a.unqueued||s()}),a.unqueued++,p.always(function(){p.always(function(){a.unqueued--,C.queue(e,"fx").length||a.empty.fire()})})),t)if(i=t[r],lt.test(i)){if(delete t[r],o=o||"toggle"===i,i===(g?"hide":"show")){if("show"!==i||!v||void 0===v[r])continue;g=!0}d[r]=v&&v[r]||C.style(e,r)}if((u=!C.isEmptyObject(t))||!C.isEmptyObject(d))for(r in f&&1===e.nodeType&&(n.overflow=[h.overflow,h.overflowX,h.overflowY],null==(l=v&&v.display)&&(l=K.get(e,"display")),"none"===(c=C.css(e,"display"))&&(l?c=l:(he([e],!0),l=e.style.display||l,c=C.css(e,"display"),he([e]))),("inline"===c||"inline-block"===c&&null!=l)&&"none"===C.css(e,"float")&&(u||(p.done(function(){h.display=l}),null==l&&(c=h.display,l="none"===c?"":c)),h.display="inline-block")),n.overflow&&(h.overflow="hidden",p.always(function(){h.overflow=n.overflow[0],h.overflowX=n.overflow[1],h.overflowY=n.overflow[2]})),u=!1,d)u||(v?"hidden"in v&&(g=v.hidden):v=K.access(e,"fxshow",{display:l}),o&&(v.hidden=!g),g&&he([e],!0),p.done(function(){for(r in g||he([e]),K.remove(e,"fxshow"),d)C.style(e,r,d[r])})),u=ht(g?v[r]:0,r,p),r in v||(v[r]=u.start,g&&(u.end=u.start,u.start=0))}],prefilter:function(e,t){t?gt.prefilters.unshift(e):gt.prefilters.push(e)}}),C.speed=function(e,t,n){var r=e&&"object"==typeof e?C.extend({},e):{complete:n||!n&&t||y(e)&&e,duration:e,easing:n&&t||t&&!y(t)&&t};return C.fx.off?r.duration=0:"number"!=typeof r.duration&&(r.duration in C.fx.speeds?r.duration=C.fx.speeds[r.duration]:r.duration=C.fx.speeds._default),null!=r.queue&&!0!==r.queue||(r.queue="fx"),r.old=r.complete,r.complete=function(){y(r.old)&&r.old.call(this),r.queue&&C.dequeue(this,r.queue)},r},C.fn.extend({fadeTo:function(e,t,n,r){return this.filter(le).css("opacity",0).show().end().animate({opacity:t},e,n,r)},animate:function(e,t,n,r){var i=C.isEmptyObject(e),o=C.speed(t,n,r),a=function(){var t=gt(this,C.extend({},e),o);(i||K.get(this,"finish"))&&t.stop(!0)};return a.finish=a,i||!1===o.queue?this.each(a):this.queue(o.queue,a)},stop:function(e,t,n){var r=function(e){var t=e.stop;delete e.stop,t(n)};return"string"!=typeof e&&(n=t,t=e,e=void 0),t&&!1!==e&&this.queue(e||"fx",[]),this.each(function(){var t=!0,i=null!=e&&e+"queueHooks",o=C.timers,a=K.get(this);if(i)a[i]&&a[i].stop&&r(a[i]);else for(i in a)a[i]&&a[i].stop&&ct.test(i)&&r(a[i]);for(i=o.length;i--;)o[i].elem!==this||null!=e&&o[i].queue!==e||(o[i].anim.stop(n),t=!1,o.splice(i,1));!t&&n||C.dequeue(this,e)})},finish:function(e){return!1!==e&&(e=e||"fx"),this.each(function(){var 
t,n=K.get(this),r=n[e+"queue"],i=n[e+"queueHooks"],o=C.timers,a=r?r.length:0;for(n.finish=!0,C.queue(this,e,[]),i&&i.stop&&i.stop.call(this,!0),t=o.length;t--;)o[t].elem===this&&o[t].queue===e&&(o[t].anim.stop(!0),o.splice(t,1));for(t=0;t<a;t++)r[t]&&r[t].finish&&r[t].finish.call(this);delete n.finish})}}),C.each(["toggle","show","hide"],function(e,t){var n=C.fn[t];C.fn[t]=function(e,r,i){return null==e||"boolean"==typeof e?n.apply(this,arguments):this.animate(dt(t,!0),e,r,i)}}),C.each({slideDown:dt("show"),slideUp:dt("hide"),slideToggle:dt("toggle"),fadeIn:{opacity:"show"},fadeOut:{opacity:"hide"},fadeToggle:{opacity:"toggle"}},function(e,t){C.fn[e]=function(e,n,r){return this.animate(t,e,n,r)}}),C.timers=[],C.fx.tick=function(){var e,t=0,n=C.timers;for(st=Date.now();t<n.length;t++)(e=n[t])()||n[t]!==e||n.splice(t--,1);n.length||C.fx.stop(),st=void 0},C.fx.timer=function(e){C.timers.push(e),C.fx.start()},C.fx.interval=13,C.fx.start=function(){ut||(ut=!0,ft())},C.fx.stop=function(){ut=null},C.fx.speeds={slow:600,fast:200,_default:400},C.fn.delay=function(e,t){return e=C.fx&&C.fx.speeds[e]||e,t=t||"fx",this.queue(t,function(t,r){var i=n.setTimeout(t,e);r.stop=function(){n.clearTimeout(i)}})},function(){var e=a.createElement("input"),t=a.createElement("select").appendChild(a.createElement("option"));e.type="checkbox",m.checkOn=""!==e.value,m.optSelected=t.selected,(e=a.createElement("input")).value="t",e.type="radio",m.radioValue="t"===e.value}();var vt,mt=C.expr.attrHandle;C.fn.extend({attr:function(e,t){return U(this,C.attr,e,t,arguments.length>1)},removeAttr:function(e){return this.each(function(){C.removeAttr(this,e)})}}),C.extend({attr:function(e,t,n){var r,i,o=e.nodeType;if(3!==o&&8!==o&&2!==o)return void 0===e.getAttribute?C.prop(e,t,n):(1===o&&C.isXMLDoc(e)||(i=C.attrHooks[t.toLowerCase()]||(C.expr.match.bool.test(t)?vt:void 0)),void 0!==n?null===n?void C.removeAttr(e,t):i&&"set"in i&&void 0!==(r=i.set(e,n,t))?r:(e.setAttribute(t,n+""),n):i&&"get"in i&&null!==(r=i.get(e,t))?r:null==(r=C.find.attr(e,t))?void 0:r)},attrHooks:{type:{set:function(e,t){if(!m.radioValue&&"radio"===t&&D(e,"input")){var n=e.value;return e.setAttribute("type",t),n&&(e.value=n),t}}}},removeAttr:function(e,t){var n,r=0,i=t&&t.match(I);if(i&&1===e.nodeType)for(;n=i[r++];)e.removeAttribute(n)}}),vt={set:function(e,t,n){return!1===t?C.removeAttr(e,n):e.setAttribute(n,n),n}},C.each(C.expr.match.bool.source.match(/\w+/g),function(e,t){var n=mt[t]||C.find.attr;mt[t]=function(e,t,r){var i,o,a=t.toLowerCase();return r||(o=mt[a],mt[a]=i,i=null!=n(e,t,r)?a:null,mt[a]=o),i}});var yt=/^(?:input|select|textarea|button)$/i,xt=/^(?:a|area)$/i;function bt(e){return(e.match(I)||[]).join(" ")}function wt(e){return e.getAttribute&&e.getAttribute("class")||""}function Tt(e){return Array.isArray(e)?e:"string"==typeof e&&e.match(I)||[]}C.fn.extend({prop:function(e,t){return U(this,C.prop,e,t,arguments.length>1)},removeProp:function(e){return this.each(function(){delete this[C.propFix[e]||e]})}}),C.extend({prop:function(e,t,n){var r,i,o=e.nodeType;if(3!==o&&8!==o&&2!==o)return 1===o&&C.isXMLDoc(e)||(t=C.propFix[t]||t,i=C.propHooks[t]),void 0!==n?i&&"set"in i&&void 0!==(r=i.set(e,n,t))?r:e[t]=n:i&&"get"in i&&null!==(r=i.get(e,t))?r:e[t]},propHooks:{tabIndex:{get:function(e){var t=C.find.attr(e,"tabindex");return t?parseInt(t,10):yt.test(e.nodeName)||xt.test(e.nodeName)&&e.href?0:-1}}},propFix:{for:"htmlFor",class:"className"}}),m.optSelected||(C.propHooks.selected={get:function(e){var t=e.parentNode;return 
t&&t.parentNode&&t.parentNode.selectedIndex,null},set:function(e){var t=e.parentNode;t&&(t.selectedIndex,t.parentNode&&t.parentNode.selectedIndex)}}),C.each(["tabIndex","readOnly","maxLength","cellSpacing","cellPadding","rowSpan","colSpan","useMap","frameBorder","contentEditable"],function(){C.propFix[this.toLowerCase()]=this}),C.fn.extend({addClass:function(e){var t,n,r,i,o,a,s,u=0;if(y(e))return this.each(function(t){C(this).addClass(e.call(this,t,wt(this)))});if((t=Tt(e)).length)for(;n=this[u++];)if(i=wt(n),r=1===n.nodeType&&" "+bt(i)+" "){for(a=0;o=t[a++];)r.indexOf(" "+o+" ")<0&&(r+=o+" ");i!==(s=bt(r))&&n.setAttribute("class",s)}return this},removeClass:function(e){var t,n,r,i,o,a,s,u=0;if(y(e))return this.each(function(t){C(this).removeClass(e.call(this,t,wt(this)))});if(!arguments.length)return this.attr("class","");if((t=Tt(e)).length)for(;n=this[u++];)if(i=wt(n),r=1===n.nodeType&&" "+bt(i)+" "){for(a=0;o=t[a++];)for(;r.indexOf(" "+o+" ")>-1;)r=r.replace(" "+o+" "," ");i!==(s=bt(r))&&n.setAttribute("class",s)}return this},toggleClass:function(e,t){var n=typeof e,r="string"===n||Array.isArray(e);return"boolean"==typeof t&&r?t?this.addClass(e):this.removeClass(e):y(e)?this.each(function(n){C(this).toggleClass(e.call(this,n,wt(this),t),t)}):this.each(function(){var t,i,o,a;if(r)for(i=0,o=C(this),a=Tt(e);t=a[i++];)o.hasClass(t)?o.removeClass(t):o.addClass(t);else void 0!==e&&"boolean"!==n||((t=wt(this))&&K.set(this,"__className__",t),this.setAttribute&&this.setAttribute("class",t||!1===e?"":K.get(this,"__className__")||""))})},hasClass:function(e){var t,n,r=0;for(t=" "+e+" ";n=this[r++];)if(1===n.nodeType&&(" "+bt(wt(n))+" ").indexOf(t)>-1)return!0;return!1}});var Ct=/\r/g;C.fn.extend({val:function(e){var t,n,r,i=this[0];return arguments.length?(r=y(e),this.each(function(n){var i;1===this.nodeType&&(null==(i=r?e.call(this,n,C(this).val()):e)?i="":"number"==typeof i?i+="":Array.isArray(i)&&(i=C.map(i,function(e){return null==e?"":e+""})),(t=C.valHooks[this.type]||C.valHooks[this.nodeName.toLowerCase()])&&"set"in t&&void 0!==t.set(this,i,"value")||(this.value=i))})):i?(t=C.valHooks[i.type]||C.valHooks[i.nodeName.toLowerCase()])&&"get"in t&&void 0!==(n=t.get(i,"value"))?n:"string"==typeof(n=i.value)?n.replace(Ct,""):null==n?"":n:void 0}}),C.extend({valHooks:{option:{get:function(e){var t=C.find.attr(e,"value");return null!=t?t:bt(C.text(e))}},select:{get:function(e){var t,n,r,i=e.options,o=e.selectedIndex,a="select-one"===e.type,s=a?null:[],u=a?o+1:i.length;for(r=o<0?u:a?o:0;r<u;r++)if(((n=i[r]).selected||r===o)&&!n.disabled&&(!n.parentNode.disabled||!D(n.parentNode,"optgroup"))){if(t=C(n).val(),a)return t;s.push(t)}return s},set:function(e,t){for(var n,r,i=e.options,o=C.makeArray(t),a=i.length;a--;)((r=i[a]).selected=C.inArray(C.valHooks.option.get(r),o)>-1)&&(n=!0);return n||(e.selectedIndex=-1),o}}}}),C.each(["radio","checkbox"],function(){C.valHooks[this]={set:function(e,t){if(Array.isArray(t))return e.checked=C.inArray(C(e).val(),t)>-1}},m.checkOn||(C.valHooks[this].get=function(e){return null===e.getAttribute("value")?"on":e.value})}),m.focusin="onfocusin"in n;var St=/^(?:focusinfocus|focusoutblur)$/,kt=function(e){e.stopPropagation()};C.extend(C.event,{trigger:function(e,t,r,i){var o,s,u,l,c,f,p,d,g=[r||a],v=h.call(e,"type")?e.type:e,m=h.call(e,"namespace")?e.namespace.split("."):[];if(s=d=u=r=r||a,3!==r.nodeType&&8!==r.nodeType&&!St.test(v+C.event.triggered)&&(v.indexOf(".")>-1&&(m=v.split("."),v=m.shift(),m.sort()),c=v.indexOf(":")<0&&"on"+v,(e=e[C.expando]?e:new 
C.Event(v,"object"==typeof e&&e)).isTrigger=i?2:3,e.namespace=m.join("."),e.rnamespace=e.namespace?new RegExp("(^|\\.)"+m.join("\\.(?:.*\\.|)")+"(\\.|$)"):null,e.result=void 0,e.target||(e.target=r),t=null==t?[e]:C.makeArray(t,[e]),p=C.event.special[v]||{},i||!p.trigger||!1!==p.trigger.apply(r,t))){if(!i&&!p.noBubble&&!x(r)){for(l=p.delegateType||v,St.test(l+v)||(s=s.parentNode);s;s=s.parentNode)g.push(s),u=s;u===(r.ownerDocument||a)&&g.push(u.defaultView||u.parentWindow||n)}for(o=0;(s=g[o++])&&!e.isPropagationStopped();)d=s,e.type=o>1?l:p.bindType||v,(f=(K.get(s,"events")||{})[e.type]&&K.get(s,"handle"))&&f.apply(s,t),(f=c&&s[c])&&f.apply&&Q(s)&&(e.result=f.apply(s,t),!1===e.result&&e.preventDefault());return e.type=v,i||e.isDefaultPrevented()||p._default&&!1!==p._default.apply(g.pop(),t)||!Q(r)||c&&y(r[v])&&!x(r)&&((u=r[c])&&(r[c]=null),C.event.triggered=v,e.isPropagationStopped()&&d.addEventListener(v,kt),r[v](),e.isPropagationStopped()&&d.removeEventListener(v,kt),C.event.triggered=void 0,u&&(r[c]=u)),e.result}},simulate:function(e,t,n){var r=C.extend(new C.Event,n,{type:e,isSimulated:!0});C.event.trigger(r,null,t)}}),C.fn.extend({trigger:function(e,t){return this.each(function(){C.event.trigger(e,t,this)})},triggerHandler:function(e,t){var n=this[0];if(n)return C.event.trigger(e,t,n,!0)}}),m.focusin||C.each({focus:"focusin",blur:"focusout"},function(e,t){var n=function(e){C.event.simulate(t,e.target,C.event.fix(e))};C.event.special[t]={setup:function(){var r=this.ownerDocument||this,i=K.access(r,t);i||r.addEventListener(e,n,!0),K.access(r,t,(i||0)+1)},teardown:function(){var r=this.ownerDocument||this,i=K.access(r,t)-1;i?K.access(r,t,i):(r.removeEventListener(e,n,!0),K.remove(r,t))}}});var Et=n.location,At=Date.now(),Nt=/\?/;C.parseXML=function(e){var t;if(!e||"string"!=typeof e)return null;try{t=(new n.DOMParser).parseFromString(e,"text/xml")}catch(e){t=void 0}return t&&!t.getElementsByTagName("parsererror").length||C.error("Invalid XML: "+e),t};var jt=/\[\]$/,Dt=/\r?\n/g,qt=/^(?:submit|button|image|reset|file)$/i,Lt=/^(?:input|select|textarea|keygen)/i;function Ht(e,t,n,r){var i;if(Array.isArray(t))C.each(t,function(t,i){n||jt.test(e)?r(e,i):Ht(e+"["+("object"==typeof i&&null!=i?t:"")+"]",i,n,r)});else if(n||"object"!==T(t))r(e,t);else for(i in t)Ht(e+"["+i+"]",t[i],n,r)}C.param=function(e,t){var n,r=[],i=function(e,t){var n=y(t)?t():t;r[r.length]=encodeURIComponent(e)+"="+encodeURIComponent(null==n?"":n)};if(null==e)return"";if(Array.isArray(e)||e.jquery&&!C.isPlainObject(e))C.each(e,function(){i(this.name,this.value)});else for(n in e)Ht(n,e[n],t,i);return r.join("&")},C.fn.extend({serialize:function(){return C.param(this.serializeArray())},serializeArray:function(){return this.map(function(){var e=C.prop(this,"elements");return e?C.makeArray(e):this}).filter(function(){var e=this.type;return this.name&&!C(this).is(":disabled")&&Lt.test(this.nodeName)&&!qt.test(e)&&(this.checked||!ge.test(e))}).map(function(e,t){var n=C(this).val();return null==n?null:Array.isArray(n)?C.map(n,function(e){return{name:t.name,value:e.replace(Dt,"\r\n")}}):{name:t.name,value:n.replace(Dt,"\r\n")}}).get()}});var Ot=/%20/g,Rt=/#.*$/,Pt=/([?&])_=[^&]*/,Mt=/^(.*?):[ \t]*([^\r\n]*)$/gm,It=/^(?:GET|HEAD)$/,Ft=/^\/\//,$t={},Wt={},Bt="*/".concat("*"),_t=a.createElement("a");function zt(e){return function(t,n){"string"!=typeof t&&(n=t,t="*");var r,i=0,o=t.toLowerCase().match(I)||[];if(y(n))for(;r=o[i++];)"+"===r[0]?(r=r.slice(1)||"*",(e[r]=e[r]||[]).unshift(n)):(e[r]=e[r]||[]).push(n)}}function Ut(e,t,n,r){var 
i={},o=e===Wt;function a(s){var u;return i[s]=!0,C.each(e[s]||[],function(e,s){var l=s(t,n,r);return"string"!=typeof l||o||i[l]?o?!(u=l):void 0:(t.dataTypes.unshift(l),a(l),!1)}),u}return a(t.dataTypes[0])||!i["*"]&&a("*")}function Xt(e,t){var n,r,i=C.ajaxSettings.flatOptions||{};for(n in t)void 0!==t[n]&&((i[n]?e:r||(r={}))[n]=t[n]);return r&&C.extend(!0,e,r),e}_t.href=Et.href,C.extend({active:0,lastModified:{},etag:{},ajaxSettings:{url:Et.href,type:"GET",isLocal:/^(?:about|app|app-storage|.+-extension|file|res|widget):$/.test(Et.protocol),global:!0,processData:!0,async:!0,contentType:"application/x-www-form-urlencoded; charset=UTF-8",accepts:{"*":Bt,text:"text/plain",html:"text/html",xml:"application/xml, text/xml",json:"application/json, text/javascript"},contents:{xml:/\bxml\b/,html:/\bhtml/,json:/\bjson\b/},responseFields:{xml:"responseXML",text:"responseText",json:"responseJSON"},converters:{"* text":String,"text html":!0,"text json":JSON.parse,"text xml":C.parseXML},flatOptions:{url:!0,context:!0}},ajaxSetup:function(e,t){return t?Xt(Xt(e,C.ajaxSettings),t):Xt(C.ajaxSettings,e)},ajaxPrefilter:zt($t),ajaxTransport:zt(Wt),ajax:function(e,t){"object"==typeof e&&(t=e,e=void 0),t=t||{};var r,i,o,s,u,l,c,f,p,d,h=C.ajaxSetup({},t),g=h.context||h,v=h.context&&(g.nodeType||g.jquery)?C(g):C.event,m=C.Deferred(),y=C.Callbacks("once memory"),x=h.statusCode||{},b={},w={},T="canceled",S={readyState:0,getResponseHeader:function(e){var t;if(c){if(!s)for(s={};t=Mt.exec(o);)s[t[1].toLowerCase()+" "]=(s[t[1].toLowerCase()+" "]||[]).concat(t[2]);t=s[e.toLowerCase()+" "]}return null==t?null:t.join(", ")},getAllResponseHeaders:function(){return c?o:null},setRequestHeader:function(e,t){return null==c&&(e=w[e.toLowerCase()]=w[e.toLowerCase()]||e,b[e]=t),this},overrideMimeType:function(e){return null==c&&(h.mimeType=e),this},statusCode:function(e){var t;if(e)if(c)S.always(e[S.status]);else for(t in e)x[t]=[x[t],e[t]];return this},abort:function(e){var t=e||T;return r&&r.abort(t),k(0,t),this}};if(m.promise(S),h.url=((e||h.url||Et.href)+"").replace(Ft,Et.protocol+"//"),h.type=t.method||t.type||h.method||h.type,h.dataTypes=(h.dataType||"*").toLowerCase().match(I)||[""],null==h.crossDomain){l=a.createElement("a");try{l.href=h.url,l.href=l.href,h.crossDomain=_t.protocol+"//"+_t.host!=l.protocol+"//"+l.host}catch(e){h.crossDomain=!0}}if(h.data&&h.processData&&"string"!=typeof h.data&&(h.data=C.param(h.data,h.traditional)),Ut($t,h,t,S),c)return S;for(p in(f=C.event&&h.global)&&0==C.active++&&C.event.trigger("ajaxStart"),h.type=h.type.toUpperCase(),h.hasContent=!It.test(h.type),i=h.url.replace(Rt,""),h.hasContent?h.data&&h.processData&&0===(h.contentType||"").indexOf("application/x-www-form-urlencoded")&&(h.data=h.data.replace(Ot,"+")):(d=h.url.slice(i.length),h.data&&(h.processData||"string"==typeof h.data)&&(i+=(Nt.test(i)?"&":"?")+h.data,delete h.data),!1===h.cache&&(i=i.replace(Pt,"$1"),d=(Nt.test(i)?"&":"?")+"_="+At+++d),h.url=i+d),h.ifModified&&(C.lastModified[i]&&S.setRequestHeader("If-Modified-Since",C.lastModified[i]),C.etag[i]&&S.setRequestHeader("If-None-Match",C.etag[i])),(h.data&&h.hasContent&&!1!==h.contentType||t.contentType)&&S.setRequestHeader("Content-Type",h.contentType),S.setRequestHeader("Accept",h.dataTypes[0]&&h.accepts[h.dataTypes[0]]?h.accepts[h.dataTypes[0]]+("*"!==h.dataTypes[0]?", "+Bt+"; q=0.01":""):h.accepts["*"]),h.headers)S.setRequestHeader(p,h.headers[p]);if(h.beforeSend&&(!1===h.beforeSend.call(g,S,h)||c))return 
S.abort();if(T="abort",y.add(h.complete),S.done(h.success),S.fail(h.error),r=Ut(Wt,h,t,S)){if(S.readyState=1,f&&v.trigger("ajaxSend",[S,h]),c)return S;h.async&&h.timeout>0&&(u=n.setTimeout(function(){S.abort("timeout")},h.timeout));try{c=!1,r.send(b,k)}catch(e){if(c)throw e;k(-1,e)}}else k(-1,"No Transport");function k(e,t,a,s){var l,p,d,b,w,T=t;c||(c=!0,u&&n.clearTimeout(u),r=void 0,o=s||"",S.readyState=e>0?4:0,l=e>=200&&e<300||304===e,a&&(b=function(e,t,n){for(var r,i,o,a,s=e.contents,u=e.dataTypes;"*"===u[0];)u.shift(),void 0===r&&(r=e.mimeType||t.getResponseHeader("Content-Type"));if(r)for(i in s)if(s[i]&&s[i].test(r)){u.unshift(i);break}if(u[0]in n)o=u[0];else{for(i in n){if(!u[0]||e.converters[i+" "+u[0]]){o=i;break}a||(a=i)}o=o||a}if(o)return o!==u[0]&&u.unshift(o),n[o]}(h,S,a)),b=function(e,t,n,r){var i,o,a,s,u,l={},c=e.dataTypes.slice();if(c[1])for(a in e.converters)l[a.toLowerCase()]=e.converters[a];for(o=c.shift();o;)if(e.responseFields[o]&&(n[e.responseFields[o]]=t),!u&&r&&e.dataFilter&&(t=e.dataFilter(t,e.dataType)),u=o,o=c.shift())if("*"===o)o=u;else if("*"!==u&&u!==o){if(!(a=l[u+" "+o]||l["* "+o]))for(i in l)if((s=i.split(" "))[1]===o&&(a=l[u+" "+s[0]]||l["* "+s[0]])){!0===a?a=l[i]:!0!==l[i]&&(o=s[0],c.unshift(s[1]));break}if(!0!==a)if(a&&e.throws)t=a(t);else try{t=a(t)}catch(e){return{state:"parsererror",error:a?e:"No conversion from "+u+" to "+o}}}return{state:"success",data:t}}(h,b,S,l),l?(h.ifModified&&((w=S.getResponseHeader("Last-Modified"))&&(C.lastModified[i]=w),(w=S.getResponseHeader("etag"))&&(C.etag[i]=w)),204===e||"HEAD"===h.type?T="nocontent":304===e?T="notmodified":(T=b.state,p=b.data,l=!(d=b.error))):(d=T,!e&&T||(T="error",e<0&&(e=0))),S.status=e,S.statusText=(t||T)+"",l?m.resolveWith(g,[p,T,S]):m.rejectWith(g,[S,T,d]),S.statusCode(x),x=void 0,f&&v.trigger(l?"ajaxSuccess":"ajaxError",[S,h,l?p:d]),y.fireWith(g,[S,T]),f&&(v.trigger("ajaxComplete",[S,h]),--C.active||C.event.trigger("ajaxStop")))}return S},getJSON:function(e,t,n){return C.get(e,t,n,"json")},getScript:function(e,t){return C.get(e,void 0,t,"script")}}),C.each(["get","post"],function(e,t){C[t]=function(e,n,r,i){return y(n)&&(i=i||r,r=n,n=void 0),C.ajax(C.extend({url:e,type:t,dataType:i,data:n,success:r},C.isPlainObject(e)&&e))}}),C._evalUrl=function(e,t){return C.ajax({url:e,type:"GET",dataType:"script",cache:!0,async:!1,global:!1,converters:{"text script":function(){}},dataFilter:function(e){C.globalEval(e,t)}})},C.fn.extend({wrapAll:function(e){var t;return this[0]&&(y(e)&&(e=e.call(this[0])),t=C(e,this[0].ownerDocument).eq(0).clone(!0),this[0].parentNode&&t.insertBefore(this[0]),t.map(function(){for(var e=this;e.firstElementChild;)e=e.firstElementChild;return e}).append(this)),this},wrapInner:function(e){return y(e)?this.each(function(t){C(this).wrapInner(e.call(this,t))}):this.each(function(){var t=C(this),n=t.contents();n.length?n.wrapAll(e):t.append(e)})},wrap:function(e){var t=y(e);return this.each(function(n){C(this).wrapAll(t?e.call(this,n):e)})},unwrap:function(e){return this.parent(e).not("body").each(function(){C(this).replaceWith(this.childNodes)}),this}}),C.expr.pseudos.hidden=function(e){return!C.expr.pseudos.visible(e)},C.expr.pseudos.visible=function(e){return!!(e.offsetWidth||e.offsetHeight||e.getClientRects().length)},C.ajaxSettings.xhr=function(){try{return new n.XMLHttpRequest}catch(e){}};var Vt={0:200,1223:204},Gt=C.ajaxSettings.xhr();m.cors=!!Gt&&"withCredentials"in Gt,m.ajax=Gt=!!Gt,C.ajaxTransport(function(e){var t,r;if(m.cors||Gt&&!e.crossDomain)return{send:function(i,o){var 
a,s=e.xhr();if(s.open(e.type,e.url,e.async,e.username,e.password),e.xhrFields)for(a in e.xhrFields)s[a]=e.xhrFields[a];for(a in e.mimeType&&s.overrideMimeType&&s.overrideMimeType(e.mimeType),e.crossDomain||i["X-Requested-With"]||(i["X-Requested-With"]="XMLHttpRequest"),i)s.setRequestHeader(a,i[a]);t=function(e){return function(){t&&(t=r=s.onload=s.onerror=s.onabort=s.ontimeout=s.onreadystatechange=null,"abort"===e?s.abort():"error"===e?"number"!=typeof s.status?o(0,"error"):o(s.status,s.statusText):o(Vt[s.status]||s.status,s.statusText,"text"!==(s.responseType||"text")||"string"!=typeof s.responseText?{binary:s.response}:{text:s.responseText},s.getAllResponseHeaders()))}},s.onload=t(),r=s.onerror=s.ontimeout=t("error"),void 0!==s.onabort?s.onabort=r:s.onreadystatechange=function(){4===s.readyState&&n.setTimeout(function(){t&&r()})},t=t("abort");try{s.send(e.hasContent&&e.data||null)}catch(e){if(t)throw e}},abort:function(){t&&t()}}}),C.ajaxPrefilter(function(e){e.crossDomain&&(e.contents.script=!1)}),C.ajaxSetup({accepts:{script:"text/javascript, application/javascript, application/ecmascript, application/x-ecmascript"},contents:{script:/\b(?:java|ecma)script\b/},converters:{"text script":function(e){return C.globalEval(e),e}}}),C.ajaxPrefilter("script",function(e){void 0===e.cache&&(e.cache=!1),e.crossDomain&&(e.type="GET")}),C.ajaxTransport("script",function(e){var t,n;if(e.crossDomain||e.scriptAttrs)return{send:function(r,i){t=C("<script>").attr(e.scriptAttrs||{}).prop({charset:e.scriptCharset,src:e.url}).on("load error",n=function(e){t.remove(),n=null,e&&i("error"===e.type?404:200,e.type)}),a.head.appendChild(t[0])},abort:function(){n&&n()}}});var Yt,Qt=[],Jt=/(=)\?(?=&|$)|\?\?/;C.ajaxSetup({jsonp:"callback",jsonpCallback:function(){var e=Qt.pop()||C.expando+"_"+At++;return this[e]=!0,e}}),C.ajaxPrefilter("json jsonp",function(e,t,r){var i,o,a,s=!1!==e.jsonp&&(Jt.test(e.url)?"url":"string"==typeof e.data&&0===(e.contentType||"").indexOf("application/x-www-form-urlencoded")&&Jt.test(e.data)&&"data");if(s||"jsonp"===e.dataTypes[0])return i=e.jsonpCallback=y(e.jsonpCallback)?e.jsonpCallback():e.jsonpCallback,s?e[s]=e[s].replace(Jt,"$1"+i):!1!==e.jsonp&&(e.url+=(Nt.test(e.url)?"&":"?")+e.jsonp+"="+i),e.converters["script json"]=function(){return a||C.error(i+" was not called"),a[0]},e.dataTypes[0]="json",o=n[i],n[i]=function(){a=arguments},r.always(function(){void 0===o?C(n).removeProp(i):n[i]=o,e[i]&&(e.jsonpCallback=t.jsonpCallback,Qt.push(i)),a&&y(o)&&o(a[0]),a=o=void 0}),"script"}),m.createHTMLDocument=((Yt=a.implementation.createHTMLDocument("").body).innerHTML="<form></form><form></form>",2===Yt.childNodes.length),C.parseHTML=function(e,t,n){return"string"!=typeof e?[]:("boolean"==typeof t&&(n=t,t=!1),t||(m.createHTMLDocument?((r=(t=a.implementation.createHTMLDocument("")).createElement("base")).href=a.location.href,t.head.appendChild(r)):t=a),o=!n&&[],(i=q.exec(e))?[t.createElement(i[1])]:(i=Se([e],t,o),o&&o.length&&C(o).remove(),C.merge([],i.childNodes)));var r,i,o},C.fn.load=function(e,t,n){var r,i,o,a=this,s=e.indexOf(" ");return s>-1&&(r=bt(e.slice(s)),e=e.slice(0,s)),y(t)?(n=t,t=void 0):t&&"object"==typeof t&&(i="POST"),a.length>0&&C.ajax({url:e,type:i||"GET",dataType:"html",data:t}).done(function(e){o=arguments,a.html(r?C("<div>").append(C.parseHTML(e)).find(r):e)}).always(n&&function(e,t){a.each(function(){n.apply(this,o||[e.responseText,t,e])})}),this},C.each(["ajaxStart","ajaxStop","ajaxComplete","ajaxError","ajaxSuccess","ajaxSend"],function(e,t){C.fn[t]=function(e){return 
this.on(t,e)}}),C.expr.pseudos.animated=function(e){return C.grep(C.timers,function(t){return e===t.elem}).length},C.offset={setOffset:function(e,t,n){var r,i,o,a,s,u,l=C.css(e,"position"),c=C(e),f={};"static"===l&&(e.style.position="relative"),s=c.offset(),o=C.css(e,"top"),u=C.css(e,"left"),("absolute"===l||"fixed"===l)&&(o+u).indexOf("auto")>-1?(a=(r=c.position()).top,i=r.left):(a=parseFloat(o)||0,i=parseFloat(u)||0),y(t)&&(t=t.call(e,n,C.extend({},s))),null!=t.top&&(f.top=t.top-s.top+a),null!=t.left&&(f.left=t.left-s.left+i),"using"in t?t.using.call(e,f):c.css(f)}},C.fn.extend({offset:function(e){if(arguments.length)return void 0===e?this:this.each(function(t){C.offset.setOffset(this,e,t)});var t,n,r=this[0];return r?r.getClientRects().length?(t=r.getBoundingClientRect(),n=r.ownerDocument.defaultView,{top:t.top+n.pageYOffset,left:t.left+n.pageXOffset}):{top:0,left:0}:void 0},position:function(){if(this[0]){var e,t,n,r=this[0],i={top:0,left:0};if("fixed"===C.css(r,"position"))t=r.getBoundingClientRect();else{for(t=this.offset(),n=r.ownerDocument,e=r.offsetParent||n.documentElement;e&&(e===n.body||e===n.documentElement)&&"static"===C.css(e,"position");)e=e.parentNode;e&&e!==r&&1===e.nodeType&&((i=C(e).offset()).top+=C.css(e,"borderTopWidth",!0),i.left+=C.css(e,"borderLeftWidth",!0))}return{top:t.top-i.top-C.css(r,"marginTop",!0),left:t.left-i.left-C.css(r,"marginLeft",!0)}}},offsetParent:function(){return this.map(function(){for(var e=this.offsetParent;e&&"static"===C.css(e,"position");)e=e.offsetParent;return e||ae})}}),C.each({scrollLeft:"pageXOffset",scrollTop:"pageYOffset"},function(e,t){var n="pageYOffset"===t;C.fn[e]=function(r){return U(this,function(e,r,i){var o;if(x(e)?o=e:9===e.nodeType&&(o=e.defaultView),void 0===i)return o?o[t]:e[r];o?o.scrollTo(n?o.pageXOffset:i,n?i:o.pageYOffset):e[r]=i},e,r,arguments.length)}}),C.each(["top","left"],function(e,t){C.cssHooks[t]=Ge(m.pixelPosition,function(e,n){if(n)return n=Ve(e,t),ze.test(n)?C(e).position()[t]+"px":n})}),C.each({Height:"height",Width:"width"},function(e,t){C.each({padding:"inner"+e,content:t,"":"outer"+e},function(n,r){C.fn[r]=function(i,o){var a=arguments.length&&(n||"boolean"!=typeof i),s=n||(!0===i||!0===o?"margin":"border");return U(this,function(t,n,i){var o;return x(t)?0===r.indexOf("outer")?t["inner"+e]:t.document.documentElement["client"+e]:9===t.nodeType?(o=t.documentElement,Math.max(t.body["scroll"+e],o["scroll"+e],t.body["offset"+e],o["offset"+e],o["client"+e])):void 0===i?C.css(t,n,s):C.style(t,n,i,s)},t,a?i:void 0,a)}})}),C.each("blur focus focusin focusout resize scroll click dblclick mousedown mouseup mousemove mouseover mouseout mouseenter mouseleave change select submit keydown keypress keyup contextmenu".split(" "),function(e,t){C.fn[t]=function(e,n){return arguments.length>0?this.on(t,null,e,n):this.trigger(t)}}),C.fn.extend({hover:function(e,t){return this.mouseenter(e).mouseleave(t||e)}}),C.fn.extend({bind:function(e,t,n){return this.on(e,null,t,n)},unbind:function(e,t){return this.off(e,null,t)},delegate:function(e,t,n,r){return this.on(t,e,n,r)},undelegate:function(e,t,n){return 1===arguments.length?this.off(e,"**"):this.off(t,e||"**",n)}}),C.proxy=function(e,t){var n,r,i;if("string"==typeof t&&(n=e[t],t=e,e=n),y(e))return r=u.call(arguments,2),(i=function(){return 
e.apply(t||this,r.concat(u.call(arguments)))}).guid=e.guid=e.guid||C.guid++,i},C.holdReady=function(e){e?C.readyWait++:C.ready(!0)},C.isArray=Array.isArray,C.parseJSON=JSON.parse,C.nodeName=D,C.isFunction=y,C.isWindow=x,C.camelCase=Y,C.type=T,C.now=Date.now,C.isNumeric=function(e){var t=C.type(e);return("number"===t||"string"===t)&&!isNaN(e-parseFloat(e))},void 0===(r=function(){return C}.apply(t,[]))||(e.exports=r);var Kt=n.jQuery,Zt=n.$;return C.noConflict=function(e){return n.$===C&&(n.$=Zt),e&&n.jQuery===C&&(n.jQuery=Kt),C},i||(n.jQuery=n.$=C),C})}]);
| 0.080506 | 0.160102 |