/FALCONN-1.3.1.tar.gz/FALCONN-1.3.1/external/pybind11/docs/basics.rst

.. _basics:
First steps
###########
This section demonstrates the basic features of pybind11. Before getting
started, make sure that a development environment is set up to compile the
included set of test cases.
Compiling the test cases
========================
Linux/MacOS
-----------
On Linux you'll need to install the **python-dev** or **python3-dev** packages as
well as **cmake**. On Mac OS, the included python version works out of the box,
but **cmake** must still be installed.
After installing the prerequisites, run
.. code-block:: bash
mkdir build
cd build
cmake ..
make check -j 4
The last line will both compile and run the tests.
Windows
-------
On Windows, only **Visual Studio 2015** and newer are supported since pybind11 relies
on various C++11 language features that break older versions of Visual Studio.
To compile and run the tests:
.. code-block:: batch
mkdir build
cd build
cmake ..
cmake --build . --config Release --target check
This will create a Visual Studio project, compile and run the target, all from the
command line.
.. Note::
If all tests fail, make sure that the Python binary and the test cases are compiled
for the same processor type and bitness (i.e. either **i386** or **x86_64**). You
can specify **x86_64** as the target architecture for the generated Visual Studio
project using ``cmake -A x64 ..``.
.. seealso::
Advanced users who are already familiar with Boost.Python may want to skip
the tutorial and look at the test cases in the :file:`tests` directory,
which exercise all features of pybind11.
Header and namespace conventions
================================
For brevity, all code examples assume that the following two lines are present:
.. code-block:: cpp
#include <pybind11/pybind11.h>
namespace py = pybind11;
Some features may require additional headers, but those will be specified as needed.
.. _simple_example:
Creating bindings for a simple function
=======================================
Let's start by creating Python bindings for an extremely simple function, which
adds two numbers and returns their result:
.. code-block:: cpp
int add(int i, int j) {
return i + j;
}
For simplicity [#f1]_, we'll put both this function and the binding code into
a file named :file:`example.cpp` with the following contents:
.. code-block:: cpp
#include <pybind11/pybind11.h>
int add(int i, int j) {
return i + j;
}
PYBIND11_MODULE(example, m) {
m.doc() = "pybind11 example plugin"; // optional module docstring
m.def("add", &add, "A function which adds two numbers");
}
.. [#f1] In practice, implementation and binding code will generally be located
in separate files.
The :func:`PYBIND11_MODULE` macro creates a function that will be called when an
``import`` statement is issued from within Python. The module name (``example``)
is given as the first macro argument (it should not be in quotes). The second
argument (``m``) defines a variable of type :class:`py::module <module>` which
is the main interface for creating bindings. The method :func:`module::def`
generates binding code that exposes the ``add()`` function to Python.
.. note::
Notice how little code was needed to expose our function to Python: all
details regarding the function's parameters and return value were
automatically inferred using template metaprogramming. This overall
approach and the used syntax are borrowed from Boost.Python, though the
underlying implementation is very different.
pybind11 is a header-only library, hence it is not necessary to link against
any special libraries and there are no intermediate (magic) translation steps.
On Linux, the above example can be compiled using the following command:
.. code-block:: bash
$ c++ -O3 -Wall -shared -std=c++11 -fPIC `python3 -m pybind11 --includes` example.cpp -o example`python3-config --extension-suffix`
For more details on the required compiler flags on Linux and MacOS, see
:ref:`building_manually`. For complete cross-platform compilation instructions,
refer to the :ref:`compiling` page.
The `python_example`_ and `cmake_example`_ repositories are also a good place
to start. They are both complete project examples with cross-platform build
systems. The only difference between the two is that `python_example`_ uses
Python's ``setuptools`` to build the module, while `cmake_example`_ uses CMake
(which may be preferable for existing C++ projects).
.. _python_example: https://github.com/pybind/python_example
.. _cmake_example: https://github.com/pybind/cmake_example
Building the above C++ code will produce a binary module file that can be
imported to Python. Assuming that the compiled module is located in the
current directory, the following interactive Python session shows how to
load and execute the example:
.. code-block:: pycon
$ python
Python 2.7.10 (default, Aug 22 2015, 20:33:39)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import example
>>> example.add(1, 2)
3L
>>>
.. _keyword_args:
Keyword arguments
=================
With a simple modification to the code, it is possible to inform Python about the
names of the arguments ("i" and "j" in this case).
.. code-block:: cpp
m.def("add", &add, "A function which adds two numbers",
py::arg("i"), py::arg("j"));
:class:`arg` is one of several special tag classes which can be used to pass
metadata into :func:`module::def`. With this modified binding code, we can now
call the function using keyword arguments, which is a more readable alternative
particularly for functions taking many parameters:
.. code-block:: pycon
>>> import example
>>> example.add(i=1, j=2)
3L
The keyword names also appear in the function signatures within the documentation.
.. code-block:: pycon
>>> help(example)
....
FUNCTIONS
add(...)
Signature : (i: int, j: int) -> int
A function which adds two numbers
A shorter notation for named arguments is also available:
.. code-block:: cpp
// regular notation
m.def("add1", &add, py::arg("i"), py::arg("j"));
// shorthand
using namespace pybind11::literals;
m.def("add2", &add, "i"_a, "j"_a);
The :var:`_a` suffix forms a C++11 literal which is equivalent to :class:`arg`.
Note that the literal operator must first be made visible with the directive
``using namespace pybind11::literals``. This does not bring in anything else
from the ``pybind11`` namespace except for literals.
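Both ``add1`` and ``add2`` expose the same C++ function; a short interactive
session (assuming the module above has been compiled as ``example``) might
look like this:

.. code-block:: pycon

    >>> import example
    >>> example.add1(i=1, j=2)
    3L
    >>> example.add2(i=1, j=2)
    3L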
.. _default_args:
Default arguments
=================
Suppose now that the function to be bound has default arguments, e.g.:
.. code-block:: cpp
int add(int i = 1, int j = 2) {
return i + j;
}
Unfortunately, pybind11 cannot automatically extract these parameters, since they
are not part of the function's type information. However, they are simple to specify
using an extension of :class:`arg`:
.. code-block:: cpp
m.def("add", &add, "A function which adds two numbers",
py::arg("i") = 1, py::arg("j") = 2);
The default values also appear within the documentation.
.. code-block:: pycon
>>> help(example)
....
FUNCTIONS
add(...)
Signature : (i: int = 1, j: int = 2) -> int
A function which adds two numbers
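Since both arguments now have defaults, either or both may be omitted when
calling the function. A brief session illustrating this (again assuming the
compiled ``example`` module from above) could be:

.. code-block:: pycon

    >>> import example
    >>> example.add()
    3L
    >>> example.add(i=5)
    7L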
The shorthand notation is also available for default arguments:
.. code-block:: cpp
// regular notation
m.def("add1", &add, py::arg("i") = 1, py::arg("j") = 2);
// shorthand
m.def("add2", &add, "i"_a=1, "j"_a=2);
Exporting variables
===================
To expose a value from C++, use the ``attr`` function to register it in a
module as shown below. Built-in types and general objects (more on that later)
are automatically converted when assigned as attributes, and can be explicitly
converted using the function ``py::cast``.
.. code-block:: cpp
PYBIND11_MODULE(example, m) {
m.attr("the_answer") = 42;
py::object world = py::cast("World");
m.attr("what") = world;
}
These are then accessible from Python:
.. code-block:: pycon
>>> import example
>>> example.the_answer
42
>>> example.what
'World'
.. _supported_types:
Supported data types
====================
A large number of data types are supported out of the box and can be used
seamlessly as function arguments, return values, or with ``py::cast`` in general.
For a full overview, see the :doc:`advanced/cast/index` section.

/django-chuck-0.2.3.tar.gz/django-chuck/modules/feincms/project/static/scripts/libs/tiny_mce/themes/advanced/js/color_picker.js

tinyMCEPopup.requireLangPack();
var detail = 50, strhex = "0123456789abcdef", i, isMouseDown = false, isMouseOver = false;
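// The 216 "web-safe" colors: every combination of the channel values
// 00, 33, 66, 99, cc and ff for red, green and blue.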
var colors = [
"#000000","#000033","#000066","#000099","#0000cc","#0000ff","#330000","#330033",
"#330066","#330099","#3300cc","#3300ff","#660000","#660033","#660066","#660099",
"#6600cc","#6600ff","#990000","#990033","#990066","#990099","#9900cc","#9900ff",
"#cc0000","#cc0033","#cc0066","#cc0099","#cc00cc","#cc00ff","#ff0000","#ff0033",
"#ff0066","#ff0099","#ff00cc","#ff00ff","#003300","#003333","#003366","#003399",
"#0033cc","#0033ff","#333300","#333333","#333366","#333399","#3333cc","#3333ff",
"#663300","#663333","#663366","#663399","#6633cc","#6633ff","#993300","#993333",
"#993366","#993399","#9933cc","#9933ff","#cc3300","#cc3333","#cc3366","#cc3399",
"#cc33cc","#cc33ff","#ff3300","#ff3333","#ff3366","#ff3399","#ff33cc","#ff33ff",
"#006600","#006633","#006666","#006699","#0066cc","#0066ff","#336600","#336633",
"#336666","#336699","#3366cc","#3366ff","#666600","#666633","#666666","#666699",
"#6666cc","#6666ff","#996600","#996633","#996666","#996699","#9966cc","#9966ff",
"#cc6600","#cc6633","#cc6666","#cc6699","#cc66cc","#cc66ff","#ff6600","#ff6633",
"#ff6666","#ff6699","#ff66cc","#ff66ff","#009900","#009933","#009966","#009999",
"#0099cc","#0099ff","#339900","#339933","#339966","#339999","#3399cc","#3399ff",
"#669900","#669933","#669966","#669999","#6699cc","#6699ff","#999900","#999933",
"#999966","#999999","#9999cc","#9999ff","#cc9900","#cc9933","#cc9966","#cc9999",
"#cc99cc","#cc99ff","#ff9900","#ff9933","#ff9966","#ff9999","#ff99cc","#ff99ff",
"#00cc00","#00cc33","#00cc66","#00cc99","#00cccc","#00ccff","#33cc00","#33cc33",
"#33cc66","#33cc99","#33cccc","#33ccff","#66cc00","#66cc33","#66cc66","#66cc99",
"#66cccc","#66ccff","#99cc00","#99cc33","#99cc66","#99cc99","#99cccc","#99ccff",
"#cccc00","#cccc33","#cccc66","#cccc99","#cccccc","#ccccff","#ffcc00","#ffcc33",
"#ffcc66","#ffcc99","#ffcccc","#ffccff","#00ff00","#00ff33","#00ff66","#00ff99",
"#00ffcc","#00ffff","#33ff00","#33ff33","#33ff66","#33ff99","#33ffcc","#33ffff",
"#66ff00","#66ff33","#66ff66","#66ff99","#66ffcc","#66ffff","#99ff00","#99ff33",
"#99ff66","#99ff99","#99ffcc","#99ffff","#ccff00","#ccff33","#ccff66","#ccff99",
"#ccffcc","#ccffff","#ffff00","#ffff33","#ffff66","#ffff99","#ffffcc","#ffffff"
];
var named = {
'#F0F8FF':'Alice Blue','#FAEBD7':'Antique White','#00FFFF':'Aqua','#7FFFD4':'Aquamarine','#F0FFFF':'Azure','#F5F5DC':'Beige',
'#FFE4C4':'Bisque','#000000':'Black','#FFEBCD':'Blanched Almond','#0000FF':'Blue','#8A2BE2':'Blue Violet','#A52A2A':'Brown',
'#DEB887':'Burly Wood','#5F9EA0':'Cadet Blue','#7FFF00':'Chartreuse','#D2691E':'Chocolate','#FF7F50':'Coral','#6495ED':'Cornflower Blue',
'#FFF8DC':'Cornsilk','#DC143C':'Crimson','#00FFFF':'Cyan','#00008B':'Dark Blue','#008B8B':'Dark Cyan','#B8860B':'Dark Golden Rod',
'#A9A9A9':'Dark Gray','#A9A9A9':'Dark Grey','#006400':'Dark Green','#BDB76B':'Dark Khaki','#8B008B':'Dark Magenta','#556B2F':'Dark Olive Green',
'#FF8C00':'Darkorange','#9932CC':'Dark Orchid','#8B0000':'Dark Red','#E9967A':'Dark Salmon','#8FBC8F':'Dark Sea Green','#483D8B':'Dark Slate Blue',
'#2F4F4F':'Dark Slate Gray','#2F4F4F':'Dark Slate Grey','#00CED1':'Dark Turquoise','#9400D3':'Dark Violet','#FF1493':'Deep Pink','#00BFFF':'Deep Sky Blue',
'#696969':'Dim Gray','#696969':'Dim Grey','#1E90FF':'Dodger Blue','#B22222':'Fire Brick','#FFFAF0':'Floral White','#228B22':'Forest Green',
'#FF00FF':'Fuchsia','#DCDCDC':'Gainsboro','#F8F8FF':'Ghost White','#FFD700':'Gold','#DAA520':'Golden Rod','#808080':'Gray','#808080':'Grey',
'#008000':'Green','#ADFF2F':'Green Yellow','#F0FFF0':'Honey Dew','#FF69B4':'Hot Pink','#CD5C5C':'Indian Red','#4B0082':'Indigo','#FFFFF0':'Ivory',
'#F0E68C':'Khaki','#E6E6FA':'Lavender','#FFF0F5':'Lavender Blush','#7CFC00':'Lawn Green','#FFFACD':'Lemon Chiffon','#ADD8E6':'Light Blue',
'#F08080':'Light Coral','#E0FFFF':'Light Cyan','#FAFAD2':'Light Golden Rod Yellow','#D3D3D3':'Light Gray','#D3D3D3':'Light Grey','#90EE90':'Light Green',
'#FFB6C1':'Light Pink','#FFA07A':'Light Salmon','#20B2AA':'Light Sea Green','#87CEFA':'Light Sky Blue','#778899':'Light Slate Gray','#778899':'Light Slate Grey',
'#B0C4DE':'Light Steel Blue','#FFFFE0':'Light Yellow','#00FF00':'Lime','#32CD32':'Lime Green','#FAF0E6':'Linen','#FF00FF':'Magenta','#800000':'Maroon',
'#66CDAA':'Medium Aqua Marine','#0000CD':'Medium Blue','#BA55D3':'Medium Orchid','#9370D8':'Medium Purple','#3CB371':'Medium Sea Green','#7B68EE':'Medium Slate Blue',
'#00FA9A':'Medium Spring Green','#48D1CC':'Medium Turquoise','#C71585':'Medium Violet Red','#191970':'Midnight Blue','#F5FFFA':'Mint Cream','#FFE4E1':'Misty Rose','#FFE4B5':'Moccasin',
'#FFDEAD':'Navajo White','#000080':'Navy','#FDF5E6':'Old Lace','#808000':'Olive','#6B8E23':'Olive Drab','#FFA500':'Orange','#FF4500':'Orange Red','#DA70D6':'Orchid',
'#EEE8AA':'Pale Golden Rod','#98FB98':'Pale Green','#AFEEEE':'Pale Turquoise','#D87093':'Pale Violet Red','#FFEFD5':'Papaya Whip','#FFDAB9':'Peach Puff',
'#CD853F':'Peru','#FFC0CB':'Pink','#DDA0DD':'Plum','#B0E0E6':'Powder Blue','#800080':'Purple','#FF0000':'Red','#BC8F8F':'Rosy Brown','#4169E1':'Royal Blue',
'#8B4513':'Saddle Brown','#FA8072':'Salmon','#F4A460':'Sandy Brown','#2E8B57':'Sea Green','#FFF5EE':'Sea Shell','#A0522D':'Sienna','#C0C0C0':'Silver',
'#87CEEB':'Sky Blue','#6A5ACD':'Slate Blue','#708090':'Slate Gray','#708090':'Slate Grey','#FFFAFA':'Snow','#00FF7F':'Spring Green',
'#4682B4':'Steel Blue','#D2B48C':'Tan','#008080':'Teal','#D8BFD8':'Thistle','#FF6347':'Tomato','#40E0D0':'Turquoise','#EE82EE':'Violet',
'#F5DEB3':'Wheat','#FFFFFF':'White','#F5F5F5':'White Smoke','#FFFF00':'Yellow','#9ACD32':'Yellow Green'
};
var namedLookup = {};
function init() {
var inputColor = convertRGBToHex(tinyMCEPopup.getWindowArg('input_color')), key, value;
tinyMCEPopup.resizeToInnerSize();
generatePicker();
generateWebColors();
generateNamedColors();
if (inputColor) {
changeFinalColor(inputColor);
		var col = convertHexToRGB(inputColor);
if (col)
updateLight(col.r, col.g, col.b);
}
for (key in named) {
value = named[key];
namedLookup[value.replace(/\s+/, '').toLowerCase()] = key.replace(/#/, '').toLowerCase();
}
}
function toHexColor(color) {
var matches, red, green, blue, toInt = parseInt;
function hex(value) {
value = parseInt(value).toString(16);
		return value.length > 1 ? value : '0' + value; // Pad with leading zero
};
color = tinymce.trim(color);
color = color.replace(/^[#]/, '').toLowerCase(); // remove leading '#'
color = namedLookup[color] || color;
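	// Accepted formats (after named-color lookup): rgb(r,g,b),
	// six-digit hex and three-digit shorthand hex (e.g. "f00").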
matches = /^rgb\((\d{1,3}),(\d{1,3}),(\d{1,3})\)$/.exec(color);
if (matches) {
red = toInt(matches[1]);
green = toInt(matches[2]);
blue = toInt(matches[3]);
} else {
matches = /^([0-9a-f]{2})([0-9a-f]{2})([0-9a-f]{2})$/.exec(color);
if (matches) {
red = toInt(matches[1], 16);
green = toInt(matches[2], 16);
blue = toInt(matches[3], 16);
} else {
matches = /^([0-9a-f])([0-9a-f])([0-9a-f])$/.exec(color);
if (matches) {
red = toInt(matches[1] + matches[1], 16);
green = toInt(matches[2] + matches[2], 16);
blue = toInt(matches[3] + matches[3], 16);
} else {
return '';
}
}
}
return '#' + hex(red) + hex(green) + hex(blue);
}
function insertAction() {
var color = document.getElementById("color").value, f = tinyMCEPopup.getWindowArg('func');
var hexColor = toHexColor(color);
if (hexColor === '') {
var text = tinyMCEPopup.editor.getLang('advanced_dlg.invalid_color_value');
tinyMCEPopup.alert(text + ': ' + color);
}
else {
tinyMCEPopup.restoreSelection();
if (f)
f(hexColor);
tinyMCEPopup.close();
}
}
function showColor(color, name) {
if (name)
document.getElementById("colorname").innerHTML = name;
document.getElementById("preview").style.backgroundColor = color;
document.getElementById("color").value = color.toUpperCase();
}
function convertRGBToHex(col) {
var re = new RegExp("rgb\\s*\\(\\s*([0-9]+).*,\\s*([0-9]+).*,\\s*([0-9]+).*\\)", "gi");
if (!col)
return col;
var rgb = col.replace(re, "$1,$2,$3").split(',');
if (rgb.length == 3) {
		var r = parseInt(rgb[0]).toString(16);
		var g = parseInt(rgb[1]).toString(16);
		var b = parseInt(rgb[2]).toString(16);
r = r.length == 1 ? '0' + r : r;
g = g.length == 1 ? '0' + g : g;
b = b.length == 1 ? '0' + b : b;
return "#" + r + g + b;
}
return col;
}
function convertHexToRGB(col) {
if (col.indexOf('#') != -1) {
col = col.replace(new RegExp('[^0-9A-F]', 'gi'), '');
		var r = parseInt(col.substring(0, 2), 16);
		var g = parseInt(col.substring(2, 4), 16);
		var b = parseInt(col.substring(4, 6), 16);
return {r : r, g : g, b : b};
}
return null;
}
function generatePicker() {
var el = document.getElementById('light'), h = '', i;
for (i = 0; i < detail; i++){
h += '<div id="gs'+i+'" style="background-color:#000000; width:15px; height:3px; border-style:none; border-width:0px;"'
+ ' onclick="changeFinalColor(this.style.backgroundColor)"'
+ ' onmousedown="isMouseDown = true; return false;"'
+ ' onmouseup="isMouseDown = false;"'
+ ' onmousemove="if (isMouseDown && isMouseOver) changeFinalColor(this.style.backgroundColor); return false;"'
+ ' onmouseover="isMouseOver = true;"'
+ ' onmouseout="isMouseOver = false;"'
+ '></div>';
}
el.innerHTML = h;
}
function generateWebColors() {
var el = document.getElementById('webcolors'), h = '', i;
if (el.className == 'generated')
return;
// TODO: VoiceOver doesn't seem to support legend as a label referenced by labelledby.
h += '<div role="listbox" aria-labelledby="webcolors_title" tabindex="0"><table role="presentation" border="0" cellspacing="1" cellpadding="0">'
+ '<tr>';
for (i=0; i<colors.length; i++) {
h += '<td bgcolor="' + colors[i] + '" width="10" height="10">'
+ '<a href="javascript:insertAction();" role="option" tabindex="-1" aria-labelledby="web_colors_' + i + '" onfocus="showColor(\'' + colors[i] + '\');" onmouseover="showColor(\'' + colors[i] + '\');" style="display:block;width:10px;height:10px;overflow:hidden;">';
if (tinyMCEPopup.editor.forcedHighContrastMode) {
h += '<canvas class="mceColorSwatch" height="10" width="10" data-color="' + colors[i] + '"></canvas>';
}
h += '<span class="mceVoiceLabel" style="display:none;" id="web_colors_' + i + '">' + colors[i].toUpperCase() + '</span>';
h += '</a></td>';
if ((i+1) % 18 == 0)
h += '</tr><tr>';
}
h += '</table></div>';
el.innerHTML = h;
el.className = 'generated';
paintCanvas(el);
enableKeyboardNavigation(el.firstChild);
}
function paintCanvas(el) {
tinyMCEPopup.getWin().tinymce.each(tinyMCEPopup.dom.select('canvas.mceColorSwatch', el), function(canvas) {
var context;
if (canvas.getContext && (context = canvas.getContext("2d"))) {
context.fillStyle = canvas.getAttribute('data-color');
context.fillRect(0, 0, 10, 10);
}
});
}
function generateNamedColors() {
var el = document.getElementById('namedcolors'), h = '', n, v, i = 0;
if (el.className == 'generated')
return;
for (n in named) {
v = named[n];
h += '<a href="javascript:insertAction();" role="option" tabindex="-1" aria-labelledby="named_colors_' + i + '" onfocus="showColor(\'' + n + '\',\'' + v + '\');" onmouseover="showColor(\'' + n + '\',\'' + v + '\');" style="background-color: ' + n + '">';
if (tinyMCEPopup.editor.forcedHighContrastMode) {
h += '<canvas class="mceColorSwatch" height="10" width="10" data-color="' + colors[i] + '"></canvas>';
}
h += '<span class="mceVoiceLabel" style="display:none;" id="named_colors_' + i + '">' + v + '</span>';
h += '</a>';
i++;
}
el.innerHTML = h;
el.className = 'generated';
paintCanvas(el);
enableKeyboardNavigation(el);
}
function enableKeyboardNavigation(el) {
tinyMCEPopup.editor.windowManager.createInstance('tinymce.ui.KeyboardNavigation', {
root: el,
items: tinyMCEPopup.dom.select('a', el)
}, tinyMCEPopup.dom);
}
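// Convert a 0-255 channel value to two lowercase hex digits.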
function dechex(n) {
return strhex.charAt(Math.floor(n / 16)) + strhex.charAt(n % 16);
}
function computeColor(e) {
	var x, y, partWidth, imHeight, r, g, b, coef, pos = tinyMCEPopup.dom.getPos(e.target);
x = e.offsetX ? e.offsetX : (e.target ? e.clientX - pos.x : 0);
y = e.offsetY ? e.offsetY : (e.target ? e.clientY - pos.y : 0);
partWidth = document.getElementById('colors').width / 6;
imHeight = document.getElementById('colors').height;
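	// x selects the hue: each channel is a piecewise-linear ramp across six
	// equal-width segments of the strip (the boolean factors act as 0/1 masks).
	// y blends the chosen hue toward mid-gray via coef below.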
r = (x >= 0)*(x < partWidth)*255 + (x >= partWidth)*(x < 2*partWidth)*(2*255 - x * 255 / partWidth) + (x >= 4*partWidth)*(x < 5*partWidth)*(-4*255 + x * 255 / partWidth) + (x >= 5*partWidth)*(x < 6*partWidth)*255;
g = (x >= 0)*(x < partWidth)*(x * 255 / partWidth) + (x >= partWidth)*(x < 3*partWidth)*255 + (x >= 3*partWidth)*(x < 4*partWidth)*(4*255 - x * 255 / partWidth);
b = (x >= 2*partWidth)*(x < 3*partWidth)*(-2*255 + x * 255 / partWidth) + (x >= 3*partWidth)*(x < 5*partWidth)*255 + (x >= 5*partWidth)*(x < 6*partWidth)*(6*255 - x * 255 / partWidth);
coef = (imHeight - y) / imHeight;
r = 128 + (r - 128) * coef;
g = 128 + (g - 128) * coef;
b = 128 + (b - 128) * coef;
changeFinalColor('#' + dechex(r) + dechex(g) + dechex(b));
updateLight(r, g, b);
}
function updateLight(r, g, b) {
var i, partDetail = detail / 2, finalCoef, finalR, finalG, finalB, color;
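	// Fill the 'light' strip: the first half fades from white down to the
	// picked color, the second half fades from the color down to black.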
for (i=0; i<detail; i++) {
		if (i < partDetail) {
finalCoef = i / partDetail;
finalR = dechex(255 - (255 - r) * finalCoef);
finalG = dechex(255 - (255 - g) * finalCoef);
finalB = dechex(255 - (255 - b) * finalCoef);
} else {
finalCoef = 2 - i / partDetail;
finalR = dechex(r * finalCoef);
finalG = dechex(g * finalCoef);
finalB = dechex(b * finalCoef);
}
color = finalR + finalG + finalB;
setCol('gs' + i, '#'+color);
}
}
function changeFinalColor(color) {
if (color.indexOf('#') == -1)
color = convertRGBToHex(color);
setCol('preview', color);
document.getElementById('color').value = color;
}
function setCol(e, c) {
try {
document.getElementById(e).style.backgroundColor = c;
} catch (ex) {
// Ignore IE warning
}
}
tinyMCEPopup.onInit.add(init);

/ESMValTool-2.9.0-py3-none-any.whl/esmvaltool/diag_scripts/kcs/global_matching.py

import logging
from itertools import product
from pathlib import Path
import matplotlib.pyplot as plt
import pandas as pd
import xarray as xr
from esmvaltool.diag_scripts.shared import (
ProvenanceLogger,
get_diagnostic_filename,
get_plot_filename,
run_diagnostic,
select_metadata,
)
logger = logging.getLogger(Path(__file__).name)
def create_provenance_record(ancestor_files):
"""Create a provenance record."""
record = {
'caption':
"Match temperature anomaly in target model to CMIP ensemble",
'domains': ['global'],
'authors': [
'kalverla_peter',
'alidoost_sarah',
'rol_evert',
],
'ancestors': ancestor_files,
}
return record
def mean_of_target_models(metadata):
"""Get the average delta T of the target model ensemble members."""
target_model_data = select_metadata(metadata, variable_group='tas_target')
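    # Skip the multi-model statistics files; average only individual members.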
files = [
tmd['filename'] for tmd in target_model_data
if 'MultiModel' not in tmd['filename']
]
datasets = xr.open_mfdataset(files, combine='nested', concat_dim='ens')
provenance = create_provenance_record(files)
return datasets.tas.mean(dim='ens'), provenance
def get_cmip_dt(metadata, year, percentile):
"""Compute target delta T for KNMI scenarios."""
attribute = f'MultiModel{percentile}'
multimodelstat = select_metadata(metadata, alias=attribute)[0]
dataset = xr.open_dataset(multimodelstat['filename'])
return dataset.tas.sel(time=str(year)).values[0]
def get_resampling_period(target_dts, cmip_dt):
"""Return 30-year time bounds of the resampling period.
This is the period for which the target model delta T matches the
cmip delta T for a specific year. Uses a 30-year rolling window to
get the best match.
"""
target_dts = target_dts.rolling(time=30, center=True,
min_periods=30).mean()
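    # Find the index of the 30-year window whose mean best matches the CMIP dT.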
time_idx = abs(target_dts - cmip_dt).argmin(dim='time').values
year = target_dts.isel(time=time_idx).year.values.astype(int)
target_dt = target_dts.isel(time=time_idx).values.astype(float)
return [year - 14, year + 15], target_dt
def _timeline(axes, yloc, interval):
"""Plot an interval near the bottom of the plot."""
xmin, xmax = interval
# Later years should be located slightly higher:
# yloc is relative to the axes, not in data coordinates.
yloc = 0.05 + yloc / 20
plot_args = dict(transform=axes.get_xaxis_transform(),
linewidth=2,
color='red')
axes.plot([xmin, xmax], [yloc] * 2, **plot_args, label='Selected periods')
axes.plot([xmin] * 2, [yloc - 0.01, yloc + 0.01], **plot_args)
axes.plot([xmax] * 2, [yloc - 0.01, yloc + 0.01], **plot_args)
def make_plot(metadata, scenarios, cfg, provenance):
"""Make figure 3, left graph.
Multimodel values as line, reference value in black square, steering
variables in dark dots.
"""
fig, axes = plt.subplots()
for member in select_metadata(metadata, variable_group='tas_cmip'):
filename = member['filename']
dataset = xr.open_dataset(filename)
if 'MultiModel' not in filename:
axes.plot(dataset.time.dt.year,
dataset.tas.values,
c='grey',
alpha=0.3,
lw=.5,
label='CMIP members')
else:
# Only display stats for the future period:
dataset = dataset.sel(time=slice('2010', None, None))
axes.plot(dataset.time.dt.year,
dataset.tas.values,
color='k',
linewidth=2,
label='CMIP ' + member['alias'][10:])
for member in select_metadata(metadata, variable_group='tas_target'):
filename = member['filename']
dataset = xr.open_dataset(filename)
if 'MultiModel' not in filename:
axes.plot(dataset.time.dt.year,
dataset.tas.values,
color='blue',
linewidth=1,
label=member['dataset'])
    # Add the scenarios with dots at the CMIP delta T and bars for the periods
for i, scenario in enumerate(scenarios):
axes.scatter(scenario['year'],
scenario['cmip_dt'],
s=50,
zorder=10,
color='r',
label=r"Scenarios' steering $\Delta T_{CMIP}$")
_timeline(axes, i, scenario['period_bounds'])
handles, labels = plt.gca().get_legend_handles_labels()
by_label = dict(zip(labels, handles)) # dict removes dupes
axes.legend(by_label.values(), by_label.keys())
axes.set_xlabel('Year')
axes.set_ylabel(r'Global mean $\Delta T$ (K) w.r.t. reference period')
# Save figure
filename = get_plot_filename('global_matching', cfg)
fig.savefig(filename, bbox_inches='tight', dpi=300)
with ProvenanceLogger(cfg) as provenance_logger:
provenance_logger.log(filename, provenance)
def save(output, cfg, provenance):
"""Save the output as csv file."""
scenarios = pd.DataFrame(output)
filename = get_diagnostic_filename('scenarios', cfg, extension='csv')
scenarios.to_csv(filename)
print(scenarios.round(2))
print(f"Output written to {filename}")
with ProvenanceLogger(cfg) as provenance_logger:
provenance_logger.log(filename, provenance)
def main(cfg):
"""Return scenarios tables."""
# A list of dictionaries describing all datasets passed on to the recipe
metadata = cfg['input_data'].values()
# Get the average delta T of the target model
target_dts, provenance = mean_of_target_models(metadata)
    # Define the different scenarios
scenarios = []
combinations = product(cfg['scenario_years'], cfg['scenario_percentiles'])
for year, percentile in combinations:
cmip_dt = get_cmip_dt(metadata, year, percentile)
bounds, target_dt = get_resampling_period(target_dts, cmip_dt)
scenario = {
'year': year,
'percentile': percentile,
'cmip_dt': cmip_dt,
'period_bounds': bounds,
'target_dt': float(target_dt),
'pattern_scaling_factor': cmip_dt / target_dt
}
scenarios.append(scenario)
# Save scenarios tables as csv file
save(scenarios, cfg, provenance)
# Plot the results
make_plot(metadata, scenarios, cfg, provenance)
if __name__ == '__main__':
with run_diagnostic() as config:
        main(config)

/123_object_detection-0.1.tar.gz/123_object_detection-0.1/object_detection/builders/model_builder.py

import functools
import sys
from absl import logging
from object_detection.builders import anchor_generator_builder
from object_detection.builders import box_coder_builder
from object_detection.builders import box_predictor_builder
from object_detection.builders import hyperparams_builder
from object_detection.builders import image_resizer_builder
from object_detection.builders import losses_builder
from object_detection.builders import matcher_builder
from object_detection.builders import post_processing_builder
from object_detection.builders import region_similarity_calculator_builder as sim_calc
from object_detection.core import balanced_positive_negative_sampler as sampler
from object_detection.core import post_processing
from object_detection.core import target_assigner
from object_detection.meta_architectures import center_net_meta_arch
from object_detection.meta_architectures import context_rcnn_meta_arch
from object_detection.meta_architectures import deepmac_meta_arch
from object_detection.meta_architectures import faster_rcnn_meta_arch
from object_detection.meta_architectures import rfcn_meta_arch
from object_detection.meta_architectures import ssd_meta_arch
from object_detection.predictors.heads import mask_head
from object_detection.protos import losses_pb2
from object_detection.protos import model_pb2
from object_detection.utils import label_map_util
from object_detection.utils import ops
from object_detection.utils import spatial_transform_ops as spatial_ops
from object_detection.utils import tf_version
## Feature Extractors for TF
## This section conditionally imports different feature extractors based on the
## Tensorflow version.
##
# pylint: disable=g-import-not-at-top
if tf_version.is_tf2():
from object_detection.models import center_net_hourglass_feature_extractor
from object_detection.models import center_net_mobilenet_v2_feature_extractor
from object_detection.models import center_net_mobilenet_v2_fpn_feature_extractor
from object_detection.models import center_net_resnet_feature_extractor
from object_detection.models import center_net_resnet_v1_fpn_feature_extractor
from object_detection.models import faster_rcnn_inception_resnet_v2_keras_feature_extractor as frcnn_inc_res_keras
from object_detection.models import faster_rcnn_resnet_keras_feature_extractor as frcnn_resnet_keras
from object_detection.models import ssd_resnet_v1_fpn_keras_feature_extractor as ssd_resnet_v1_fpn_keras
from object_detection.models import faster_rcnn_resnet_v1_fpn_keras_feature_extractor as frcnn_resnet_fpn_keras
from object_detection.models.ssd_mobilenet_v1_fpn_keras_feature_extractor import SSDMobileNetV1FpnKerasFeatureExtractor
from object_detection.models.ssd_mobilenet_v1_keras_feature_extractor import SSDMobileNetV1KerasFeatureExtractor
from object_detection.models.ssd_mobilenet_v2_fpn_keras_feature_extractor import SSDMobileNetV2FpnKerasFeatureExtractor
from object_detection.models.ssd_mobilenet_v2_keras_feature_extractor import SSDMobileNetV2KerasFeatureExtractor
from object_detection.predictors import rfcn_keras_box_predictor
if sys.version_info[0] >= 3:
from object_detection.models import ssd_efficientnet_bifpn_feature_extractor as ssd_efficientnet_bifpn
if tf_version.is_tf1():
from object_detection.models import faster_rcnn_inception_resnet_v2_feature_extractor as frcnn_inc_res
from object_detection.models import faster_rcnn_inception_v2_feature_extractor as frcnn_inc_v2
from object_detection.models import faster_rcnn_nas_feature_extractor as frcnn_nas
from object_detection.models import faster_rcnn_pnas_feature_extractor as frcnn_pnas
from object_detection.models import faster_rcnn_resnet_v1_feature_extractor as frcnn_resnet_v1
from object_detection.models import ssd_resnet_v1_fpn_feature_extractor as ssd_resnet_v1_fpn
from object_detection.models import ssd_resnet_v1_ppn_feature_extractor as ssd_resnet_v1_ppn
from object_detection.models.embedded_ssd_mobilenet_v1_feature_extractor import EmbeddedSSDMobileNetV1FeatureExtractor
from object_detection.models.ssd_inception_v2_feature_extractor import SSDInceptionV2FeatureExtractor
from object_detection.models.ssd_mobilenet_v2_fpn_feature_extractor import SSDMobileNetV2FpnFeatureExtractor
from object_detection.models.ssd_mobilenet_v2_mnasfpn_feature_extractor import SSDMobileNetV2MnasFPNFeatureExtractor
from object_detection.models.ssd_inception_v3_feature_extractor import SSDInceptionV3FeatureExtractor
from object_detection.models.ssd_mobilenet_edgetpu_feature_extractor import SSDMobileNetEdgeTPUFeatureExtractor
from object_detection.models.ssd_mobilenet_v1_feature_extractor import SSDMobileNetV1FeatureExtractor
from object_detection.models.ssd_mobilenet_v1_fpn_feature_extractor import SSDMobileNetV1FpnFeatureExtractor
from object_detection.models.ssd_mobilenet_v1_ppn_feature_extractor import SSDMobileNetV1PpnFeatureExtractor
from object_detection.models.ssd_mobilenet_v2_feature_extractor import SSDMobileNetV2FeatureExtractor
from object_detection.models.ssd_mobilenet_v3_feature_extractor import SSDMobileNetV3LargeFeatureExtractor
from object_detection.models.ssd_mobilenet_v3_feature_extractor import SSDMobileNetV3SmallFeatureExtractor
from object_detection.models.ssd_mobiledet_feature_extractor import SSDMobileDetCPUFeatureExtractor
from object_detection.models.ssd_mobiledet_feature_extractor import SSDMobileDetDSPFeatureExtractor
from object_detection.models.ssd_mobiledet_feature_extractor import SSDMobileDetEdgeTPUFeatureExtractor
from object_detection.models.ssd_mobiledet_feature_extractor import SSDMobileDetGPUFeatureExtractor
from object_detection.models.ssd_pnasnet_feature_extractor import SSDPNASNetFeatureExtractor
from object_detection.predictors import rfcn_box_predictor
# pylint: enable=g-import-not-at-top
if tf_version.is_tf2():
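  # Maps feature extractor names in the model config proto to their
  # corresponding Keras (TF2) classes and functions.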
SSD_KERAS_FEATURE_EXTRACTOR_CLASS_MAP = {
'ssd_mobilenet_v1_keras': SSDMobileNetV1KerasFeatureExtractor,
'ssd_mobilenet_v1_fpn_keras': SSDMobileNetV1FpnKerasFeatureExtractor,
'ssd_mobilenet_v2_keras': SSDMobileNetV2KerasFeatureExtractor,
'ssd_mobilenet_v2_fpn_keras': SSDMobileNetV2FpnKerasFeatureExtractor,
'ssd_resnet50_v1_fpn_keras':
ssd_resnet_v1_fpn_keras.SSDResNet50V1FpnKerasFeatureExtractor,
'ssd_resnet101_v1_fpn_keras':
ssd_resnet_v1_fpn_keras.SSDResNet101V1FpnKerasFeatureExtractor,
'ssd_resnet152_v1_fpn_keras':
ssd_resnet_v1_fpn_keras.SSDResNet152V1FpnKerasFeatureExtractor,
'ssd_efficientnet-b0_bifpn_keras':
ssd_efficientnet_bifpn.SSDEfficientNetB0BiFPNKerasFeatureExtractor,
'ssd_efficientnet-b1_bifpn_keras':
ssd_efficientnet_bifpn.SSDEfficientNetB1BiFPNKerasFeatureExtractor,
'ssd_efficientnet-b2_bifpn_keras':
ssd_efficientnet_bifpn.SSDEfficientNetB2BiFPNKerasFeatureExtractor,
'ssd_efficientnet-b3_bifpn_keras':
ssd_efficientnet_bifpn.SSDEfficientNetB3BiFPNKerasFeatureExtractor,
'ssd_efficientnet-b4_bifpn_keras':
ssd_efficientnet_bifpn.SSDEfficientNetB4BiFPNKerasFeatureExtractor,
'ssd_efficientnet-b5_bifpn_keras':
ssd_efficientnet_bifpn.SSDEfficientNetB5BiFPNKerasFeatureExtractor,
'ssd_efficientnet-b6_bifpn_keras':
ssd_efficientnet_bifpn.SSDEfficientNetB6BiFPNKerasFeatureExtractor,
'ssd_efficientnet-b7_bifpn_keras':
ssd_efficientnet_bifpn.SSDEfficientNetB7BiFPNKerasFeatureExtractor,
}
FASTER_RCNN_KERAS_FEATURE_EXTRACTOR_CLASS_MAP = {
'faster_rcnn_resnet50_keras':
frcnn_resnet_keras.FasterRCNNResnet50KerasFeatureExtractor,
'faster_rcnn_resnet101_keras':
frcnn_resnet_keras.FasterRCNNResnet101KerasFeatureExtractor,
'faster_rcnn_resnet152_keras':
frcnn_resnet_keras.FasterRCNNResnet152KerasFeatureExtractor,
'faster_rcnn_inception_resnet_v2_keras':
frcnn_inc_res_keras.FasterRCNNInceptionResnetV2KerasFeatureExtractor,
'faster_rcnn_resnet50_fpn_keras':
frcnn_resnet_fpn_keras.FasterRCNNResnet50FpnKerasFeatureExtractor,
'faster_rcnn_resnet101_fpn_keras':
frcnn_resnet_fpn_keras.FasterRCNNResnet101FpnKerasFeatureExtractor,
'faster_rcnn_resnet152_fpn_keras':
frcnn_resnet_fpn_keras.FasterRCNNResnet152FpnKerasFeatureExtractor,
}
CENTER_NET_EXTRACTOR_FUNCTION_MAP = {
'resnet_v2_50':
center_net_resnet_feature_extractor.resnet_v2_50,
'resnet_v2_101':
center_net_resnet_feature_extractor.resnet_v2_101,
'resnet_v1_18_fpn':
center_net_resnet_v1_fpn_feature_extractor.resnet_v1_18_fpn,
'resnet_v1_34_fpn':
center_net_resnet_v1_fpn_feature_extractor.resnet_v1_34_fpn,
'resnet_v1_50_fpn':
center_net_resnet_v1_fpn_feature_extractor.resnet_v1_50_fpn,
'resnet_v1_101_fpn':
center_net_resnet_v1_fpn_feature_extractor.resnet_v1_101_fpn,
'hourglass_10':
center_net_hourglass_feature_extractor.hourglass_10,
'hourglass_20':
center_net_hourglass_feature_extractor.hourglass_20,
'hourglass_32':
center_net_hourglass_feature_extractor.hourglass_32,
'hourglass_52':
center_net_hourglass_feature_extractor.hourglass_52,
'hourglass_104':
center_net_hourglass_feature_extractor.hourglass_104,
'mobilenet_v2':
center_net_mobilenet_v2_feature_extractor.mobilenet_v2,
'mobilenet_v2_fpn':
center_net_mobilenet_v2_fpn_feature_extractor.mobilenet_v2_fpn,
'mobilenet_v2_fpn_sep_conv':
center_net_mobilenet_v2_fpn_feature_extractor.mobilenet_v2_fpn,
}
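  # All known feature extractor maps, used to validate configured types.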
FEATURE_EXTRACTOR_MAPS = [
CENTER_NET_EXTRACTOR_FUNCTION_MAP,
FASTER_RCNN_KERAS_FEATURE_EXTRACTOR_CLASS_MAP,
SSD_KERAS_FEATURE_EXTRACTOR_CLASS_MAP
]
if tf_version.is_tf1():
SSD_FEATURE_EXTRACTOR_CLASS_MAP = {
'ssd_inception_v2':
SSDInceptionV2FeatureExtractor,
'ssd_inception_v3':
SSDInceptionV3FeatureExtractor,
'ssd_mobilenet_v1':
SSDMobileNetV1FeatureExtractor,
'ssd_mobilenet_v1_fpn':
SSDMobileNetV1FpnFeatureExtractor,
'ssd_mobilenet_v1_ppn':
SSDMobileNetV1PpnFeatureExtractor,
'ssd_mobilenet_v2':
SSDMobileNetV2FeatureExtractor,
'ssd_mobilenet_v2_fpn':
SSDMobileNetV2FpnFeatureExtractor,
'ssd_mobilenet_v2_mnasfpn':
SSDMobileNetV2MnasFPNFeatureExtractor,
'ssd_mobilenet_v3_large':
SSDMobileNetV3LargeFeatureExtractor,
'ssd_mobilenet_v3_small':
SSDMobileNetV3SmallFeatureExtractor,
'ssd_mobilenet_edgetpu':
SSDMobileNetEdgeTPUFeatureExtractor,
'ssd_resnet50_v1_fpn':
ssd_resnet_v1_fpn.SSDResnet50V1FpnFeatureExtractor,
'ssd_resnet101_v1_fpn':
ssd_resnet_v1_fpn.SSDResnet101V1FpnFeatureExtractor,
'ssd_resnet152_v1_fpn':
ssd_resnet_v1_fpn.SSDResnet152V1FpnFeatureExtractor,
'ssd_resnet50_v1_ppn':
ssd_resnet_v1_ppn.SSDResnet50V1PpnFeatureExtractor,
'ssd_resnet101_v1_ppn':
ssd_resnet_v1_ppn.SSDResnet101V1PpnFeatureExtractor,
'ssd_resnet152_v1_ppn':
ssd_resnet_v1_ppn.SSDResnet152V1PpnFeatureExtractor,
'embedded_ssd_mobilenet_v1':
EmbeddedSSDMobileNetV1FeatureExtractor,
'ssd_pnasnet':
SSDPNASNetFeatureExtractor,
'ssd_mobiledet_cpu':
SSDMobileDetCPUFeatureExtractor,
'ssd_mobiledet_dsp':
SSDMobileDetDSPFeatureExtractor,
'ssd_mobiledet_edgetpu':
SSDMobileDetEdgeTPUFeatureExtractor,
'ssd_mobiledet_gpu':
SSDMobileDetGPUFeatureExtractor,
}
FASTER_RCNN_FEATURE_EXTRACTOR_CLASS_MAP = {
'faster_rcnn_nas':
frcnn_nas.FasterRCNNNASFeatureExtractor,
'faster_rcnn_pnas':
frcnn_pnas.FasterRCNNPNASFeatureExtractor,
'faster_rcnn_inception_resnet_v2':
frcnn_inc_res.FasterRCNNInceptionResnetV2FeatureExtractor,
'faster_rcnn_inception_v2':
frcnn_inc_v2.FasterRCNNInceptionV2FeatureExtractor,
'faster_rcnn_resnet50':
frcnn_resnet_v1.FasterRCNNResnet50FeatureExtractor,
'faster_rcnn_resnet101':
frcnn_resnet_v1.FasterRCNNResnet101FeatureExtractor,
'faster_rcnn_resnet152':
frcnn_resnet_v1.FasterRCNNResnet152FeatureExtractor,
}
CENTER_NET_EXTRACTOR_FUNCTION_MAP = {}
FEATURE_EXTRACTOR_MAPS = [
SSD_FEATURE_EXTRACTOR_CLASS_MAP,
FASTER_RCNN_FEATURE_EXTRACTOR_CLASS_MAP,
CENTER_NET_EXTRACTOR_FUNCTION_MAP
]
def _check_feature_extractor_exists(feature_extractor_type):
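  """Raises ValueError if the given feature extractor type is not supported."""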
feature_extractors = set().union(*FEATURE_EXTRACTOR_MAPS)
if feature_extractor_type not in feature_extractors:
    raise ValueError('{} is not supported. See `model_builder.py` for feature '
                     'extractors compatible with different versions of '
                     'Tensorflow'.format(feature_extractor_type))
def _build_ssd_feature_extractor(feature_extractor_config,
is_training,
freeze_batchnorm,
reuse_weights=None):
"""Builds a ssd_meta_arch.SSDFeatureExtractor based on config.
Args:
feature_extractor_config: A SSDFeatureExtractor proto config from ssd.proto.
is_training: True if this feature extractor is being built for training.
freeze_batchnorm: Whether to freeze batch norm parameters during
training or not. When training with a small batch size (e.g. 1), it is
desirable to freeze batch norm update and use pretrained batch norm
params.
reuse_weights: if the feature extractor should reuse weights.
Returns:
ssd_meta_arch.SSDFeatureExtractor based on config.
Raises:
ValueError: On invalid feature extractor type.
"""
feature_type = feature_extractor_config.type
depth_multiplier = feature_extractor_config.depth_multiplier
min_depth = feature_extractor_config.min_depth
pad_to_multiple = feature_extractor_config.pad_to_multiple
use_explicit_padding = feature_extractor_config.use_explicit_padding
use_depthwise = feature_extractor_config.use_depthwise
is_keras = tf_version.is_tf2()
if is_keras:
conv_hyperparams = hyperparams_builder.KerasLayerHyperparams(
feature_extractor_config.conv_hyperparams)
else:
conv_hyperparams = hyperparams_builder.build(
feature_extractor_config.conv_hyperparams, is_training)
override_base_feature_extractor_hyperparams = (
feature_extractor_config.override_base_feature_extractor_hyperparams)
if not is_keras and feature_type not in SSD_FEATURE_EXTRACTOR_CLASS_MAP:
raise ValueError('Unknown ssd feature_extractor: {}'.format(feature_type))
if is_keras:
feature_extractor_class = SSD_KERAS_FEATURE_EXTRACTOR_CLASS_MAP[
feature_type]
else:
feature_extractor_class = SSD_FEATURE_EXTRACTOR_CLASS_MAP[feature_type]
kwargs = {
'is_training':
is_training,
'depth_multiplier':
depth_multiplier,
'min_depth':
min_depth,
'pad_to_multiple':
pad_to_multiple,
'use_explicit_padding':
use_explicit_padding,
'use_depthwise':
use_depthwise,
'override_base_feature_extractor_hyperparams':
override_base_feature_extractor_hyperparams
}
if feature_extractor_config.HasField('replace_preprocessor_with_placeholder'):
kwargs.update({
'replace_preprocessor_with_placeholder':
feature_extractor_config.replace_preprocessor_with_placeholder
})
if feature_extractor_config.HasField('num_layers'):
kwargs.update({'num_layers': feature_extractor_config.num_layers})
if is_keras:
kwargs.update({
'conv_hyperparams': conv_hyperparams,
'inplace_batchnorm_update': False,
'freeze_batchnorm': freeze_batchnorm
})
else:
kwargs.update({
'conv_hyperparams_fn': conv_hyperparams,
'reuse_weights': reuse_weights,
})
if feature_extractor_config.HasField('fpn'):
kwargs.update({
'fpn_min_level':
feature_extractor_config.fpn.min_level,
'fpn_max_level':
feature_extractor_config.fpn.max_level,
'additional_layer_depth':
feature_extractor_config.fpn.additional_layer_depth,
})
if feature_extractor_config.HasField('bifpn'):
kwargs.update({
'bifpn_min_level': feature_extractor_config.bifpn.min_level,
'bifpn_max_level': feature_extractor_config.bifpn.max_level,
'bifpn_num_iterations': feature_extractor_config.bifpn.num_iterations,
'bifpn_num_filters': feature_extractor_config.bifpn.num_filters,
'bifpn_combine_method': feature_extractor_config.bifpn.combine_method,
})
return feature_extractor_class(**kwargs)
def _build_ssd_model(ssd_config, is_training, add_summaries):
"""Builds an SSD detection model based on the model config.
Args:
ssd_config: A ssd.proto object containing the config for the desired
SSDMetaArch.
is_training: True if this model is being built for training purposes.
add_summaries: Whether to add tf summaries in the model.
Returns:
SSDMetaArch based on the config.
Raises:
ValueError: If ssd_config.type is not recognized (i.e. not registered in
model_class_map).
"""
num_classes = ssd_config.num_classes
_check_feature_extractor_exists(ssd_config.feature_extractor.type)
# Feature extractor
feature_extractor = _build_ssd_feature_extractor(
feature_extractor_config=ssd_config.feature_extractor,
freeze_batchnorm=ssd_config.freeze_batchnorm,
is_training=is_training)
box_coder = box_coder_builder.build(ssd_config.box_coder)
matcher = matcher_builder.build(ssd_config.matcher)
region_similarity_calculator = sim_calc.build(
ssd_config.similarity_calculator)
encode_background_as_zeros = ssd_config.encode_background_as_zeros
negative_class_weight = ssd_config.negative_class_weight
anchor_generator = anchor_generator_builder.build(
ssd_config.anchor_generator)
if feature_extractor.is_keras_model:
ssd_box_predictor = box_predictor_builder.build_keras(
hyperparams_fn=hyperparams_builder.KerasLayerHyperparams,
freeze_batchnorm=ssd_config.freeze_batchnorm,
inplace_batchnorm_update=False,
num_predictions_per_location_list=anchor_generator
.num_anchors_per_location(),
box_predictor_config=ssd_config.box_predictor,
is_training=is_training,
num_classes=num_classes,
add_background_class=ssd_config.add_background_class)
else:
ssd_box_predictor = box_predictor_builder.build(
hyperparams_builder.build, ssd_config.box_predictor, is_training,
num_classes, ssd_config.add_background_class)
image_resizer_fn = image_resizer_builder.build(ssd_config.image_resizer)
non_max_suppression_fn, score_conversion_fn = post_processing_builder.build(
ssd_config.post_processing)
(classification_loss, localization_loss, classification_weight,
localization_weight, hard_example_miner, random_example_sampler,
expected_loss_weights_fn) = losses_builder.build(ssd_config.loss)
normalize_loss_by_num_matches = ssd_config.normalize_loss_by_num_matches
normalize_loc_loss_by_codesize = ssd_config.normalize_loc_loss_by_codesize
equalization_loss_config = ops.EqualizationLossConfig(
weight=ssd_config.loss.equalization_loss.weight,
exclude_prefixes=ssd_config.loss.equalization_loss.exclude_prefixes)
target_assigner_instance = target_assigner.TargetAssigner(
region_similarity_calculator,
matcher,
box_coder,
negative_class_weight=negative_class_weight)
ssd_meta_arch_fn = ssd_meta_arch.SSDMetaArch
kwargs = {}
return ssd_meta_arch_fn(
is_training=is_training,
anchor_generator=anchor_generator,
box_predictor=ssd_box_predictor,
box_coder=box_coder,
feature_extractor=feature_extractor,
encode_background_as_zeros=encode_background_as_zeros,
image_resizer_fn=image_resizer_fn,
non_max_suppression_fn=non_max_suppression_fn,
score_conversion_fn=score_conversion_fn,
classification_loss=classification_loss,
localization_loss=localization_loss,
classification_loss_weight=classification_weight,
localization_loss_weight=localization_weight,
normalize_loss_by_num_matches=normalize_loss_by_num_matches,
hard_example_miner=hard_example_miner,
target_assigner_instance=target_assigner_instance,
add_summaries=add_summaries,
normalize_loc_loss_by_codesize=normalize_loc_loss_by_codesize,
freeze_batchnorm=ssd_config.freeze_batchnorm,
inplace_batchnorm_update=ssd_config.inplace_batchnorm_update,
add_background_class=ssd_config.add_background_class,
explicit_background_class=ssd_config.explicit_background_class,
random_example_sampler=random_example_sampler,
expected_loss_weights_fn=expected_loss_weights_fn,
use_confidences_as_targets=ssd_config.use_confidences_as_targets,
implicit_example_weight=ssd_config.implicit_example_weight,
equalization_loss_config=equalization_loss_config,
return_raw_detections_during_predict=(
ssd_config.return_raw_detections_during_predict),
**kwargs)
def _build_faster_rcnn_feature_extractor(
feature_extractor_config, is_training, reuse_weights=True,
inplace_batchnorm_update=False):
"""Builds a faster_rcnn_meta_arch.FasterRCNNFeatureExtractor based on config.
Args:
feature_extractor_config: A FasterRcnnFeatureExtractor proto config from
faster_rcnn.proto.
is_training: True if this feature extractor is being built for training.
reuse_weights: if the feature extractor should reuse weights.
inplace_batchnorm_update: Whether to update batch_norm inplace during
training. This is required for batch norm to work correctly on TPUs. When
this is false, user must add a control dependency on
tf.GraphKeys.UPDATE_OPS for train/loss op in order to update the batch
norm moving average parameters.
Returns:
faster_rcnn_meta_arch.FasterRCNNFeatureExtractor based on config.
Raises:
ValueError: On invalid feature extractor type.
"""
if inplace_batchnorm_update:
raise ValueError('inplace batchnorm updates not supported.')
feature_type = feature_extractor_config.type
first_stage_features_stride = (
feature_extractor_config.first_stage_features_stride)
batch_norm_trainable = feature_extractor_config.batch_norm_trainable
if feature_type not in FASTER_RCNN_FEATURE_EXTRACTOR_CLASS_MAP:
raise ValueError('Unknown Faster R-CNN feature_extractor: {}'.format(
feature_type))
feature_extractor_class = FASTER_RCNN_FEATURE_EXTRACTOR_CLASS_MAP[
feature_type]
return feature_extractor_class(
is_training, first_stage_features_stride,
batch_norm_trainable, reuse_weights=reuse_weights)
def _build_faster_rcnn_keras_feature_extractor(
feature_extractor_config, is_training,
inplace_batchnorm_update=False):
"""Builds a faster_rcnn_meta_arch.FasterRCNNKerasFeatureExtractor from config.
Args:
feature_extractor_config: A FasterRcnnFeatureExtractor proto config from
faster_rcnn.proto.
is_training: True if this feature extractor is being built for training.
inplace_batchnorm_update: Whether to update batch_norm inplace during
training. This is required for batch norm to work correctly on TPUs. When
this is false, user must add a control dependency on
tf.GraphKeys.UPDATE_OPS for train/loss op in order to update the batch
norm moving average parameters.
Returns:
faster_rcnn_meta_arch.FasterRCNNKerasFeatureExtractor based on config.
Raises:
ValueError: On invalid feature extractor type.
"""
if inplace_batchnorm_update:
raise ValueError('inplace batchnorm updates not supported.')
feature_type = feature_extractor_config.type
first_stage_features_stride = (
feature_extractor_config.first_stage_features_stride)
batch_norm_trainable = feature_extractor_config.batch_norm_trainable
if feature_type not in FASTER_RCNN_KERAS_FEATURE_EXTRACTOR_CLASS_MAP:
raise ValueError('Unknown Faster R-CNN feature_extractor: {}'.format(
feature_type))
feature_extractor_class = FASTER_RCNN_KERAS_FEATURE_EXTRACTOR_CLASS_MAP[
feature_type]
kwargs = {}
if feature_extractor_config.HasField('conv_hyperparams'):
kwargs.update({
'conv_hyperparams':
hyperparams_builder.KerasLayerHyperparams(
feature_extractor_config.conv_hyperparams),
'override_base_feature_extractor_hyperparams':
feature_extractor_config.override_base_feature_extractor_hyperparams
})
if feature_extractor_config.HasField('fpn'):
kwargs.update({
'fpn_min_level':
feature_extractor_config.fpn.min_level,
'fpn_max_level':
feature_extractor_config.fpn.max_level,
'additional_layer_depth':
feature_extractor_config.fpn.additional_layer_depth,
})
return feature_extractor_class(
is_training, first_stage_features_stride,
batch_norm_trainable, **kwargs)
def _build_faster_rcnn_model(frcnn_config, is_training, add_summaries):
"""Builds a Faster R-CNN or R-FCN detection model based on the model config.
Builds R-FCN model if the second_stage_box_predictor in the config is of type
`rfcn_box_predictor` else builds a Faster R-CNN model.
Args:
frcnn_config: A faster_rcnn.proto object containing the config for the
desired FasterRCNNMetaArch or RFCNMetaArch.
is_training: True if this model is being built for training purposes.
add_summaries: Whether to add tf summaries in the model.
Returns:
FasterRCNNMetaArch based on the config.
Raises:
ValueError: If frcnn_config.type is not recognized (i.e. not registered in
model_class_map).
"""
num_classes = frcnn_config.num_classes
image_resizer_fn = image_resizer_builder.build(frcnn_config.image_resizer)
_check_feature_extractor_exists(frcnn_config.feature_extractor.type)
is_keras = tf_version.is_tf2()
if is_keras:
feature_extractor = _build_faster_rcnn_keras_feature_extractor(
frcnn_config.feature_extractor, is_training,
inplace_batchnorm_update=frcnn_config.inplace_batchnorm_update)
else:
feature_extractor = _build_faster_rcnn_feature_extractor(
frcnn_config.feature_extractor, is_training,
inplace_batchnorm_update=frcnn_config.inplace_batchnorm_update)
number_of_stages = frcnn_config.number_of_stages
first_stage_anchor_generator = anchor_generator_builder.build(
frcnn_config.first_stage_anchor_generator)
first_stage_target_assigner = target_assigner.create_target_assigner(
'FasterRCNN',
'proposal',
use_matmul_gather=frcnn_config.use_matmul_gather_in_matcher)
first_stage_atrous_rate = frcnn_config.first_stage_atrous_rate
if is_keras:
first_stage_box_predictor_arg_scope_fn = (
hyperparams_builder.KerasLayerHyperparams(
frcnn_config.first_stage_box_predictor_conv_hyperparams))
else:
first_stage_box_predictor_arg_scope_fn = hyperparams_builder.build(
frcnn_config.first_stage_box_predictor_conv_hyperparams, is_training)
first_stage_box_predictor_kernel_size = (
frcnn_config.first_stage_box_predictor_kernel_size)
first_stage_box_predictor_depth = frcnn_config.first_stage_box_predictor_depth
first_stage_minibatch_size = frcnn_config.first_stage_minibatch_size
use_static_shapes = frcnn_config.use_static_shapes and (
frcnn_config.use_static_shapes_for_eval or is_training)
first_stage_sampler = sampler.BalancedPositiveNegativeSampler(
positive_fraction=frcnn_config.first_stage_positive_balance_fraction,
is_static=(frcnn_config.use_static_balanced_label_sampler and
use_static_shapes))
first_stage_max_proposals = frcnn_config.first_stage_max_proposals
if (frcnn_config.first_stage_nms_iou_threshold < 0 or
frcnn_config.first_stage_nms_iou_threshold > 1.0):
raise ValueError('iou_threshold not in [0, 1.0].')
if (is_training and frcnn_config.second_stage_batch_size >
first_stage_max_proposals):
raise ValueError('second_stage_batch_size should be no greater than '
'first_stage_max_proposals.')
first_stage_non_max_suppression_fn = functools.partial(
post_processing.batch_multiclass_non_max_suppression,
score_thresh=frcnn_config.first_stage_nms_score_threshold,
iou_thresh=frcnn_config.first_stage_nms_iou_threshold,
max_size_per_class=frcnn_config.first_stage_max_proposals,
max_total_size=frcnn_config.first_stage_max_proposals,
use_static_shapes=use_static_shapes,
use_partitioned_nms=frcnn_config.use_partitioned_nms_in_first_stage,
use_combined_nms=frcnn_config.use_combined_nms_in_first_stage)
first_stage_loc_loss_weight = (
frcnn_config.first_stage_localization_loss_weight)
first_stage_obj_loss_weight = frcnn_config.first_stage_objectness_loss_weight
initial_crop_size = frcnn_config.initial_crop_size
maxpool_kernel_size = frcnn_config.maxpool_kernel_size
maxpool_stride = frcnn_config.maxpool_stride
second_stage_target_assigner = target_assigner.create_target_assigner(
'FasterRCNN',
'detection',
use_matmul_gather=frcnn_config.use_matmul_gather_in_matcher)
if is_keras:
second_stage_box_predictor = box_predictor_builder.build_keras(
hyperparams_builder.KerasLayerHyperparams,
freeze_batchnorm=False,
inplace_batchnorm_update=False,
num_predictions_per_location_list=[1],
box_predictor_config=frcnn_config.second_stage_box_predictor,
is_training=is_training,
num_classes=num_classes)
else:
second_stage_box_predictor = box_predictor_builder.build(
hyperparams_builder.build,
frcnn_config.second_stage_box_predictor,
is_training=is_training,
num_classes=num_classes)
second_stage_batch_size = frcnn_config.second_stage_batch_size
second_stage_sampler = sampler.BalancedPositiveNegativeSampler(
positive_fraction=frcnn_config.second_stage_balance_fraction,
is_static=(frcnn_config.use_static_balanced_label_sampler and
use_static_shapes))
(second_stage_non_max_suppression_fn, second_stage_score_conversion_fn
) = post_processing_builder.build(frcnn_config.second_stage_post_processing)
second_stage_localization_loss_weight = (
frcnn_config.second_stage_localization_loss_weight)
second_stage_classification_loss = (
losses_builder.build_faster_rcnn_classification_loss(
frcnn_config.second_stage_classification_loss))
second_stage_classification_loss_weight = (
frcnn_config.second_stage_classification_loss_weight)
second_stage_mask_prediction_loss_weight = (
frcnn_config.second_stage_mask_prediction_loss_weight)
hard_example_miner = None
if frcnn_config.HasField('hard_example_miner'):
hard_example_miner = losses_builder.build_hard_example_miner(
frcnn_config.hard_example_miner,
second_stage_classification_loss_weight,
second_stage_localization_loss_weight)
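  # Choose the matmul-based crop-and-resize (friendlier to static shapes,
  # e.g. on TPUs) or the native implementation, per the config.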
crop_and_resize_fn = (
spatial_ops.multilevel_matmul_crop_and_resize
if frcnn_config.use_matmul_crop_and_resize
else spatial_ops.multilevel_native_crop_and_resize)
clip_anchors_to_image = (
frcnn_config.clip_anchors_to_image)
common_kwargs = {
'is_training':
is_training,
'num_classes':
num_classes,
'image_resizer_fn':
image_resizer_fn,
'feature_extractor':
feature_extractor,
'number_of_stages':
number_of_stages,
'first_stage_anchor_generator':
first_stage_anchor_generator,
'first_stage_target_assigner':
first_stage_target_assigner,
'first_stage_atrous_rate':
first_stage_atrous_rate,
'first_stage_box_predictor_arg_scope_fn':
first_stage_box_predictor_arg_scope_fn,
'first_stage_box_predictor_kernel_size':
first_stage_box_predictor_kernel_size,
'first_stage_box_predictor_depth':
first_stage_box_predictor_depth,
'first_stage_minibatch_size':
first_stage_minibatch_size,
'first_stage_sampler':
first_stage_sampler,
'first_stage_non_max_suppression_fn':
first_stage_non_max_suppression_fn,
'first_stage_max_proposals':
first_stage_max_proposals,
'first_stage_localization_loss_weight':
first_stage_loc_loss_weight,
'first_stage_objectness_loss_weight':
first_stage_obj_loss_weight,
'second_stage_target_assigner':
second_stage_target_assigner,
'second_stage_batch_size':
second_stage_batch_size,
'second_stage_sampler':
second_stage_sampler,
'second_stage_non_max_suppression_fn':
second_stage_non_max_suppression_fn,
'second_stage_score_conversion_fn':
second_stage_score_conversion_fn,
'second_stage_localization_loss_weight':
second_stage_localization_loss_weight,
'second_stage_classification_loss':
second_stage_classification_loss,
'second_stage_classification_loss_weight':
second_stage_classification_loss_weight,
'hard_example_miner':
hard_example_miner,
'add_summaries':
add_summaries,
'crop_and_resize_fn':
crop_and_resize_fn,
'clip_anchors_to_image':
clip_anchors_to_image,
'use_static_shapes':
use_static_shapes,
'resize_masks':
frcnn_config.resize_masks,
'return_raw_detections_during_predict':
frcnn_config.return_raw_detections_during_predict,
'output_final_box_features':
frcnn_config.output_final_box_features,
'output_final_box_rpn_features':
frcnn_config.output_final_box_rpn_features,
}
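  # Dispatch on the configured variant: an R-FCN second-stage box predictor
  # selects RFCNMetaArch, a context_config selects ContextRCNNMetaArch, and
  # the default is the standard FasterRCNNMetaArch.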
if ((not is_keras and isinstance(second_stage_box_predictor,
rfcn_box_predictor.RfcnBoxPredictor)) or
(is_keras and
isinstance(second_stage_box_predictor,
rfcn_keras_box_predictor.RfcnKerasBoxPredictor))):
return rfcn_meta_arch.RFCNMetaArch(
second_stage_rfcn_box_predictor=second_stage_box_predictor,
**common_kwargs)
elif frcnn_config.HasField('context_config'):
context_config = frcnn_config.context_config
common_kwargs.update({
'attention_bottleneck_dimension':
context_config.attention_bottleneck_dimension,
'attention_temperature':
context_config.attention_temperature,
'use_self_attention':
context_config.use_self_attention,
'use_long_term_attention':
context_config.use_long_term_attention,
'self_attention_in_sequence':
context_config.self_attention_in_sequence,
'num_attention_heads':
context_config.num_attention_heads,
'num_attention_layers':
context_config.num_attention_layers,
'attention_position':
context_config.attention_position
})
return context_rcnn_meta_arch.ContextRCNNMetaArch(
initial_crop_size=initial_crop_size,
maxpool_kernel_size=maxpool_kernel_size,
maxpool_stride=maxpool_stride,
second_stage_mask_rcnn_box_predictor=second_stage_box_predictor,
second_stage_mask_prediction_loss_weight=(
second_stage_mask_prediction_loss_weight),
**common_kwargs)
else:
return faster_rcnn_meta_arch.FasterRCNNMetaArch(
initial_crop_size=initial_crop_size,
maxpool_kernel_size=maxpool_kernel_size,
maxpool_stride=maxpool_stride,
second_stage_mask_rcnn_box_predictor=second_stage_box_predictor,
second_stage_mask_prediction_loss_weight=(
second_stage_mask_prediction_loss_weight),
**common_kwargs)
EXPERIMENTAL_META_ARCH_BUILDER_MAP = {
}
def _build_experimental_model(config, is_training, add_summaries=True):
return EXPERIMENTAL_META_ARCH_BUILDER_MAP[config.name](
is_training, add_summaries)
# The class ID in the groundtruth/model architecture is usually 0-based while
# the ID in the label map is 1-based. The offset is used to convert between
# the two.
CLASS_ID_OFFSET = 1
KEYPOINT_STD_DEV_DEFAULT = 1.0
def keypoint_proto_to_params(kp_config, keypoint_map_dict):
"""Converts CenterNet.KeypointEstimation proto to parameter namedtuple."""
label_map_item = keypoint_map_dict[kp_config.keypoint_class_name]
classification_loss, localization_loss, _, _, _, _, _ = (
losses_builder.build(kp_config.loss))
keypoint_indices = [
keypoint.id for keypoint in label_map_item.keypoints
]
keypoint_labels = [
keypoint.label for keypoint in label_map_item.keypoints
]
keypoint_std_dev_dict = {
label: KEYPOINT_STD_DEV_DEFAULT for label in keypoint_labels
}
if kp_config.keypoint_label_to_std:
for label, value in kp_config.keypoint_label_to_std.items():
keypoint_std_dev_dict[label] = value
keypoint_std_dev = [keypoint_std_dev_dict[label] for label in keypoint_labels]
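  # Each prediction head below falls back to a single 256-filter,
  # kernel-size-3 layer when its *_head_params field is unset in the config.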
if kp_config.HasField('heatmap_head_params'):
heatmap_head_num_filters = list(kp_config.heatmap_head_params.num_filters)
heatmap_head_kernel_sizes = list(kp_config.heatmap_head_params.kernel_sizes)
else:
heatmap_head_num_filters = [256]
heatmap_head_kernel_sizes = [3]
if kp_config.HasField('offset_head_params'):
offset_head_num_filters = list(kp_config.offset_head_params.num_filters)
offset_head_kernel_sizes = list(kp_config.offset_head_params.kernel_sizes)
else:
offset_head_num_filters = [256]
offset_head_kernel_sizes = [3]
if kp_config.HasField('regress_head_params'):
regress_head_num_filters = list(kp_config.regress_head_params.num_filters)
regress_head_kernel_sizes = list(
kp_config.regress_head_params.kernel_sizes)
else:
regress_head_num_filters = [256]
regress_head_kernel_sizes = [3]
return center_net_meta_arch.KeypointEstimationParams(
task_name=kp_config.task_name,
class_id=label_map_item.id - CLASS_ID_OFFSET,
keypoint_indices=keypoint_indices,
classification_loss=classification_loss,
localization_loss=localization_loss,
keypoint_labels=keypoint_labels,
keypoint_std_dev=keypoint_std_dev,
task_loss_weight=kp_config.task_loss_weight,
keypoint_regression_loss_weight=kp_config.keypoint_regression_loss_weight,
keypoint_heatmap_loss_weight=kp_config.keypoint_heatmap_loss_weight,
keypoint_offset_loss_weight=kp_config.keypoint_offset_loss_weight,
heatmap_bias_init=kp_config.heatmap_bias_init,
keypoint_candidate_score_threshold=(
kp_config.keypoint_candidate_score_threshold),
num_candidates_per_keypoint=kp_config.num_candidates_per_keypoint,
peak_max_pool_kernel_size=kp_config.peak_max_pool_kernel_size,
unmatched_keypoint_score=kp_config.unmatched_keypoint_score,
box_scale=kp_config.box_scale,
candidate_search_scale=kp_config.candidate_search_scale,
candidate_ranking_mode=kp_config.candidate_ranking_mode,
offset_peak_radius=kp_config.offset_peak_radius,
per_keypoint_offset=kp_config.per_keypoint_offset,
predict_depth=kp_config.predict_depth,
per_keypoint_depth=kp_config.per_keypoint_depth,
keypoint_depth_loss_weight=kp_config.keypoint_depth_loss_weight,
score_distance_offset=kp_config.score_distance_offset,
clip_out_of_frame_keypoints=kp_config.clip_out_of_frame_keypoints,
rescore_instances=kp_config.rescore_instances,
heatmap_head_num_filters=heatmap_head_num_filters,
heatmap_head_kernel_sizes=heatmap_head_kernel_sizes,
offset_head_num_filters=offset_head_num_filters,
offset_head_kernel_sizes=offset_head_kernel_sizes,
regress_head_num_filters=regress_head_num_filters,
regress_head_kernel_sizes=regress_head_kernel_sizes)
def object_detection_proto_to_params(od_config):
"""Converts CenterNet.ObjectDetection proto to parameter namedtuple."""
loss = losses_pb2.Loss()
# Add dummy classification loss to avoid the loss_builder throwing error.
# TODO(yuhuic): update the loss builder to take the classification loss
# directly.
loss.classification_loss.weighted_sigmoid.CopyFrom(
losses_pb2.WeightedSigmoidClassificationLoss())
loss.localization_loss.CopyFrom(od_config.localization_loss)
_, localization_loss, _, _, _, _, _ = (losses_builder.build(loss))
if od_config.HasField('scale_head_params'):
scale_head_num_filters = list(od_config.scale_head_params.num_filters)
scale_head_kernel_sizes = list(od_config.scale_head_params.kernel_sizes)
else:
scale_head_num_filters = [256]
scale_head_kernel_sizes = [3]
if od_config.HasField('offset_head_params'):
offset_head_num_filters = list(od_config.offset_head_params.num_filters)
offset_head_kernel_sizes = list(od_config.offset_head_params.kernel_sizes)
else:
offset_head_num_filters = [256]
offset_head_kernel_sizes = [3]
return center_net_meta_arch.ObjectDetectionParams(
localization_loss=localization_loss,
scale_loss_weight=od_config.scale_loss_weight,
offset_loss_weight=od_config.offset_loss_weight,
task_loss_weight=od_config.task_loss_weight,
scale_head_num_filters=scale_head_num_filters,
scale_head_kernel_sizes=scale_head_kernel_sizes,
offset_head_num_filters=offset_head_num_filters,
offset_head_kernel_sizes=offset_head_kernel_sizes)
def object_center_proto_to_params(oc_config):
"""Converts CenterNet.ObjectCenter proto to parameter namedtuple."""
loss = losses_pb2.Loss()
# Add dummy localization loss to avoid the loss_builder throwing error.
# TODO(yuhuic): update the loss builder to take the localization loss
# directly.
loss.localization_loss.weighted_l2.CopyFrom(
losses_pb2.WeightedL2LocalizationLoss())
loss.classification_loss.CopyFrom(oc_config.classification_loss)
classification_loss, _, _, _, _, _, _ = (losses_builder.build(loss))
keypoint_weights_for_center = []
if oc_config.keypoint_weights_for_center:
keypoint_weights_for_center = list(oc_config.keypoint_weights_for_center)
if oc_config.HasField('center_head_params'):
center_head_num_filters = list(oc_config.center_head_params.num_filters)
center_head_kernel_sizes = list(oc_config.center_head_params.kernel_sizes)
else:
center_head_num_filters = [256]
center_head_kernel_sizes = [3]
return center_net_meta_arch.ObjectCenterParams(
classification_loss=classification_loss,
object_center_loss_weight=oc_config.object_center_loss_weight,
heatmap_bias_init=oc_config.heatmap_bias_init,
min_box_overlap_iou=oc_config.min_box_overlap_iou,
max_box_predictions=oc_config.max_box_predictions,
use_labeled_classes=oc_config.use_labeled_classes,
keypoint_weights_for_center=keypoint_weights_for_center,
center_head_num_filters=center_head_num_filters,
center_head_kernel_sizes=center_head_kernel_sizes)
def mask_proto_to_params(mask_config):
"""Converts CenterNet.MaskEstimation proto to parameter namedtuple."""
loss = losses_pb2.Loss()
# Add dummy localization loss to avoid the loss_builder throwing error.
loss.localization_loss.weighted_l2.CopyFrom(
losses_pb2.WeightedL2LocalizationLoss())
loss.classification_loss.CopyFrom(mask_config.classification_loss)
classification_loss, _, _, _, _, _, _ = (losses_builder.build(loss))
if mask_config.HasField('mask_head_params'):
mask_head_num_filters = list(mask_config.mask_head_params.num_filters)
mask_head_kernel_sizes = list(mask_config.mask_head_params.kernel_sizes)
else:
mask_head_num_filters = [256]
mask_head_kernel_sizes = [3]
return center_net_meta_arch.MaskParams(
classification_loss=classification_loss,
task_loss_weight=mask_config.task_loss_weight,
mask_height=mask_config.mask_height,
mask_width=mask_config.mask_width,
score_threshold=mask_config.score_threshold,
heatmap_bias_init=mask_config.heatmap_bias_init,
mask_head_num_filters=mask_head_num_filters,
mask_head_kernel_sizes=mask_head_kernel_sizes)
def densepose_proto_to_params(densepose_config):
"""Converts CenterNet.DensePoseEstimation proto to parameter namedtuple."""
classification_loss, localization_loss, _, _, _, _, _ = (
losses_builder.build(densepose_config.loss))
return center_net_meta_arch.DensePoseParams(
class_id=densepose_config.class_id,
classification_loss=classification_loss,
localization_loss=localization_loss,
part_loss_weight=densepose_config.part_loss_weight,
coordinate_loss_weight=densepose_config.coordinate_loss_weight,
num_parts=densepose_config.num_parts,
task_loss_weight=densepose_config.task_loss_weight,
upsample_to_input_res=densepose_config.upsample_to_input_res,
heatmap_bias_init=densepose_config.heatmap_bias_init)
def tracking_proto_to_params(tracking_config):
"""Converts CenterNet.TrackEstimation proto to parameter namedtuple."""
loss = losses_pb2.Loss()
# Add dummy localization loss to avoid the loss_builder throwing error.
# TODO(yuhuic): update the loss builder to take the localization loss
# directly.
loss.localization_loss.weighted_l2.CopyFrom(
losses_pb2.WeightedL2LocalizationLoss())
loss.classification_loss.CopyFrom(tracking_config.classification_loss)
classification_loss, _, _, _, _, _, _ = losses_builder.build(loss)
return center_net_meta_arch.TrackParams(
num_track_ids=tracking_config.num_track_ids,
reid_embed_size=tracking_config.reid_embed_size,
classification_loss=classification_loss,
num_fc_layers=tracking_config.num_fc_layers,
task_loss_weight=tracking_config.task_loss_weight)
def temporal_offset_proto_to_params(temporal_offset_config):
"""Converts CenterNet.TemporalOffsetEstimation proto to param-tuple."""
loss = losses_pb2.Loss()
# Add dummy classification loss to avoid the loss_builder throwing error.
# TODO(yuhuic): update the loss builder to take the classification loss
# directly.
loss.classification_loss.weighted_sigmoid.CopyFrom(
losses_pb2.WeightedSigmoidClassificationLoss())
loss.localization_loss.CopyFrom(temporal_offset_config.localization_loss)
_, localization_loss, _, _, _, _, _ = losses_builder.build(loss)
return center_net_meta_arch.TemporalOffsetParams(
localization_loss=localization_loss,
task_loss_weight=temporal_offset_config.task_loss_weight)
def _build_center_net_model(center_net_config, is_training, add_summaries):
"""Build a CenterNet detection model.
Args:
center_net_config: A CenterNet proto object with model configuration.
is_training: True if this model is being built for training purposes.
add_summaries: Whether to add tf summaries in the model.
Returns:
CenterNetMetaArch based on the config.
"""
image_resizer_fn = image_resizer_builder.build(
center_net_config.image_resizer)
_check_feature_extractor_exists(center_net_config.feature_extractor.type)
feature_extractor = _build_center_net_feature_extractor(
center_net_config.feature_extractor, is_training)
object_center_params = object_center_proto_to_params(
center_net_config.object_center_params)
object_detection_params = None
if center_net_config.HasField('object_detection_task'):
object_detection_params = object_detection_proto_to_params(
center_net_config.object_detection_task)
if center_net_config.HasField('deepmac_mask_estimation'):
    logging.warning('Building experimental DeepMAC meta-arch.'
                    ' Some features may be omitted.')
deepmac_params = deepmac_meta_arch.deepmac_proto_to_params(
center_net_config.deepmac_mask_estimation)
return deepmac_meta_arch.DeepMACMetaArch(
is_training=is_training,
add_summaries=add_summaries,
num_classes=center_net_config.num_classes,
feature_extractor=feature_extractor,
image_resizer_fn=image_resizer_fn,
object_center_params=object_center_params,
object_detection_params=object_detection_params,
deepmac_params=deepmac_params)
keypoint_params_dict = None
if center_net_config.keypoint_estimation_task:
label_map_proto = label_map_util.load_labelmap(
center_net_config.keypoint_label_map_path)
keypoint_map_dict = {
item.name: item for item in label_map_proto.item if item.keypoints
}
keypoint_params_dict = {}
keypoint_class_id_set = set()
all_keypoint_indices = []
for task in center_net_config.keypoint_estimation_task:
kp_params = keypoint_proto_to_params(task, keypoint_map_dict)
keypoint_params_dict[task.task_name] = kp_params
all_keypoint_indices.extend(kp_params.keypoint_indices)
if kp_params.class_id in keypoint_class_id_set:
        raise ValueError('Multiple keypoint tasks mapping to the same class id '
                         'is not allowed: %d' % kp_params.class_id)
else:
keypoint_class_id_set.add(kp_params.class_id)
if len(all_keypoint_indices) > len(set(all_keypoint_indices)):
raise ValueError('Some keypoint indices are used more than once.')
mask_params = None
if center_net_config.HasField('mask_estimation_task'):
mask_params = mask_proto_to_params(center_net_config.mask_estimation_task)
densepose_params = None
if center_net_config.HasField('densepose_estimation_task'):
densepose_params = densepose_proto_to_params(
center_net_config.densepose_estimation_task)
track_params = None
if center_net_config.HasField('track_estimation_task'):
track_params = tracking_proto_to_params(
center_net_config.track_estimation_task)
temporal_offset_params = None
if center_net_config.HasField('temporal_offset_task'):
temporal_offset_params = temporal_offset_proto_to_params(
center_net_config.temporal_offset_task)
non_max_suppression_fn = None
if center_net_config.HasField('post_processing'):
non_max_suppression_fn, _ = post_processing_builder.build(
center_net_config.post_processing)
return center_net_meta_arch.CenterNetMetaArch(
is_training=is_training,
add_summaries=add_summaries,
num_classes=center_net_config.num_classes,
feature_extractor=feature_extractor,
image_resizer_fn=image_resizer_fn,
object_center_params=object_center_params,
object_detection_params=object_detection_params,
keypoint_params_dict=keypoint_params_dict,
mask_params=mask_params,
densepose_params=densepose_params,
track_params=track_params,
temporal_offset_params=temporal_offset_params,
use_depthwise=center_net_config.use_depthwise,
compute_heatmap_sparse=center_net_config.compute_heatmap_sparse,
non_max_suppression_fn=non_max_suppression_fn)
def _build_center_net_feature_extractor(feature_extractor_config, is_training):
"""Build a CenterNet feature extractor from the given config."""
if feature_extractor_config.type not in CENTER_NET_EXTRACTOR_FUNCTION_MAP:
raise ValueError('\'{}\' is not a known CenterNet feature extractor type'
.format(feature_extractor_config.type))
# For backwards compatibility:
use_separable_conv = (
feature_extractor_config.use_separable_conv or
feature_extractor_config.type == 'mobilenet_v2_fpn_sep_conv')
kwargs = {
'channel_means':
list(feature_extractor_config.channel_means),
'channel_stds':
list(feature_extractor_config.channel_stds),
'bgr_ordering':
feature_extractor_config.bgr_ordering,
'depth_multiplier':
feature_extractor_config.depth_multiplier,
'use_separable_conv':
use_separable_conv,
'upsampling_interpolation':
feature_extractor_config.upsampling_interpolation,
}
return CENTER_NET_EXTRACTOR_FUNCTION_MAP[feature_extractor_config.type](
**kwargs)
META_ARCH_BUILDER_MAP = {
'ssd': _build_ssd_model,
'faster_rcnn': _build_faster_rcnn_model,
'experimental_model': _build_experimental_model,
'center_net': _build_center_net_model
}
def build(model_config, is_training, add_summaries=True):
"""Builds a DetectionModel based on the model config.
Args:
model_config: A model.proto object containing the config for the desired
DetectionModel.
is_training: True if this model is being built for training purposes.
add_summaries: Whether to add tensorflow summaries in the model graph.
Returns:
DetectionModel based on the config.
Raises:
ValueError: On invalid meta architecture or model.
"""
if not isinstance(model_config, model_pb2.DetectionModel):
raise ValueError('model_config not of type model_pb2.DetectionModel.')
meta_architecture = model_config.WhichOneof('model')
if meta_architecture not in META_ARCH_BUILDER_MAP:
raise ValueError('Unknown meta architecture: {}'.format(meta_architecture))
else:
build_func = META_ARCH_BUILDER_MAP[meta_architecture]
return build_func(getattr(model_config, meta_architecture), is_training,
                      add_summaries)
# File: DRAM_bio-1.4.6 / mag_annotator/database_handler.py
from os import path, remove, getenv
from pkg_resources import resource_filename
import json
import gzip
import logging
from shutil import copy2
import warnings
from datetime import datetime
from functools import partial
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
import pandas as pd
from mag_annotator import __version__ as current_dram_version
from mag_annotator.database_setup import TABLE_NAME_TO_CLASS_DICT, create_description_db
from mag_annotator.utils import divide_chunks, setup_logger
SEARCH_DATABASES = {
"kegg",
"kofam_hmm",
"kofam_ko_list",
"uniref",
"pfam",
"dbcan",
"viral",
"peptidase",
"vogdb",
}
DRAM_SHEETS = (
"genome_summary_form",
"module_step_form",
"etc_module_database",
"function_heatmap_form",
"amg_database",
)
DATABASE_DESCRIPTIONS = ("pfam_hmm", "dbcan_fam_activities", "vog_annotations")
# TODO: store all sequence db locs within database handler class
# TODO: store scoring information here e.g. bitscore_threshold, hmm cutoffs
# TODO: set up custom databases here
# TODO: in advanced config separate search databases, search database description files, description db, DRAM sheets
# TODO: ko_list should be parsed into the DB and stored as a database description file and not a search database
def get_config_loc():
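    # The DRAM_CONFIG_LOCATION environment variable overrides the packaged
    # default, e.g. (shell): export DRAM_CONFIG_LOCATION=/path/to/CONFIG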
loc = getenv("DRAM_CONFIG_LOCATION")
if loc:
return loc
else:
return path.abspath(resource_filename("mag_annotator", "CONFIG"))
def clear_dict(val):
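    # Keeps the key structure but nulls every leaf value, e.g.
    # clear_dict({"a": {"b": 1}, "c": 2}) -> {"a": {"b": None}, "c": None}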
if isinstance(val, dict):
return {k: clear_dict(v) for k, v in val.items()}
else:
return None
class DatabaseHandler:
def __init__(self, logger, config_loc=None):
# read in new configuration
# TODO: validate config file after reading it in
if logger is None:
logger = logging.getLogger("database_handler.log")
# log_path = self.get_log_path()
# setup_logger(logger, log_path)
setup_logger(logger)
logger.info(f"Logging to console")
self.logger = logger
if config_loc is None:
config_loc = get_config_loc()
self.load_config(config_loc)
self.config_loc = config_loc
def load_config(self, config_file):
conf = json.loads(open(config_file).read())
if len(conf) == 0:
self.logger.warn("There is no config information in the provided file")
self.clear_config(write_config=False)
if "dram_version" not in conf:
warnings.warn(
"The DRAM version in your config is empty."
" This may not be a problem, but if this"
" import fails then you should check that"
" the origin of the file is valid."
)
self.__construct_from_dram_pre_1_4_0(conf)
else:
conf_version = conf.get("dram_version")
if conf_version is None:
self.__construct_from_dram_pre_1_4_0(conf)
elif conf_version not in {
current_dram_version,
"1.4.0",
"1.4.0rc1",
"1.4.0rc2",
"1.4.0rc3",
"1.4.0rc4",
            }:  # Known supported versions
warnings.warn(
"The DRAM version in your config is not listed in the versions "
"that are known to work. This may not be a problem, but if this "
"import fails then you should contact suport."
)
self.__construct_default(conf)
    def get_log_path(self):
        # Use a distinct local name so the os.path import is not shadowed.
        log_path = self.config.get("log_path")
        if log_path is None:
            log_path = path.join(self.config_loc, "database_processing.log")
        return log_path
def __construct_default(self, conf: dict):
self.config = conf
# set up description database connection
description_loc = self.config.get("description_db")
        if description_loc is None:
            self.session = None
            warnings.warn("No description database location is set in the config")
elif not path.exists(description_loc):
self.session = None
warnings.warn("Database does not exist at path %s" % description_loc)
else:
self.start_db_session()
def __construct_from_dram_pre_1_4_0(self, config_old):
"""
Import older dram configs that predate 1.3
:param config_old: A config with no dram version so older than 1.3
"""
system_config_loc = get_config_loc()
self.config = clear_dict(json.loads(open(system_config_loc).read()))
self.config_loc = system_config_loc
# read in configuration # TODO: validate config file after reading it in
if 'viral_refseq' in config_old:
config_old['viral'] = config_old.get('viral_refseq')
if 'kofam' in config_old:
config_old['kofam_hmm'] = config_old.get('kofam')
if 'pfam_hmm_dat' in config_old:
config_old['pfam_hmm'] = config_old.get('pfam_hmm_dat')
self.config["search_databases"] = {
key: value for key, value in config_old.items() if key in SEARCH_DATABASES
}
self.config["database_descriptions"] = {
key: value
for key, value in config_old.items()
if key in DATABASE_DESCRIPTIONS
}
self.config["dram_sheets"] = {
key: value for key, value in config_old.items() if key in DRAM_SHEETS
}
self.config["dram_version"] = current_dram_version
# set up description database connection
self.config["description_db"] = config_old.get("description_db")
if self.config.get("description_db") is None:
self.session = None
            warnings.warn("No description database location is set in the config")
elif not path.exists(self.config.get("description_db")):
self.session = None
warnings.warn(
"Database does not exist at path %s" % self.config.get("description_db")
)
else:
self.start_db_session()
def start_db_session(self):
engine = create_engine("sqlite:///%s" % self.config.get("description_db"))
db_session = sessionmaker(bind=engine)
self.session = db_session()
# functions for adding descriptions to tables
def add_descriptions_to_database(self, description_list, db_name, clear_table=True):
description_class = TABLE_NAME_TO_CLASS_DICT[db_name]
if clear_table:
self.session.query(description_class).delete()
# TODO: try batching
self.session.bulk_save_objects(
[description_class(**i) for i in description_list]
)
self.session.commit()
self.session.expunge_all()
# functions for getting descriptions from tables
def get_description(self, annotation_id, db_name, return_ob=False):
return (
self.session.query(TABLE_NAME_TO_CLASS_DICT[db_name])
.filter_by(id=annotation_id)
.one()
.description
)
def get_descriptions(self, ids, db_name, description_name="description"):
description_class = TABLE_NAME_TO_CLASS_DICT[db_name]
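        # Query in chunks of 499 ids to stay below SQLite's default limit on
        # bound parameters per statement (commonly 999).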
descriptions = [
des
for chunk in divide_chunks(list(ids), 499)
for des in self.session.query(description_class)
.filter(description_class.id.in_(chunk))
.all()
]
# [des for des in self.session.query(description_class).filter(description_class.id.in_(list(ids))).all() ]
# [i.id for i in self.session.query(TABLE_NAME_TO_CLASS_DICT['dbcan_description']).all()]
if len(descriptions) == 0:
warnings.warn(
"No descriptions were found for your id's. Does this %s look like an id from %s"
% (list(ids)[0], db_name)
)
return {i.id: i.__dict__[description_name] for i in descriptions}
@staticmethod
def get_database_names():
return TABLE_NAME_TO_CLASS_DICT.keys()
def get_settings_str(self):
out_str = ""
settings = self.config.get("setup_info")
if settings is None:
warnings.warn(
"there are no settings, the config is corrupted or too old.",
DeprecationWarning,
)
return "there are no settings, the config is corrupted or too old."
for i in ["search_databases", "database_descriptions", "dram_sheets"]:
out_str += "\n"
for k in self.config.get(i):
if settings.get(k) is not None:
out_str += f"\n{settings[k]['name']}:"
for l, w in settings[k].items():
if l == "name":
continue
out_str += f"\n {l.title()}: {w}"
return out_str
def set_database_paths(
self,
kegg_loc=None,
kofam_hmm_loc=None,
kofam_ko_list_loc=None,
uniref_loc=None,
pfam_loc=None,
pfam_hmm_loc=None,
dbcan_loc=None,
dbcan_fam_activities_loc=None,
dbcan_subfam_ec_loc=None,
viral_loc=None,
peptidase_loc=None,
vogdb_loc=None,
vog_annotations_loc=None,
description_db_loc=None,
log_path_loc=None,
genome_summary_form_loc=None,
module_step_form_loc=None,
etc_module_database_loc=None,
function_heatmap_form_loc=None,
amg_database_loc=None,
write_config=True,
update_description_db=False
):
def check_exists_and_add_to_location_dict(loc, old_value):
if loc is None: # if location is none then return the old value
return old_value
if path.isfile(loc): # if location exists return full path
return path.realpath(loc)
else: # if the location doesn't exist then raise error
raise ValueError("Database location does not exist: %s" % loc)
locs = {
"search_databases": {
"kegg": kegg_loc,
"kofam_hmm": kofam_hmm_loc,
"kofam_ko_list": kofam_ko_list_loc,
"uniref": uniref_loc,
"pfam": pfam_loc,
"dbcan": dbcan_loc,
"viral": viral_loc,
"peptidase": peptidase_loc,
"vogdb": vogdb_loc,
},
"database_descriptions": {
"pfam_hmm": pfam_hmm_loc,
"dbcan_fam_activities": dbcan_fam_activities_loc,
"dbcan_subfam_ec": dbcan_subfam_ec_loc,
"vog_annotations": vog_annotations_loc,
},
"dram_sheets": {
"genome_summary_form": genome_summary_form_loc,
"module_step_form": module_step_form_loc,
"etc_module_database": etc_module_database_loc,
"function_heatmap_form": function_heatmap_form_loc,
"amg_database": amg_database_loc,
},
}
self.config.update(
{
i: {
k: check_exists_and_add_to_location_dict(
locs[i][k], self.config.get(i).get(k)
)
for k in locs[i]
}
for i in locs
}
)
if update_description_db:
self.populate_description_db(output_loc=description_db_loc)
else:
self.config["description_db"] = check_exists_and_add_to_location_dict(
description_db_loc, self.config.get("description_db")
)
self.config["log_path"] = check_exists_and_add_to_location_dict(
            log_path_loc, self.config.get("log_path")
)
self.start_db_session()
if write_config:
self.write_config()
def write_config(self, config_loc=None):
if config_loc is None:
config_loc = self.config_loc
with open(config_loc, "w") as f:
f.write(json.dumps(self.config, indent=2))
@staticmethod
def make_header_dict_from_mmseqs_db(mmseqs_db):
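        # MMseqs2 header files ("<db>_h") store one null-terminated header
        # per entry, hence the "\n\x00" split below.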
mmseqs_headers_handle = open("%s_h" % mmseqs_db, "rb")
mmseqs_headers = mmseqs_headers_handle.read().decode(errors="ignore")
mmseqs_headers = [
i.strip() for i in mmseqs_headers.strip().split("\n\x00") if len(i) > 0
]
mmseqs_headers_split = []
mmseqs_ids_unique = set()
mmseqs_ids_not_unique = set()
# TODO this could be faster with numpy
for i in mmseqs_headers:
header = {"id": i.split(" ")[0], "description": i}
if header["id"] not in mmseqs_ids_unique:
mmseqs_headers_split += [header]
mmseqs_ids_unique.add(header["id"])
else:
mmseqs_ids_not_unique.add(header["id"])
if len(mmseqs_ids_not_unique) > 0:
warnings.warn(
f"There are {len(mmseqs_ids_not_unique)} non unique headers "
f"in {mmseqs_db}! You should definitly investigate this!"
)
return mmseqs_headers_split
@staticmethod
def process_pfam_descriptions(pfam_hmm):
if pfam_hmm.endswith(".gz"):
f = gzip.open(pfam_hmm, "r").read().decode("utf-8")
else:
f = open(pfam_hmm).read()
entries = f.strip().split("//")
description_list = list()
for i, entry in enumerate(entries):
if len(entry) > 0:
entry = entry.split("\n")
                accession = None
                description = None
                for line in entry:
                    line = line.strip()
                    if line.startswith("#=GF AC"):
                        accession = line.split(" ")[-1]
                    if line.startswith("#=GF DE"):
                        description = line.split(" ")[-1]
                description_list.append({"id": accession, "description": description})
return description_list
@staticmethod
def process_dbcan_descriptions(dbcan_fam_activities, dbcan_subfam_ec):
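        # dbcan_fam_activities maps a CAZy family id to a free-text activity
        # description; dbcan_subfam_ec maps family ids to "|"-separated EC
        # numbers. Both are merged into one record per id below.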
def line_reader(line):
if not line.startswith("#") and len(line.strip()) != 0:
line = line.strip().split()
if len(line) == 1:
description = line[0]
elif line[0] == line[1]:
description = " ".join(line[1:])
else:
description = " ".join(line)
return pd.DataFrame(
{"id": line[0], "description": description.replace("\n", " ")},
index=[0],
)
with open(dbcan_fam_activities) as f:
description_data = pd.concat([line_reader(line) for line in f.readlines()])
ec_data = pd.read_csv(
dbcan_subfam_ec, sep="\t", names=["id", "id2", "ec"], comment="#"
)[["id", "ec"]].drop_duplicates()
ec_data = (
pd.concat(
[ec_data["id"], ec_data["ec"].str.split("|", expand=True)], axis=1
)
.melt(id_vars="id", value_name="ec")
.dropna(subset=["ec"])[["id", "ec"]]
.groupby("id")
.apply(lambda x: ",".join(x["ec"].unique()))
)
ec_data = pd.DataFrame(ec_data, columns=["ec"]).reset_index()
data = pd.merge(description_data, ec_data, how="outer", on="id").fillna("")
return [i.to_dict() for _, i in data.iterrows()]
@staticmethod
def process_vogdb_descriptions(vog_annotations):
annotations_table = pd.read_csv(vog_annotations, sep="\t", index_col=0)
annotations_list = [
{
"id": vog,
"description": "%s; %s"
% (row["ConsensusFunctionalDescription"], row["FunctionalCategory"]),
}
for vog, row in annotations_table.iterrows()
]
return annotations_list
# TODO: Make option to build on description database that already exists?
def populate_description_db(
self, output_loc=None, select_db=None, update_config=True, erase_old_db=False
):
if (
self.config.get("description_db") is None and output_loc is None
): # description db location must be set somewhere
self.logger.critical(
"Must provide output location if description db location is not set in configuration"
)
raise ValueError(
"Must provide output location if description db location is not set in configuration"
)
if (
output_loc is not None
): # if new description db location is set then save it there
self.config["description_db"] = output_loc
self.start_db_session()
# I don't think this is needed
if path.exists(self.config.get("description_db")) and erase_old_db:
remove(self.config.get("description_db"))
create_description_db(self.config.get("description_db"))
def check_db(db_name, db_function):
# TODO add these sorts of checks to a separate function
# if self.config.get('search_databases').get(db_name) is None:
# return
# if not path.exists(self.config['search_databases'][db_name]):
# logger.warn(f"There is a path for the {db_name} db in the config, but there"
# " is no file at that path. The path is:"
# f"{self.config['search_databases'][db_name]}")
# return
self.add_descriptions_to_database(
db_function(), f"{db_name}_description", clear_table=True
)
self.config["setup_info"][db_name][
"description_db_updated"
] = datetime.now().strftime("%m/%d/%Y, %H:%M:%S")
self.logger.info(f"Description updated for the {db_name} database")
# fill database
mmseqs_database = ["kegg", "uniref", "viral", "peptidase"]
process_functions = {
i: partial(
self.make_header_dict_from_mmseqs_db, self.config["search_databases"][i]
)
for i in mmseqs_database
if self.config["search_databases"][i] is not None
}
# Use table names
process_functions.update(
{
"pfam": partial(
self.process_pfam_descriptions,
self.config.get("database_descriptions")["pfam_hmm"],
),
"dbcan": partial(
self.process_dbcan_descriptions,
self.config.get("database_descriptions")["dbcan_fam_activities"],
self.config.get("database_descriptions")["dbcan_subfam_ec"],
),
"vogdb": partial(
self.process_vogdb_descriptions,
self.config.get("database_descriptions")["vog_annotations"],
),
}
)
if select_db is not None:
process_functions = {
i: k for i, k in process_functions.items() if i in select_db
}
for i, k in process_functions.items():
check_db(i, k)
if update_config: # if new description db is set then save it
self.write_config()
def filter_db_locs(
self, low_mem_mode=False, use_uniref=True, use_vogdb=True, master_list=None
):
if master_list is None:
dbs_to_use = self.config["search_databases"].keys()
else:
dbs_to_use = master_list
# filter out dbs for low mem mode
if low_mem_mode:
if ("kofam_hmm" not in self.config.get("search_databases")) or (
"kofam_ko_list" not in self.config.get("search_databases")
):
raise ValueError(
"To run in low memory mode KOfam must be configured for use in DRAM"
)
dbs_to_use = [i for i in dbs_to_use if i not in ("uniref", "kegg", "vogdb")]
# check on uniref status
if use_uniref:
if "uniref" not in self.config.get("search_databases"):
warnings.warn(
"Sequences will not be annoated against uniref as it is not configured for use in DRAM"
)
else:
dbs_to_use = [i for i in dbs_to_use if i != "uniref"]
# check on vogdb status
if use_vogdb:
if "vogdb" not in self.config.get("search_databases"):
warnings.warn(
"Sequences will not be annoated against VOGDB as it is not configured for use in DRAM"
)
else:
dbs_to_use = [i for i in dbs_to_use if i != "vogdb"]
self.config["search_databases"] = {
key: value
for key, value in self.config.get("search_databases").items()
if key in dbs_to_use
}
def clear_config(self, write_config=False):
self.config = {
"search_databases": {},
"database_descriptions": {},
"dram_sheets": {},
"dram_version": current_dram_version,
"description_db": None,
"setup_info": {},
"log_path": None,
}
if write_config:
self.write_config()
def set_database_paths(clear_config=False, **kwargs):
    # TODO Add tests
    db_handler = DatabaseHandler(None)
    if clear_config:
        db_handler.clear_config(write_config=True)
    db_handler.set_database_paths(**kwargs, write_config=True)
def print_database_locations(config_loc=None):
conf = DatabaseHandler(None, config_loc)
# search databases
print("Processed search databases")
print("KEGG db: %s" % conf.config.get("search_databases").get("kegg"))
print("KOfam db: %s" % conf.config.get("search_databases").get("kofam_hmm"))
print(
"KOfam KO list: %s" % conf.config.get("search_databases").get("kofam_ko_list")
)
print("UniRef db: %s" % conf.config.get("search_databases").get("uniref"))
print("Pfam db: %s" % conf.config.get("search_databases").get("pfam"))
print("dbCAN db: %s" % conf.config.get("search_databases").get("dbcan"))
print("RefSeq Viral db: %s" % conf.config.get("search_databases").get("viral"))
print(
"MEROPS peptidase db: %s" % conf.config.get("search_databases").get("peptidase")
)
print("VOGDB db: %s" % conf.config.get("search_databases").get("vogdb"))
# database descriptions used during description db population
print()
print("Descriptions of search database entries")
print("Pfam hmm dat: %s" % conf.config.get("database_descriptions").get("pfam_hmm"))
print(
"dbCAN family activities: %s"
% conf.config.get("database_descriptions").get("dbcan_fam_activities")
)
print(
"VOG annotations: %s"
% conf.config.get("database_descriptions").get("vog_annotations")
)
print()
# description database
print("Description db: %s" % conf.config.get("description_db"))
print()
# DRAM sheets
print("DRAM distillation sheets")
print(
"Genome summary form: %s"
% conf.config.get("dram_sheets").get("genome_summary_form")
)
print(
"Module step form: %s" % conf.config.get("dram_sheets").get("module_step_form")
)
print(
"ETC module database: %s"
% conf.config.get("dram_sheets").get("etc_module_database")
)
print(
"Function heatmap form: %s"
% conf.config.get("dram_sheets").get("function_heatmap_form")
)
print("AMG database: %s" % conf.config.get("dram_sheets").get("amg_database"))
def print_database_settings(config_loc=None):
conf = DatabaseHandler(None, config_loc)
print(conf.get_settings_str())
def populate_description_db(output_loc=None, select_db=None, config_loc=None):
db_handler = DatabaseHandler(None, config_loc)
db_handler.populate_description_db(output_loc, select_db)
def export_config(output_file=None):
config_loc = get_config_loc()
if output_file is None:
print(open(config_loc).read())
else:
copy2(config_loc, output_file)
def import_config(config_loc):
system_config = get_config_loc()
db_handler = DatabaseHandler(None, config_loc)
with open(system_config, "w") as outfile:
json.dump(db_handler.config, outfile, indent=2)
print("Import, appears to be successfull.")
def mv_db_folder(new_location: str = "./", old_config_file: str = None):
new_location = path.abspath(new_location)
old_config_file = path.abspath(old_config_file)
db_handler = DatabaseHandler(None)
if old_config_file is not None:
db_handler.load_config(old_config_file)
paths = ["search_databases", "dram_sheets", "database_descriptions"]
def auto_move_path(k: str, v: str):
if v is None:
db_handler.logger.warn(f"The path for {k} was not set, so can't update.")
return
new_path = path.join(new_location, path.basename(v))
if not path.exists(new_path):
            db_handler.logger.warning(
                f"There is no file at path {new_path},"
                f" so no new location will be set for {k}."
            )
return
db_handler.logger.info(f"Moving {k} to {new_path}")
db_handler.set_database_paths(**{f"{k}_loc": new_path}, write_config=True)
auto_move_path("log_path", db_handler.config.get("log_path"))
auto_move_path("description_db", db_handler.config.get("description_db"))
for i in paths:
for k, v in db_handler.config.get(i).items():
            auto_move_path(k, v)
# File: Flask-Turbo-Boost-0.2.8 / flask_turbo_boost/cli.py
import sys
import os
# Insert project root path to sys.path
project_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
if project_path not in sys.path:
sys.path.insert(0, project_path)
import logging
import io
from logging import StreamHandler, DEBUG
from os.path import dirname, abspath
from tempfile import mkstemp
from docopt import docopt
import shutil
import errno
from flask_turbo_boost import __version__
# If you add #{project} in a file, add the file ext here
REWRITE_FILE_EXTS = ('.html', '.conf', '.py', '.json', '.md')
logger = logging.getLogger(__name__)
logger.setLevel(DEBUG)
logger.addHandler(StreamHandler())
def generate_project(args, src_project_folder_name='project'):
"""New project."""
# Project templates path
src = os.path.join(dirname(abspath(__file__)), src_project_folder_name)
project_name = args.get('<project>')
if not project_name:
logger.warning('Project name cannot be empty.')
return
# Destination project path
dst = os.path.join(os.getcwd(), project_name)
if os.path.isdir(dst):
logger.warning('Project directory already exists.')
return
logger.info('Start generating project files.')
_mkdir_p(dst)
for src_dir, sub_dirs, filenames in os.walk(src):
# Build and create destination directory path
relative_path = src_dir.split(src)[1].lstrip(os.path.sep)
dst_dir = os.path.join(dst, relative_path)
if src != src_dir:
_mkdir_p(dst_dir)
# Copy, rewrite and move project files
for filename in filenames:
if filename in ['development.py', 'production.py']:
continue
src_file = os.path.join(src_dir, filename)
dst_file = os.path.join(dst_dir, filename)
if filename.endswith(REWRITE_FILE_EXTS):
_rewrite_and_copy(src_file, dst_file, project_name)
else:
shutil.copy(src_file, dst_file)
logger.info("New: %s" % dst_file)
if filename in ['development_sample.py', 'production_sample.py']:
dst_file = os.path.join(dst_dir, "%s.py" % filename.split('_')[0])
_rewrite_and_copy(src_file, dst_file, project_name)
logger.info("New: %s" % dst_file)
logger.info('Finish generating project files.')
def generate_controller(args):
"""Generate controller, include the controller file, template & css & js directories."""
controller_template = os.path.join(dirname(abspath(__file__)), 'templates/controller.py')
test_template = os.path.join(dirname(abspath(__file__)), 'templates/unittest.py')
controller_name = args.get('<controller>')
current_path = os.getcwd()
logger.info('Start generating controller.')
if not controller_name:
logger.warning('Controller name cannot be empty.')
return
# controller file
with open(controller_template, 'r') as template_file:
controller_file_path = os.path.join(current_path, 'application/controllers',
controller_name + '.py')
with open(controller_file_path, 'w+') as controller_file:
for line in template_file:
new_line = line.replace('#{controller}', controller_name)
controller_file.write(new_line)
logger.info("New: %s" % _relative_path(controller_file_path))
# test file
with open(test_template, 'r') as template_file:
test_file_path = os.path.join(current_path, 'tests',
'test_%s.py' % controller_name)
with open(test_file_path, 'w+') as test_file:
for line in template_file:
new_line = line.replace('#{controller}', controller_name) \
.replace('#{controller|title}', controller_name.title())
test_file.write(new_line)
logger.info("New: %s" % _relative_path(test_file_path))
# assets dir
assets_dir_path = os.path.join(current_path, 'application/pages/%s' % controller_name)
_mkdir_p(assets_dir_path)
# form file
_generate_form(controller_name)
logger.info('Finish generating controller.')
def generate_action(args):
"""Generate action."""
controller = args.get('<controller>')
action = args.get('<action>')
with_template = args.get('-t')
current_path = os.getcwd()
logger.info('Start generating action.')
controller_file_path = os.path.join(current_path, 'application/controllers', controller + '.py')
if not os.path.exists(controller_file_path):
logger.warning("The controller %s does't exist." % controller)
return
if with_template:
action_source_path = os.path.join(dirname(abspath(__file__)), 'templates/action.py')
else:
action_source_path = os.path.join(dirname(abspath(__file__)), 'templates/action_without_template.py')
# Add action source codes
with open(action_source_path, 'r') as action_source_file:
with open(controller_file_path, 'a') as controller_file:
for action_line in action_source_file:
new_line = action_line.replace('#{controller}', controller). \
replace('#{action}', action)
controller_file.write(new_line)
logger.info("Updated: %s" % _relative_path(controller_file_path))
if with_template:
# assets dir
assets_dir_path = os.path.join(current_path, 'application/pages/%s/%s' % (controller, action))
_mkdir_p(assets_dir_path)
# html
action_html_template_path = os.path.join(dirname(abspath(__file__)), 'templates/action.html')
action_html_path = os.path.join(assets_dir_path, '%s.html' % action)
with open(action_html_template_path, 'r') as action_html_template_file:
with open(action_html_path, 'w') as action_html_file:
for line in action_html_template_file:
new_line = line.replace('#{action}', action) \
.replace('#{action|title}', action.title()) \
.replace('#{controller}', controller)
action_html_file.write(new_line)
logger.info("New: %s" % _relative_path(action_html_path))
# js
action_js_template_path = os.path.join(dirname(abspath(__file__)), 'templates/action.js')
action_js_path = os.path.join(assets_dir_path, '%s.js' % action)
shutil.copy(action_js_template_path, action_js_path)
logger.info("New: %s" % _relative_path(action_js_path))
# less
action_less_template_path = os.path.join(dirname(abspath(__file__)), 'templates/action.less')
action_less_path = os.path.join(assets_dir_path, '%s.less' % action)
shutil.copy(action_less_template_path, action_less_path)
logger.info("New: %s" % _relative_path(action_less_path))
logger.info('Finish generating action.')
def generate_form(args):
"""Generate form."""
form_name = args.get('<form>')
logger.info('Start generating form.')
_generate_form(form_name)
logger.info('Finish generating form.')
def generate_model(args):
"""Generate model."""
model_name = args.get('<model>')
if not model_name:
logger.warning('Model name cannot be empty.')
return
logger.info('Start generating model.')
model_template = os.path.join(dirname(abspath(__file__)), 'templates/model.py')
current_path = os.getcwd()
with open(model_template, 'r') as template_file:
model_file_path = os.path.join(current_path, 'application/models',
model_name + '.py')
with open(model_file_path, 'w+') as model_file:
for line in template_file:
new_line = line.replace('#{model|title}', model_name.title())
model_file.write(new_line)
logger.info("New: %s" % _relative_path(model_file_path))
with open(os.path.join(current_path, 'application/models/__init__.py'), 'a') as package_file:
package_file.write('\nfrom .%s import *' % model_name)
logger.info('Finish generating model.')
def generate_macro(args):
"""Genarate macro."""
macro = args.get('<macro>').replace('-', '_')
category = args.get('<category>')
if not macro:
logger.warning('Macro name cannot be empty.')
return
logger.info('Start generating macro.')
current_path = os.getcwd()
if category:
macro_root_path = os.path.join(current_path, 'application/macros', category, macro)
else:
macro_root_path = os.path.join(current_path, 'application/macros', macro)
_mkdir_p(macro_root_path)
macro_html_path = os.path.join(macro_root_path, '_%s.html' % macro)
macro_css_path = os.path.join(macro_root_path, '_%s.less' % macro)
macro_js_path = os.path.join(macro_root_path, '_%s.js' % macro)
# html
macro_html_template_path = os.path.join(dirname(abspath(__file__)), 'templates/macro.html')
with open(macro_html_template_path, 'r') as template_file:
with open(macro_html_path, 'w+') as html_file:
for line in template_file:
new_line = line.replace('#{macro}', macro)
html_file.write(new_line)
logger.info("New: %s" % _relative_path(macro_html_path))
# css
open(macro_css_path, 'a').close()
logger.info("New: %s" % _relative_path(macro_css_path))
# js
open(macro_js_path, 'a').close()
logger.info("New: %s" % _relative_path(macro_js_path))
logger.info('Finish generating macro.')
def main():
args = docopt(__doc__, version="Flask-Turbo-Boost {0}".format(__version__))
if args.get('new'):
if args.get('controller'):
generate_controller(args)
elif args.get('form'):
generate_form(args)
elif args.get('model'):
generate_model(args)
elif args.get('action'):
generate_action(args)
elif args.get('macro'):
generate_macro(args)
else:
if args.get('--api'):
generate_project(args, 'api_project')
else:
generate_project(args, 'project')
else:
print(args)
def _mkdir_p(path):
"""mkdir -p path"""
try:
os.makedirs(path)
except OSError as exc:
if exc.errno == errno.EEXIST and os.path.isdir(path):
pass
else:
raise
else:
logger.info("New: %s%s", path, os.path.sep)
def _rewrite_and_copy(src_file, dst_file, project_name):
"""Replace vars and copy."""
# Create temp file
fh, abs_path = mkstemp()
with io.open(abs_path, 'w', encoding='utf-8') as new_file:
with io.open(src_file, 'r', encoding='utf-8') as old_file:
for line in old_file:
new_line = line.replace('#{project}', project_name). \
replace('#{project|title}', project_name.title())
new_file.write(new_line)
# Copy to new file
shutil.copy(abs_path, dst_file)
os.close(fh)
def _generate_form(form_name):
form_template = os.path.join(dirname(abspath(__file__)), 'templates/form.py')
current_path = os.getcwd()
if not form_name:
logger.warning('Form name cannot be empty.')
return
form_file_path = os.path.join(current_path, 'application/forms', form_name + '.py')
shutil.copy(form_template, form_file_path)
logger.info("New: %s" % _relative_path(form_file_path))
with open(os.path.join(current_path, 'application/forms/__init__.py'), 'a') as package_file:
package_file.write('\nfrom .%s import *' % form_name)
def _relative_path(absolute_path):
current_path = os.getcwd()
return absolute_path.split(current_path)[1][1:]
if __name__ == "__main__":
    main()
# File: DarTui-1.1.0 / dartui/formatters.py
import re
def format_percentage(completed, total):
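    # e.g. format_percentage(512, 1024) -> "50.0"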
p_float = (completed / (total * 1.00)) * 100
p_formatted = "{0:2.1f}".format(p_float)
return(p_formatted)
def calc_size(n):
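    # Repeatedly divides by 1024 until the value drops below 1000; returns the
    # scaled value and an index into a unit list, e.g. calc_size(1536) -> (1.5, 1)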
unit_index = 0
size_reduced = n
# we don't want a result in the thousands
while size_reduced >= 1000:
size_reduced /= float(1024)
unit_index += 1
return(size_reduced, unit_index)
def format_size(bytes_):
units = ("B", "KB", "MB", "GB", "TB")
size_reduced, unit_index = calc_size(bytes_)
size_formatted = "{0:.2f} {1}".format(size_reduced,
units[unit_index])
return(size_formatted)
def format_speed(bits):
units = ("KB", "MB", "GB", "TB")
    # convert bits to kilobits before calculating (we don't want 0.0 b/s)
bits /= float(1024)
speed_reduced, unit_index = calc_size(bits)
speed_formatted = "{0:.1f} {1}/s".format(speed_reduced,
units[unit_index])
return(speed_formatted)
def format_ratio(ratio):
ratio_formatted = "{0:.2f}".format(ratio)
return(ratio_formatted)
def format_time_difference(t_diff, total_unit_count=2):
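    # Renders at most total_unit_count units, largest first,
    # e.g. format_time_difference(90061) -> "1 day, 1 hour"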
units = ("year", "month", "day", "hour", "minute", "second")
unit_in_seconds = (31536000, 2635200, 86400, 3600, 60, 1)
current_unit_count = 0
formatted_diff_list = []
t_diff_reduced = int(t_diff)
i = 0
while current_unit_count < total_unit_count and i < len(units):
cur_unit = units[i]
cur_unit_in_seconds = unit_in_seconds[i]
if t_diff_reduced >= cur_unit_in_seconds:
unit_amt = int(t_diff_reduced / cur_unit_in_seconds) # type-casting truncates float
unit_str = "{0} {1}".format(unit_amt, cur_unit)
if unit_amt > 1: unit_str += "s"
formatted_diff_list.append(unit_str)
t_diff_reduced -= (unit_amt * cur_unit_in_seconds)
current_unit_count += 1
i += 1
t_str = ", ".join(formatted_diff_list)
return(t_str)
def calc_eta(xfer_rate, bytes_remaining):
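    # Seconds remaining at the current rate (0 if the transfer is stalled),
    # e.g. calc_eta(1048576, 5242880) -> 5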
eta = 0
if xfer_rate > 0: eta = bytes_remaining / xfer_rate
return(eta)
def strip_url(tracker_url):
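    # Extracts the bare host from a tracker URL, e.g.
    # strip_url("http://tracker.example.org:6969/announce") -> "tracker.example.org"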
url_stripped = ""
regex = r"(http|udp)\:\/\/([^\:\/]*)"
r = re.search(regex, tracker_url, re.I)
if r:
url_stripped = r.groups()[-1].lower()
return(url_stripped)
else:
        return(None)
# File: MAVR-0.93 / scripts/wga/create_last_db_for_fast_wga.py
__author__ = 'Sergei F. Kliver'
import argparse
from RouToolPa.Tools.WGA import LAST
parser = argparse.ArgumentParser()
parser.add_argument("-i", "--input_fasta_list", action="store", dest="input_fasta_list", required=True,
type=LAST.make_list_of_path_to_files_from_string,
help="Comma-separated list of input files")
parser.add_argument("-p", "--db_prefix", action="store", dest="db_prefix", required=True,
help="Prefix of LAST database")
parser.add_argument("-v", "--verbose", action="store_true", dest="verbose",
help="Verbose output")
parser.add_argument("-t", "--threads", action="store", dest="threads", default=4, type=int,
help="Number of threads. Default: 4")
"""
parser.add_argument("-d", "--handling_mode", action="store", dest="handling_mode", default="local",
help="Handling mode. Allowed: local(default), slurm")
parser.add_argument("-j", "--slurm_job_name", action="store", dest="slurm_job_name", default="JOB",
help="Slurm job name. Default: JOB")
parser.add_argument("-y", "--slurm_log_prefix", action="store", dest="slurm_log_prefix",
help="Slurm log prefix. ")
parser.add_argument("-e", "--slurm_error_log_prefix", action="store", dest="slurm_error_log_prefix",
help="Slurm error log prefix")
parser.add_argument("-z", "--slurm_max_running_jobs", action="store", dest="slurm_max_running_jobs",
default=300, type=int,
help="Slurm max running jobs. Default: 300")
parser.add_argument("-a", "--slurm_max_running_time", action="store", dest="slurm_max_running_time", default="100:00:00",
help="Slurm max running time in hh:mm:ss format. Default: 100:00:00")
parser.add_argument("-u", "--slurm_max_memmory_per_cpu", action="store", dest="slurm_max_memmory_per_cpu",
default=4000, type=int,
help="Slurm maximum memmory per cpu in megabytes. Default: 4000")
parser.add_argument("-w", "--slurm_modules_list", action="store", dest="slurm_modules_list", default=[],
type=lambda s: s.split(","),
help="Comma-separated list of modules to load. Set modules for hmmer and python")
"""
args = parser.parse_args()
LAST.threads = args.threads
LAST.create_last_db(args.db_prefix,
args.input_fasta_list,
softmasking=True,
seeding_scheme="YASS",
verbose=args.verbose,
keep_preliminary_masking=True,
                    mask_simple_repeats=True)
// File: Flask-MDBootstrap-3.0.5 / MDB-Pro/src/js/vendor/free/waves.js
(function (window, factory) {
'use strict';
// AMD. Register as an anonymous module. Wrap in function so we have access
// to root via `this`.
if (typeof define === 'function' && define.amd) {
define([], function () {
window.Waves = factory.call(window);
return window.Waves;
});
}
// Node. Does not work with strict CommonJS, but only CommonJS-like
// environments that support module.exports, like Node.
else if (typeof exports === 'object') {
module.exports = factory.call(window);
}
// Browser globals.
else {
window.Waves = factory.call(window);
}
})(typeof window === 'object' ? window : this, function () {
'use strict';
var Waves = Waves || {};
var $$ = document.querySelectorAll.bind(document);
var toString = Object.prototype.toString;
var isTouchAvailable = 'ontouchstart' in window;
// Find exact position of element
function isWindow(obj) {
return obj !== null && obj === obj.window;
}
function getWindow(elem) {
return isWindow(elem) ? elem : elem.nodeType === 9 && elem.defaultView;
}
function isObject(value) {
var type = typeof value;
return type === 'function' || type === 'object' && !!value;
}
function isDOMNode(obj) {
return isObject(obj) && obj.nodeType > 0;
}
function getWavesElements(nodes) {
var stringRepr = toString.call(nodes);
if (stringRepr === '[object String]') {
return $$(nodes);
} else if (isObject(nodes) && /^\[object (Array|HTMLCollection|NodeList|Object)\]$/.test(stringRepr) && nodes.hasOwnProperty('length')) {
return nodes;
} else if (isDOMNode(nodes)) {
return [nodes];
}
return [];
}
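    // Document-relative coordinates of an element, like jQuery's offset():
    // bounding-rect position plus the window's scroll offsets.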
function offset(elem) {
var docElem, win,
box = {
top: 0,
left: 0
},
doc = elem && elem.ownerDocument;
docElem = doc.documentElement;
if (typeof elem.getBoundingClientRect !== typeof undefined) {
box = elem.getBoundingClientRect();
}
win = getWindow(doc);
return {
top: box.top + win.pageYOffset - docElem.clientTop,
left: box.left + win.pageXOffset - docElem.clientLeft
};
}
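    // e.g. convertStyle({top: '4px', left: '2px'}) -> "top:4px;left:2px;"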
function convertStyle(styleObj) {
var style = '';
for (var prop in styleObj) {
if (styleObj.hasOwnProperty(prop)) {
style += (prop + ':' + styleObj[prop] + ';');
}
}
return style;
}
var Effect = {
// Effect duration
duration: 750,
// Effect delay (check for scroll before showing effect)
delay: 200,
show: function (e, element, velocity) {
// Disable right click
if (e.button === 2) {
return false;
}
element = element || this;
// Create ripple
var ripple = document.createElement('div');
ripple.className = 'waves-ripple waves-rippling';
element.appendChild(ripple);
// Get click coordinate and element width
var pos = offset(element);
var relativeY = 0;
var relativeX = 0;
// Support for touch devices
if ('touches' in e && e.touches.length) {
relativeY = (e.touches[0].pageY - pos.top);
relativeX = (e.touches[0].pageX - pos.left);
}
            // Normal case
else {
relativeY = (e.pageY - pos.top);
relativeX = (e.pageX - pos.left);
}
// Support for synthetic events
relativeX = relativeX >= 0 ? relativeX : 0;
relativeY = relativeY >= 0 ? relativeY : 0;
var scale = 'scale(' + ((element.clientWidth / 100) * 3) + ')';
var translate = 'translate(0,0)';
if (velocity) {
translate = 'translate(' + (velocity.x) + 'px, ' + (velocity.y) + 'px)';
}
// Attach data to element
ripple.setAttribute('data-hold', Date.now());
ripple.setAttribute('data-x', relativeX);
ripple.setAttribute('data-y', relativeY);
ripple.setAttribute('data-scale', scale);
ripple.setAttribute('data-translate', translate);
// Set ripple position
var rippleStyle = {
top: relativeY + 'px',
left: relativeX + 'px'
};
ripple.classList.add('waves-notransition');
ripple.setAttribute('style', convertStyle(rippleStyle));
ripple.classList.remove('waves-notransition');
// Scale the ripple
rippleStyle['-webkit-transform'] = scale + ' ' + translate;
rippleStyle['-moz-transform'] = scale + ' ' + translate;
rippleStyle['-ms-transform'] = scale + ' ' + translate;
rippleStyle['-o-transform'] = scale + ' ' + translate;
rippleStyle.transform = scale + ' ' + translate;
rippleStyle.opacity = '1';
var duration = e.type === 'mousemove' ? 2500 : Effect.duration;
rippleStyle['-webkit-transition-duration'] = duration + 'ms';
rippleStyle['-moz-transition-duration'] = duration + 'ms';
rippleStyle['-o-transition-duration'] = duration + 'ms';
rippleStyle['transition-duration'] = duration + 'ms';
ripple.setAttribute('style', convertStyle(rippleStyle));
},
hide: function (e, element) {
element = element || this;
var ripples = element.getElementsByClassName('waves-rippling');
for (var i = 0, len = ripples.length; i < len; i++) {
removeRipple(e, element, ripples[i]);
}
if (isTouchAvailable) {
element.removeEventListener('touchend', Effect.hide);
element.removeEventListener('touchcancel', Effect.hide);
}
element.removeEventListener('mouseup', Effect.hide);
element.removeEventListener('mouseleave', Effect.hide);
}
};
/**
     * Collection of wrappers for HTML elements that consist of a single tag,
* like <input> and <img>
*/
var TagWrapper = {
// Wrap <input> tag so it can perform the effect
input: function (element) {
var parent = element.parentNode;
            // If the input already has a wrapping parent, just pass through
if (parent.tagName.toLowerCase() === 'span' && parent.classList.contains('waves-effect')) {
return;
}
            // Put the element's class and style onto a wrapping parent
var wrapper = document.createElement('span');
wrapper.className = 'waves-input-wrapper';
// element.className = 'waves-button-input';
// Put element as child
parent.replaceChild(wrapper, element);
wrapper.appendChild(element);
},
// Wrap <img> tag so it can perform the effect
img: function (element) {
var parent = element.parentNode;
            // If the image already has a wrapping parent, just pass through
if (parent.tagName.toLowerCase() === 'i' && parent.classList.contains('waves-effect')) {
return;
}
// Put element as child
var wrapper = document.createElement('i');
parent.replaceChild(wrapper, element);
wrapper.appendChild(element);
}
};
/**
* Hide the effect and remove the ripple. Must be
* a separate function to pass the JSLint...
*/
function removeRipple(e, el, ripple) {
// Check if the ripple still exist
if (!ripple) {
return;
}
ripple.classList.remove('waves-rippling');
var relativeX = ripple.getAttribute('data-x');
var relativeY = ripple.getAttribute('data-y');
var scale = ripple.getAttribute('data-scale');
var translate = ripple.getAttribute('data-translate');
        // Get the delay between mousedown and mouse leave
var diff = Date.now() - Number(ripple.getAttribute('data-hold'));
var delay = 350 - diff;
if (delay < 0) {
delay = 0;
}
if (e.type === 'mousemove') {
delay = 150;
}
// Fade out ripple after delay
var duration = e.type === 'mousemove' ? 2500 : Effect.duration;
setTimeout(function () {
var style = {
top: relativeY + 'px',
left: relativeX + 'px',
opacity: '0',
// Duration
'-webkit-transition-duration': duration + 'ms',
'-moz-transition-duration': duration + 'ms',
'-o-transition-duration': duration + 'ms',
'transition-duration': duration + 'ms',
'-webkit-transform': scale + ' ' + translate,
'-moz-transform': scale + ' ' + translate,
'-ms-transform': scale + ' ' + translate,
'-o-transform': scale + ' ' + translate,
'transform': scale + ' ' + translate
};
ripple.setAttribute('style', convertStyle(style));
setTimeout(function () {
try {
el.removeChild(ripple);
} catch (e) {
return false;
}
}, duration);
}, delay);
}
/**
* Disable mousedown event for 500ms during and after touch
*/
var TouchHandler = {
/* uses an integer rather than bool so there's no issues with
* needing to clear timeouts if another touch event occurred
* within the 500ms. Cannot mouseup between touchstart and
* touchend, nor in the 500ms after touchend. */
touches: 0,
allowEvent: function (e) {
var allow = true;
if (/^(mousedown|mousemove)$/.test(e.type) && TouchHandler.touches) {
allow = false;
}
return allow;
},
registerEvent: function (e) {
var eType = e.type;
if (eType === 'touchstart') {
TouchHandler.touches += 1; // push
} else if (/^(touchend|touchcancel)$/.test(eType)) {
setTimeout(function () {
if (TouchHandler.touches) {
TouchHandler.touches -= 1; // pop after 500ms
}
}, 500);
}
}
};
/**
     * Delegated click handler for .waves-effect elements.
     * Returns null when no .waves-effect element is in the "click tree".
*/
function getWavesEffectElement(e) {
if (TouchHandler.allowEvent(e) === false) {
return null;
}
var element = null;
var target = e.target || e.srcElement;
while (target.parentElement) {
if ((!(target instanceof SVGElement)) && target.classList.contains('waves-effect')) {
element = target;
break;
}
target = target.parentElement;
}
return element;
}
/**
* Bubble the click and show effect if .waves-effect elem was found
*/
function showEffect(e) {
// Disable effect if element has "disabled" property on it
// In some cases, the event is not triggered by the current element
// if (e.target.getAttribute('disabled') !== null) {
// return;
// }
var element = getWavesEffectElement(e);
if (element !== null) {
            // Skip the effect if the element has a disabled property, a disabled attribute, or a 'disabled' class
if (element.disabled || element.getAttribute('disabled') || element.classList.contains('disabled')) {
return;
}
TouchHandler.registerEvent(e);
if (e.type === 'touchstart' && Effect.delay) {
var hidden = false;
var timer = setTimeout(function () {
timer = null;
Effect.show(e, element);
}, Effect.delay);
var hideEffect = function (hideEvent) {
// if touch hasn't moved, and effect not yet started: start effect now
if (timer) {
clearTimeout(timer);
timer = null;
Effect.show(e, element);
}
if (!hidden) {
hidden = true;
Effect.hide(hideEvent, element);
}
removeListeners();
};
var touchMove = function (moveEvent) {
if (timer) {
clearTimeout(timer);
timer = null;
}
hideEffect(moveEvent);
removeListeners();
};
element.addEventListener('touchmove', touchMove, false);
element.addEventListener('touchend', hideEffect, false);
element.addEventListener('touchcancel', hideEffect, false);
var removeListeners = function () {
element.removeEventListener('touchmove', touchMove);
element.removeEventListener('touchend', hideEffect);
element.removeEventListener('touchcancel', hideEffect);
};
} else {
Effect.show(e, element);
if (isTouchAvailable) {
element.addEventListener('touchend', Effect.hide, false);
element.addEventListener('touchcancel', Effect.hide, false);
}
element.addEventListener('mouseup', Effect.hide, false);
element.addEventListener('mouseleave', Effect.hide, false);
}
}
}
Waves.init = function (options) {
var body = document.body;
options = options || {};
if ('duration' in options) {
Effect.duration = options.duration;
}
if ('delay' in options) {
Effect.delay = options.delay;
}
if (isTouchAvailable) {
body.addEventListener('touchstart', showEffect, false);
body.addEventListener('touchcancel', TouchHandler.registerEvent, false);
body.addEventListener('touchend', TouchHandler.registerEvent, false);
}
body.addEventListener('mousedown', showEffect, false);
};
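    // Illustrative initialisation (the option values below are examples only;
    // both options are optional and fall back to the Effect defaults above):
    //   Waves.init({ duration: 500, delay: 200 });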
/**
     * Attach Waves to dynamically loaded inputs, or add .waves-effect and other
     * waves classes to a set of elements. `classes` may be a single class name
     * or an array of class names to add alongside .waves-effect.
*/
Waves.attach = function (elements, classes) {
elements = getWavesElements(elements);
if (toString.call(classes) === '[object Array]') {
classes = classes.join(' ');
}
classes = classes ? ' ' + classes : '';
var element, tagName;
for (var i = 0, len = elements.length; i < len; i++) {
element = elements[i];
tagName = element.tagName.toLowerCase();
if (['input', 'img'].indexOf(tagName) !== -1) {
TagWrapper[tagName](element);
element = element.parentElement;
}
if (element.className.indexOf('waves-effect') === -1) {
element.className += ' waves-effect' + classes;
}
}
};
/**
* Cause a ripple to appear in an element via code.
*/
Waves.ripple = function (elements, options) {
elements = getWavesElements(elements);
var elementsLen = elements.length;
options = options || {};
options.wait = options.wait || 0;
options.position = options.position || null; // default = centre of element
if (elementsLen) {
var element, pos, off, centre = {},
i = 0;
var mousedown = {
type: 'mousedown',
button: 1
};
var hideRipple = function (mouseup, element) {
return function () {
Effect.hide(mouseup, element);
};
};
for (; i < elementsLen; i++) {
element = elements[i];
pos = options.position || {
x: element.clientWidth / 2,
y: element.clientHeight / 2
};
off = offset(element);
centre.x = off.left + pos.x;
centre.y = off.top + pos.y;
mousedown.pageX = centre.x;
mousedown.pageY = centre.y;
Effect.show(mousedown, element);
if (options.wait >= 0 && options.wait !== null) {
var mouseup = {
type: 'mouseup',
button: 1
};
setTimeout(hideRipple(mouseup, element), options.wait);
}
}
}
};
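    // Illustrative programmatic ripple (the selector and option values are
    // examples only): shows a ripple at (10, 10) inside each matched element
    // and hides it after 750ms.
    //   Waves.ripple('.btn', { wait: 750, position: { x: 10, y: 10 } });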
/**
* Remove all ripples from an element.
*/
Waves.calm = function (elements) {
elements = getWavesElements(elements);
var mouseup = {
type: 'mouseup',
button: 1
};
for (var i = 0, len = elements.length; i < len; i++) {
Effect.hide(mouseup, elements[i]);
}
};
/**
* Deprecated API fallback
*/
Waves.displayEffect = function (options) {
        console.error('Waves.displayEffect() has been deprecated and will be removed in a future version. Please use Waves.init() to initialize the Waves effect.');
Waves.init(options);
};
return Waves;
});
$(document).ready(function () {
//Initialization
Waves.attach('.btn:not(.btn-flat), .btn-floating', ['waves-light']);
Waves.attach('.btn-flat', ['waves-effect']);
Waves.attach('.chip', ['waves-effect']);
Waves.attach('.view a .mask', ['waves-light']);
Waves.attach('.waves-light', ['waves-light']);
Waves.attach('.navbar-nav a:not(.navbar-brand), .nav-icons li a, .nav-tabs .nav-item:not(.dropdown)', ['waves-light']);
Waves.attach('.pager li a', ['waves-light']);
Waves.attach('.pagination .page-item .page-link', ['waves-effect']);
Waves.init();
});
# File: hogben/utils.py (HOGBEN 1.2.1)
import os
from typing import Optional, Union
import numpy as np
from dynesty import NestedSampler, DynamicNestedSampler
from dynesty import plotting as dyplot
from dynesty import utils as dyfunc
import refl1d.experiment
import refnx.reflect
import refnx.analysis
import bumps.parameter
import bumps.fitproblem
from hogben.simulate import reflectivity
class Sampler:
"""Contains code for running nested sampling on refnx and Refl1D models.
Attributes:
objective (refnx.analysis.Objective or
bumps.fitproblem.FitProblem): objective to sample.
params (list): varying model parameters.
ndim (int): number of varying model parameters.
sampler_static (dynesty.NestedSampler): static nested sampler.
sampler_dynamic (dynesty.DynamicNestedSampler): dynamic nested sampler.
"""
def __init__(self, objective):
self.objective = objective
# Determine if the objective is from refnx or Refl1D.
if isinstance(objective, refnx.analysis.BaseObjective):
# Use log-likelihood and prior transform methods of refnx objective
self.params = objective.varying_parameters()
logl = objective.logl
prior_transform = objective.prior_transform
elif isinstance(objective, bumps.fitproblem.BaseFitProblem):
# Use this class' custom log-likelihood and prior transform methods
self.params = self.objective._parameters
logl = self.logl_refl1d
prior_transform = self.prior_transform_refl1d
# Otherwise the given objective must be invalid.
else:
raise RuntimeError('invalid objective/fitproblem given')
self.ndim = len(self.params)
self.sampler_static = NestedSampler(logl, prior_transform, self.ndim)
self.sampler_dynamic = DynamicNestedSampler(logl, prior_transform,
self.ndim)
def logl_refl1d(self, x):
"""Calculates the log-likelihood of given parameter values `x`
for a Refl1D FitProblem.
Args:
x (numpy.ndarray): parameter values to calculate likelihood of.
Returns:
float: log-likelihood of parameter values `x`.
"""
self.objective.setp(x) # Set the parameter values.
return -self.objective.model_nllf() # Calculate the log-likelihood.
def prior_transform_refl1d(self, u):
"""Calculates the prior transform for a Refl1D FitProblem.
Args:
u (numpy.ndarray): values in interval [0,1] to be transformed.
Returns:
numpy.ndarray: `u` transformed to parameter space of interest.
"""
return np.asarray([param.bounds.put01(u[i])
for i, param in enumerate(self.params)])
def sample(self, verbose=True, dynamic=False):
"""Samples an Objective/FitProblem using nested sampling.
Args:
verbose (bool): whether to display sampling progress.
dynamic (bool): whether to use static or dynamic nested sampling.
Returns:
matplotlib.pyplot.Figure or float: corner plot.
"""
# Run either static or dynamic nested sampling.
if dynamic:
# Weighting is entirely on the posterior (0 weight on evidence).
self.sampler_dynamic.run_nested(print_progress=verbose,
wt_kwargs={'pfrac': 1.0})
results = self.sampler_dynamic.results
else:
self.sampler_static.run_nested(print_progress=verbose)
results = self.sampler_static.results
# Calculate the parameter means.
weights = np.exp(results.logwt - results.logz[-1])
mean, _ = dyfunc.mean_and_cov(results.samples, weights)
# Set the parameter values to the estimated means.
for i, param in enumerate(self.params):
param.value = mean[i]
# Return the corner plot
return self.__corner(results)
def __corner(self, results):
"""Calculates a corner plot from given nested sampling `results`.
Args:
results (dynesty.results.Results): full output of a sampling run.
Returns:
matplotlib.pyplot.Figure: nested sampling corner plot.
"""
# Get the corner plot from dynesty package.
fig, _ = dyplot.cornerplot(results, color='blue', quantiles=None,
show_titles=True, max_n_ticks=3,
truths=np.zeros(self.ndim),
truth_color='black')
# Label the axes with parameter labels.
axes = np.reshape(np.array(fig.get_axes()), (self.ndim, self.ndim))
for i in range(1, self.ndim):
for j in range(self.ndim):
if i == self.ndim - 1:
axes[i, j].set_xlabel(self.params[j].name)
if j == 0:
axes[i, j].set_ylabel(self.params[i].name)
axes[self.ndim - 1, self.ndim - 1].set_xlabel(self.params[-1].name)
return fig
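# A minimal usage sketch for Sampler (assumes a refnx objective built
# elsewhere; the `objective` name is illustrative):
#
#     sampler = Sampler(objective)
#     fig = sampler.sample(verbose=True, dynamic=False)
#
# sample() also sets each varying parameter to its estimated posterior mean
# before returning the corner plot.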
def fisher(qs: list[list],
xi: list[Union['refnx.analysis.Parameter',
'bumps.parameter.Parameter']],
counts: list[int],
models: list[Union['refnx.reflect.ReflectModel',
'refl1d.experiment.Experiment']],
step: float = 0.005) -> np.ndarray:
"""Calculates the Fisher information matrix for multiple `models`
containing parameters `xi`. The model describes the experiment,
including the sample, and is defined using `refnx` or `refl1d`. The
lower and upper bounds of each parameter in the model are transformed
into a standardized range from 0 to 1, which is used to calculate the
Fisher information matrix. Each parameter in the Fisher information
matrix is scaled using an importance parameter. By default,
the importance parameter is set to 1 for all parameters, and can be set
by changing the `importance` attribute of the parameter when setting up
the model. For example the relative importance of the thickness in
"layer1" can be set to 2 using `layer1.thickness.importance = 2` or
`layer1.thick.importance = 2` in `refnx` and `refl1d` respectively.
Args:
qs: The Q points for each model.
xi: The varying model parameters.
counts: incident neutron counts corresponding to each Q value.
models: models to calculate gradients with.
step: step size to take when calculating gradient.
Returns:
numpy.ndarray: Fisher information matrix for given models and data.
"""
n = sum(len(q) for q in qs) # Number of data points.
m = len(xi) # Number of parameters.
J = np.zeros((n, m))
# There is no information if there is no data.
if n == 0:
return np.zeros((m, m))
# Calculate the gradient of model reflectivity with every model parameter
# for every model data point.
for i in range(m):
parameter = xi[i]
old = parameter.value
# Calculate reflectance for each model for first part of gradient.
x1 = parameter.value = old * (1 - step)
y1 = np.concatenate([reflectivity(q, model)
for q, model in list(zip(qs, models))])
# Calculate reflectance for each model for second part of gradient.
x2 = parameter.value = old * (1 + step)
y2 = np.concatenate([reflectivity(q, model)
for q, model in list(zip(qs, models))])
parameter.value = old # Reset the parameter.
J[:, i] = (y2 - y1) / (x2 - x1) # Calculate the gradient.
# Calculate the reflectance for each model for the given Q values.
r = np.concatenate([reflectivity(q, model)
for q, model in list(zip(qs, models))])
# Calculate the Fisher information matrix using equations from the paper.
M = np.diag(np.concatenate(counts) / r, k=0)
g = np.dot(np.dot(J.T, M), J)
# If there are multiple parameters,
# scale each parameter's information by its "importance".
if len(xi) > 1:
if isinstance(xi[0], refnx.analysis.Parameter):
lb = np.array([param.bounds.lb for param in xi])
ub = np.array([param.bounds.ub for param in xi])
elif isinstance(xi[0], bumps.parameter.Parameter):
lb = np.array([param.bounds.limits[0] for param in xi])
ub = np.array([param.bounds.limits[1] for param in xi])
# Scale each parameter with their specified importance,
# scale with one if no importance was specified.
importance_array = []
for param in xi:
if hasattr(param, "importance"):
importance_array.append(param.importance)
else:
importance_array.append(1)
importance = np.diag(importance_array)
H = np.diag(1 / (ub - lb)) # Get unit scaling Jacobian.
g = np.dot(np.dot(H.T, g), H) # Perform unit scaling.
g = np.dot(g, importance) # Perform importance scaling.
# Return the Fisher information matrix.
return g
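# Hedged example of calling fisher() (the model, its varying parameters, and
# the counts are assumed to exist; the numbers are illustrative):
#
#     qs = [np.linspace(0.005, 0.3, 100)]
#     counts = [np.full(100, 1000)]
#     g = fisher(qs, xi=params, counts=counts, models=[model])
#     covariance_lower_bound = np.linalg.inv(g)
#
# When g is invertible, its inverse gives the Cramer-Rao lower bound on the
# parameter covariance.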
def save_plot(fig, save_path, filename):
"""Saves a figure to a given directory.
Args:
fig (matplotlib.pyplot.Figure): figure to save.
save_path (str): path to directory to save figure to.
filename (str): name of file to save plot as.
"""
# Create the directory if not present.
if not os.path.exists(save_path):
os.makedirs(save_path)
file_path = os.path.join(save_path, filename + '.png')
    fig.savefig(file_path, dpi=600)
# File: django/contrib/gis/db/backends/spatialite/operations.py (Django 4.2.4)
from django.contrib.gis.db import models
from django.contrib.gis.db.backends.base.operations import BaseSpatialOperations
from django.contrib.gis.db.backends.spatialite.adapter import SpatiaLiteAdapter
from django.contrib.gis.db.backends.utils import SpatialOperator
from django.contrib.gis.geos.geometry import GEOSGeometry, GEOSGeometryBase
from django.contrib.gis.geos.prototypes.io import wkb_r
from django.contrib.gis.measure import Distance
from django.core.exceptions import ImproperlyConfigured
from django.db.backends.sqlite3.operations import DatabaseOperations
from django.utils.functional import cached_property
from django.utils.version import get_version_tuple
class SpatialiteNullCheckOperator(SpatialOperator):
def as_sql(self, connection, lookup, template_params, sql_params):
sql, params = super().as_sql(connection, lookup, template_params, sql_params)
return "%s > 0" % sql, params
class SpatiaLiteOperations(BaseSpatialOperations, DatabaseOperations):
name = "spatialite"
spatialite = True
Adapter = SpatiaLiteAdapter
collect = "Collect"
extent = "Extent"
makeline = "MakeLine"
unionagg = "GUnion"
from_text = "GeomFromText"
gis_operators = {
# Binary predicates
"equals": SpatialiteNullCheckOperator(func="Equals"),
"disjoint": SpatialiteNullCheckOperator(func="Disjoint"),
"touches": SpatialiteNullCheckOperator(func="Touches"),
"crosses": SpatialiteNullCheckOperator(func="Crosses"),
"within": SpatialiteNullCheckOperator(func="Within"),
"overlaps": SpatialiteNullCheckOperator(func="Overlaps"),
"contains": SpatialiteNullCheckOperator(func="Contains"),
"intersects": SpatialiteNullCheckOperator(func="Intersects"),
"relate": SpatialiteNullCheckOperator(func="Relate"),
"coveredby": SpatialiteNullCheckOperator(func="CoveredBy"),
"covers": SpatialiteNullCheckOperator(func="Covers"),
# Returns true if B's bounding box completely contains A's bounding box.
"contained": SpatialOperator(func="MbrWithin"),
# Returns true if A's bounding box completely contains B's bounding box.
"bbcontains": SpatialOperator(func="MbrContains"),
# Returns true if A's bounding box overlaps B's bounding box.
"bboverlaps": SpatialOperator(func="MbrOverlaps"),
# These are implemented here as synonyms for Equals
"same_as": SpatialiteNullCheckOperator(func="Equals"),
"exact": SpatialiteNullCheckOperator(func="Equals"),
# Distance predicates
"dwithin": SpatialOperator(func="PtDistWithin"),
}
disallowed_aggregates = (models.Extent3D,)
select = "CAST (AsEWKB(%s) AS BLOB)"
function_names = {
"AsWKB": "St_AsBinary",
"ForcePolygonCW": "ST_ForceLHR",
"FromWKB": "ST_GeomFromWKB",
"FromWKT": "ST_GeomFromText",
"Length": "ST_Length",
"LineLocatePoint": "ST_Line_Locate_Point",
"NumPoints": "ST_NPoints",
"Reverse": "ST_Reverse",
"Scale": "ScaleCoords",
"Translate": "ST_Translate",
"Union": "ST_Union",
}
@cached_property
def unsupported_functions(self):
unsupported = {"BoundingCircle", "GeometryDistance", "IsEmpty", "MemSize"}
if not self.geom_lib_version():
unsupported |= {"Azimuth", "GeoHash", "MakeValid"}
return unsupported
@cached_property
def spatial_version(self):
"""Determine the version of the SpatiaLite library."""
try:
version = self.spatialite_version_tuple()[1:]
except Exception as exc:
raise ImproperlyConfigured(
'Cannot determine the SpatiaLite version for the "%s" database. '
"Was the SpatiaLite initialization SQL loaded on this database?"
% (self.connection.settings_dict["NAME"],)
) from exc
if version < (4, 3, 0):
raise ImproperlyConfigured("GeoDjango supports SpatiaLite 4.3.0 and above.")
return version
def convert_extent(self, box):
"""
Convert the polygon data received from SpatiaLite to min/max values.
"""
if box is None:
return None
shell = GEOSGeometry(box).shell
xmin, ymin = shell[0][:2]
xmax, ymax = shell[2][:2]
return (xmin, ymin, xmax, ymax)
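    # Illustrative input/output for convert_extent(): an extent polygon whose
    # shell is ((0, 0), (10, 0), (10, 5), (0, 5), (0, 0)) yields
    # (0.0, 0.0, 10.0, 5.0), i.e. (xmin, ymin, xmax, ymax).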
def geo_db_type(self, f):
"""
Return None because geometry columns are added via the
`AddGeometryColumn` stored procedure on SpatiaLite.
"""
return None
def get_distance(self, f, value, lookup_type):
"""
Return the distance parameters for the given geometry field,
lookup value, and lookup type.
"""
if not value:
return []
value = value[0]
if isinstance(value, Distance):
if f.geodetic(self.connection):
if lookup_type == "dwithin":
raise ValueError(
"Only numeric values of degree units are allowed on "
"geographic DWithin queries."
)
dist_param = value.m
else:
dist_param = getattr(
value, Distance.unit_attname(f.units_name(self.connection))
)
else:
dist_param = value
return [dist_param]
def _get_spatialite_func(self, func):
"""
Helper routine for calling SpatiaLite functions and returning
their result.
Any error occurring in this method should be handled by the caller.
"""
cursor = self.connection._cursor()
try:
cursor.execute("SELECT %s" % func)
row = cursor.fetchone()
finally:
cursor.close()
return row[0]
def geos_version(self):
"Return the version of GEOS used by SpatiaLite as a string."
return self._get_spatialite_func("geos_version()")
def proj_version(self):
"""Return the version of the PROJ library used by SpatiaLite."""
return self._get_spatialite_func("proj4_version()")
def lwgeom_version(self):
"""Return the version of LWGEOM library used by SpatiaLite."""
return self._get_spatialite_func("lwgeom_version()")
def rttopo_version(self):
"""Return the version of RTTOPO library used by SpatiaLite."""
return self._get_spatialite_func("rttopo_version()")
def geom_lib_version(self):
"""
Return the version of the version-dependant geom library used by
SpatiaLite.
"""
if self.spatial_version >= (5,):
return self.rttopo_version()
else:
return self.lwgeom_version()
def spatialite_version(self):
"Return the SpatiaLite library version as a string."
return self._get_spatialite_func("spatialite_version()")
def spatialite_version_tuple(self):
"""
Return the SpatiaLite version as a tuple (version string, major,
minor, subminor).
"""
version = self.spatialite_version()
return (version,) + get_version_tuple(version)
def spatial_aggregate_name(self, agg_name):
"""
Return the spatial aggregate SQL template and function for the
given Aggregate instance.
"""
agg_name = "unionagg" if agg_name.lower() == "union" else agg_name.lower()
return getattr(self, agg_name)
# Routines for getting the OGC-compliant models.
def geometry_columns(self):
from django.contrib.gis.db.backends.spatialite.models import (
SpatialiteGeometryColumns,
)
return SpatialiteGeometryColumns
def spatial_ref_sys(self):
from django.contrib.gis.db.backends.spatialite.models import (
SpatialiteSpatialRefSys,
)
return SpatialiteSpatialRefSys
def get_geometry_converter(self, expression):
geom_class = expression.output_field.geom_class
read = wkb_r().read
def converter(value, expression, connection):
return None if value is None else GEOSGeometryBase(read(value), geom_class)
        return converter
# File: maxbot/maxml/xml_parser.py (maxbot 0.3.0b2)
import logging
from dataclasses import dataclass
from xml.sax.handler import ContentHandler, ErrorHandler # nosec
from defusedxml.sax import parseString
from ..errors import BotError, XmlSnippet
from . import fields, markup
logger = logging.getLogger(__name__)
_ROOT_ELEM_NAME = "root"
KNOWN_ROOT_ELEMENTS = frozenset(
list(markup.PlainTextRenderer.KNOWN_START_TAGS.keys())
+ list(markup.PlainTextRenderer.KNOWN_END_TAGS.keys())
)
@dataclass(frozen=True)
class Pointer:
"""Pointer to specific line and column in XML document."""
# Line number (zero-based)
lineno: int
# Number of column (zero-based)
column: int
class _ContentHandler(ContentHandler):
def __init__(self, schema, register_symbol):
super().__init__()
self.schema = schema
self.register_symbol = register_symbol
self.maxbot_commands = []
self.nested = None
        self.startElement = self._create_handler(self._on_start_element)
        self.endElement = self._create_handler(self._on_end_element)
        self.characters = self._create_handler(self._on_characters)
def _on_start_element(self, name, attrs):
if not self.nested:
assert self.nested is None
assert name == _ROOT_ELEM_NAME
self.nested = [_RootElement(name, self.register_symbol_factory, attrs, self.schema)]
else:
nested = self.nested[-1].on_starttag(name, attrs)
if nested:
self.nested.append(nested)
def _on_end_element(self, name):
value = self.nested[-1].on_endtag(name)
if value is not None:
processed = self.nested.pop()
if self.nested:
self.nested[-1].on_nested_processed(processed.tag, value)
else:
assert isinstance(value, list)
self.maxbot_commands += value
def _on_characters(self, content):
self.nested[-1].on_data(content)
    def _create_handler(self, handler):
def _impl(*args, **kwargs):
try:
return handler(*args, **kwargs)
except _Error as exc:
if exc.ptr is None:
# skip exception without pointer
raise _Error(exc.message, self._get_ptr()) from exc.__cause__
raise
return _impl
def _get_ptr(self):
if self._locator is None:
return None
lineno = self._locator.getLineNumber() - 1
assert lineno >= 0
return Pointer(lineno, self._locator.getColumnNumber())
def register_symbol_factory(self):
captured_ptr = self._get_ptr()
def _register_symbol(value):
if captured_ptr:
self.register_symbol(value, captured_ptr)
return _register_symbol
class _ErrorHandler(ErrorHandler):
def error(self, exception):
return self.fatalError(exception)
def fatalError(self, exception): # noqa: N802 (function name should be lowercase)
get_lineno = getattr(exception, "getLineNumber", None)
get_column = getattr(exception, "getColumnNumber", None)
ptr = Pointer(get_lineno() - 1, get_column()) if get_lineno and get_column else None
raise _Error(f"{exception.__class__.__name__}: {exception.getMessage()}", ptr)
def warning(self, exception):
logger.warning("XML warning: %s", exception)
class XmlParser:
"""Parse MaxBot commands from XML document."""
CONTENT_HANDLER_CLASS = _ContentHandler
ERROR_HANDLER_CLASS = _ErrorHandler
PARSE_STRING_OPTIONS = {"forbid_dtd": True}
def loads(self, document, *, maxml_command_schema=None, maxml_symbols=None, **kwargs):
"""Load MaxBot command list from headless XML document.
:param str document: Headless XML document.
:param type maxml_command_schema: A schema of commands.
:param dict maxml_symbols: Map id of values to `Pointer`s
:param dict kwargs: Ignored.
"""
for command_name, command_schema in maxml_command_schema.declared_fields.items():
if command_schema.metadata.get("maxml", "element") != "element":
raise BotError(f"Command {command_name!r} is not described as an element")
        # Wrapping the document in a root tag shifts line numbers by one.
encoded = f"<{_ROOT_ELEM_NAME}>\n{document}</{_ROOT_ELEM_NAME}>".encode("utf-8")
def _register_symbol(value, ptr):
assert ptr.lineno >= 1
if maxml_symbols is not None:
maxml_symbols[id(value)] = Pointer(ptr.lineno - 1, ptr.column)
content_handler = self.CONTENT_HANDLER_CLASS(maxml_command_schema, _register_symbol)
try:
parseString(
encoded,
content_handler,
errorHandler=self.ERROR_HANDLER_CLASS(),
**self.PARSE_STRING_OPTIONS,
)
except _Error as exc:
snippet = None
if exc.ptr:
                # Compensate for the line added by the root wrapper.
assert exc.ptr.lineno >= 1
snippet = XmlSnippet(document.splitlines(), exc.ptr.lineno - 1, exc.ptr.column)
# skip _Error
raise BotError(exc.message, snippet) from exc.__cause__
return content_handler.maxbot_commands
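# Minimal usage sketch for XmlParser (the command schema is assumed to be
# supplied by the caller; `CommandSchema` is a hypothetical marshmallow-style
# schema with a `text` field declared as an element):
#
#     parser = XmlParser()
#     symbols = {}
#     commands = parser.loads(
#         "<text>hello</text>",
#         maxml_command_schema=CommandSchema,
#         maxml_symbols=symbols,
#     )
#
# On success, `symbols` maps id(value) -> Pointer(lineno, column) for error
# reporting against the original document.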
class _Error(Exception):
def __init__(self, message, ptr=None):
self.message = message
self.ptr = ptr
class _ElementBase:
def __init__(self, tag, register_symbol_factory):
self.tag = tag
self.register_symbol_factory = register_symbol_factory
self.register_symbol = self.register_symbol_factory()
def attrs_to_dict(self, attrs, schema):
value = {}
for field_name, field_value in attrs.items():
field_schema = _get_object_field_schema(schema, field_name, "Attribute")
if field_schema.metadata.get("maxml", "attribute") != "attribute":
_raise_not_described("Attribute", field_name)
self.register_symbol_factory()(field_value)
value[field_name] = field_value
return value
def check_no_attr(self, attrs, tag=None):
if attrs:
_raise_not_described("Attribute", attrs.keys()[0])
class _ScalarElement(_ElementBase):
def __init__(self, tag, register_symbol_factory, attrs):
super().__init__(tag, register_symbol_factory)
self.check_no_attr(attrs)
self.value = ""
def on_starttag(self, tag, attrs):
_raise_not_described("Element", tag)
def on_endtag(self, tag):
assert tag == self.tag
self.register_symbol(self.value)
return self.value
def on_data(self, data):
assert isinstance(data, str)
self.value += data
class _MarkupElement(_ElementBase):
def __init__(self, tag, register_symbol_factory, attrs):
super().__init__(tag, register_symbol_factory)
self.check_no_attr(attrs)
self.tag_level = 1
self.items = []
def on_starttag(self, tag, attrs):
assert self.tag_level >= 1
self.tag_level += 1
self.items.append(markup.Item(markup.START_TAG, tag, dict(attrs) if attrs else None))
def on_endtag(self, tag):
assert self.tag_level >= 1
self.tag_level -= 1
if self.tag_level > 0:
self.items.append(markup.Item(markup.END_TAG, tag))
return None
assert self.tag == tag
value = markup.Value(self.items)
self.register_symbol(value)
return value
def on_data(self, data):
assert isinstance(data, str)
if self.items and self.items[-1].kind == markup.TEXT:
self.items[-1] = markup.Item(markup.TEXT, self.items[-1].value + data)
else:
self.items.append(markup.Item(markup.TEXT, data))
class _DictElement(_ElementBase):
def __init__(self, tag, register_symbol_factory, attrs, schema):
super().__init__(tag, register_symbol_factory)
self.schema = schema
self.value = self.attrs_to_dict(attrs, schema)
def on_starttag(self, tag, attrs):
if tag in self.value and not isinstance(self.value[tag], list):
raise _Error(f"Element {tag!r} is duplicated")
field_schema = _get_object_field_schema(self.schema, tag, "Element")
if get_metadata_maxml(field_schema) != "element":
_raise_not_described("Element", tag)
return _factory(tag, self.register_symbol_factory, attrs, field_schema, self.value)
def on_endtag(self, tag):
assert tag == self.tag
self.register_symbol(self.value)
return self.value
def on_data(self, data):
if _normalize_spaces(data):
raise _Error(f"Element {self.tag!r} has undescribed text")
def on_nested_processed(self, tag, value):
self.value[tag] = value
class _ListElement(_ElementBase):
def __init__(self, tag, register_symbol_factory, attrs, item_schema, parent):
if not isinstance(parent, dict):
raise _Error(f"The list ({tag!r}) should be a dictionary field")
super().__init__(tag, register_symbol_factory)
self.parent = parent
self.item = _factory(tag, register_symbol_factory, attrs, item_schema)
def on_starttag(self, tag, attrs):
return self.item.on_starttag(tag, attrs)
def on_endtag(self, tag):
value = self.item.on_endtag(tag)
if value is None:
return None
container = self.parent.get(self.tag)
if container is None:
container = []
self.register_symbol(container)
container.append(value)
return container
def on_data(self, data):
return self.item.on_data(data)
def on_nested_processed(self, tag, value):
return self.item.on_nested_processed(tag, value)
class _ContentElement(_ElementBase):
def __init__(self, tag, register_symbol_factory, attrs, schema, field_name, field_schema):
child_elements = [
f for f in schema().declared_fields.items() if f[1].metadata.get("maxml") == "element"
]
if child_elements:
child_names = ", ".join(repr(i[0]) for i in child_elements)
raise _Error(
f"An {tag!r} element with a {field_name!r} content field cannot contain child elements: {child_names}"
)
if not is_known_scalar(field_schema) and not isinstance(field_schema, markup.Field):
raise _Error(f"Field {field_name!r} must be a scalar")
super().__init__(tag, register_symbol_factory)
self.field_name = field_name
self.field = _factory(tag, register_symbol_factory, attrs={}, schema=field_schema)
self.value = self.attrs_to_dict(attrs, schema)
def on_starttag(self, tag, attrs):
return self.field.on_starttag(tag, attrs)
def on_endtag(self, tag):
value = self.field.on_endtag(tag)
if value is None:
return None
self.value[self.field_name] = value
self.register_symbol(self.value)
return self.value
def on_data(self, data):
return self.field.on_data(data)
class _RootElement(_ElementBase):
def __init__(self, tag, register_symbol_factory, attrs, schema):
super().__init__(tag, register_symbol_factory)
self.check_no_attr(attrs)
self.schema = schema
self.commands = []
        self._text_harvester = None
        self._text_harvester_level = 1
    def on_starttag(self, name, attrs):
        command_schema = self.schema.declared_fields.get(name)
        if command_schema:
            self._end_of_text_harvester()
            return _factory(name, self.register_symbol_factory, attrs, command_schema)
        if name not in KNOWN_ROOT_ELEMENTS:
            _raise_not_described("Command", name)
        self._text_harvester_level += 1
        return self.text_harvester.on_starttag(name, attrs)
    def on_endtag(self, name):
        self._text_harvester_level -= 1
        if self._text_harvester_level:
            value = self.text_harvester.on_endtag(name)
            assert value is None
            return None
        assert name == self.tag
        self._end_of_text_harvester()
        return self.commands
    def on_data(self, data):
        if data.strip() or self._text_harvester:
            self.text_harvester.on_data(data)
    def on_nested_processed(self, tag, value):
        self.commands.append({tag: value})
    @property
    def text_harvester(self):
        if self._text_harvester is None:
            self._text_harvester = _MarkupElement(self.tag, self.register_symbol_factory, attrs={})
        return self._text_harvester
    def _end_of_text_harvester(self):
        if self._text_harvester:
            value = self._text_harvester.on_endtag(self._text_harvester.tag)
            assert value is not None
            if value:
                self.commands.append({"text": value})
            self._text_harvester = None
def _factory(tag, register_symbol_factory, attrs, schema, parent=None):
if isinstance(schema, markup.Field):
return _MarkupElement(tag, register_symbol_factory, attrs)
if is_known_scalar(schema):
return _ScalarElement(tag, register_symbol_factory, attrs)
if isinstance(schema, fields.Nested):
if schema.many:
return _ListElement(
tag,
register_symbol_factory,
attrs,
fields.Nested(schema.nested),
parent,
)
content_fields = [
f
for f in schema.nested().declared_fields.items()
if f[1].metadata.get("maxml") == "content"
]
if len(content_fields) > 1:
field_names = ", ".join(repr(i[0]) for i in content_fields)
raise _Error(f"There can be no more than one field marked `content`: {field_names}")
if len(content_fields) == 1:
return _ContentElement(
tag, register_symbol_factory, attrs, schema.nested, *content_fields[0]
)
return _DictElement(tag, register_symbol_factory, attrs, schema.nested)
if isinstance(schema, fields.List):
return _ListElement(tag, register_symbol_factory, attrs, schema.inner, parent)
raise _Error(f"Unexpected schema ({type(schema)}) for element {tag!r}")
def _raise_not_described(entity, name):
raise _Error(f"{entity} {name!r} is not described in the schema")
def _get_object_field_schema(schema, field_name, entity):
field_schema = schema().declared_fields.get(field_name)
if field_schema is None:
_raise_not_described(entity, field_name)
return field_schema
def _normalize_spaces(s):
return " ".join(s.split()) if s else ""
def is_known_scalar(schema):
"""Check for known scalar field (or inherited)."""
if isinstance(schema, fields.String):
return True
if isinstance(schema, fields.Number):
return True
if isinstance(schema, fields.Boolean):
return True
if isinstance(schema, fields.DateTime):
return True
if isinstance(schema, fields.TimeDelta):
return True
return False
def get_metadata_maxml(schema):
"""Get "maxml" value of metadata."""
    return schema.metadata.get("maxml", "attribute" if is_known_scalar(schema) else "element")
# File: ethtx/models/semantics_model.py (EthTx 0.3.22)
from __future__ import annotations
from typing import List, Dict, Optional, TYPE_CHECKING
from ethtx.models.base_model import BaseModel
if TYPE_CHECKING:
from ethtx.providers.semantic_providers import ISemanticsDatabase
class TransformationSemantics(BaseModel):
transformed_name: Optional[str]
transformed_type: Optional[str]
transformation: str = ""
class ParameterSemantics(BaseModel):
parameter_name: str
parameter_type: str
components: list[ParameterSemantics] = []
indexed: bool = False
dynamic: bool = False
ParameterSemantics.update_forward_refs()
class EventSemantics(BaseModel):
signature: str
anonymous: bool
name: str
parameters: List[ParameterSemantics]
class FunctionSemantics(BaseModel):
signature: str
name: str
inputs: List[ParameterSemantics]
outputs: List[ParameterSemantics] = []
class SignatureArg(BaseModel):
name: str
type: str
class Signature(BaseModel):
signature_hash: str
name: str
args: List[SignatureArg]
count: int = 1
tuple: bool = False
guessed: bool = False
class ERC20Semantics(BaseModel):
name: str
symbol: str
decimals: int
class ContractSemantics(BaseModel):
code_hash: str
name: str
events: Dict[str, EventSemantics] = {}
functions: Dict[str, FunctionSemantics] = {}
transformations: Dict[str, Dict[str, TransformationSemantics]] = {}
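# Illustrative construction (mirrors the EOA fallback used below in
# AddressSemantics.from_mongo_record; the hash value is shortened):
#
#     contract = ContractSemantics(code_hash="0xc5d2...a470", name="EOA")
#
# events, functions, and transformations default to empty dicts.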
class AddressSemantics(BaseModel):
chain_id: str
address: str
name: str
is_contract: bool
contract: ContractSemantics
standard: Optional[str]
erc20: Optional[ERC20Semantics]
class Config:
allow_mutation = True
@staticmethod
def from_mongo_record(
raw_address_semantics: Dict, database: "ISemanticsDatabase"
) -> "AddressSemantics":
ZERO_HASH = "0xc5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470"
def decode_parameter(_parameter):
components_semantics = []
if "components" in _parameter:
for component in _parameter["components"]:
components_semantics.append(decode_parameter(component))
decoded_parameter = ParameterSemantics(
parameter_name=_parameter["parameter_name"],
parameter_type=_parameter["parameter_type"],
components=components_semantics,
indexed=_parameter["indexed"],
dynamic=_parameter["dynamic"],
)
return decoded_parameter
if raw_address_semantics.get("erc20"):
erc20_semantics = ERC20Semantics(
name=raw_address_semantics["erc20"]["name"],
symbol=raw_address_semantics["erc20"]["symbol"],
decimals=raw_address_semantics["erc20"]["decimals"],
)
else:
erc20_semantics = None
if raw_address_semantics["contract"] == ZERO_HASH:
contract_semantics = ContractSemantics(
code_hash=raw_address_semantics["contract"], name="EOA"
)
else:
raw_contract_semantics = database.get_contract_semantics(
raw_address_semantics["contract"]
)
events = {}
for signature, event in raw_contract_semantics["events"].items():
parameters_semantics = []
for parameter in event["parameters"]:
parameters_semantics.append(decode_parameter(parameter))
events[signature] = EventSemantics(
signature=signature,
anonymous=event["anonymous"],
name=event["name"],
parameters=parameters_semantics,
)
functions = {}
for signature, function in raw_contract_semantics["functions"].items():
inputs_semantics = []
for parameter in function["inputs"]:
inputs_semantics.append(decode_parameter(parameter))
outputs_semantics = []
for parameter in function["outputs"]:
outputs_semantics.append(decode_parameter(parameter))
functions[signature] = FunctionSemantics(
signature=signature,
name=function["name"],
inputs=inputs_semantics,
outputs=outputs_semantics,
)
transformations = {}
for signature, parameters_transformations in raw_contract_semantics[
"transformations"
].items():
transformations[signature] = {}
for parameter, transformation in parameters_transformations.items():
transformations[signature][parameter] = TransformationSemantics(
transformed_name=transformation["transformed_name"],
transformed_type=transformation["transformed_type"],
transformation=transformation["transformation"],
)
contract_semantics = ContractSemantics(
code_hash=raw_contract_semantics["code_hash"],
name=raw_contract_semantics["name"],
events=events,
functions=functions,
transformations=transformations,
)
address = raw_address_semantics.get("address")
chain_id = raw_address_semantics.get("chain_id")
name = raw_address_semantics.get("name", address)
address_semantics = AddressSemantics(
chain_id=chain_id,
address=address,
name=name,
is_contract=raw_address_semantics["is_contract"],
contract=contract_semantics,
standard=raw_address_semantics["standard"],
erc20=erc20_semantics,
)
        return address_semantics
# File: bs4/element.py (BS4_for_aiogram 0.0.0.5)
__license__ = "MIT"
try:
from collections.abc import Callable # Python 3.6
except ImportError as e:
from collections import Callable
import re
import sys
import warnings
try:
import soupsieve
except ImportError as e:
soupsieve = None
warnings.warn(
'The soupsieve package is not installed. CSS selectors cannot be used.'
)
from bs4.formatter import (
Formatter,
HTMLFormatter,
XMLFormatter,
)
DEFAULT_OUTPUT_ENCODING = "utf-8"
PY3K = (sys.version_info[0] > 2)
nonwhitespace_re = re.compile(r"\S+")
# NOTE: This isn't used as of 4.7.0. I'm leaving it for a little bit on
# the off chance someone imported it for their own use.
whitespace_re = re.compile(r"\s+")
def _alias(attr):
"""Alias one attribute name to another for backward compatibility"""
@property
def alias(self):
return getattr(self, attr)
    @alias.setter
    def alias(self, value):
        return setattr(self, attr, value)
return alias
# These encodings are recognized by Python (so PageElement.encode
# could theoretically support them) but XML and HTML don't recognize
# them (so they should not show up in an XML or HTML document as that
# document's encoding).
#
# If an XML document is encoded in one of these encodings, no encoding
# will be mentioned in the XML declaration. If an HTML document is
# encoded in one of these encodings, and the HTML document has a
# <meta> tag that mentions an encoding, the encoding will be given as
# the empty string.
#
# Source:
# https://docs.python.org/3/library/codecs.html#python-specific-encodings
PYTHON_SPECIFIC_ENCODINGS = set([
"idna",
"mbcs",
"oem",
"palmos",
"punycode",
"raw_unicode_escape",
"undefined",
"unicode_escape",
"raw-unicode-escape",
"unicode-escape",
"string-escape",
"string_escape",
])
class NamespacedAttribute(str):
"""A namespaced string (e.g. 'xml:lang') that remembers the namespace
('xml') and the name ('lang') that were used to create it.
"""
def __new__(cls, prefix, name=None, namespace=None):
if not name:
# This is the default namespace. Its name "has no value"
# per https://www.w3.org/TR/xml-names/#defaulting
name = None
if name is None:
obj = str.__new__(cls, prefix)
elif prefix is None:
# Not really namespaced.
obj = str.__new__(cls, name)
else:
obj = str.__new__(cls, prefix + ":" + name)
obj.prefix = prefix
obj.name = name
obj.namespace = namespace
return obj
class AttributeValueWithCharsetSubstitution(str):
"""A stand-in object for a character encoding specified in HTML."""
class CharsetMetaAttributeValue(AttributeValueWithCharsetSubstitution):
"""A generic stand-in for the value of a meta tag's 'charset' attribute.
When Beautiful Soup parses the markup '<meta charset="utf8">', the
value of the 'charset' attribute will be one of these objects.
"""
def __new__(cls, original_value):
obj = str.__new__(cls, original_value)
obj.original_value = original_value
return obj
def encode(self, encoding):
"""When an HTML document is being encoded to a given encoding, the
value of a meta tag's 'charset' is the name of the encoding.
"""
if encoding in PYTHON_SPECIFIC_ENCODINGS:
return ''
return encoding
class ContentMetaAttributeValue(AttributeValueWithCharsetSubstitution):
"""A generic stand-in for the value of a meta tag's 'content' attribute.
When Beautiful Soup parses the markup:
<meta http-equiv="content-type" content="text/html; charset=utf8">
The value of the 'content' attribute will be one of these objects.
"""
CHARSET_RE = re.compile(r"((^|;)\s*charset=)([^;]*)", re.M)
def __new__(cls, original_value):
match = cls.CHARSET_RE.search(original_value)
if match is None:
# No substitution necessary.
return str.__new__(str, original_value)
obj = str.__new__(cls, original_value)
obj.original_value = original_value
return obj
def encode(self, encoding):
if encoding in PYTHON_SPECIFIC_ENCODINGS:
return ''
def rewrite(match):
return match.group(1) + encoding
return self.CHARSET_RE.sub(rewrite, self.original_value)
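    # Illustrative substitution: encoding the value
    # 'text/html; charset=utf8' with encoding='latin-1' yields
    # 'text/html; charset=latin-1' via CHARSET_RE.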
class PageElement(object):
"""Contains the navigational information for some part of the page:
that is, its current location in the parse tree.
NavigableString, Tag, etc. are all subclasses of PageElement.
"""
def setup(self, parent=None, previous_element=None, next_element=None,
previous_sibling=None, next_sibling=None):
"""Sets up the initial relations between this element and
other elements.
:param parent: The parent of this element.
:param previous_element: The element parsed immediately before
this one.
:param next_element: The element parsed immediately before
this one.
:param previous_sibling: The most recently encountered element
on the same level of the parse tree as this one.
:param previous_sibling: The next element to be encountered
on the same level of the parse tree as this one.
"""
self.parent = parent
self.previous_element = previous_element
if previous_element is not None:
self.previous_element.next_element = self
self.next_element = next_element
if self.next_element is not None:
self.next_element.previous_element = self
self.next_sibling = next_sibling
if self.next_sibling is not None:
self.next_sibling.previous_sibling = self
if (previous_sibling is None
and self.parent is not None and self.parent.contents):
previous_sibling = self.parent.contents[-1]
self.previous_sibling = previous_sibling
if previous_sibling is not None:
self.previous_sibling.next_sibling = self
def format_string(self, s, formatter):
"""Format the given string using the given formatter.
:param s: A string.
:param formatter: A Formatter object, or a string naming one of the standard formatters.
"""
if formatter is None:
return s
if not isinstance(formatter, Formatter):
formatter = self.formatter_for_name(formatter)
output = formatter.substitute(s)
return output
def formatter_for_name(self, formatter):
"""Look up or create a Formatter for the given identifier,
if necessary.
:param formatter: Can be a Formatter object (used as-is), a
function (used as the entity substitution hook for an
XMLFormatter or HTMLFormatter), or a string (used to look
up an XMLFormatter or HTMLFormatter in the appropriate
registry.
"""
if isinstance(formatter, Formatter):
return formatter
if self._is_xml:
c = XMLFormatter
else:
c = HTMLFormatter
if isinstance(formatter, Callable):
return c(entity_substitution=formatter)
return c.REGISTRY[formatter]
@property
def _is_xml(self):
"""Is this element part of an XML tree or an HTML tree?
This is used in formatter_for_name, when deciding whether an
XMLFormatter or HTMLFormatter is more appropriate. It can be
inefficient, but it should be called very rarely.
"""
if self.known_xml is not None:
# Most of the time we will have determined this when the
# document is parsed.
return self.known_xml
# Otherwise, it's likely that this element was created by
# direct invocation of the constructor from within the user's
# Python code.
if self.parent is None:
# This is the top-level object. It should have .known_xml set
# from tree creation. If not, take a guess--BS is usually
# used on HTML markup.
return getattr(self, 'is_xml', False)
return self.parent._is_xml
nextSibling = _alias("next_sibling") # BS3
previousSibling = _alias("previous_sibling") # BS3
def replace_with(self, replace_with):
"""Replace this PageElement with another one, keeping the rest of the
tree the same.
:param replace_with: A PageElement.
:return: `self`, no longer part of the tree.
"""
if self.parent is None:
raise ValueError(
"Cannot replace one element with another when the "
"element to be replaced is not part of a tree.")
if replace_with is self:
return
if replace_with is self.parent:
raise ValueError("Cannot replace a Tag with its parent.")
old_parent = self.parent
my_index = self.parent.index(self)
self.extract(_self_index=my_index)
old_parent.insert(my_index, replace_with)
return self
replaceWith = replace_with # BS3
def unwrap(self):
"""Replace this PageElement with its contents.
:return: `self`, no longer part of the tree.
"""
my_parent = self.parent
if self.parent is None:
raise ValueError(
"Cannot replace an element with its contents when that"
"element is not part of a tree.")
my_index = self.parent.index(self)
self.extract(_self_index=my_index)
for child in reversed(self.contents[:]):
my_parent.insert(my_index, child)
return self
replace_with_children = unwrap
replaceWithChildren = unwrap # BS3
def wrap(self, wrap_inside):
"""Wrap this PageElement inside another one.
:param wrap_inside: A PageElement.
:return: `wrap_inside`, occupying the position in the tree that used
to be occupied by `self`, and with `self` inside it.
"""
me = self.replace_with(wrap_inside)
wrap_inside.append(me)
return wrap_inside
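    # Hedged usage example for wrap() (assumes a BeautifulSoup tree built
    # elsewhere):
    #
    #     soup = BeautifulSoup("<p>I wish I was bold.</p>", "html.parser")
    #     soup.p.string.wrap(soup.new_tag("b"))
    #     # <p><b>I wish I was bold.</b></p>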
def extract(self, _self_index=None):
"""Destructively rips this element out of the tree.
:param _self_index: The location of this element in its parent's
.contents, if known. Passing this in allows for a performance
optimization.
:return: `self`, no longer part of the tree.
"""
if self.parent is not None:
if _self_index is None:
_self_index = self.parent.index(self)
del self.parent.contents[_self_index]
#Find the two elements that would be next to each other if
#this element (and any children) hadn't been parsed. Connect
#the two.
last_child = self._last_descendant()
next_element = last_child.next_element
if (self.previous_element is not None and
self.previous_element is not next_element):
self.previous_element.next_element = next_element
if next_element is not None and next_element is not self.previous_element:
next_element.previous_element = self.previous_element
self.previous_element = None
last_child.next_element = None
self.parent = None
if (self.previous_sibling is not None
and self.previous_sibling is not self.next_sibling):
self.previous_sibling.next_sibling = self.next_sibling
if (self.next_sibling is not None
and self.next_sibling is not self.previous_sibling):
self.next_sibling.previous_sibling = self.previous_sibling
self.previous_sibling = self.next_sibling = None
return self
def _last_descendant(self, is_initialized=True, accept_self=True):
"""Finds the last element beneath this object to be parsed.
:param is_initialized: Has `setup` been called on this PageElement
yet?
:param accept_self: Is `self` an acceptable answer to the question?
"""
if is_initialized and self.next_sibling is not None:
last_child = self.next_sibling.previous_element
else:
last_child = self
while isinstance(last_child, Tag) and last_child.contents:
last_child = last_child.contents[-1]
if not accept_self and last_child is self:
last_child = None
return last_child
# BS3: Not part of the API!
_lastRecursiveChild = _last_descendant
def insert(self, position, new_child):
"""Insert a new PageElement in the list of this PageElement's children.
This works the same way as `list.insert`.
:param position: The numeric position that should be occupied
in `self.children` by the new PageElement.
:param new_child: A PageElement.
"""
if new_child is None:
raise ValueError("Cannot insert None into a tag.")
if new_child is self:
raise ValueError("Cannot insert a tag into itself.")
if (isinstance(new_child, str)
and not isinstance(new_child, NavigableString)):
new_child = NavigableString(new_child)
from bs4 import BeautifulSoup
if isinstance(new_child, BeautifulSoup):
# We don't want to end up with a situation where one BeautifulSoup
# object contains another. Insert the children one at a time.
for subchild in list(new_child.contents):
self.insert(position, subchild)
position += 1
return
position = min(position, len(self.contents))
if hasattr(new_child, 'parent') and new_child.parent is not None:
# We're 'inserting' an element that's already one
# of this object's children.
if new_child.parent is self:
current_index = self.index(new_child)
if current_index < position:
# We're moving this element further down the list
# of this object's children. That means that when
# we extract this element, our target index will
# jump down one.
position -= 1
new_child.extract()
new_child.parent = self
previous_child = None
if position == 0:
new_child.previous_sibling = None
new_child.previous_element = self
else:
previous_child = self.contents[position - 1]
new_child.previous_sibling = previous_child
new_child.previous_sibling.next_sibling = new_child
new_child.previous_element = previous_child._last_descendant(False)
if new_child.previous_element is not None:
new_child.previous_element.next_element = new_child
new_childs_last_element = new_child._last_descendant(False)
if position >= len(self.contents):
new_child.next_sibling = None
parent = self
parents_next_sibling = None
while parents_next_sibling is None and parent is not None:
parents_next_sibling = parent.next_sibling
parent = parent.parent
if parents_next_sibling is not None:
# We found the element that comes next in the document.
break
if parents_next_sibling is not None:
new_childs_last_element.next_element = parents_next_sibling
else:
# The last element of this tag is the last element in
# the document.
new_childs_last_element.next_element = None
else:
next_child = self.contents[position]
new_child.next_sibling = next_child
if new_child.next_sibling is not None:
new_child.next_sibling.previous_sibling = new_child
new_childs_last_element.next_element = next_child
if new_childs_last_element.next_element is not None:
new_childs_last_element.next_element.previous_element = new_childs_last_element
self.contents.insert(position, new_child)
def append(self, tag):
"""Appends the given PageElement to the contents of this one.
:param tag: A PageElement.
"""
self.insert(len(self.contents), tag)
def extend(self, tags):
"""Appends the given PageElements to this one's contents.
:param tags: A list of PageElements.
"""
if isinstance(tags, Tag):
# Calling self.append() on another tag's contents will change
# the list we're iterating over. Make a list that won't
# change.
tags = list(tags.contents)
for tag in tags:
self.append(tag)
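    # Hedged usage example for append()/extend() (assumes a soup object;
    # bare strings are wrapped into NavigableStrings by insert()):
    #
    #     soup = BeautifulSoup("<a></a>", "html.parser")
    #     soup.a.append("Hello")
    #     soup.a.extend([" ", "world"])
    #     # <a>Hello world</a>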
def insert_before(self, *args):
"""Makes the given element(s) the immediate predecessor of this one.
All the elements will have the same parent, and the given elements
will be immediately before this one.
:param args: One or more PageElements.
"""
parent = self.parent
if parent is None:
raise ValueError(
"Element has no parent, so 'before' has no meaning.")
if any(x is self for x in args):
raise ValueError("Can't insert an element before itself.")
for predecessor in args:
# Extract first so that the index won't be screwed up if they
# are siblings.
if isinstance(predecessor, PageElement):
predecessor.extract()
index = parent.index(self)
parent.insert(index, predecessor)
def insert_after(self, *args):
"""Makes the given element(s) the immediate successor of this one.
The elements will have the same parent, and the given elements
will be immediately after this one.
:param args: One or more PageElements.
"""
# Do all error checking before modifying the tree.
parent = self.parent
if parent is None:
raise ValueError(
"Element has no parent, so 'after' has no meaning.")
if any(x is self for x in args):
raise ValueError("Can't insert an element after itself.")
offset = 0
for successor in args:
# Extract first so that the index won't be screwed up if they
# are siblings.
if isinstance(successor, PageElement):
successor.extract()
index = parent.index(self)
parent.insert(index+1+offset, successor)
offset += 1
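    # A minimal usage sketch of insert_before()/insert_after()
    # (markup is hypothetical):
    #
    #   soup = BeautifulSoup("<p><b>two</b></p>", "html.parser")
    #   soup.b.insert_before("one ")   # <p>one <b>two</b></p>
    #   soup.b.insert_after(" three")  # <p>one <b>two</b> three</p>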
def find_next(self, name=None, attrs={}, text=None, **kwargs):
"""Find the first PageElement that matches the given criteria and
appears later in the document than this PageElement.
All find_* methods take a common set of arguments. See the online
documentation for detailed explanations.
:param name: A filter on tag name.
:param attrs: A dictionary of filters on attribute values.
:param text: A filter for a NavigableString with specific text.
:kwargs: A dictionary of filters on attribute values.
:return: A PageElement.
:rtype: bs4.element.Tag | bs4.element.NavigableString
"""
return self._find_one(self.find_all_next, name, attrs, text, **kwargs)
findNext = find_next # BS3
def find_all_next(self, name=None, attrs={}, text=None, limit=None,
**kwargs):
"""Find all PageElements that match the given criteria and appear
later in the document than this PageElement.
All find_* methods take a common set of arguments. See the online
documentation for detailed explanations.
:param name: A filter on tag name.
:param attrs: A dictionary of filters on attribute values.
:param text: A filter for a NavigableString with specific text.
:param limit: Stop looking after finding this many results.
:kwargs: A dictionary of filters on attribute values.
:return: A ResultSet containing PageElements.
"""
return self._find_all(name, attrs, text, limit, self.next_elements,
**kwargs)
findAllNext = find_all_next # BS3
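    # Sketch of the find_next()/find_all_next() distinction
    # (markup is hypothetical):
    #
    #   soup = BeautifulSoup("<a></a><p>1</p><p>2</p>", "html.parser")
    #   soup.a.find_next("p")      # <p>1</p>
    #   soup.a.find_all_next("p")  # [<p>1</p>, <p>2</p>]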
def find_next_sibling(self, name=None, attrs={}, text=None, **kwargs):
"""Find the closest sibling to this PageElement that matches the
given criteria and appears later in the document.
All find_* methods take a common set of arguments. See the
online documentation for detailed explanations.
:param name: A filter on tag name.
:param attrs: A dictionary of filters on attribute values.
:param text: A filter for a NavigableString with specific text.
:kwargs: A dictionary of filters on attribute values.
:return: A PageElement.
:rtype: bs4.element.Tag | bs4.element.NavigableString
"""
return self._find_one(self.find_next_siblings, name, attrs, text,
**kwargs)
findNextSibling = find_next_sibling # BS3
def find_next_siblings(self, name=None, attrs={}, text=None, limit=None,
**kwargs):
"""Find all siblings of this PageElement that match the given criteria
and appear later in the document.
All find_* methods take a common set of arguments. See the online
documentation for detailed explanations.
:param name: A filter on tag name.
:param attrs: A dictionary of filters on attribute values.
:param text: A filter for a NavigableString with specific text.
:param limit: Stop looking after finding this many results.
:kwargs: A dictionary of filters on attribute values.
:return: A ResultSet of PageElements.
:rtype: bs4.element.ResultSet
"""
return self._find_all(name, attrs, text, limit,
self.next_siblings, **kwargs)
findNextSiblings = find_next_siblings # BS3
fetchNextSiblings = find_next_siblings # BS2
def find_previous(self, name=None, attrs={}, text=None, **kwargs):
"""Look backwards in the document from this PageElement and find the
first PageElement that matches the given criteria.
All find_* methods take a common set of arguments. See the online
documentation for detailed explanations.
:param name: A filter on tag name.
:param attrs: A dictionary of filters on attribute values.
:param text: A filter for a NavigableString with specific text.
:kwargs: A dictionary of filters on attribute values.
:return: A PageElement.
:rtype: bs4.element.Tag | bs4.element.NavigableString
"""
return self._find_one(
self.find_all_previous, name, attrs, text, **kwargs)
findPrevious = find_previous # BS3
def find_all_previous(self, name=None, attrs={}, text=None, limit=None,
**kwargs):
"""Look backwards in the document from this PageElement and find all
PageElements that match the given criteria.
All find_* methods take a common set of arguments. See the online
documentation for detailed explanations.
:param name: A filter on tag name.
:param attrs: A dictionary of filters on attribute values.
:param text: A filter for a NavigableString with specific text.
:param limit: Stop looking after finding this many results.
:kwargs: A dictionary of filters on attribute values.
:return: A ResultSet of PageElements.
:rtype: bs4.element.ResultSet
"""
return self._find_all(name, attrs, text, limit, self.previous_elements,
**kwargs)
findAllPrevious = find_all_previous # BS3
fetchPrevious = find_all_previous # BS2
def find_previous_sibling(self, name=None, attrs={}, text=None, **kwargs):
"""Returns the closest sibling to this PageElement that matches the
given criteria and appears earlier in the document.
All find_* methods take a common set of arguments. See the online
documentation for detailed explanations.
:param name: A filter on tag name.
:param attrs: A dictionary of filters on attribute values.
:param text: A filter for a NavigableString with specific text.
:kwargs: A dictionary of filters on attribute values.
:return: A PageElement.
:rtype: bs4.element.Tag | bs4.element.NavigableString
"""
return self._find_one(self.find_previous_siblings, name, attrs, text,
**kwargs)
findPreviousSibling = find_previous_sibling # BS3
def find_previous_siblings(self, name=None, attrs={}, text=None,
limit=None, **kwargs):
"""Returns all siblings to this PageElement that match the
given criteria and appear earlier in the document.
All find_* methods take a common set of arguments. See the online
documentation for detailed explanations.
:param name: A filter on tag name.
:param attrs: A dictionary of filters on attribute values.
:param text: A filter for a NavigableString with specific text.
:param limit: Stop looking after finding this many results.
:kwargs: A dictionary of filters on attribute values.
:return: A ResultSet of PageElements.
:rtype: bs4.element.ResultSet
"""
return self._find_all(name, attrs, text, limit,
self.previous_siblings, **kwargs)
findPreviousSiblings = find_previous_siblings # BS3
fetchPreviousSiblings = find_previous_siblings # BS2
def find_parent(self, name=None, attrs={}, **kwargs):
"""Find the closest parent of this PageElement that matches the given
criteria.
All find_* methods take a common set of arguments. See the online
documentation for detailed explanations.
:param name: A filter on tag name.
:param attrs: A dictionary of filters on attribute values.
:kwargs: A dictionary of filters on attribute values.
:return: A PageElement.
:rtype: bs4.element.Tag | bs4.element.NavigableString
"""
# NOTE: We can't use _find_one because findParents takes a different
# set of arguments.
r = None
l = self.find_parents(name, attrs, 1, **kwargs)
if l:
r = l[0]
return r
findParent = find_parent # BS3
def find_parents(self, name=None, attrs={}, limit=None, **kwargs):
"""Find all parents of this PageElement that match the given criteria.
All find_* methods take a common set of arguments. See the online
documentation for detailed explanations.
:param name: A filter on tag name.
:param attrs: A dictionary of filters on attribute values.
:param limit: Stop looking after finding this many results.
:kwargs: A dictionary of filters on attribute values.
        :return: A ResultSet of PageElements.
        :rtype: bs4.element.ResultSet
"""
return self._find_all(name, attrs, None, limit, self.parents,
**kwargs)
findParents = find_parents # BS3
fetchParents = find_parents # BS2
@property
def next(self):
"""The PageElement, if any, that was parsed just after this one.
:return: A PageElement.
:rtype: bs4.element.Tag | bs4.element.NavigableString
"""
return self.next_element
@property
def previous(self):
"""The PageElement, if any, that was parsed just before this one.
:return: A PageElement.
:rtype: bs4.element.Tag | bs4.element.NavigableString
"""
return self.previous_element
#These methods do the real heavy lifting.
def _find_one(self, method, name, attrs, text, **kwargs):
r = None
l = method(name, attrs, text, 1, **kwargs)
if l:
r = l[0]
return r
def _find_all(self, name, attrs, text, limit, generator, **kwargs):
"Iterates over a generator looking for things that match."
if text is None and 'string' in kwargs:
text = kwargs['string']
del kwargs['string']
if isinstance(name, SoupStrainer):
strainer = name
else:
strainer = SoupStrainer(name, attrs, text, **kwargs)
if text is None and not limit and not attrs and not kwargs:
if name is True or name is None:
# Optimization to find all tags.
result = (element for element in generator
if isinstance(element, Tag))
return ResultSet(strainer, result)
elif isinstance(name, str):
# Optimization to find all tags with a given name.
if name.count(':') == 1:
# This is a name with a prefix. If this is a namespace-aware document,
# we need to match the local name against tag.name. If not,
# we need to match the fully-qualified name against tag.name.
prefix, local_name = name.split(':', 1)
else:
prefix = None
local_name = name
                # Parenthesized so that the prefix/local-name alternative
                # can never match a non-Tag element (a bare `or` here would
                # let non-Tag elements slip through).
                result = (element for element in generator
                          if isinstance(element, Tag)
                          and (
                              element.name == name
                              or (
                                  element.name == local_name
                                  and (prefix is None or element.prefix == prefix)
                              )
                          )
                )
return ResultSet(strainer, result)
results = ResultSet(strainer)
while True:
try:
i = next(generator)
except StopIteration:
break
if i:
found = strainer.search(i)
if found:
results.append(found)
if limit and len(results) >= limit:
break
return results
#These generators can be used to navigate starting from both
#NavigableStrings and Tags.
@property
def next_elements(self):
"""All PageElements that were parsed after this one.
:yield: A sequence of PageElements.
"""
i = self.next_element
while i is not None:
yield i
i = i.next_element
@property
def next_siblings(self):
"""All PageElements that are siblings of this one but were parsed
later.
:yield: A sequence of PageElements.
"""
i = self.next_sibling
while i is not None:
yield i
i = i.next_sibling
@property
def previous_elements(self):
"""All PageElements that were parsed before this one.
:yield: A sequence of PageElements.
"""
i = self.previous_element
while i is not None:
yield i
i = i.previous_element
@property
def previous_siblings(self):
"""All PageElements that are siblings of this one but were parsed
earlier.
:yield: A sequence of PageElements.
"""
i = self.previous_sibling
while i is not None:
yield i
i = i.previous_sibling
@property
def parents(self):
"""All PageElements that are parents of this PageElement.
:yield: A sequence of PageElements.
"""
i = self.parent
while i is not None:
yield i
i = i.parent
@property
def decomposed(self):
"""Check whether a PageElement has been decomposed.
:rtype: bool
"""
return getattr(self, '_decomposed', False) or False
# Old non-property versions of the generators, for backwards
# compatibility with BS3.
def nextGenerator(self):
return self.next_elements
def nextSiblingGenerator(self):
return self.next_siblings
def previousGenerator(self):
return self.previous_elements
def previousSiblingGenerator(self):
return self.previous_siblings
def parentGenerator(self):
return self.parents
class NavigableString(str, PageElement):
"""A Python Unicode string that is part of a parse tree.
When Beautiful Soup parses the markup <b>penguin</b>, it will
create a NavigableString for the string "penguin".
"""
PREFIX = ''
SUFFIX = ''
# We can't tell just by looking at a string whether it's contained
# in an XML document or an HTML document.
known_xml = None
def __new__(cls, value):
"""Create a new NavigableString.
When unpickling a NavigableString, this method is called with
the string in DEFAULT_OUTPUT_ENCODING. That encoding needs to be
passed in to the superclass's __new__ or the superclass won't know
how to handle non-ASCII characters.
"""
if isinstance(value, str):
u = str.__new__(cls, value)
else:
u = str.__new__(cls, value, DEFAULT_OUTPUT_ENCODING)
u.setup()
return u
def __copy__(self):
"""A copy of a NavigableString has the same contents and class
as the original, but it is not connected to the parse tree.
"""
return type(self)(self)
def __getnewargs__(self):
return (str(self),)
def __getattr__(self, attr):
"""text.string gives you text. This is for backwards
compatibility for Navigable*String, but for CData* it lets you
get the string without the CData wrapper."""
if attr == 'string':
return self
else:
raise AttributeError(
"'%s' object has no attribute '%s'" % (
self.__class__.__name__, attr))
def output_ready(self, formatter="minimal"):
"""Run the string through the provided formatter.
:param formatter: A Formatter object, or a string naming one of the standard formatters.
"""
output = self.format_string(self, formatter)
return self.PREFIX + output + self.SUFFIX
@property
def name(self):
"""Since a NavigableString is not a Tag, it has no .name.
This property is implemented so that code like this doesn't crash
when run on a mixture of Tag and NavigableString objects:
[x.name for x in tag.children]
"""
return None
@name.setter
def name(self, name):
"""Prevent NavigableString.name from ever being set."""
raise AttributeError("A NavigableString cannot be given a name.")
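    # Sketch: a NavigableString behaves like str but knows its place in
    # the tree (markup is hypothetical):
    #
    #   soup = BeautifulSoup("<b>penguin</b>", "html.parser")
    #   s = soup.b.string       # NavigableString('penguin')
    #   isinstance(s, str)      # True
    #   s.parent                # the <b> Tag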
class PreformattedString(NavigableString):
"""A NavigableString not subject to the normal formatting rules.
This is an abstract class used for special kinds of strings such
as comments (the Comment class) and CDATA blocks (the CData
class).
"""
PREFIX = ''
SUFFIX = ''
def output_ready(self, formatter=None):
"""Make this string ready for output by adding any subclass-specific
prefix or suffix.
:param formatter: A Formatter object, or a string naming one
of the standard formatters. The string will be passed into the
Formatter, but only to trigger any side effects: the return
value is ignored.
:return: The string, with any subclass-specific prefix and
suffix added on.
"""
if formatter is not None:
ignore = self.format_string(self, formatter)
return self.PREFIX + self + self.SUFFIX
class CData(PreformattedString):
"""A CDATA block."""
PREFIX = '<![CDATA['
SUFFIX = ']]>'
class ProcessingInstruction(PreformattedString):
"""A SGML processing instruction."""
PREFIX = '<?'
SUFFIX = '>'
class XMLProcessingInstruction(ProcessingInstruction):
"""An XML processing instruction."""
PREFIX = '<?'
SUFFIX = '?>'
class Comment(PreformattedString):
"""An HTML or XML comment."""
PREFIX = '<!--'
SUFFIX = '-->'
class Declaration(PreformattedString):
"""An XML declaration."""
PREFIX = '<?'
SUFFIX = '?>'
class Doctype(PreformattedString):
"""A document type declaration."""
@classmethod
def for_name_and_ids(cls, name, pub_id, system_id):
"""Generate an appropriate document type declaration for a given
public ID and system ID.
:param name: The name of the document's root element, e.g. 'html'.
:param pub_id: The Formal Public Identifier for this document type,
e.g. '-//W3C//DTD XHTML 1.1//EN'
:param system_id: The system identifier for this document type,
e.g. 'http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd'
:return: A Doctype.
"""
value = name or ''
if pub_id is not None:
value += ' PUBLIC "%s"' % pub_id
if system_id is not None:
value += ' "%s"' % system_id
elif system_id is not None:
value += ' SYSTEM "%s"' % system_id
return Doctype(value)
PREFIX = '<!DOCTYPE '
SUFFIX = '>\n'
class Stylesheet(NavigableString):
"""A NavigableString representing an stylesheet (probably
CSS).
Used to distinguish embedded stylesheets from textual content.
"""
pass
class Script(NavigableString):
"""A NavigableString representing an executable script (probably
Javascript).
Used to distinguish executable code from textual content.
"""
pass
class TemplateString(NavigableString):
"""A NavigableString representing a string found inside an HTML
template embedded in a larger document.
Used to distinguish such strings from the main body of the document.
"""
pass
class Tag(PageElement):
"""Represents an HTML or XML tag that is part of a parse tree, along
with its attributes and contents.
When Beautiful Soup parses the markup <b>penguin</b>, it will
create a Tag object representing the <b> tag.
"""
def __init__(self, parser=None, builder=None, name=None, namespace=None,
prefix=None, attrs=None, parent=None, previous=None,
is_xml=None, sourceline=None, sourcepos=None,
can_be_empty_element=None, cdata_list_attributes=None,
preserve_whitespace_tags=None
):
"""Basic constructor.
:param parser: A BeautifulSoup object.
:param builder: A TreeBuilder.
:param name: The name of the tag.
:param namespace: The URI of this Tag's XML namespace, if any.
:param prefix: The prefix for this Tag's XML namespace, if any.
:param attrs: A dictionary of this Tag's attribute values.
:param parent: The PageElement to use as this Tag's parent.
:param previous: The PageElement that was parsed immediately before
this tag.
:param is_xml: If True, this is an XML tag. Otherwise, this is an
HTML tag.
:param sourceline: The line number where this tag was found in its
source document.
:param sourcepos: The character position within `sourceline` where this
tag was found.
:param can_be_empty_element: If True, this tag should be
represented as <tag/>. If False, this tag should be represented
as <tag></tag>.
:param cdata_list_attributes: A list of attributes whose values should
be treated as CDATA if they ever show up on this tag.
:param preserve_whitespace_tags: A list of tag names whose contents
should have their whitespace preserved.
"""
if parser is None:
self.parser_class = None
else:
# We don't actually store the parser object: that lets extracted
# chunks be garbage-collected.
self.parser_class = parser.__class__
if name is None:
raise ValueError("No value provided for new tag's name.")
self.name = name
self.namespace = namespace
self.prefix = prefix
if ((not builder or builder.store_line_numbers)
and (sourceline is not None or sourcepos is not None)):
self.sourceline = sourceline
self.sourcepos = sourcepos
if attrs is None:
attrs = {}
elif attrs:
if builder is not None and builder.cdata_list_attributes:
attrs = builder._replace_cdata_list_attribute_values(
self.name, attrs)
else:
attrs = dict(attrs)
else:
attrs = dict(attrs)
# If possible, determine ahead of time whether this tag is an
# XML tag.
if builder:
self.known_xml = builder.is_xml
else:
self.known_xml = is_xml
self.attrs = attrs
self.contents = []
self.setup(parent, previous)
self.hidden = False
if builder is None:
# In the absence of a TreeBuilder, use whatever values were
# passed in here. They're probably None, unless this is a copy of some
# other tag.
self.can_be_empty_element = can_be_empty_element
self.cdata_list_attributes = cdata_list_attributes
self.preserve_whitespace_tags = preserve_whitespace_tags
else:
# Set up any substitutions for this tag, such as the charset in a META tag.
builder.set_up_substitutions(self)
# Ask the TreeBuilder whether this tag might be an empty-element tag.
self.can_be_empty_element = builder.can_be_empty_element(name)
# Keep track of the list of attributes of this tag that
# might need to be treated as a list.
#
# For performance reasons, we store the whole data structure
# rather than asking the question of every tag. Asking would
# require building a new data structure every time, and
# (unlike can_be_empty_element), we almost never need
# to check this.
self.cdata_list_attributes = builder.cdata_list_attributes
# Keep track of the names that might cause this tag to be treated as a
# whitespace-preserved tag.
self.preserve_whitespace_tags = builder.preserve_whitespace_tags
parserClass = _alias("parser_class") # BS3
def __copy__(self):
"""A copy of a Tag is a new Tag, unconnected to the parse tree.
Its contents are a copy of the old Tag's contents.
"""
clone = type(self)(
None, self.builder, self.name, self.namespace,
self.prefix, self.attrs, is_xml=self._is_xml,
sourceline=self.sourceline, sourcepos=self.sourcepos,
can_be_empty_element=self.can_be_empty_element,
cdata_list_attributes=self.cdata_list_attributes,
preserve_whitespace_tags=self.preserve_whitespace_tags
)
for attr in ('can_be_empty_element', 'hidden'):
setattr(clone, attr, getattr(self, attr))
for child in self.contents:
clone.append(child.__copy__())
return clone
@property
def is_empty_element(self):
"""Is this tag an empty-element tag? (aka a self-closing tag)
A tag that has contents is never an empty-element tag.
A tag that has no contents may or may not be an empty-element
tag. It depends on the builder used to create the tag. If the
builder has a designated list of empty-element tags, then only
a tag whose name shows up in that list is considered an
empty-element tag.
If the builder has no designated list of empty-element tags,
then any tag with no contents is an empty-element tag.
"""
return len(self.contents) == 0 and self.can_be_empty_element
isSelfClosing = is_empty_element # BS3
@property
def string(self):
"""Convenience property to get the single string within this
PageElement.
TODO It might make sense to have NavigableString.string return
itself.
:return: If this element has a single string child, return
value is that string. If this element has one child tag,
return value is the 'string' attribute of the child tag,
recursively. If this element is itself a string, has no
children, or has more than one child, return value is None.
"""
if len(self.contents) != 1:
return None
child = self.contents[0]
if isinstance(child, NavigableString):
return child
return child.string
@string.setter
def string(self, string):
"""Replace this PageElement's contents with `string`."""
self.clear()
self.append(string.__class__(string))
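    # Sketch of the .string getter/setter pair (markup is hypothetical):
    #
    #   soup = BeautifulSoup("<b>penguin</b>", "html.parser")
    #   soup.b.string            # 'penguin'
    #   soup.b.string = 'puffin' # <b>puffin</b>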
def _all_strings(self, strip=False, types=(NavigableString, CData)):
"""Yield all strings of certain classes, possibly stripping them.
:param strip: If True, all strings will be stripped before being
yielded.
:types: A tuple of NavigableString subclasses. Any strings of
a subclass not found in this list will be ignored. By
default, this means only NavigableString and CData objects
will be considered. So no comments, processing instructions,
etc.
:yield: A sequence of strings.
"""
for descendant in self.descendants:
if (
(types is None and not isinstance(descendant, NavigableString))
or
(types is not None and type(descendant) not in types)):
continue
if strip:
descendant = descendant.strip()
if len(descendant) == 0:
continue
yield descendant
strings = property(_all_strings)
@property
def stripped_strings(self):
"""Yield all strings in the document, stripping them first.
:yield: A sequence of stripped strings.
"""
for string in self._all_strings(True):
yield string
def get_text(self, separator="", strip=False,
types=(NavigableString, CData)):
"""Get all child strings, concatenated using the given separator.
:param separator: Strings will be concatenated using this separator.
:param strip: If True, strings will be stripped before being
concatenated.
:types: A tuple of NavigableString subclasses. Any strings of
a subclass not found in this list will be ignored. By
default, this means only NavigableString and CData objects
will be considered. So no comments, processing instructions,
stylesheets, etc.
:return: A string.
"""
return separator.join([s for s in self._all_strings(
strip, types=types)])
getText = get_text
text = property(get_text)
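    # Sketch: get_text() joins all descendant strings (markup is
    # hypothetical):
    #
    #   soup = BeautifulSoup("<p>Hi <b>there</b></p>", "html.parser")
    #   soup.p.get_text()                 # 'Hi there'
    #   soup.p.get_text("|", strip=True)  # 'Hi|there'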
def decompose(self):
"""Recursively destroys this PageElement and its children.
This element will be removed from the tree and wiped out; so
will everything beneath it.
The behavior of a decomposed PageElement is undefined and you
should never use one for anything, but if you need to _check_
whether an element has been decomposed, you can use the
`decomposed` property.
"""
self.extract()
i = self
while i is not None:
n = i.next_element
i.__dict__.clear()
i.contents = []
i._decomposed = True
i = n
def clear(self, decompose=False):
"""Wipe out all children of this PageElement by calling extract()
on them.
:param decompose: If this is True, decompose() (a more
destructive method) will be called instead of extract().
"""
if decompose:
for element in self.contents[:]:
if isinstance(element, Tag):
element.decompose()
else:
element.extract()
else:
for element in self.contents[:]:
element.extract()
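    # Sketch: clear() detaches children, while decompose=True destroys
    # them (markup is hypothetical):
    #
    #   soup = BeautifulSoup("<div><a>x</a></div>", "html.parser")
    #   soup.div.clear()                # <div></div>; the <a> is still usable
    #   soup.div.clear(decompose=True)  # children would be destroyed instead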
def smooth(self):
"""Smooth out this element's children by consolidating consecutive
strings.
This makes pretty-printed output look more natural following a
lot of operations that modified the tree.
"""
# Mark the first position of every pair of children that need
# to be consolidated. Do this rather than making a copy of
# self.contents, since in most cases very few strings will be
# affected.
marked = []
for i, a in enumerate(self.contents):
if isinstance(a, Tag):
# Recursively smooth children.
a.smooth()
if i == len(self.contents)-1:
# This is the last item in .contents, and it's not a
# tag. There's no chance it needs any work.
continue
b = self.contents[i+1]
if (isinstance(a, NavigableString)
and isinstance(b, NavigableString)
and not isinstance(a, PreformattedString)
and not isinstance(b, PreformattedString)
):
marked.append(i)
# Go over the marked positions in reverse order, so that
# removing items from .contents won't affect the remaining
# positions.
for i in reversed(marked):
a = self.contents[i]
b = self.contents[i+1]
b.extract()
n = NavigableString(a+b)
a.replace_with(n)
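    # Sketch: smooth() merges adjacent strings left over from tree edits
    # (markup is hypothetical):
    #
    #   p = BeautifulSoup("<p>one</p>", "html.parser").p
    #   p.append("two")
    #   p.contents   # ['one', 'two']
    #   p.smooth()
    #   p.contents   # ['onetwo']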
def index(self, element):
"""Find the index of a child by identity, not value.
Avoids issues with tag.contents.index(element) getting the
index of equal elements.
:param element: Look for this PageElement in `self.contents`.
"""
for i, child in enumerate(self.contents):
if child is element:
return i
raise ValueError("Tag.index: element not in tag")
def get(self, key, default=None):
"""Returns the value of the 'key' attribute for the tag, or
the value given for 'default' if it doesn't have that
attribute."""
return self.attrs.get(key, default)
def get_attribute_list(self, key, default=None):
"""The same as get(), but always returns a list.
:param key: The attribute to look for.
:param default: Use this value if the attribute is not present
on this PageElement.
:return: A list of values, probably containing only a single
value.
"""
value = self.get(key, default)
if not isinstance(value, list):
value = [value]
return value
def has_attr(self, key):
"""Does this PageElement have an attribute with the given name?"""
return key in self.attrs
def __hash__(self):
return str(self).__hash__()
def __getitem__(self, key):
"""tag[key] returns the value of the 'key' attribute for the Tag,
and throws an exception if it's not there."""
return self.attrs[key]
def __iter__(self):
"Iterating over a Tag iterates over its contents."
return iter(self.contents)
def __len__(self):
"The length of a Tag is the length of its list of contents."
return len(self.contents)
def __contains__(self, x):
return x in self.contents
def __bool__(self):
"A tag is non-None even if it has no contents."
return True
def __setitem__(self, key, value):
"""Setting tag[key] sets the value of the 'key' attribute for the
tag."""
self.attrs[key] = value
def __delitem__(self, key):
"Deleting tag[key] deletes all 'key' attributes for the tag."
self.attrs.pop(key, None)
def __call__(self, *args, **kwargs):
"""Calling a Tag like a function is the same as calling its
find_all() method. Eg. tag('a') returns a list of all the A tags
found within this tag."""
return self.find_all(*args, **kwargs)
def __getattr__(self, tag):
"""Calling tag.subtag is the same as calling tag.find(name="subtag")"""
#print("Getattr %s.%s" % (self.__class__, tag))
if len(tag) > 3 and tag.endswith('Tag'):
# BS3: soup.aTag -> "soup.find("a")
tag_name = tag[:-3]
warnings.warn(
'.%(name)sTag is deprecated, use .find("%(name)s") instead. If you really were looking for a tag called %(name)sTag, use .find("%(name)sTag")' % dict(
name=tag_name
)
)
return self.find(tag_name)
# We special case contents to avoid recursion.
elif not tag.startswith("__") and not tag == "contents":
return self.find(tag)
raise AttributeError(
"'%s' object has no attribute '%s'" % (self.__class__, tag))
def __eq__(self, other):
"""Returns true iff this Tag has the same name, the same attributes,
and the same contents (recursively) as `other`."""
if self is other:
return True
if (not hasattr(other, 'name') or
not hasattr(other, 'attrs') or
not hasattr(other, 'contents') or
self.name != other.name or
self.attrs != other.attrs or
len(self) != len(other)):
return False
for i, my_child in enumerate(self.contents):
if my_child != other.contents[i]:
return False
return True
def __ne__(self, other):
"""Returns true iff this Tag is not identical to `other`,
as defined in __eq__."""
return not self == other
def __repr__(self, encoding="unicode-escape"):
"""Renders this PageElement as a string.
:param encoding: The encoding to use (Python 2 only).
:return: Under Python 2, a bytestring; under Python 3,
a Unicode string.
"""
if PY3K:
# "The return value must be a string object", i.e. Unicode
return self.decode()
else:
# "The return value must be a string object", i.e. a bytestring.
# By convention, the return value of __repr__ should also be
# an ASCII string.
return self.encode(encoding)
def __unicode__(self):
"""Renders this PageElement as a Unicode string."""
return self.decode()
def __str__(self):
"""Renders this PageElement as a generic string.
:return: Under Python 2, a UTF-8 bytestring; under Python 3,
a Unicode string.
"""
if PY3K:
return self.decode()
else:
return self.encode()
if PY3K:
__str__ = __repr__ = __unicode__
def encode(self, encoding=DEFAULT_OUTPUT_ENCODING,
indent_level=None, formatter="minimal",
errors="xmlcharrefreplace"):
"""Render a bytestring representation of this PageElement and its
contents.
:param encoding: The destination encoding.
:param indent_level: Each line of the rendering will be
indented this many spaces. Used internally in
recursive calls while pretty-printing.
:param formatter: A Formatter object, or a string naming one of
the standard formatters.
:param errors: An error handling strategy such as
'xmlcharrefreplace'. This value is passed along into
encode() and its value should be one of the constants
defined by Python.
:return: A bytestring.
"""
# Turn the data structure into Unicode, then encode the
# Unicode.
u = self.decode(indent_level, encoding, formatter)
return u.encode(encoding, errors)
def decode(self, indent_level=None,
eventual_encoding=DEFAULT_OUTPUT_ENCODING,
formatter="minimal"):
"""Render a Unicode representation of this PageElement and its
contents.
:param indent_level: Each line of the rendering will be
indented this many spaces. Used internally in
recursive calls while pretty-printing.
:param eventual_encoding: The tag is destined to be
encoded into this encoding. This method is _not_
responsible for performing that encoding. This information
is passed in so that it can be substituted in if the
document contains a <META> tag that mentions the document's
encoding.
:param formatter: A Formatter object, or a string naming one of
the standard formatters.
"""
# First off, turn a non-Formatter `formatter` into a Formatter
# object. This will stop the lookup from happening over and
# over again.
if not isinstance(formatter, Formatter):
formatter = self.formatter_for_name(formatter)
attributes = formatter.attributes(self)
attrs = []
for key, val in attributes:
if val is None:
decoded = key
else:
if isinstance(val, list) or isinstance(val, tuple):
val = ' '.join(val)
elif not isinstance(val, str):
val = str(val)
elif (
isinstance(val, AttributeValueWithCharsetSubstitution)
and eventual_encoding is not None
):
val = val.encode(eventual_encoding)
text = formatter.attribute_value(val)
decoded = (
str(key) + '='
+ formatter.quoted_attribute_value(text))
attrs.append(decoded)
close = ''
closeTag = ''
prefix = ''
if self.prefix:
prefix = self.prefix + ":"
if self.is_empty_element:
close = formatter.void_element_close_prefix or ''
else:
closeTag = '</%s%s>' % (prefix, self.name)
pretty_print = self._should_pretty_print(indent_level)
space = ''
indent_space = ''
if indent_level is not None:
indent_space = (' ' * (indent_level - 1))
if pretty_print:
space = indent_space
indent_contents = indent_level + 1
else:
indent_contents = None
contents = self.decode_contents(
indent_contents, eventual_encoding, formatter
)
if self.hidden:
# This is the 'document root' object.
s = contents
else:
s = []
attribute_string = ''
if attrs:
attribute_string = ' ' + ' '.join(attrs)
if indent_level is not None:
# Even if this particular tag is not pretty-printed,
# we should indent up to the start of the tag.
s.append(indent_space)
s.append('<%s%s%s%s>' % (
prefix, self.name, attribute_string, close))
if pretty_print:
s.append("\n")
s.append(contents)
if pretty_print and contents and contents[-1] != "\n":
s.append("\n")
if pretty_print and closeTag:
s.append(space)
s.append(closeTag)
if indent_level is not None and closeTag and self.next_sibling:
# Even if this particular tag is not pretty-printed,
# we're now done with the tag, and we should add a
# newline if appropriate.
s.append("\n")
s = ''.join(s)
return s
def _should_pretty_print(self, indent_level):
"""Should this tag be pretty-printed?
Most of them should, but some (such as <pre> in HTML
documents) should not.
"""
return (
indent_level is not None
and (
not self.preserve_whitespace_tags
or self.name not in self.preserve_whitespace_tags
)
)
def prettify(self, encoding=None, formatter="minimal"):
"""Pretty-print this PageElement as a string.
:param encoding: The eventual encoding of the string. If this is None,
a Unicode string will be returned.
:param formatter: A Formatter object, or a string naming one of
the standard formatters.
:return: A Unicode string (if encoding==None) or a bytestring
(otherwise).
"""
if encoding is None:
return self.decode(True, formatter=formatter)
else:
return self.encode(encoding, True, formatter=formatter)
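    # Sketch of prettify() output (markup is hypothetical):
    #
    #   soup = BeautifulSoup("<p><b>hi</b></p>", "html.parser")
    #   print(soup.p.prettify())
    #   # <p>
    #   #  <b>
    #   #   hi
    #   #  </b>
    #   # </p>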
def decode_contents(self, indent_level=None,
eventual_encoding=DEFAULT_OUTPUT_ENCODING,
formatter="minimal"):
"""Renders the contents of this tag as a Unicode string.
:param indent_level: Each line of the rendering will be
indented this many spaces. Used internally in
recursive calls while pretty-printing.
:param eventual_encoding: The tag is destined to be
encoded into this encoding. decode_contents() is _not_
responsible for performing that encoding. This information
is passed in so that it can be substituted in if the
document contains a <META> tag that mentions the document's
encoding.
:param formatter: A Formatter object, or a string naming one of
the standard Formatters.
"""
# First off, turn a string formatter into a Formatter object. This
# will stop the lookup from happening over and over again.
if not isinstance(formatter, Formatter):
formatter = self.formatter_for_name(formatter)
pretty_print = (indent_level is not None)
s = []
for c in self:
text = None
if isinstance(c, NavigableString):
text = c.output_ready(formatter)
elif isinstance(c, Tag):
s.append(c.decode(indent_level, eventual_encoding,
formatter))
preserve_whitespace = (
self.preserve_whitespace_tags and self.name in self.preserve_whitespace_tags
)
if text and indent_level and not preserve_whitespace:
text = text.strip()
if text:
if pretty_print and not preserve_whitespace:
s.append(" " * (indent_level - 1))
s.append(text)
if pretty_print and not preserve_whitespace:
s.append("\n")
return ''.join(s)
def encode_contents(
self, indent_level=None, encoding=DEFAULT_OUTPUT_ENCODING,
formatter="minimal"):
"""Renders the contents of this PageElement as a bytestring.
:param indent_level: Each line of the rendering will be
indented this many spaces. Used internally in
recursive calls while pretty-printing.
        :param encoding: The bytestring will be in this encoding.
:param formatter: A Formatter object, or a string naming one of
the standard Formatters.
:return: A bytestring.
"""
contents = self.decode_contents(indent_level, encoding, formatter)
return contents.encode(encoding)
# Old method for BS3 compatibility
def renderContents(self, encoding=DEFAULT_OUTPUT_ENCODING,
prettyPrint=False, indentLevel=0):
"""Deprecated method for BS3 compatibility."""
if not prettyPrint:
indentLevel = None
return self.encode_contents(
indent_level=indentLevel, encoding=encoding)
#Soup methods
def find(self, name=None, attrs={}, recursive=True, text=None,
**kwargs):
"""Look in the children of this PageElement and find the first
PageElement that matches the given criteria.
All find_* methods take a common set of arguments. See the online
documentation for detailed explanations.
:param name: A filter on tag name.
:param attrs: A dictionary of filters on attribute values.
:param recursive: If this is True, find() will perform a
recursive search of this PageElement's children. Otherwise,
only the direct children will be considered.
:param limit: Stop looking after finding this many results.
:kwargs: A dictionary of filters on attribute values.
:return: A PageElement.
:rtype: bs4.element.Tag | bs4.element.NavigableString
"""
r = None
l = self.find_all(name, attrs, recursive, text, 1, **kwargs)
if l:
r = l[0]
return r
findChild = find #BS2
def find_all(self, name=None, attrs={}, recursive=True, text=None,
limit=None, **kwargs):
"""Look in the children of this PageElement and find all
PageElements that match the given criteria.
All find_* methods take a common set of arguments. See the online
documentation for detailed explanations.
:param name: A filter on tag name.
:param attrs: A dictionary of filters on attribute values.
:param recursive: If this is True, find_all() will perform a
recursive search of this PageElement's children. Otherwise,
only the direct children will be considered.
:param limit: Stop looking after finding this many results.
:kwargs: A dictionary of filters on attribute values.
:return: A ResultSet of PageElements.
:rtype: bs4.element.ResultSet
"""
generator = self.descendants
if not recursive:
generator = self.children
return self._find_all(name, attrs, text, limit, generator, **kwargs)
findAll = find_all # BS3
findChildren = find_all # BS2
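    # Sketch: find() vs find_all(), plus the calling shortcut
    # (markup is hypothetical):
    #
    #   soup = BeautifulSoup('<a class="x">1</a><a>2</a>', "html.parser")
    #   soup.find("a")               # <a class="x">1</a>
    #   soup.find_all("a", limit=1)  # [<a class="x">1</a>]
    #   soup("a", class_="x")        # same as soup.find_all("a", class_="x")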
#Generator methods
@property
def children(self):
"""Iterate over all direct children of this PageElement.
:yield: A sequence of PageElements.
"""
# return iter() to make the purpose of the method clear
return iter(self.contents) # XXX This seems to be untested.
@property
def descendants(self):
"""Iterate over all children of this PageElement in a
breadth-first sequence.
:yield: A sequence of PageElements.
"""
if not len(self.contents):
return
stopNode = self._last_descendant().next_element
current = self.contents[0]
while current is not stopNode:
yield current
current = current.next_element
# CSS selector code
def select_one(self, selector, namespaces=None, **kwargs):
"""Perform a CSS selection operation on the current element.
:param selector: A CSS selector.
:param namespaces: A dictionary mapping namespace prefixes
used in the CSS selector to namespace URIs. By default,
Beautiful Soup will use the prefixes it encountered while
parsing the document.
:param kwargs: Keyword arguments to be passed into SoupSieve's
soupsieve.select() method.
:return: A Tag.
:rtype: bs4.element.Tag
"""
value = self.select(selector, namespaces, 1, **kwargs)
if value:
return value[0]
return None
def select(self, selector, namespaces=None, limit=None, **kwargs):
"""Perform a CSS selection operation on the current element.
This uses the SoupSieve library.
:param selector: A string containing a CSS selector.
:param namespaces: A dictionary mapping namespace prefixes
used in the CSS selector to namespace URIs. By default,
Beautiful Soup will use the prefixes it encountered while
parsing the document.
:param limit: After finding this number of results, stop looking.
:param kwargs: Keyword arguments to be passed into SoupSieve's
soupsieve.select() method.
:return: A ResultSet of Tags.
:rtype: bs4.element.ResultSet
"""
if namespaces is None:
namespaces = self._namespaces
if limit is None:
limit = 0
if soupsieve is None:
raise NotImplementedError(
"Cannot execute CSS selectors because the soupsieve package is not installed."
)
results = soupsieve.select(selector, self, namespaces, limit, **kwargs)
# We do this because it's more consistent and because
# ResultSet.__getattr__ has a helpful error message.
return ResultSet(None, results)
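    # Sketch (requires the optional soupsieve package; markup is
    # hypothetical):
    #
    #   soup.select("div.article > p")  # every <p> directly inside the div
    #   soup.select_one("a[href]")      # first matching <a>, or None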
# Old names for backwards compatibility
def childGenerator(self):
"""Deprecated generator."""
return self.children
def recursiveChildGenerator(self):
"""Deprecated generator."""
return self.descendants
def has_key(self, key):
"""Deprecated method. This was kind of misleading because has_key()
(attributes) was different from __in__ (contents).
has_key() is gone in Python 3, anyway.
"""
warnings.warn('has_key is deprecated. Use has_attr("%s") instead.' % (
key))
return self.has_attr(key)
# Next, a couple classes to represent queries and their results.
class SoupStrainer(object):
"""Encapsulates a number of ways of matching a markup element (tag or
string).
This is primarily used to underpin the find_* methods, but you can
create one yourself and pass it in as `parse_only` to the
`BeautifulSoup` constructor, to parse a subset of a large
document.
"""
def __init__(self, name=None, attrs={}, text=None, **kwargs):
"""Constructor.
The SoupStrainer constructor takes the same arguments passed
into the find_* methods. See the online documentation for
detailed explanations.
:param name: A filter on tag name.
:param attrs: A dictionary of filters on attribute values.
:param text: A filter for a NavigableString with specific text.
:kwargs: A dictionary of filters on attribute values.
"""
self.name = self._normalize_search_value(name)
if not isinstance(attrs, dict):
# Treat a non-dict value for attrs as a search for the 'class'
# attribute.
kwargs['class'] = attrs
attrs = None
if 'class_' in kwargs:
# Treat class_="foo" as a search for the 'class'
# attribute, overriding any non-dict value for attrs.
kwargs['class'] = kwargs['class_']
del kwargs['class_']
if kwargs:
if attrs:
attrs = attrs.copy()
attrs.update(kwargs)
else:
attrs = kwargs
normalized_attrs = {}
for key, value in list(attrs.items()):
normalized_attrs[key] = self._normalize_search_value(value)
self.attrs = normalized_attrs
self.text = self._normalize_search_value(text)
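    # Sketch of parse_only usage (markup and parser choice are
    # assumptions):
    #
    #   only_links = SoupStrainer("a", href=True)
    #   soup = BeautifulSoup(html, "html.parser", parse_only=only_links)
    #   # soup now contains only the <a> tags from the document.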
def _normalize_search_value(self, value):
# Leave it alone if it's a Unicode string, a callable, a
# regular expression, a boolean, or None.
if (isinstance(value, str) or isinstance(value, Callable) or hasattr(value, 'match')
or isinstance(value, bool) or value is None):
return value
# If it's a bytestring, convert it to Unicode, treating it as UTF-8.
if isinstance(value, bytes):
return value.decode("utf8")
# If it's listlike, convert it into a list of strings.
if hasattr(value, '__iter__'):
new_value = []
for v in value:
if (hasattr(v, '__iter__') and not isinstance(v, bytes)
and not isinstance(v, str)):
# This is almost certainly the user's mistake. In the
# interests of avoiding infinite loops, we'll let
# it through as-is rather than doing a recursive call.
new_value.append(v)
else:
new_value.append(self._normalize_search_value(v))
return new_value
# Otherwise, convert it into a Unicode string.
# The unicode(str()) thing is so this will do the same thing on Python 2
# and Python 3.
return str(str(value))
def __str__(self):
"""A human-readable representation of this SoupStrainer."""
if self.text:
return self.text
else:
return "%s|%s" % (self.name, self.attrs)
def search_tag(self, markup_name=None, markup_attrs={}):
"""Check whether a Tag with the given name and attributes would
match this SoupStrainer.
Used prospectively to decide whether to even bother creating a Tag
object.
:param markup_name: A tag name as found in some markup.
:param markup_attrs: A dictionary of attributes as found in some markup.
:return: True if the prospective tag would match this SoupStrainer;
False otherwise.
"""
found = None
markup = None
if isinstance(markup_name, Tag):
markup = markup_name
markup_attrs = markup
if isinstance(self.name, str):
# Optimization for a very common case where the user is
# searching for a tag with one specific name, and we're
# looking at a tag with a different name.
if markup and not markup.prefix and self.name != markup.name:
return False
call_function_with_tag_data = (
isinstance(self.name, Callable)
and not isinstance(markup_name, Tag))
if ((not self.name)
or call_function_with_tag_data
or (markup and self._matches(markup, self.name))
or (not markup and self._matches(markup_name, self.name))):
if call_function_with_tag_data:
match = self.name(markup_name, markup_attrs)
else:
match = True
markup_attr_map = None
for attr, match_against in list(self.attrs.items()):
if not markup_attr_map:
if hasattr(markup_attrs, 'get'):
markup_attr_map = markup_attrs
else:
markup_attr_map = {}
for k, v in markup_attrs:
markup_attr_map[k] = v
attr_value = markup_attr_map.get(attr)
if not self._matches(attr_value, match_against):
match = False
break
if match:
if markup:
found = markup
else:
found = markup_name
if found and self.text and not self._matches(found.string, self.text):
found = None
return found
# For BS3 compatibility.
searchTag = search_tag
def search(self, markup):
"""Find all items in `markup` that match this SoupStrainer.
Used by the core _find_all() method, which is ultimately
called by all find_* methods.
:param markup: A PageElement or a list of them.
"""
# print('looking for %s in %s' % (self, markup))
found = None
# If given a list of items, scan it for a text element that
# matches.
if hasattr(markup, '__iter__') and not isinstance(markup, (Tag, str)):
for element in markup:
if isinstance(element, NavigableString) \
and self.search(element):
found = element
break
# If it's a Tag, make sure its name or attributes match.
# Don't bother with Tags if we're searching for text.
elif isinstance(markup, Tag):
if not self.text or self.name or self.attrs:
found = self.search_tag(markup)
# If it's text, make sure the text matches.
elif isinstance(markup, NavigableString) or \
isinstance(markup, str):
if not self.name and not self.attrs and self._matches(markup, self.text):
found = markup
else:
raise Exception(
"I don't know how to match against a %s" % markup.__class__)
return found
def _matches(self, markup, match_against, already_tried=None):
# print(u"Matching %s against %s" % (markup, match_against))
result = False
if isinstance(markup, list) or isinstance(markup, tuple):
# This should only happen when searching a multi-valued attribute
# like 'class'.
for item in markup:
if self._matches(item, match_against):
return True
# We didn't match any particular value of the multivalue
# attribute, but maybe we match the attribute value when
# considered as a string.
if self._matches(' '.join(markup), match_against):
return True
return False
if match_against is True:
# True matches any non-None value.
return markup is not None
if isinstance(match_against, Callable):
return match_against(markup)
# Custom callables take the tag as an argument, but all
# other ways of matching match the tag name as a string.
original_markup = markup
if isinstance(markup, Tag):
markup = markup.name
# Ensure that `markup` is either a Unicode string, or None.
markup = self._normalize_search_value(markup)
if markup is None:
# None matches None, False, an empty string, an empty list, and so on.
return not match_against
if (hasattr(match_against, '__iter__')
and not isinstance(match_against, str)):
# We're asked to match against an iterable of items.
# The markup must be match at least one item in the
# iterable. We'll try each one in turn.
#
# To avoid infinite recursion we need to keep track of
# items we've already seen.
if not already_tried:
already_tried = set()
for item in match_against:
if item.__hash__:
key = item
else:
key = id(item)
if key in already_tried:
continue
else:
already_tried.add(key)
if self._matches(original_markup, item, already_tried):
return True
else:
return False
# Beyond this point we might need to run the test twice: once against
# the tag's name and once against its prefixed name.
match = False
if not match and isinstance(match_against, str):
# Exact string match
match = markup == match_against
if not match and hasattr(match_against, 'search'):
# Regexp match
return match_against.search(markup)
if (not match
and isinstance(original_markup, Tag)
and original_markup.prefix):
# Try the whole thing again with the prefixed tag name.
return self._matches(
original_markup.prefix + ':' + original_markup.name, match_against
)
return match
class ResultSet(list):
"""A ResultSet is just a list that keeps track of the SoupStrainer
that created it."""
def __init__(self, source, result=()):
"""Constructor.
:param source: A SoupStrainer.
:param result: A list of PageElements.
"""
super(ResultSet, self).__init__(result)
self.source = source
def __getattr__(self, key):
"""Raise a helpful exception to explain a common code fix."""
raise AttributeError(
"ResultSet object has no attribute '%s'. You're probably treating a list of elements like a single element. Did you call find_all() when you meant to call find()?" % key
) | PypiClean |
/GautamsX-6.0.13.tar.gz/GautamsX-6.0.13/bot/helper/ext_utils/fs_utils.py | import sys
from bot import aria2, LOGGER, DOWNLOAD_DIR, ARIA_CHILD_PROC, MEGA_CHILD_PROC
import shutil
import os
import pathlib
import magic
import tarfile
from .exceptions import NotSupportedExtractionArchive
def clean_download(path: str):
if os.path.exists(path):
LOGGER.info(f"Cleaning download: {path}")
shutil.rmtree(path)
def start_cleanup():
try:
shutil.rmtree(DOWNLOAD_DIR)
except FileNotFoundError:
pass
def clean_all():
aria2.remove_all(True)
try:
shutil.rmtree(DOWNLOAD_DIR)
except FileNotFoundError:
pass
def exit_clean_up(signal, frame):
try:
LOGGER.info("Please wait, while we clean up the downloads and stop running downloads")
clean_all()
ARIA_CHILD_PROC.kill()
MEGA_CHILD_PROC.kill()
sys.exit(0)
except KeyboardInterrupt:
LOGGER.warning("Force Exiting before the cleanup finishes!")
ARIA_CHILD_PROC.kill()
MEGA_CHILD_PROC.kill()
sys.exit(1)
def get_path_size(path):
if os.path.isfile(path):
return os.path.getsize(path)
total_size = 0
for root, dirs, files in os.walk(path):
for f in files:
abs_path = os.path.join(root, f)
total_size += os.path.getsize(abs_path)
return total_size
def tar(org_path):
tar_path = org_path + ".tar"
path = pathlib.PurePath(org_path)
LOGGER.info(f'Tar: orig_path: {org_path}, tar_path: {tar_path}')
    with tarfile.open(tar_path, "w") as tar:
        tar.add(org_path, arcname=path.name)
return tar_path
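# A minimal usage sketch (the path is hypothetical):
#
#   archive = tar("/usr/src/app/downloads/MyFolder")
#   # archive == "/usr/src/app/downloads/MyFolder.tar"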
def get_base_name(orig_path: str):
if orig_path.endswith(".tar.bz2"):
return orig_path.replace(".tar.bz2", "")
elif orig_path.endswith(".tar.gz"):
return orig_path.replace(".tar.gz", "")
elif orig_path.endswith(".bz2"):
return orig_path.replace(".bz2", "")
elif orig_path.endswith(".gz"):
return orig_path.replace(".gz", "")
elif orig_path.endswith(".tar"):
return orig_path.replace(".tar", "")
elif orig_path.endswith(".tbz2"):
return orig_path.replace("tbz2", "")
elif orig_path.endswith(".tgz"):
return orig_path.replace(".tgz", "")
elif orig_path.endswith(".zip"):
return orig_path.replace(".zip", "")
elif orig_path.endswith(".7z"):
return orig_path.replace(".7z", "")
elif orig_path.endswith(".Z"):
return orig_path.replace(".Z", "")
elif orig_path.endswith(".rar"):
return orig_path.replace(".rar", "")
elif orig_path.endswith(".iso"):
return orig_path.replace(".iso", "")
elif orig_path.endswith(".wim"):
return orig_path.replace(".wim", "")
elif orig_path.endswith(".cab"):
return orig_path.replace(".cab", "")
elif orig_path.endswith(".apm"):
return orig_path.replace(".apm", "")
elif orig_path.endswith(".arj"):
return orig_path.replace(".arj", "")
elif orig_path.endswith(".chm"):
return orig_path.replace(".chm", "")
elif orig_path.endswith(".cpio"):
return orig_path.replace(".cpio", "")
elif orig_path.endswith(".cramfs"):
return orig_path.replace(".cramfs", "")
elif orig_path.endswith(".deb"):
return orig_path.replace(".deb", "")
elif orig_path.endswith(".dmg"):
return orig_path.replace(".dmg", "")
elif orig_path.endswith(".fat"):
return orig_path.replace(".fat", "")
elif orig_path.endswith(".hfs"):
return orig_path.replace(".hfs", "")
elif orig_path.endswith(".lzh"):
return orig_path.replace(".lzh", "")
elif orig_path.endswith(".lzma"):
return orig_path.replace(".lzma", "")
elif orig_path.endswith(".lzma2"):
return orig_path.replace(".lzma2", "")
elif orig_path.endswith(".mbr"):
return orig_path.replace(".mbr", "")
elif orig_path.endswith(".msi"):
return orig_path.replace(".msi", "")
elif orig_path.endswith(".mslz"):
return orig_path.replace(".mslz", "")
elif orig_path.endswith(".nsis"):
return orig_path.replace(".nsis", "")
elif orig_path.endswith(".ntfs"):
return orig_path.replace(".ntfs", "")
elif orig_path.endswith(".rpm"):
return orig_path.replace(".rpm", "")
elif orig_path.endswith(".squashfs"):
return orig_path.replace(".squashfs", "")
elif orig_path.endswith(".udf"):
return orig_path.replace(".udf", "")
elif orig_path.endswith(".vhd"):
return orig_path.replace(".vhd", "")
elif orig_path.endswith(".xar"):
return orig_path.replace(".xar", "")
else:
raise NotSupportedExtractionArchive('File format not supported for extraction')
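# A minimal usage sketch (file names are hypothetical):
#
#   get_base_name("movies.tar.gz")  # -> "movies"
#   get_base_name("backup.7z")      # -> "backup"
#   get_base_name("notes.txt")      # raises NotSupportedExtractionArchive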
def get_mime_type(file_path):
mime = magic.Magic(mime=True)
mime_type = mime.from_file(file_path)
mime_type = mime_type if mime_type else "text/plain"
return mime_type | PypiClean |
/MetaCalls-0.0.5-cp310-cp310-manylinux2014_x86_64.whl/metacalls/node_modules/@mapbox/node-pre-gyp/CHANGELOG.md | # node-pre-gyp changelog
## 1.0.10
- Upgraded minimist to 1.2.6 to address dependabot alert [CVE-2021-44906](https://nvd.nist.gov/vuln/detail/CVE-2021-44906)
## 1.0.9
- Upgraded node-fetch to 2.6.7 to address [CVE-2022-0235](https://www.cve.org/CVERecord?id=CVE-2022-0235)
- Upgraded detect-libc to 2.0.0 to use non-blocking NodeJS(>=12) Report API
## 1.0.8
- Downgraded npmlog to maintain node v10 and v8 support (https://github.com/mapbox/node-pre-gyp/pull/624)
## 1.0.7
- Upgraded nyc and npmlog to address https://github.com/advisories/GHSA-93q8-gq69-wqmw
## 1.0.6
- Added node v17 to the internal node releases listing
- Upgraded various dependencies declared in package.json to latest major versions (node-fetch from 2.6.1 to 2.6.5, npmlog from 4.1.2 to 5.01, semver from 7.3.4 to 7.3.5, and tar from 6.1.0 to 6.1.11)
- Fixed bug in `staging_host` parameter (https://github.com/mapbox/node-pre-gyp/pull/590)
## 1.0.5
- Fix circular reference warning with node >= v14
## 1.0.4
- Added node v16 to the internal node releases listing
## 1.0.3
- Improved support for configuring s3 uploads (solves https://github.com/mapbox/node-pre-gyp/issues/571)
- New options added in https://github.com/mapbox/node-pre-gyp/pull/576: 'bucket', 'region', and `s3ForcePathStyle`
## 1.0.2
- Fixed regression in proxy support (https://github.com/mapbox/node-pre-gyp/issues/572)
## 1.0.1
- Switched from [email protected] to [email protected] to avoid this bug: https://github.com/isaacs/node-mkdirp/issues/31
## 1.0.0
- Module is now name-spaced at `@mapbox/node-pre-gyp` and the original `node-pre-gyp` is deprecated.
- New: support for staging and production s3 targets (see README.md)
- BREAKING: no longer supporting `node_pre_gyp_accessKeyId` & `node_pre_gyp_secretAccessKey`, use `AWS_ACCESS_KEY_ID` & `AWS_SECRET_ACCESS_KEY` instead to authenticate against s3 for `info`, `publish`, and `unpublish` commands.
- Dropped node v6 support, added node v14 support
- Switched tests to use mapbox-owned bucket for testing
- Added coverage tracking and linting with eslint
- Added back support for symlinks inside the tarball
- Upgraded all test apps to N-API/node-addon-api
- Added `node_pre_gyp_s3_host` env var which has priority over the `--s3_host` option or default.
- Replaced needle with node-fetch
- Added proxy support for node-fetch
- Upgraded to [email protected]
## 0.17.0
- Got travis + appveyor green again
- Added support for more node versions
## 0.16.0
- Added Node 15 support in the local database (https://github.com/mapbox/node-pre-gyp/pull/520)
## 0.15.0
- Bump dependency on `mkdirp` from `^0.5.1` to `^0.5.3` (https://github.com/mapbox/node-pre-gyp/pull/492)
- Bump dependency on `needle` from `^2.2.1` to `^2.5.0` (https://github.com/mapbox/node-pre-gyp/pull/502)
- Added Node 14 support in the local database (https://github.com/mapbox/node-pre-gyp/pull/501)
## 0.14.0
- Defer modules requires in napi.js (https://github.com/mapbox/node-pre-gyp/pull/434)
- Bump dependency on `tar` from `^4` to `^4.4.2` (https://github.com/mapbox/node-pre-gyp/pull/454)
- Support extracting compiled binary from local offline mirror (https://github.com/mapbox/node-pre-gyp/pull/459)
- Added Node 13 support in the local database (https://github.com/mapbox/node-pre-gyp/pull/483)
## 0.13.0
- Added Node 12 support in the local database (https://github.com/mapbox/node-pre-gyp/pull/449)
## 0.12.0
- Fixed double-build problem with node v10 (https://github.com/mapbox/node-pre-gyp/pull/428)
- Added node 11 support in the local database (https://github.com/mapbox/node-pre-gyp/pull/422)
## 0.11.0
- Fixed double-install problem with node v10
- Significant N-API improvements (https://github.com/mapbox/node-pre-gyp/pull/405)
## 0.10.3
- Now will use `request` over `needle` if request is installed. By default `needle` is used for `https`. This should unbreak proxy support that regressed in v0.9.0
## 0.10.2
- Fixed rc/deep-extent security vulnerability
- Fixed broken reinstall script due to incorrectly named get_best_napi_version
## 0.10.1
- Fix needle error event (@medns)
## 0.10.0
- Allow for a single-level module path when packing @allenluce (https://github.com/mapbox/node-pre-gyp/pull/371)
- Log warnings instead of errors when falling back @xzyfer (https://github.com/mapbox/node-pre-gyp/pull/366)
- Add Node.js v10 support to tests (https://github.com/mapbox/node-pre-gyp/pull/372)
- Remove retire.js from CI (https://github.com/mapbox/node-pre-gyp/pull/372)
- Remove support for Node.js v4 due to [EOL on April 30th, 2018](https://github.com/nodejs/Release/blob/7dd52354049cae99eed0e9fe01345b0722a86fde/schedule.json#L14)
- Update appveyor tests to install default NPM version instead of NPM v2.x for all Windows builds (https://github.com/mapbox/node-pre-gyp/pull/375)
## 0.9.1
- Fixed regression (in v0.9.0) with support for http redirects @allenluce (https://github.com/mapbox/node-pre-gyp/pull/361)
## 0.9.0
- Switched from using `request` to `needle` to reduce size of module deps (https://github.com/mapbox/node-pre-gyp/pull/350)
## 0.8.0
- N-API support (@inspiredware)
## 0.7.1
- Upgraded to tar v4.x
## 0.7.0
- Updated request and hawk (#347)
- Dropped node v0.10.x support
## 0.6.40
- Improved error reporting if an install fails
## 0.6.39
- Support for node v9
- Support for versioning on `{libc}` to allow binaries to work on non-glibc linux systems like alpine linux
## 0.6.38
- Maintaining compatibility (for v0.6.x series) with node v0.10.x
## 0.6.37
- Solved one part of #276: we now deduce the node ABI from the major version for node >= 2 even when not stored in the abi_crosswalk.json
- Fixed docs to avoid mentioning the deprecated and dangerous `prepublish` in package.json (#291)
- Add new node versions to crosswalk
- Ported tests to use tape instead of mocha
- Got appveyor tests passing by downgrading npm and node-gyp
## 0.6.36
- Removed the running of `testbinary` during install. Because this was regressed for so long, it is too dangerous to re-enable by default. Developers needing validation can call `node-pre-gyp testbinary` directly.
- Fixed regression in v0.6.35 for electron installs (now skipping binary validation which is not yet supported for electron)
## 0.6.35
- No longer recommending `npm ls` in `prepublish` (#291)
- Fixed testbinary command (#283) @szdavid92
## 0.6.34
- Added new node versions to crosswalk, including v8
- Upgraded deps to latest versions, started using `^` instead of `~` for all deps.
## 0.6.33
- Improved support for yarn
## 0.6.32
- Honor npm configuration for CA bundles (@heikkipora)
- Add node-pre-gyp and npm versions to user agent (@addaleax)
- Updated various deps
- Add known node version for v7.x
## 0.6.31
- Updated various deps
## 0.6.30
- Update to [email protected] and [email protected]
- Add known node version for v6.5.0
## 0.6.29
- Add known node versions for v0.10.45, v0.12.14, v4.4.4, v5.11.1, and v6.1.0
## 0.6.28
- Now more verbose when remote binaries are not available. This is needed since npm is increasingly quiet by default
and users need to know why builds are falling back to source compiles that might then error out.
## 0.6.27
- Add known node version for node v6
- Stopped bundling dependencies
- Documented method for module authors to avoid bundling node-pre-gyp
- See https://github.com/mapbox/node-pre-gyp/tree/master#configuring for details
## 0.6.26
- Skip validation for nw runtime (https://github.com/mapbox/node-pre-gyp/pull/181) via @fleg
## 0.6.25
- Improved support for auto-detection of electron runtime in `node-pre-gyp.find()`
- Pull request from @enlight - https://github.com/mapbox/node-pre-gyp/pull/187
- Add known node version for 4.4.1 and 5.9.1
## 0.6.24
- Add known node version for 5.8.0, 5.9.0, and 4.4.0.
## 0.6.23
- Add known node version for 0.10.43, 0.12.11, 4.3.2, and 5.7.1.
## 0.6.22
- Add known node version for 4.3.1, and 5.7.0.
## 0.6.21
- Add known node version for 0.10.42, 0.12.10, 4.3.0, and 5.6.0.
## 0.6.20
- Add known node version for 4.2.5, 4.2.6, 5.4.0, 5.4.1,and 5.5.0.
## 0.6.19
- Add known node version for 4.2.4
## 0.6.18
- Add new known node versions for 0.10.x, 0.12.x, 4.x, and 5.x
## 0.6.17
- Re-tagged to fix packaging problem of `Error: Cannot find module 'isarray'`
## 0.6.16
- Added known version in crosswalk for 5.1.0.
## 0.6.15
- Upgraded tar-pack (https://github.com/mapbox/node-pre-gyp/issues/182)
- Support custom binary hosting mirror (https://github.com/mapbox/node-pre-gyp/pull/170)
- Added known version in crosswalk for 4.2.2.
## 0.6.14
- Added node 5.x version
## 0.6.13
- Added more known node 4.x versions
## 0.6.12
- Added support for [Electron](http://electron.atom.io/). Just pass the `--runtime=electron` flag when building/installing. Thanks @zcbenz
## 0.6.11
- Added known node and io.js versions including more 3.x and 4.x versions
## 0.6.10
- Added known node and io.js versions including 3.x and 4.x versions
- Upgraded `tar` dep
## 0.6.9
- Upgraded `rc` dep
- Updated known io.js version: v2.4.0
## 0.6.8
- Upgraded `semver` and `rimraf` deps
- Updated known node and io.js versions
## 0.6.7
- Fixed `node_abi` versions for io.js 1.1.x -> 1.8.x (should be 43, but was stored as 42) (refs https://github.com/iojs/build/issues/94)
## 0.6.6
- Updated with known io.js 2.0.0 version
## 0.6.5
- Now respecting `npm_config_node_gyp` (https://github.com/npm/npm/pull/4887)
- Updated to [email protected]
- Updated known node v0.12.x versions and io.js 1.x versions.
## 0.6.4
- Improved support for `io.js` (@fengmk2)
- Test coverage improvements (@mikemorris)
- Fixed support for `--dist-url` that regressed in 0.6.3
## 0.6.3
- Added support for passing raw options to node-gyp using `--` separator. Flags passed after
   the `--` to `node-pre-gyp configure` will be passed directly to gyp, while flags passed
   after the `--` to `node-pre-gyp build` will be passed directly to make/visual studio.
- Added `node-pre-gyp configure` command to be able to call `node-gyp configure` directly
- Fix issue with require validation not working on windows 7 (@edgarsilva)
## 0.6.2
- Support for io.js >= v1.0.2
- Deferred require of `request` and `tar` to help speed up command line usage of `node-pre-gyp`.
## 0.6.1
- Fixed bundled `tar` version
## 0.6.0
- BREAKING: node odd releases like v0.11.x now use `major.minor.patch` for `{node_abi}` instead of `NODE_MODULE_VERSION` (#124)
- Added support for `toolset` option in versioning. By default is an empty string but `--toolset` can be passed to publish or install to select alternative binaries that target a custom toolset like C++11. For example to target Visual Studio 2014 modules like node-sqlite3 use `--toolset=v140`.
- Added support for `--no-rollback` option to request that a failed binary test does not remove the binary module but leaves it in place.
- Added support for `--update-binary` option to request an existing binary be re-installed and the check for a valid local module be skipped.
- Added support for passing build options from `npm` through `node-pre-gyp` to `node-gyp`: `--nodedir`, `--disturl`, `--python`, and `--msvs_version`
## 0.5.31
- Added support for deducing node_abi for node.js runtime from previous release if the series is even
- Added support for --target=0.10.33
## 0.5.30
- Repackaged with latest bundled deps
## 0.5.29
- Added support for semver `build`.
- Fixed support for downloading from urls that include `+`.
## 0.5.28
- Now reporting unix style paths only in reveal command
## 0.5.27
- Fixed support for auto-detecting s3 bucket name when it contains `.` - @taavo
- Fixed support for installing when path contains a `'` - @halfdan
- Ported tests to mocha
## 0.5.26
- Fix node-webkit support when `--target` option is not provided
## 0.5.25
- Fix bundling of deps
## 0.5.24
- Updated ABI crosswalk to include node v0.10.30 and v0.10.31
## 0.5.23
- Added `reveal` command. Pass no options to get all versioning data as json. Pass a second arg to grab a single versioned property value
- Added support for `--silent` (shortcut for `--loglevel=silent`)
## 0.5.22
- Fixed node-webkit versioning name (NOTE: node-webkit support still experimental)
## 0.5.21
- New package to fix `shasum check failed` error with v0.5.20
## 0.5.20
- Now versioning node-webkit binaries based on major.minor.patch - assuming no compatible ABI across versions (#90)
## 0.5.19
- Updated to know about more node-webkit releases
## 0.5.18
- Updated to know about more node-webkit releases
## 0.5.17
- Updated to know about node v0.10.29 release
## 0.5.16
- Now supporting all aws-sdk configuration parameters (http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/node-configuring.html) (#86)
## 0.5.15
- Fixed installation of windows packages sub directories on unix systems (#84)
## 0.5.14
- Finished support for cross building using `--target_platform` option (#82)
- Now skipping binary validation on install if target arch/platform do not match the host.
- Removed multi-arch validation for OS X since it required a FAT node.js binary
## 0.5.13
- Fix problem in 0.5.12 whereby the wrong versions of mkdirp and semver were bundled.
## 0.5.12
- Improved support for node-webkit (@Mithgol)
## 0.5.11
- Updated target versions listing
## 0.5.10
- Fixed handling of `-debug` flag passed directly to node-pre-gyp (#72)
- Added optional second arg to `node_pre_gyp.find` to customize the default versioning options used to locate the runtime binary
- Failed install due to `testbinary` check failure no longer leaves behind binary (#70)
## 0.5.9
- Fixed regression in `testbinary` command causing installs to fail on windows with 0.5.7 (#60)
## 0.5.8
- Started bundling deps
## 0.5.7
- Fixed the `testbinary` check, which is used to determine whether to re-download or source compile, to work even in complex dependency situations (#63)
- Exposed the internal `testbinary` command in node-pre-gyp command line tool
- Fixed minor bug so that `fallback_to_build` option is always respected
## 0.5.6
- Added support for versioning on the `name` value in `package.json` (#57).
- Moved to using streams for reading tarball when publishing (#52)
## 0.5.5
- Improved binary validation that also now works with node-webkit (@Mithgol)
- Upgraded test apps to work with node v0.11.x
- Improved test coverage
## 0.5.4
- No longer depends on external install of node-gyp for compiling builds.
## 0.5.3
- Reverted fix for debian/nodejs since it broke windows (#45)
## 0.5.2
- Support for debian systems where the node binary is named `nodejs` (#45)
- Added `bin/node-pre-gyp.cmd` to be able to run command on windows locally (npm creates a .cmd automatically when globally installed)
- Updated abi-crosswalk with node v0.10.26 entry.
## 0.5.1
- Various minor bug fixes, several improving windows support for publishing.
## 0.5.0
- Changed property names in `binary` object: now required are `module_name`, `module_path`, and `host`.
- Now `module_path` supports versioning, which allows developers to opt-in to using a versioned install path (#18).
- Added `remote_path` which also supports versioning.
- Changed `remote_uri` to `host`.
## 0.4.2
- Added support for `--target` flag to request cross-compile against a specific node/node-webkit version.
- Added preliminary support for node-webkit
- Fixed support for `--target_arch` option being respected in all cases.
## 0.4.1
- Fixed exception when only stderr is available in binary test (@bendi / #31)
## 0.4.0
- Enforce only `https:` based remote publishing access.
- Added `node-pre-gyp info` command to display listing of published binaries
- Added support for changing the directory node-pre-gyp should build in with the `-C/--directory` option.
- Added support for S3 prefixes.
## 0.3.1
- Added `unpublish` command.
- Fixed module path construction in tests.
- Added ability to disable falling back to build behavior via `npm install --fallback-to-build=false` which overrides setting in a dependency's package.json `install` target.
## 0.3.0
- Support for packaging all files in `module_path` directory - see `app4` for example
- Added `testpackage` command.
- Changed `clean` command to only delete `.node` not entire `build` directory since node-gyp will handle that.
- `.node` modules must be in a folder of their own since tar-pack will remove everything when it unpacks.

/FreePyBX-1.0-RC1.tar.gz/FreePyBX-1.0-RC1/freepybx/public/js/dojox/mobile/ProgressIndicator.js.uncompressed.js

define("dojox/mobile/ProgressIndicator", [
"dojo/_base/config",
"dojo/_base/declare",
"dojo/dom-construct",
"dojo/dom-style",
"dojo/has"
], function(config, declare, domConstruct, domStyle, has){
// module:
// dojox/mobile/ProgressIndicator
// summary:
// A progress indication widget.
var cls = declare("dojox.mobile.ProgressIndicator", null, {
// summary:
// A progress indication widget.
// description:
// ProgressIndicator is a round spinning graphical representation
// that indicates the current task is on-going.
// interval: Number
// The time interval in milliseconds for updating the spinning
// indicator.
interval: 100,
// colors: Array
// An array of indicator colors.
colors: [
"#C0C0C0", "#C0C0C0", "#C0C0C0", "#C0C0C0",
"#C0C0C0", "#C0C0C0", "#B8B9B8", "#AEAFAE",
"#A4A5A4", "#9A9A9A", "#8E8E8E", "#838383"
],
constructor: function(){
this._bars = [];
this.domNode = domConstruct.create("DIV");
this.domNode.className = "mblProgContainer";
if(config["mblAndroidWorkaround"] !== false && has("android") >= 2.2 && has("android") < 3){
// workaround to avoid the side effects of the fixes for android screen flicker problem
domStyle.set(this.domNode, "webkitTransform", "translate3d(0,0,0)");
}
this.spinnerNode = domConstruct.create("DIV", null, this.domNode);
for(var i = 0; i < this.colors.length; i++){
var div = domConstruct.create("DIV", {className:"mblProg mblProg"+i}, this.spinnerNode);
this._bars.push(div);
}
},
start: function(){
// summary:
// Starts the ProgressIndicator spinning.
if(this.imageNode){
var img = this.imageNode;
var l = Math.round((this.domNode.offsetWidth - img.offsetWidth) / 2);
var t = Math.round((this.domNode.offsetHeight - img.offsetHeight) / 2);
img.style.margin = t+"px "+l+"px";
return;
}
var cntr = 0;
var _this = this;
var n = this.colors.length;
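			// rotate the bar colors one step per tick so the darker
			// segments appear to sweep around the spinner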
this.timer = setInterval(function(){
cntr--;
cntr = cntr < 0 ? n - 1 : cntr;
var c = _this.colors;
for(var i = 0; i < n; i++){
var idx = (cntr + i) % n;
_this._bars[i].style.backgroundColor = c[idx];
}
}, this.interval);
},
stop: function(){
// summary:
// Stops the ProgressIndicator spinning.
if(this.timer){
clearInterval(this.timer);
}
this.timer = null;
if(this.domNode.parentNode){
this.domNode.parentNode.removeChild(this.domNode);
}
},
setImage: function(/*String*/file){
// summary:
// Sets an indicator icon image file (typically animated GIF).
// If null is specified, restores the default spinner.
if(file){
this.imageNode = domConstruct.create("IMG", {src:file}, this.domNode);
this.spinnerNode.style.display = "none";
}else{
if(this.imageNode){
this.domNode.removeChild(this.imageNode);
this.imageNode = null;
}
this.spinnerNode.style.display = "";
}
}
});
cls._instance = null;
cls.getInstance = function(){
if(!cls._instance){
cls._instance = new cls();
}
return cls._instance;
};
return cls;
});

/Bubot_CoAP-1.0.7-py3-none-any.whl/Bubot_CoAP/defines.py

import collections
import struct
__author__ = 'Giacomo Tanganelli'
""" CoAP Parameters """
ACK_TIMEOUT = 2 # standard 2
SEPARATE_TIMEOUT = ACK_TIMEOUT / 2
MULTICAST_TIMEOUT = 15
ACK_RANDOM_FACTOR = 1.5
MAX_RETRANSMIT = 4
MAX_TRANSMIT_SPAN = ACK_TIMEOUT * (pow(2, (MAX_RETRANSMIT + 1)) - 1) * ACK_RANDOM_FACTOR
MAX_LATENCY = 120 # 2 minutes
PROCESSING_DELAY = ACK_TIMEOUT
MAX_RTT = (2 * MAX_LATENCY) + PROCESSING_DELAY
EXCHANGE_LIFETIME = MAX_TRANSMIT_SPAN + (2 * MAX_LATENCY) + PROCESSING_DELAY
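# With the values above, MAX_TRANSMIT_SPAN = 2 * (2**5 - 1) * 1.5 = 93 seconds,
# so EXCHANGE_LIFETIME = 93 + (2 * 120) + 2 = 335 seconds.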
DISCOVERY_URL = "/.well-known/core"
ALL_COAP_NODES = "224.0.1.187"
ALL_COAP_NODES_IPV6 = "FF00::FD"
MAX_PAYLOAD = 1024
MAX_NON_NOTIFICATIONS = 10
BLOCKWISE_SIZE = 1024
""" Message Format """
# number of bits used for the encoding of the CoAP version field.
VERSION_BITS = 2
# number of bits used for the encoding of the message type field.
TYPE_BITS = 2
# number of bits used for the encoding of the token length field.
TOKEN_LENGTH_BITS = 4
# number of bits used for the encoding of the request method/response code field.
CODE_BITS = 8
# number of bits used for the encoding of the message ID.
MESSAGE_ID_BITS = 16
# number of bits used for the encoding of the option delta field.
OPTION_DELTA_BITS = 4
# number of bits used for the encoding of the option length field.
OPTION_LENGTH_BITS = 4
# One byte which indicates the end of options and the start of the payload.
PAYLOAD_MARKER = 0xFF
# CoAP version supported by this implementation.
VERSION = 1
# The lowest value of a request code.
REQUEST_CODE_LOWER_BOUND = 1
# The highest value of a request code.
REQUEST_CODE_UPPER_BOUND = 31
# The lowest value of a response code.
RESPONSE_CODE_LOWER_BOUND = 64
# The highest value of a response code.
RESPONSE_CODE_UPPER_BOUND = 191
corelinkformat = {
'ct': 'content_type',
'rt': 'resource_type',
'if': 'interface_type',
'sz': 'maximum_size_estimated',
'obs': 'observing'
}
# The integer.
INTEGER = 0
# The string.
STRING = 1
# The opaque.
OPAQUE = 2
# The unknown.
UNKNOWN = 3
# Cache modes
FORWARD_PROXY = 0
REVERSE_PROXY = 1
OptionItem = collections.namedtuple('OptionItem', 'number name value_type repeatable default')
class OptionRegistry(object):
"""
All CoAP options. Every option is represented as: (NUMBER, NAME, VALUE_TYPE, REPEATABLE, DEFAULT)
"""
def __init__(self):
pass
RESERVED = OptionItem(0, "Reserved", UNKNOWN, True, None)
IF_MATCH = OptionItem(1, "If-Match", OPAQUE, True, None)
URI_HOST = OptionItem(3, "Uri-Host", STRING, True, None)
ETAG = OptionItem(4, "ETag", OPAQUE, True, None)
IF_NONE_MATCH = OptionItem(5, "If-None-Match", OPAQUE, False, None)
OBSERVE = OptionItem(6, "Observe", INTEGER, False, 0)
URI_PORT = OptionItem(7, "Uri-Port", INTEGER, False, 5683)
LOCATION_PATH = OptionItem(8, "Location-Path", STRING, True, None)
URI_PATH = OptionItem(11, "Uri-Path", STRING, True, None)
CONTENT_TYPE = OptionItem(12, "Content-Type", INTEGER, False, 0)
MAX_AGE = OptionItem(14, "Max-Age", INTEGER, False, 60)
URI_QUERY = OptionItem(15, "Uri-Query", STRING, True, None)
ACCEPT = OptionItem(17, "Accept", INTEGER, False, 0)
LOCATION_QUERY = OptionItem(20,"Location-Query",STRING, True, None)
BLOCK2 = OptionItem(23, "Block2", INTEGER, False, None)
BLOCK1 = OptionItem(27, "Block1", INTEGER, False, None)
SIZE2 = OptionItem(28, "Size2", INTEGER, False, 0)
PROXY_URI = OptionItem(35, "Proxy-Uri", STRING, False, None)
PROXY_SCHEME = OptionItem(39, "Proxy-Schema", STRING, False, None)
SIZE1 = OptionItem(60, "Size1", INTEGER, False, None)
NO_RESPONSE = OptionItem(258, "No-Response", INTEGER, False, None)
OCF_ACCEPT_CONTENT_FORMAT_VERSION = OptionItem(2049, "OCF-Accept-Content-Format-Version", INTEGER, False, None)
OCF_CONTENT_FORMAT_VERSION = OptionItem(2053, "OCF-Content-Format-Version", INTEGER, False, None)
RM_MESSAGE_SWITCHING = OptionItem(65524, "Routing", OPAQUE, False, None)
LIST = {
0: RESERVED,
1: IF_MATCH,
3: URI_HOST,
4: ETAG,
5: IF_NONE_MATCH,
6: OBSERVE,
7: URI_PORT,
8: LOCATION_PATH,
11: URI_PATH,
12: CONTENT_TYPE,
14: MAX_AGE,
15: URI_QUERY,
17: ACCEPT,
20: LOCATION_QUERY,
23: BLOCK2,
27: BLOCK1,
28: SIZE2,
35: PROXY_URI,
39: PROXY_SCHEME,
60: SIZE1,
258: NO_RESPONSE,
2049: OCF_ACCEPT_CONTENT_FORMAT_VERSION,
2053: OCF_CONTENT_FORMAT_VERSION,
65524: RM_MESSAGE_SWITCHING
}
@staticmethod
def get_option_flags(option_num):
"""
Get Critical, UnSafe, NoCacheKey flags from the option number
as per RFC 7252, section 5.4.6
:param option_num: option number
:return: option flags
:rtype: 3-tuple (critical, unsafe, no-cache)
"""
opt_bytes = bytearray(2)
if option_num < 256:
s = struct.Struct("!B")
s.pack_into(opt_bytes, 0, option_num)
else:
s = struct.Struct("H")
s.pack_into(opt_bytes, 0, option_num)
critical = (opt_bytes[0] & 0x01) > 0
unsafe = (opt_bytes[0] & 0x02) > 0
nocache = ((opt_bytes[0] & 0x1e) == 0x1c)
return (critical, unsafe, nocache)
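
# Illustrative example of the flag computation above: for If-Match (option
# number 1) only the low bit is set, so the option is critical, safe to
# forward, and part of the cache key:
#
#   >>> OptionRegistry.get_option_flags(1)
#   (True, False, False)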
Types = {
'CON': 0,
'NON': 1,
'ACK': 2,
'RST': 3,
'None': None
}
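
# Illustrative usage: message type names map to the values sent on the wire,
# e.g.
#
#   >>> Types['ACK']
#   2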
CodeItem = collections.namedtuple('CodeItem', 'number name')
class Codes(object):
"""
CoAP codes. Every code is represented as (NUMBER, NAME)
"""
ERROR_LOWER_BOUND = 128
EMPTY = CodeItem(0, 'EMPTY')
GET = CodeItem(1, 'GET')
POST = CodeItem(2, 'POST')
PUT = CodeItem(3, 'PUT')
DELETE = CodeItem(4, 'DELETE')
CREATED = CodeItem(65, 'CREATED')
DELETED = CodeItem(66, 'DELETED')
VALID = CodeItem(67, 'VALID')
CHANGED = CodeItem(68, 'CHANGED')
CONTENT = CodeItem(69, 'CONTENT')
CONTINUE = CodeItem(95, 'CONTINUE')
BAD_REQUEST = CodeItem(128, 'BAD_REQUEST')
FORBIDDEN = CodeItem(131, 'FORBIDDEN')
NOT_FOUND = CodeItem(132, 'NOT_FOUND')
METHOD_NOT_ALLOWED = CodeItem(133, 'METHOD_NOT_ALLOWED')
NOT_ACCEPTABLE = CodeItem(134, 'NOT_ACCEPTABLE')
REQUEST_ENTITY_INCOMPLETE = CodeItem(136, 'REQUEST_ENTITY_INCOMPLETE')
PRECONDITION_FAILED = CodeItem(140, 'PRECONDITION_FAILED')
REQUEST_ENTITY_TOO_LARGE = CodeItem(141, 'REQUEST_ENTITY_TOO_LARGE')
UNSUPPORTED_CONTENT_FORMAT = CodeItem(143, 'UNSUPPORTED_CONTENT_FORMAT')
INTERNAL_SERVER_ERROR = CodeItem(160, 'INTERNAL_SERVER_ERROR')
NOT_IMPLEMENTED = CodeItem(161, 'NOT_IMPLEMENTED')
BAD_GATEWAY = CodeItem(162, 'BAD_GATEWAY')
SERVICE_UNAVAILABLE = CodeItem(163, 'SERVICE_UNAVAILABLE')
GATEWAY_TIMEOUT = CodeItem(164, 'GATEWAY_TIMEOUT')
PROXY_NOT_SUPPORTED = CodeItem(165, 'PROXY_NOT_SUPPORTED')
LIST = {
0: EMPTY,
1: GET,
2: POST,
3: PUT,
4: DELETE,
65: CREATED,
        66: DELETED,
67: VALID,
68: CHANGED,
69: CONTENT,
95: CONTINUE,
128: BAD_REQUEST,
131: FORBIDDEN,
132: NOT_FOUND,
133: METHOD_NOT_ALLOWED,
134: NOT_ACCEPTABLE,
136: REQUEST_ENTITY_INCOMPLETE,
140: PRECONDITION_FAILED,
141: REQUEST_ENTITY_TOO_LARGE,
143: UNSUPPORTED_CONTENT_FORMAT,
160: INTERNAL_SERVER_ERROR,
161: NOT_IMPLEMENTED,
162: BAD_GATEWAY,
163: SERVICE_UNAVAILABLE,
164: GATEWAY_TIMEOUT,
165: PROXY_NOT_SUPPORTED
}
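
# Illustrative usage: numeric codes resolve to named entries via the table
# above, e.g.
#
#   >>> Codes.LIST[69].name
#   'CONTENT'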
Content_types = {
"text/plain": 0,
"application/link-format": 40,
"application/xml": 41,
"application/octet-stream": 42,
"application/exi": 47,
"application/json": 50,
"application/cbor": 60,
"application/vnd.ocf+cbor": 10000
# 0: 'text/plain;charset=utf-8',
# 16: 'application/cose;cose-type="cose-encrypt0"',
# 17: 'application/cose;cose-type="cose-mac0"',
# 18: 'application/cose;cose-type="cose-sign1"',
# 40: 'application/link-format',
# 41: 'application/xml',
# 42: 'application/octet-stream',
# 47: 'application/exi',
# 50: 'application/json',
# 51: 'application/json-patch+json',
# 52: 'application/merge-patch+json',
# 60: 'application/cbor',
# 61: 'application/cwt',
# 62: 'application/multipast-core', # draft-ietf-core-multipart-ct
# 64: 'application/link-format+cbor', # draft-ietf-core-links-json-10
# 70: 'application/oscon', # draft-ietf-core-object-security-01
# 96: 'application/cose;cose-type="cose-encrypt"',
# 97: 'application/cose;cose-type="cose-mac"',
# 98: 'application/cose;cose-type="cose-sign"',
# 101: 'application/cose-key',
# 102: 'application/cose-key-set',
# 110: 'application/senml+json',
# 111: 'application/sensml+json',
# 112: 'application/senml+cbor',
# 113: 'application/sensml+cbor',
# 114: 'application/senml-exi',
# 115: 'application/sensml-exi',
# 256: 'application/coap-group+json',
# 280: 'application/pkcs7-mime;smime-type=server-generated-key',
# 281: 'application/pkcs7-mime;smime-type=certs-only',
# 282: 'application/pkcs7-mime;smime-type=CMC-Request',
# 283: 'application/pkcs7-mime;smime-type=CMC-Response',
# 284: 'application/pkcs8',
# 285: 'application/csrattrs',
# 286: 'application/pkcs10',
# 310: 'application/senml+xml',
# 311: 'application/sensml+xml',
# 1000: 'application/vnd.ocf+cbor',
# 11542: 'application/vnd.oma.lwm2m+tlv',
# 11543: 'application/vnd.oma.lwm2m+json',
# 504: 'application/link-format+json', # draft-ietf-core-links-json-10
}
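
# Illustrative usage: media type names map to their numeric content-format
# IDs, e.g.
#
#   >>> Content_types["application/cbor"]
#   60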
COAP_PREFACE = "coap://"
LOCALHOST = "127.0.0.1"
HC_PROXY_DEFAULT_PORT = 8080 # TODO there is a standard for this?
COAP_DEFAULT_PORT = 5683
DEFAULT_HC_PATH = "/"
BAD_REQUEST = 400 # "Bad Request" error code
NOT_IMPLEMENTED = 501 # "Not Implemented" error code
# Dictionary to map CoAP response codes to HTTP status codes
CoAP_HTTP = {
"CREATED": "201",
"DELETED": "200",
"VALID": "304",
"CHANGED": "200",
"CONTENT": "200",
"BAD_REQUEST": "400",
"FORBIDDEN": "403",
"NOT_FOUND": "404",
"METHOD_NOT_ALLOWED": "400",
"NOT_ACCEPTABLE": "406",
"PRECONDITION_FAILED": "412",
"REQUEST_ENTITY_TOO_LARGE": "413",
"UNSUPPORTED_CONTENT_FORMAT": "415",
"INTERNAL_SERVER_ERROR": "500",
"NOT_IMPLEMENTED": "501",
"BAD_GATEWAY": "502",
"SERVICE_UNAVAILABLE": "503",
"GATEWAY_TIMEOUT": "504",
"PROXY_NOT_SUPPORTED": "502"
}

/FoneAstra-3.0.1.tar.gz/FoneAstra-3.0.1/fa/static/fa/flot/README.txt

About
-----
Flot is a Javascript plotting library for jQuery. Read more at the
website:
http://code.google.com/p/flot/
Take a look at the examples linked from above; they should give a good
impression of what Flot can do, and the source code of the examples is
probably the fastest way to learn how to use Flot.
Installation
------------
Just include the Javascript file after you've included jQuery.
Generally, all browsers that support the HTML5 canvas tag are
supported.
For support for Internet Explorer < 9, you can use Excanvas, a canvas
emulator; this is used in the examples bundled with Flot. You just
include the excanvas script like this:
<!--[if lte IE 8]><script language="javascript" type="text/javascript" src="excanvas.min.js"></script><![endif]-->
If it's not working in your development IE 6.0, check that it has
support for VML, which Excanvas relies on. It appears that some
stripped-down versions used for test environments on virtual machines
lack VML support.
You can also try using Flashcanvas (see
http://code.google.com/p/flashcanvas/), which uses Flash to do the
emulation. Although Flash can be a bit slower to load than VML, if
you've got a lot of points, the Flash version can be much faster
overall. Flot contains some wrapper code for activating Excanvas which
Flashcanvas is compatible with.
You need at least jQuery 1.2.6, but try at least 1.3.2 for interactive
charts because of performance improvements in event handling.
Basic usage
-----------
Create a placeholder div to put the graph in:
<div id="placeholder"></div>
You need to set the width and height of this div, otherwise the plot
library doesn't know how to scale the graph. You can do it inline like
this:
<div id="placeholder" style="width:600px;height:300px"></div>
You can also do it with an external stylesheet. Make sure that the
placeholder isn't within something with a display:none CSS property -
in that case, Flot has trouble measuring label dimensions which
results in garbled looks and might have trouble measuring the
placeholder dimensions which is fatal (it'll throw an exception).
Then when the div is ready in the DOM, which is usually on document
ready, run the plot function:
$.plot($("#placeholder"), data, options);
Here, data is an array of data series and options is an object with
settings if you want to customize the plot. Take a look at the
examples for some ideas of what to put in or look at the reference
in the file "API.txt". Here's a quick example that'll draw a line from
(0, 0) to (1, 1):
$.plot($("#placeholder"), [ [[0, 0], [1, 1]] ], { yaxis: { max: 1 } });
The plot function immediately draws the chart and then returns a plot
object with a couple of methods.
What's with the name?
---------------------
First: it's pronounced with a short o, like "plot". Not like "flawed".
So "Flot" rhymes with "plot".
And if you look up "flot" in a Danish-to-English dictionary, some up
the words that come up are "good-looking", "attractive", "stylish",
"smart", "impressive", "extravagant". One of the main goals with Flot
is pretty looks.

/Kamaelia-0.6.0.tar.gz/Kamaelia-0.6.0/Examples/Axon/SynchronousLinks/basic_syncLinksAcceptanceTest.py

from Axon.Component import *
import Axon.Component
print component.__init__.__doc__
class Consumer(component):
Inboxes = ["source"]
Outboxes = ["result"]
def __init__(self):
super(Consumer, self).__init__()
print "huh?"
#this variable is not used for anything important
self.i = 30
def dosomething(self):
if self.dataReady("source"):
op = self.recv("source")
print self.name,"received --> ",op
self.send(op,"result")
def main(self):
yield 1
while(self.i):
self.i = self.i - 1
self.dosomething()
yield 1
print "Consumer has finished consumption !!!"
class Producer(component):
Inboxes=[]
Outboxes=["result"]
def __init__(self):
super(Producer, self).__init__()
def main(self):
i = 30
while(i):
i = i - 1
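            # This link is synchronous (testComponent sets a pipewidth below),
            # so send() raises noSpaceInBox once the link's buffer is full.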
try:
self.send("hello"+str(i), "result")
self.send("hello"+str(i), "result")
print self.name," sent --> hello"+str(i)
except noSpaceInBox, e:
print "XXXXXXXXXXXXXXXXXXXXXXXXXXXX"
print "Failed to deliver"
size, capacity = e.args
print "Box Capacity", capacity
print "Current size", size
yield 1
print "Producer has finished production !!!"
class testComponent(component):
Inboxes = ["_input"]
Outboxes = []
def __init__(self):
super(testComponent, self).__init__()
self.producer = Producer()
self.consumer = Consumer()
self.addChildren(self.producer,self.consumer)
#link the source i.e. "result" to the sink i.e. "source"
#this is the arrow no.1
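        # pipewidth=5 bounds the number of undelivered messages on this link;
        # once the pipe is full, the producer's send() raises noSpaceInBox
        # (handled in Producer.main above).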
L = self.link((self.producer,"result"),(self.consumer,"source"), pipewidth=5)
print "PIPEWIDTH", L.setSynchronous()
print "PIPEWIDTH", L.setSynchronous(4)
L.setShowTransit(True, "Producer, Consumer")
#source_component --> self.consumer
#sink_component --> self
#sourcebox --> result
#sinkbox --> _input
#postoffice --> self.postoffice
#i.e. create a linkage between the consumer (in consumer.py) and
#ourselves. The output from the consumer will be used by us. The
#sourcebox (outbox) would be "result" and the sinkbox (inbox) would
#be our box "_input"
#this is the arrow no. 2
self.link((self.consumer,"result"),(self,"_input"))
def childComponents(self):
return [self.producer,self.consumer]
def main(self):
yield newComponent(*self.childComponents())
while not self.dataReady("_input"):
yield 1
result = self.recv("_input")
print "consumer finished with result: ", result , "!"
testComponent().run()

/HBT_IP_Test-1.0.1-py3-none-any.whl/HBT_IP_Test/libs/isom/python/Macros_pb2.py

import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf.internal import enum_type_wrapper
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
import IsomStdDef_pb2 as IsomStdDef__pb2
import Isom_pb2 as Isom__pb2
import IsomEvents_pb2 as IsomEvents__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='Macros.proto',
package='Honeywell.Security.ISOM.Macros',
syntax='proto2',
serialized_options=None,
serialized_pb=_b('\n\x0cMacros.proto\x12\x1eHoneywell.Security.ISOM.Macros\x1a\x10IsomStdDef.proto\x1a\nIsom.proto\x1a\x10IsomEvents.proto\"Y\n\x0fMacroOperations\x12<\n\tresources\x18\x0b \x03(\x0e\x32).Honeywell.Security.ISOM.Macros.Resources*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"a\n\x17MacroSupportedRelations\x12<\n\trelations\x18\x0b \x03(\x0e\x32).Honeywell.Security.ISOM.Macros.Relations*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"O\n\x0bMacroEvents\x12\x36\n\x06\x65vents\x18\x0b \x03(\x0e\x32&.Honeywell.Security.ISOM.Macros.Events*\x08\x08\xc0\x84=\x10\xe0\x91\x43\".\n\x13MacroTriggerPattern\x12\r\n\x05\x64\x65lay\x18\x0b \x01(\x04*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"\xab\x01\n\x11MacroTriggerState\x12\n\n\x02ID\x18\x0b \x01(\t\x12?\n\x05state\x18\x0c \x01(\x0e\x32\x30.Honeywell.Security.ISOM.Macros.MacroTriggerType\x12?\n\ractiveTrigger\x18\x0e \x03(\x0b\x32\".Honeywell.Security.ISOM.IsomEventB\x04\x90\xb5\x18\x0c*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"j\n\x15MacroTriggerStateList\x12G\n\x0ctriggerState\x18\x0b \x03(\x0b\x32\x31.Honeywell.Security.ISOM.Macros.MacroTriggerState*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"\xad\x01\n\nMacroState\x12\n\n\x02id\x18\x0b \x01(\t\x12@\n\x04mode\x18\x0c \x01(\x0e\x32\x32.Honeywell.Security.ISOM.Macros.MacroExecutionMode\x12G\n\x0ctriggerState\x18\r \x01(\x0b\x32\x31.Honeywell.Security.ISOM.Macros.MacroTriggerState*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"U\n\x0eMacroStateList\x12\x39\n\x05state\x18\x0b \x03(\x0b\x32*.Honeywell.Security.ISOM.Macros.MacroState*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"Y\n\x10MacroIdentifiers\x12\n\n\x02id\x18\x0b \x01(\t\x12\x0c\n\x04guid\x18\x0c \x01(\t\x12\x0c\n\x04name\x18\r \x01(\t\x12\x13\n\x0b\x64\x65scription\x18\x0e \x01(\t*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"g\n\x14MacroIdentifiersList\x12\x45\n\x0bidentifiers\x18\x0b \x03(\x0b\x32\x30.Honeywell.Security.ISOM.Macros.MacroIdentifiers*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"p\n\rMacroRelation\x12\n\n\x02id\x18\x0b \x01(\t\x12\x37\n\x04name\x18\x0c \x01(\x0e\x32).Honeywell.Security.ISOM.Macros.Relations\x12\x10\n\x08\x65ntityId\x18\x0e \x01(\t*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"T\n\x11MacroRelationList\x12?\n\x08relation\x18\x0b \x03(\x0b\x32-.Honeywell.Security.ISOM.Macros.MacroRelation\"F\n\nRuleAction\x12.\n\x07\x65lement\x18\x0b \x03(\x0b\x32\x1d.Honeywell.Security.ISOM.Task*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"\xb2\x02\n\x19RuleTriggerElementOperand\x12;\n\nentityType\x18\x19 \x01(\x0e\x32\'.Honeywell.Security.ISOM.IsomEntityType\x12\x10\n\x08\x65ntityId\x18\x1a \x03(\t\x12\x12\n\x04type\x18\x1b \x01(\x04\x42\x04\x90\xb5\x18\x15\x12]\n\x15\x61\x64\x64itionalTriggerInfo\x18\x1c \x01(\x0b\x32>.Honeywell.Security.ISOM.EventStreams.IsomEventDetailExtension\x12I\n\x13relationalOperation\x18\x1d \x01(\x0e\x32,.Honeywell.Security.ISOM.RelationalOperation*\x08\x08\xc0\x84=\x10\xe0\x91\x43\";\n\x0cVoiceCommand\x12\x0e\n\x06phrase\x18\x0b \x01(\t\x12\x1b\n\x13minRecognitionScore\x18\x0c \x01(\t\"\x92\x02\n\x12RuleTriggerElement\x12\n\n\x02id\x18\x0b \x01(\t\x12<\n\toperation\x18\x0c \x01(\x0e\x32).Honeywell.Security.ISOM.LogicalOperation\x12J\n\x07operand\x18\r \x03(\x0b\x32\x39.Honeywell.Security.ISOM.Macros.RuleTriggerElementOperand\x12\x1c\n\x14ruleTriggerElementId\x18\x0e \x03(\t\x12>\n\x08voiceCmd\x18\x0f \x01(\x0b\x32,.Honeywell.Security.ISOM.Macros.VoiceCommand*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"\x9f\x01\n\x0bRuleTrigger\x12\x43\n\x07\x65lement\x18\x0b \x03(\x0b\x32\x32.Honeywell.Security.ISOM.Macros.RuleTriggerElement\x12\x41\n\x0c\x63redentialId\x18\x15 
\x01(\x0b\x32+.Honeywell.Security.ISOM.IsomEntityInstance*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"\xc2\x01\n\nRuleConfig\x12<\n\x07trigger\x18\x15 \x01(\x0b\x32+.Honeywell.Security.ISOM.Macros.RuleTrigger\x12:\n\x05\x64\x65lay\x18\x17 \x01(\x0b\x32%.Honeywell.Security.ISOM.IsomDurationB\x04\x90\xb5\x18\x11\x12:\n\x06\x61\x63tion\x18\x16 \x03(\x0b\x32*.Honeywell.Security.ISOM.Macros.RuleAction\"\xf8\x02\n\x0bMacroConfig\x12\x45\n\x0bidentifiers\x18\x0b \x01(\x0b\x32\x30.Honeywell.Security.ISOM.Macros.MacroIdentifiers\x12?\n\x08relation\x18\n \x03(\x0b\x32-.Honeywell.Security.ISOM.Macros.MacroRelation\x12I\n\rexecutionMode\x18\x14 \x01(\x0e\x32\x32.Honeywell.Security.ISOM.Macros.MacroExecutionMode\x12>\n\nruleConfig\x18\x15 \x03(\x0b\x32*.Honeywell.Security.ISOM.Macros.RuleConfig\x12\x13\n\x0binstruction\x18\x16 \x03(\t\x12\x37\n\x04type\x18\x17 \x01(\x0b\x32#.Honeywell.Security.ISOM.IsomStringB\x04\x90\xb5\x18\x13*\x08\x08\xa0\xf7\x36\x10\xe0\x91\x43\"X\n\x0fMacroConfigList\x12;\n\x06\x63onfig\x18\x0b \x03(\x0b\x32+.Honeywell.Security.ISOM.Macros.MacroConfig*\x08\x08\xc0\x84=\x10\xe0\x91\x43\"\x85\x01\n\x0bMacroEntity\x12;\n\x06\x63onfig\x18\x0b \x01(\x0b\x32+.Honeywell.Security.ISOM.Macros.MacroConfig\x12\x39\n\x05state\x18\x0c \x01(\x0b\x32*.Honeywell.Security.ISOM.Macros.MacroState\"N\n\x0fMacroEntityList\x12;\n\x06\x65ntity\x18\x0b \x03(\x0b\x32+.Honeywell.Security.ISOM.Macros.MacroEntity*\xdd\x02\n\tResources\x12\x18\n\x13supportedOperations\x10\xf2\x07\x12\x17\n\x12supportedRelations\x10\xf3\x07\x12\x14\n\x0fsupportedEvents\x10\xf4\x07\x12\x1a\n\x15supportedCapabilities\x10\xf5\x07\x12\x0f\n\nfullEntity\x10\xc2N\x12\t\n\x04info\x10\xc3N\x12\x0b\n\x06\x63onfig\x10\xd7N\x12\x10\n\x0bidentifiers\x10\xebN\x12\x0e\n\trelations\x10\xffN\x12\n\n\x05state\x10\xd8O\x12\t\n\x04mode\x10\x84R\x12\x10\n\x0bmode_s_auto\x10\x85R\x12\x12\n\rmode_s_manual\x10\x86R\x12\x13\n\x0emode_s_disable\x10\x87R\x12\x0c\n\x07trigger\x10\xe8R\x12\x14\n\x0ftrigger_s_start\x10\xe9R\x12\x13\n\x0etrigger_s_stop\x10\xeaR\x12\x15\n\rMax_Resources\x10\x80\x80\x80\x80\x04*\xb8\x02\n\tRelations\x12\x1c\n\x17MacroAssignedCredential\x10\xd3\x0f\x12\x18\n\x13MacroAssignedDevice\x10\xd4\x0f\x12\x1d\n\x18MacroTriggeredBySchedule\x10\xd5\x0f\x12\x1b\n\x16MacroAssignedToAccount\x10\xd6\x0f\x12\x18\n\x13MacroAssignedToSite\x10\xd7\x0f\x12!\n\x1cMacroAssociatesRecordProfile\x10\xd8\x0f\x12#\n\x1eMacroTriggersRecordingOnStream\x10\xd9\x0f\x12\x1c\n\x17MacroControlsThermostat\x10\xda\x0f\x12 \n\x1bMacroAssignedToPMCollection\x10\xdb\x0f\x12\x15\n\rMax_Relations\x10\x80\x80\x80\x80\x04*w\n\x06\x45vents\x12\x11\n\x0c\x63onfig_p_add\x10\x9aN\x12\x14\n\x0f\x63onfig_p_modify\x10\x9bN\x12\x14\n\x0f\x63onfig_p_delete\x10\x9cN\x12\x1a\n\x15triggerState_p_active\x10\xe9R\x12\x12\n\nMax_Events\x10\x80\x80\x80\x80\x04*U\n\x10MacroTriggerType\x12\t\n\x05start\x10\x0b\x12\x08\n\x04stop\x10\x0c\x12\x0e\n\nstartDelay\x10\r\x12\x1c\n\x14Max_MacroTriggerType\x10\x80\x80\x80\x80\x04*j\n\x12MacroExecutionMode\x12\x1b\n\x17MacroExecutionMode_auto\x10\x0b\x12\n\n\x06manual\x10\x0c\x12\x0b\n\x07\x64isable\x10\r\x12\x1e\n\x16Max_MacroExecutionMode\x10\x80\x80\x80\x80\x04')
,
dependencies=[IsomStdDef__pb2.DESCRIPTOR,Isom__pb2.DESCRIPTOR,IsomEvents__pb2.DESCRIPTOR,])
_RESOURCES = _descriptor.EnumDescriptor(
name='Resources',
full_name='Honeywell.Security.ISOM.Macros.Resources',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='supportedOperations', index=0, number=1010,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='supportedRelations', index=1, number=1011,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='supportedEvents', index=2, number=1012,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='supportedCapabilities', index=3, number=1013,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='fullEntity', index=4, number=10050,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='info', index=5, number=10051,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='config', index=6, number=10071,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='identifiers', index=7, number=10091,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='relations', index=8, number=10111,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='state', index=9, number=10200,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='mode', index=10, number=10500,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='mode_s_auto', index=11, number=10501,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='mode_s_manual', index=12, number=10502,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='mode_s_disable', index=13, number=10503,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='trigger', index=14, number=10600,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='trigger_s_start', index=15, number=10601,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='trigger_s_stop', index=16, number=10602,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Max_Resources', index=17, number=1073741824,
serialized_options=None,
type=None),
],
containing_type=None,
serialized_options=None,
serialized_start=3120,
serialized_end=3469,
)
_sym_db.RegisterEnumDescriptor(_RESOURCES)
Resources = enum_type_wrapper.EnumTypeWrapper(_RESOURCES)
_RELATIONS = _descriptor.EnumDescriptor(
name='Relations',
full_name='Honeywell.Security.ISOM.Macros.Relations',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='MacroAssignedCredential', index=0, number=2003,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MacroAssignedDevice', index=1, number=2004,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MacroTriggeredBySchedule', index=2, number=2005,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MacroAssignedToAccount', index=3, number=2006,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MacroAssignedToSite', index=4, number=2007,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MacroAssociatesRecordProfile', index=5, number=2008,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MacroTriggersRecordingOnStream', index=6, number=2009,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MacroControlsThermostat', index=7, number=2010,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='MacroAssignedToPMCollection', index=8, number=2011,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Max_Relations', index=9, number=1073741824,
serialized_options=None,
type=None),
],
containing_type=None,
serialized_options=None,
serialized_start=3472,
serialized_end=3784,
)
_sym_db.RegisterEnumDescriptor(_RELATIONS)
Relations = enum_type_wrapper.EnumTypeWrapper(_RELATIONS)
_EVENTS = _descriptor.EnumDescriptor(
name='Events',
full_name='Honeywell.Security.ISOM.Macros.Events',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='config_p_add', index=0, number=10010,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='config_p_modify', index=1, number=10011,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='config_p_delete', index=2, number=10012,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='triggerState_p_active', index=3, number=10601,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Max_Events', index=4, number=1073741824,
serialized_options=None,
type=None),
],
containing_type=None,
serialized_options=None,
serialized_start=3786,
serialized_end=3905,
)
_sym_db.RegisterEnumDescriptor(_EVENTS)
Events = enum_type_wrapper.EnumTypeWrapper(_EVENTS)
_MACROTRIGGERTYPE = _descriptor.EnumDescriptor(
name='MacroTriggerType',
full_name='Honeywell.Security.ISOM.Macros.MacroTriggerType',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='start', index=0, number=11,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='stop', index=1, number=12,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='startDelay', index=2, number=13,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Max_MacroTriggerType', index=3, number=1073741824,
serialized_options=None,
type=None),
],
containing_type=None,
serialized_options=None,
serialized_start=3907,
serialized_end=3992,
)
_sym_db.RegisterEnumDescriptor(_MACROTRIGGERTYPE)
MacroTriggerType = enum_type_wrapper.EnumTypeWrapper(_MACROTRIGGERTYPE)
_MACROEXECUTIONMODE = _descriptor.EnumDescriptor(
name='MacroExecutionMode',
full_name='Honeywell.Security.ISOM.Macros.MacroExecutionMode',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='MacroExecutionMode_auto', index=0, number=11,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='manual', index=1, number=12,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='disable', index=2, number=13,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='Max_MacroExecutionMode', index=3, number=1073741824,
serialized_options=None,
type=None),
],
containing_type=None,
serialized_options=None,
serialized_start=3994,
serialized_end=4100,
)
_sym_db.RegisterEnumDescriptor(_MACROEXECUTIONMODE)
MacroExecutionMode = enum_type_wrapper.EnumTypeWrapper(_MACROEXECUTIONMODE)
supportedOperations = 1010
supportedRelations = 1011
supportedEvents = 1012
supportedCapabilities = 1013
fullEntity = 10050
info = 10051
config = 10071
identifiers = 10091
relations = 10111
state = 10200
mode = 10500
mode_s_auto = 10501
mode_s_manual = 10502
mode_s_disable = 10503
trigger = 10600
trigger_s_start = 10601
trigger_s_stop = 10602
Max_Resources = 1073741824
MacroAssignedCredential = 2003
MacroAssignedDevice = 2004
MacroTriggeredBySchedule = 2005
MacroAssignedToAccount = 2006
MacroAssignedToSite = 2007
MacroAssociatesRecordProfile = 2008
MacroTriggersRecordingOnStream = 2009
MacroControlsThermostat = 2010
MacroAssignedToPMCollection = 2011
Max_Relations = 1073741824
config_p_add = 10010
config_p_modify = 10011
config_p_delete = 10012
triggerState_p_active = 10601
Max_Events = 1073741824
start = 11
stop = 12
startDelay = 13
Max_MacroTriggerType = 1073741824
MacroExecutionMode_auto = 11
manual = 12
disable = 13
Max_MacroExecutionMode = 1073741824
_MACROOPERATIONS = _descriptor.Descriptor(
name='MacroOperations',
full_name='Honeywell.Security.ISOM.Macros.MacroOperations',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='resources', full_name='Honeywell.Security.ISOM.Macros.MacroOperations.resources', index=0,
number=11, type=14, cpp_type=8, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=96,
serialized_end=185,
)
_MACROSUPPORTEDRELATIONS = _descriptor.Descriptor(
name='MacroSupportedRelations',
full_name='Honeywell.Security.ISOM.Macros.MacroSupportedRelations',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='relations', full_name='Honeywell.Security.ISOM.Macros.MacroSupportedRelations.relations', index=0,
number=11, type=14, cpp_type=8, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=187,
serialized_end=284,
)
_MACROEVENTS = _descriptor.Descriptor(
name='MacroEvents',
full_name='Honeywell.Security.ISOM.Macros.MacroEvents',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='events', full_name='Honeywell.Security.ISOM.Macros.MacroEvents.events', index=0,
number=11, type=14, cpp_type=8, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=286,
serialized_end=365,
)
_MACROTRIGGERPATTERN = _descriptor.Descriptor(
name='MacroTriggerPattern',
full_name='Honeywell.Security.ISOM.Macros.MacroTriggerPattern',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='delay', full_name='Honeywell.Security.ISOM.Macros.MacroTriggerPattern.delay', index=0,
number=11, type=4, cpp_type=4, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=367,
serialized_end=413,
)
_MACROTRIGGERSTATE = _descriptor.Descriptor(
name='MacroTriggerState',
full_name='Honeywell.Security.ISOM.Macros.MacroTriggerState',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='ID', full_name='Honeywell.Security.ISOM.Macros.MacroTriggerState.ID', index=0,
number=11, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='state', full_name='Honeywell.Security.ISOM.Macros.MacroTriggerState.state', index=1,
number=12, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=11,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='activeTrigger', full_name='Honeywell.Security.ISOM.Macros.MacroTriggerState.activeTrigger', index=2,
number=14, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\220\265\030\014'), file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=416,
serialized_end=587,
)
_MACROTRIGGERSTATELIST = _descriptor.Descriptor(
name='MacroTriggerStateList',
full_name='Honeywell.Security.ISOM.Macros.MacroTriggerStateList',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='triggerState', full_name='Honeywell.Security.ISOM.Macros.MacroTriggerStateList.triggerState', index=0,
number=11, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=589,
serialized_end=695,
)
_MACROSTATE = _descriptor.Descriptor(
name='MacroState',
full_name='Honeywell.Security.ISOM.Macros.MacroState',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='id', full_name='Honeywell.Security.ISOM.Macros.MacroState.id', index=0,
number=11, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='mode', full_name='Honeywell.Security.ISOM.Macros.MacroState.mode', index=1,
number=12, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=11,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='triggerState', full_name='Honeywell.Security.ISOM.Macros.MacroState.triggerState', index=2,
number=13, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=698,
serialized_end=871,
)
_MACROSTATELIST = _descriptor.Descriptor(
name='MacroStateList',
full_name='Honeywell.Security.ISOM.Macros.MacroStateList',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='state', full_name='Honeywell.Security.ISOM.Macros.MacroStateList.state', index=0,
number=11, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=873,
serialized_end=958,
)
_MACROIDENTIFIERS = _descriptor.Descriptor(
name='MacroIdentifiers',
full_name='Honeywell.Security.ISOM.Macros.MacroIdentifiers',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='id', full_name='Honeywell.Security.ISOM.Macros.MacroIdentifiers.id', index=0,
number=11, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='guid', full_name='Honeywell.Security.ISOM.Macros.MacroIdentifiers.guid', index=1,
number=12, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='name', full_name='Honeywell.Security.ISOM.Macros.MacroIdentifiers.name', index=2,
number=13, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='description', full_name='Honeywell.Security.ISOM.Macros.MacroIdentifiers.description', index=3,
number=14, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=960,
serialized_end=1049,
)
_MACROIDENTIFIERSLIST = _descriptor.Descriptor(
name='MacroIdentifiersList',
full_name='Honeywell.Security.ISOM.Macros.MacroIdentifiersList',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='identifiers', full_name='Honeywell.Security.ISOM.Macros.MacroIdentifiersList.identifiers', index=0,
number=11, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=1051,
serialized_end=1154,
)
_MACRORELATION = _descriptor.Descriptor(
name='MacroRelation',
full_name='Honeywell.Security.ISOM.Macros.MacroRelation',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='id', full_name='Honeywell.Security.ISOM.Macros.MacroRelation.id', index=0,
number=11, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='name', full_name='Honeywell.Security.ISOM.Macros.MacroRelation.name', index=1,
number=12, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=2003,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='entityId', full_name='Honeywell.Security.ISOM.Macros.MacroRelation.entityId', index=2,
number=14, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=1156,
serialized_end=1268,
)
_MACRORELATIONLIST = _descriptor.Descriptor(
name='MacroRelationList',
full_name='Honeywell.Security.ISOM.Macros.MacroRelationList',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='relation', full_name='Honeywell.Security.ISOM.Macros.MacroRelationList.relation', index=0,
number=11, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto2',
extension_ranges=[],
oneofs=[
],
serialized_start=1270,
serialized_end=1354,
)
_RULEACTION = _descriptor.Descriptor(
name='RuleAction',
full_name='Honeywell.Security.ISOM.Macros.RuleAction',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='element', full_name='Honeywell.Security.ISOM.Macros.RuleAction.element', index=0,
number=11, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=1356,
serialized_end=1426,
)
_RULETRIGGERELEMENTOPERAND = _descriptor.Descriptor(
name='RuleTriggerElementOperand',
full_name='Honeywell.Security.ISOM.Macros.RuleTriggerElementOperand',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='entityType', full_name='Honeywell.Security.ISOM.Macros.RuleTriggerElementOperand.entityType', index=0,
number=25, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=1,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='entityId', full_name='Honeywell.Security.ISOM.Macros.RuleTriggerElementOperand.entityId', index=1,
number=26, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='type', full_name='Honeywell.Security.ISOM.Macros.RuleTriggerElementOperand.type', index=2,
number=27, type=4, cpp_type=4, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\220\265\030\025'), file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='additionalTriggerInfo', full_name='Honeywell.Security.ISOM.Macros.RuleTriggerElementOperand.additionalTriggerInfo', index=3,
number=28, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='relationalOperation', full_name='Honeywell.Security.ISOM.Macros.RuleTriggerElementOperand.relationalOperation', index=4,
number=29, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=11,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=1429,
serialized_end=1735,
)
_VOICECOMMAND = _descriptor.Descriptor(
name='VoiceCommand',
full_name='Honeywell.Security.ISOM.Macros.VoiceCommand',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='phrase', full_name='Honeywell.Security.ISOM.Macros.VoiceCommand.phrase', index=0,
number=11, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='minRecognitionScore', full_name='Honeywell.Security.ISOM.Macros.VoiceCommand.minRecognitionScore', index=1,
number=12, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto2',
extension_ranges=[],
oneofs=[
],
serialized_start=1737,
serialized_end=1796,
)
_RULETRIGGERELEMENT = _descriptor.Descriptor(
name='RuleTriggerElement',
full_name='Honeywell.Security.ISOM.Macros.RuleTriggerElement',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='id', full_name='Honeywell.Security.ISOM.Macros.RuleTriggerElement.id', index=0,
number=11, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='operation', full_name='Honeywell.Security.ISOM.Macros.RuleTriggerElement.operation', index=1,
number=12, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=11,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='operand', full_name='Honeywell.Security.ISOM.Macros.RuleTriggerElement.operand', index=2,
number=13, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='ruleTriggerElementId', full_name='Honeywell.Security.ISOM.Macros.RuleTriggerElement.ruleTriggerElementId', index=3,
number=14, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='voiceCmd', full_name='Honeywell.Security.ISOM.Macros.RuleTriggerElement.voiceCmd', index=4,
number=15, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=1799,
serialized_end=2073,
)
_RULETRIGGER = _descriptor.Descriptor(
name='RuleTrigger',
full_name='Honeywell.Security.ISOM.Macros.RuleTrigger',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='element', full_name='Honeywell.Security.ISOM.Macros.RuleTrigger.element', index=0,
number=11, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='credentialId', full_name='Honeywell.Security.ISOM.Macros.RuleTrigger.credentialId', index=1,
number=21, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=2076,
serialized_end=2235,
)
_RULECONFIG = _descriptor.Descriptor(
name='RuleConfig',
full_name='Honeywell.Security.ISOM.Macros.RuleConfig',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='trigger', full_name='Honeywell.Security.ISOM.Macros.RuleConfig.trigger', index=0,
number=21, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='delay', full_name='Honeywell.Security.ISOM.Macros.RuleConfig.delay', index=1,
number=23, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\220\265\030\021'), file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='action', full_name='Honeywell.Security.ISOM.Macros.RuleConfig.action', index=2,
number=22, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto2',
extension_ranges=[],
oneofs=[
],
serialized_start=2238,
serialized_end=2432,
)
_MACROCONFIG = _descriptor.Descriptor(
name='MacroConfig',
full_name='Honeywell.Security.ISOM.Macros.MacroConfig',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='identifiers', full_name='Honeywell.Security.ISOM.Macros.MacroConfig.identifiers', index=0,
number=11, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='relation', full_name='Honeywell.Security.ISOM.Macros.MacroConfig.relation', index=1,
number=10, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='executionMode', full_name='Honeywell.Security.ISOM.Macros.MacroConfig.executionMode', index=2,
number=20, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=11,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='ruleConfig', full_name='Honeywell.Security.ISOM.Macros.MacroConfig.ruleConfig', index=3,
number=21, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='instruction', full_name='Honeywell.Security.ISOM.Macros.MacroConfig.instruction', index=4,
number=22, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='type', full_name='Honeywell.Security.ISOM.Macros.MacroConfig.type', index=5,
number=23, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=_b('\220\265\030\023'), file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(900000, 1100000), ],
oneofs=[
],
serialized_start=2435,
serialized_end=2811,
)
_MACROCONFIGLIST = _descriptor.Descriptor(
name='MacroConfigList',
full_name='Honeywell.Security.ISOM.Macros.MacroConfigList',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='config', full_name='Honeywell.Security.ISOM.Macros.MacroConfigList.config', index=0,
number=11, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=True,
syntax='proto2',
extension_ranges=[(1000000, 1100000), ],
oneofs=[
],
serialized_start=2813,
serialized_end=2901,
)
_MACROENTITY = _descriptor.Descriptor(
name='MacroEntity',
full_name='Honeywell.Security.ISOM.Macros.MacroEntity',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='config', full_name='Honeywell.Security.ISOM.Macros.MacroEntity.config', index=0,
number=11, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='state', full_name='Honeywell.Security.ISOM.Macros.MacroEntity.state', index=1,
number=12, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto2',
extension_ranges=[],
oneofs=[
],
serialized_start=2904,
serialized_end=3037,
)
_MACROENTITYLIST = _descriptor.Descriptor(
name='MacroEntityList',
full_name='Honeywell.Security.ISOM.Macros.MacroEntityList',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='entity', full_name='Honeywell.Security.ISOM.Macros.MacroEntityList.entity', index=0,
number=11, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto2',
extension_ranges=[],
oneofs=[
],
serialized_start=3039,
serialized_end=3117,
)
_MACROOPERATIONS.fields_by_name['resources'].enum_type = _RESOURCES
_MACROSUPPORTEDRELATIONS.fields_by_name['relations'].enum_type = _RELATIONS
_MACROEVENTS.fields_by_name['events'].enum_type = _EVENTS
_MACROTRIGGERSTATE.fields_by_name['state'].enum_type = _MACROTRIGGERTYPE
_MACROTRIGGERSTATE.fields_by_name['activeTrigger'].message_type = IsomStdDef__pb2._ISOMEVENT
_MACROTRIGGERSTATELIST.fields_by_name['triggerState'].message_type = _MACROTRIGGERSTATE
_MACROSTATE.fields_by_name['mode'].enum_type = _MACROEXECUTIONMODE
_MACROSTATE.fields_by_name['triggerState'].message_type = _MACROTRIGGERSTATE
_MACROSTATELIST.fields_by_name['state'].message_type = _MACROSTATE
_MACROIDENTIFIERSLIST.fields_by_name['identifiers'].message_type = _MACROIDENTIFIERS
_MACRORELATION.fields_by_name['name'].enum_type = _RELATIONS
_MACRORELATIONLIST.fields_by_name['relation'].message_type = _MACRORELATION
_RULEACTION.fields_by_name['element'].message_type = IsomStdDef__pb2._TASK
_RULETRIGGERELEMENTOPERAND.fields_by_name['entityType'].enum_type = Isom__pb2._ISOMENTITYTYPE
_RULETRIGGERELEMENTOPERAND.fields_by_name['additionalTriggerInfo'].message_type = IsomEvents__pb2._ISOMEVENTDETAILEXTENSION
_RULETRIGGERELEMENTOPERAND.fields_by_name['relationalOperation'].enum_type = IsomStdDef__pb2._RELATIONALOPERATION
_RULETRIGGERELEMENT.fields_by_name['operation'].enum_type = IsomStdDef__pb2._LOGICALOPERATION
_RULETRIGGERELEMENT.fields_by_name['operand'].message_type = _RULETRIGGERELEMENTOPERAND
_RULETRIGGERELEMENT.fields_by_name['voiceCmd'].message_type = _VOICECOMMAND
_RULETRIGGER.fields_by_name['element'].message_type = _RULETRIGGERELEMENT
_RULETRIGGER.fields_by_name['credentialId'].message_type = IsomStdDef__pb2._ISOMENTITYINSTANCE
_RULECONFIG.fields_by_name['trigger'].message_type = _RULETRIGGER
_RULECONFIG.fields_by_name['delay'].message_type = IsomStdDef__pb2._ISOMDURATION
_RULECONFIG.fields_by_name['action'].message_type = _RULEACTION
_MACROCONFIG.fields_by_name['identifiers'].message_type = _MACROIDENTIFIERS
_MACROCONFIG.fields_by_name['relation'].message_type = _MACRORELATION
_MACROCONFIG.fields_by_name['executionMode'].enum_type = _MACROEXECUTIONMODE
_MACROCONFIG.fields_by_name['ruleConfig'].message_type = _RULECONFIG
_MACROCONFIG.fields_by_name['type'].message_type = IsomStdDef__pb2._ISOMSTRING
_MACROCONFIGLIST.fields_by_name['config'].message_type = _MACROCONFIG
_MACROENTITY.fields_by_name['config'].message_type = _MACROCONFIG
_MACROENTITY.fields_by_name['state'].message_type = _MACROSTATE
_MACROENTITYLIST.fields_by_name['entity'].message_type = _MACROENTITY
DESCRIPTOR.message_types_by_name['MacroOperations'] = _MACROOPERATIONS
DESCRIPTOR.message_types_by_name['MacroSupportedRelations'] = _MACROSUPPORTEDRELATIONS
DESCRIPTOR.message_types_by_name['MacroEvents'] = _MACROEVENTS
DESCRIPTOR.message_types_by_name['MacroTriggerPattern'] = _MACROTRIGGERPATTERN
DESCRIPTOR.message_types_by_name['MacroTriggerState'] = _MACROTRIGGERSTATE
DESCRIPTOR.message_types_by_name['MacroTriggerStateList'] = _MACROTRIGGERSTATELIST
DESCRIPTOR.message_types_by_name['MacroState'] = _MACROSTATE
DESCRIPTOR.message_types_by_name['MacroStateList'] = _MACROSTATELIST
DESCRIPTOR.message_types_by_name['MacroIdentifiers'] = _MACROIDENTIFIERS
DESCRIPTOR.message_types_by_name['MacroIdentifiersList'] = _MACROIDENTIFIERSLIST
DESCRIPTOR.message_types_by_name['MacroRelation'] = _MACRORELATION
DESCRIPTOR.message_types_by_name['MacroRelationList'] = _MACRORELATIONLIST
DESCRIPTOR.message_types_by_name['RuleAction'] = _RULEACTION
DESCRIPTOR.message_types_by_name['RuleTriggerElementOperand'] = _RULETRIGGERELEMENTOPERAND
DESCRIPTOR.message_types_by_name['VoiceCommand'] = _VOICECOMMAND
DESCRIPTOR.message_types_by_name['RuleTriggerElement'] = _RULETRIGGERELEMENT
DESCRIPTOR.message_types_by_name['RuleTrigger'] = _RULETRIGGER
DESCRIPTOR.message_types_by_name['RuleConfig'] = _RULECONFIG
DESCRIPTOR.message_types_by_name['MacroConfig'] = _MACROCONFIG
DESCRIPTOR.message_types_by_name['MacroConfigList'] = _MACROCONFIGLIST
DESCRIPTOR.message_types_by_name['MacroEntity'] = _MACROENTITY
DESCRIPTOR.message_types_by_name['MacroEntityList'] = _MACROENTITYLIST
DESCRIPTOR.enum_types_by_name['Resources'] = _RESOURCES
DESCRIPTOR.enum_types_by_name['Relations'] = _RELATIONS
DESCRIPTOR.enum_types_by_name['Events'] = _EVENTS
DESCRIPTOR.enum_types_by_name['MacroTriggerType'] = _MACROTRIGGERTYPE
DESCRIPTOR.enum_types_by_name['MacroExecutionMode'] = _MACROEXECUTIONMODE
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
MacroOperations = _reflection.GeneratedProtocolMessageType('MacroOperations', (_message.Message,), {
'DESCRIPTOR' : _MACROOPERATIONS,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.MacroOperations)
})
_sym_db.RegisterMessage(MacroOperations)
MacroSupportedRelations = _reflection.GeneratedProtocolMessageType('MacroSupportedRelations', (_message.Message,), {
'DESCRIPTOR' : _MACROSUPPORTEDRELATIONS,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.MacroSupportedRelations)
})
_sym_db.RegisterMessage(MacroSupportedRelations)
MacroEvents = _reflection.GeneratedProtocolMessageType('MacroEvents', (_message.Message,), {
'DESCRIPTOR' : _MACROEVENTS,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.MacroEvents)
})
_sym_db.RegisterMessage(MacroEvents)
MacroTriggerPattern = _reflection.GeneratedProtocolMessageType('MacroTriggerPattern', (_message.Message,), {
'DESCRIPTOR' : _MACROTRIGGERPATTERN,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.MacroTriggerPattern)
})
_sym_db.RegisterMessage(MacroTriggerPattern)
MacroTriggerState = _reflection.GeneratedProtocolMessageType('MacroTriggerState', (_message.Message,), {
'DESCRIPTOR' : _MACROTRIGGERSTATE,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.MacroTriggerState)
})
_sym_db.RegisterMessage(MacroTriggerState)
MacroTriggerStateList = _reflection.GeneratedProtocolMessageType('MacroTriggerStateList', (_message.Message,), {
'DESCRIPTOR' : _MACROTRIGGERSTATELIST,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.MacroTriggerStateList)
})
_sym_db.RegisterMessage(MacroTriggerStateList)
MacroState = _reflection.GeneratedProtocolMessageType('MacroState', (_message.Message,), {
'DESCRIPTOR' : _MACROSTATE,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.MacroState)
})
_sym_db.RegisterMessage(MacroState)
MacroStateList = _reflection.GeneratedProtocolMessageType('MacroStateList', (_message.Message,), {
'DESCRIPTOR' : _MACROSTATELIST,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.MacroStateList)
})
_sym_db.RegisterMessage(MacroStateList)
MacroIdentifiers = _reflection.GeneratedProtocolMessageType('MacroIdentifiers', (_message.Message,), {
'DESCRIPTOR' : _MACROIDENTIFIERS,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.MacroIdentifiers)
})
_sym_db.RegisterMessage(MacroIdentifiers)
MacroIdentifiersList = _reflection.GeneratedProtocolMessageType('MacroIdentifiersList', (_message.Message,), {
'DESCRIPTOR' : _MACROIDENTIFIERSLIST,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.MacroIdentifiersList)
})
_sym_db.RegisterMessage(MacroIdentifiersList)
MacroRelation = _reflection.GeneratedProtocolMessageType('MacroRelation', (_message.Message,), {
'DESCRIPTOR' : _MACRORELATION,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.MacroRelation)
})
_sym_db.RegisterMessage(MacroRelation)
MacroRelationList = _reflection.GeneratedProtocolMessageType('MacroRelationList', (_message.Message,), {
'DESCRIPTOR' : _MACRORELATIONLIST,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.MacroRelationList)
})
_sym_db.RegisterMessage(MacroRelationList)
RuleAction = _reflection.GeneratedProtocolMessageType('RuleAction', (_message.Message,), {
'DESCRIPTOR' : _RULEACTION,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.RuleAction)
})
_sym_db.RegisterMessage(RuleAction)
RuleTriggerElementOperand = _reflection.GeneratedProtocolMessageType('RuleTriggerElementOperand', (_message.Message,), {
'DESCRIPTOR' : _RULETRIGGERELEMENTOPERAND,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.RuleTriggerElementOperand)
})
_sym_db.RegisterMessage(RuleTriggerElementOperand)
VoiceCommand = _reflection.GeneratedProtocolMessageType('VoiceCommand', (_message.Message,), {
'DESCRIPTOR' : _VOICECOMMAND,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.VoiceCommand)
})
_sym_db.RegisterMessage(VoiceCommand)
RuleTriggerElement = _reflection.GeneratedProtocolMessageType('RuleTriggerElement', (_message.Message,), {
'DESCRIPTOR' : _RULETRIGGERELEMENT,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.RuleTriggerElement)
})
_sym_db.RegisterMessage(RuleTriggerElement)
RuleTrigger = _reflection.GeneratedProtocolMessageType('RuleTrigger', (_message.Message,), {
'DESCRIPTOR' : _RULETRIGGER,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.RuleTrigger)
})
_sym_db.RegisterMessage(RuleTrigger)
RuleConfig = _reflection.GeneratedProtocolMessageType('RuleConfig', (_message.Message,), {
'DESCRIPTOR' : _RULECONFIG,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.RuleConfig)
})
_sym_db.RegisterMessage(RuleConfig)
MacroConfig = _reflection.GeneratedProtocolMessageType('MacroConfig', (_message.Message,), {
'DESCRIPTOR' : _MACROCONFIG,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.MacroConfig)
})
_sym_db.RegisterMessage(MacroConfig)
MacroConfigList = _reflection.GeneratedProtocolMessageType('MacroConfigList', (_message.Message,), {
'DESCRIPTOR' : _MACROCONFIGLIST,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.MacroConfigList)
})
_sym_db.RegisterMessage(MacroConfigList)
MacroEntity = _reflection.GeneratedProtocolMessageType('MacroEntity', (_message.Message,), {
'DESCRIPTOR' : _MACROENTITY,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.MacroEntity)
})
_sym_db.RegisterMessage(MacroEntity)
MacroEntityList = _reflection.GeneratedProtocolMessageType('MacroEntityList', (_message.Message,), {
'DESCRIPTOR' : _MACROENTITYLIST,
'__module__' : 'Macros_pb2'
# @@protoc_insertion_point(class_scope:Honeywell.Security.ISOM.Macros.MacroEntityList)
})
_sym_db.RegisterMessage(MacroEntityList)
_MACROTRIGGERSTATE.fields_by_name['activeTrigger']._options = None
_RULETRIGGERELEMENTOPERAND.fields_by_name['type']._options = None
_RULECONFIG.fields_by_name['delay']._options = None
_MACROCONFIG.fields_by_name['type']._options = None
# @@protoc_insertion_point(module_scope)
# /BBA-0.2.0.tar.gz/BBA-0.2.0/bba/client.py
import requests
import json
from typing import Optional
from .objects import ResponseObject
from .errors import Unauthorized, TooManyRequests, InternalServerError
class Client:
"""Represent the client that make the request.
Parameters
------------
api_key: str
The api key, the unique identifier used to connect to or perform an API call. You can get the key from here: https://api.breadbot.me/login
"""
def __init__(self, api_key: str):
self.api_key = api_key
self.base_url = "https://api.breadbot.me/"
self.calc = self.calculator #alias
def check_error(self, response: requests.models.Response):
code = response.status_code
if code == 401:
raise Unauthorized()
elif code == 429:
raise TooManyRequests()
elif code == 500:
raise InternalServerError()
    def request(self, method: str, version: str, path: str, *, params: dict = None, headers: dict = None):
        url = self.base_url + version + path
        requester = getattr(requests, method.lower(), None)
        if not requester:
            raise TypeError("Wrong method argument passed!")
        # None defaults avoid the mutable-default-argument pitfall; the api-key
        # header is attached unless the caller already supplied one.
        params = params or {}
        headers = headers or {}
        headers.setdefault('api-key', self.api_key)
        response = requester(url, headers=headers, params=params)
        self.check_error(response)
        try:
            return response.json()
        except json.decoder.JSONDecodeError:
            # Non-JSON payloads (e.g. image bytes) are returned as the raw response.
            return response
def calculator(self, expression: str, ans: str = None) -> ResponseObject:
"""ResponseObject: Solve certain math equations.
Parameters
-----------
expression: str:
The math expression, e.g '6+6'
ans: str:
            Fills in the 'Ans' placeholder in the expression, e.g. if the expression is 'Ans+2+1' and you define the ans argument as 1, the expression becomes 1+2+1, which evaluates to 4.
Docs
------
https://api.breadbot.me/#calcPost
"""
params={}
if expression:
params['calc'] = expression
if ans:
params['ans'] = ans
res = self.request(
"POST",
"v1",
"/calc",
params=params
)
return ResponseObject(res)
def get_sentence(self) -> ResponseObject:
"""ResponseObject: Generate a random sentence.
Docs
------
https://docs.api.breadbot.me/reference/api-reference/sentence-generator
"""
res = self.request(
"POST",
"v1",
"/sentence"
)
return ResponseObject(res)
def invert_image(self, url: Optional[str] = None) -> Optional[bytes]:
"""Optional[:class:`bytes`]: Invert an image using the image url.
Parameters
------------
url: Optional[:class:`str`]
The image's url to be invert.
Docs
------
https://docs.api.breadbot.me/reference/api-reference/image-manipulation
"""
if not url:
raise TypeError("Url cannot be None or empty string!")
res = self.request(
"POST",
"v1",
"/image/invert",
params = {
"img": url
}
)
return res.content
def pixelate_image(self, url: Optional[str] = None) -> Optional[bytes]:
"""Optional[:class:`bytes`]: Pixelate an image using the image url.
Parameters
------------
url: Optional[:class:`str`]
The image's url to be pixelate.
Docs
------
https://docs.api.breadbot.me/reference/api-reference/image-manipulation
"""
if not url:
raise TypeError("Url cannot be None or empty string!")
res = self.request(
"POST",
"v1",
"/image/pixelate",
params = {
"img": url
}
)
        return res.content
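# Usage sketch (the API key and image URL below are placeholders, not real values):
#     client = Client(api_key="YOUR-API-KEY")
#     result = client.calculator("6+6")                        # ResponseObject
#     png = client.invert_image("https://example.com/a.png")   # raw image bytes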
# /HyFetch-1.4.10.tar.gz/HyFetch-1.4.10/hyfetch/presets.py
from __future__ import annotations
from typing import Iterable
from .color_util import RGB
from .constants import GLOBAL_CFG
from .types import LightDark, ColorSpacing
def remove_duplicates(seq: Iterable) -> list:
"""
Remove duplicate items from a sequence while preserving the order
"""
seen = set()
seen_add = seen.add
return [x for x in seq if not (x in seen or seen_add(x))]
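# Example: remove_duplicates([1, 2, 1, 3]) -> [1, 2, 3]  (first occurrence wins)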
class ColorProfile:
raw: list[str]
colors: list[RGB]
spacing: ColorSpacing = 'equal'
def __init__(self, colors: list[str] | list[RGB]):
if isinstance(colors[0], str):
self.raw = colors
self.colors = [RGB.from_hex(c) for c in colors]
else:
self.colors = colors
def with_weights(self, weights: list[int]) -> list[RGB]:
"""
Map colors based on weights
:param weights: Weights of each color (weights[i] = how many times color[i] appears)
:return:
"""
return [c for i, w in enumerate(weights) for c in [self.colors[i]] * w]
def with_length(self, length: int) -> list[RGB]:
"""
Spread to a specific length of text
:param length: Length of text
:return: List of RGBs of the length
"""
preset_len = len(self.colors)
center_i = preset_len // 2
# How many copies of each color should be displayed at least?
repeats = length // preset_len
weights = [repeats] * preset_len
# How many extra space left?
extras = length % preset_len
# If there is an even space left, extend the center by one space
if extras % 2 == 1:
extras -= 1
weights[center_i] += 1
# Add weight to border until there's no space left (extras must be even at this point)
border_i = 0
while extras > 0:
extras -= 2
weights[border_i] += 1
weights[-(border_i + 1)] += 1
border_i += 1
return self.with_weights(weights)
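    # Worked example for with_length(): spreading the 6-color rainbow preset
    # over 8 characters gives repeats=1 and extras=2; extras is even, so the
    # border loop bumps both ends, yielding weights [2, 1, 1, 1, 1, 2].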
def color_text(self, txt: str, foreground: bool = True, space_only: bool = False) -> str:
"""
Color a text
:param txt: Text
:param foreground: Whether the foreground text show the color or the background block
:param space_only: Whether to only color spaces
:return: Colored text
"""
colors = self.with_length(len(txt))
result = ''
for i, t in enumerate(txt):
if space_only and t != ' ':
if i > 0 and txt[i - 1] == ' ':
result += '\033[39;49m'
result += t
else:
result += colors[i].to_ansi(foreground=foreground) + t
result += '\033[39;49m'
return result
def lighten(self, multiplier: float) -> ColorProfile:
"""
Lighten the color profile by a multiplier
:param multiplier: Multiplier
:return: Lightened color profile (original isn't modified)
"""
return ColorProfile([c.lighten(multiplier) for c in self.colors])
def set_light_raw(self, light: float, at_least: bool | None = None, at_most: bool | None = None) -> 'ColorProfile':
"""
Set HSL lightness value
:param light: Lightness value (0-1)
:param at_least: Set the lightness to at least this value (no change if greater)
:param at_most: Set the lightness to at most this value (no change if lesser)
:return: New color profile (original isn't modified)
"""
return ColorProfile([c.set_light(light, at_least, at_most) for c in self.colors])
def set_light_dl(self, light: float, term: LightDark | None = None):
"""
Set HSL lightness value with respect to dark/light terminals
:param light: Lightness value (0-1)
:param term: Terminal color (can be "dark" or "light")
:return: New color profile (original isn't modified)
"""
if GLOBAL_CFG.use_overlay:
return self.overlay_dl(light, term)
term = term or GLOBAL_CFG.light_dark()
assert term.lower() in ['light', 'dark']
at_least, at_most = (True, None) if term.lower() == 'dark' else (None, True)
return self.set_light_raw(light, at_least, at_most)
def overlay_raw(self, color: RGB, alpha: float) -> 'ColorProfile':
"""
Overlay a color on top of the color profile
:param color: Color to overlay
:param alpha: Alpha value (0-1)
:return: New color profile (original isn't modified)
"""
return ColorProfile([c.overlay(color, alpha) for c in self.colors])
def overlay_dl(self, light: float, term: LightDark | None = None):
"""
Same as set_light_dl except that this function uses RGB overlaying instead of HSL lightness change
"""
term = term or GLOBAL_CFG.light_dark()
assert term.lower() in ['light', 'dark']
# If it's light bg, overlay black, else overlay white
overlay_color = RGB.from_hex('#000000' if term.lower() == 'light' else '#FFFFFF')
return self.overlay_raw(overlay_color, abs(light - 0.5) * 2)
def set_light_dl_def(self, term: LightDark | None = None):
"""
Set default lightness with respect to dark/light terminals
:param term: Terminal color (can be "dark" or "light")
:return: New color profile (original isn't modified)
"""
return self.set_light_dl(GLOBAL_CFG.default_lightness(term), term)
def unique_colors(self) -> ColorProfile:
"""
Create another color profile with only the unique colors
"""
return ColorProfile(remove_duplicates(self.colors))
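# Mapping of preset/flag names to color profiles. A usage sketch:
#     PRESETS['rainbow'].with_length(10)  # -> 10 RGB values spanning the flag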
PRESETS: dict[str, ColorProfile] = {
'rainbow': ColorProfile([
'#E50000',
'#FF8D00',
'#FFEE00',
'#028121',
'#004CFF',
'#770088'
]),
'transgender': ColorProfile([
'#55CDFD',
'#F6AAB7',
'#FFFFFF',
'#F6AAB7',
'#55CDFD'
]),
'nonbinary': ColorProfile([
'#FCF431',
'#FCFCFC',
'#9D59D2',
'#282828'
]),
'agender': ColorProfile([
'#000000',
'#BABABA',
'#FFFFFF',
'#BAF484',
'#FFFFFF',
'#BABABA',
'#000000'
]),
'queer': ColorProfile([
'#B57FDD',
'#FFFFFF',
'#49821E'
]),
'genderfluid': ColorProfile([
'#FE76A2',
'#FFFFFF',
'#BF12D7',
'#000000',
'#303CBE'
]),
'bisexual': ColorProfile([
'#D60270',
'#9B4F96',
'#0038A8'
]),
'pansexual': ColorProfile([
'#FF1C8D',
'#FFD700',
'#1AB3FF'
]),
'polysexual': ColorProfile([
'#F714BA',
'#01D66A',
'#1594F6',
]),
    # omnisexual sourced from https://www.flagcolorcodes.com/omnisexual
'omnisexual': ColorProfile([
'#FE9ACE',
'#FF53BF',
'#200044',
'#6760FE',
'#8EA6FF',
]),
'omniromantic': ColorProfile([
'#FEC8E4',
'#FDA1DB',
'#89739A',
'#ABA7FE',
'#BFCEFF',
]),
# gay men sourced from https://www.flagcolorcodes.com/gay-men
'gay-men': ColorProfile([
'#078D70',
'#98E8C1',
'#FFFFFF',
'#7BADE2',
'#3D1A78'
]),
'lesbian': ColorProfile([
'#D62800',
'#FF9B56',
'#FFFFFF',
'#D462A6',
'#A40062'
]),
# abrosexual used colorpicker to source from
# https://fyeahaltpride.tumblr.com/post/151704251345/could-you-guys-possibly-make-an-abrosexual-pride
'abrosexual': ColorProfile([
'#46D294',
'#A3E9CA',
'#FFFFFF',
'#F78BB3',
'#EE1766',
]),
'asexual': ColorProfile([
'#000000',
'#A4A4A4',
'#FFFFFF',
'#810081'
]),
'aromantic': ColorProfile([
'#3BA740',
'#A8D47A',
'#FFFFFF',
'#ABABAB',
'#000000'
]),
# aroace1 sourced from https://flag.library.lgbt/flags/aroace/
'aroace1': ColorProfile([
'#E28C00',
'#ECCD00',
'#FFFFFF',
'#62AEDC',
'#203856'
]),
'aroace2': ColorProfile([
'#000000',
'#810081',
'#A4A4A4',
'#FFFFFF',
'#A8D47A',
'#3BA740'
]),
'aroace3': ColorProfile([
'#3BA740',
'#A8D47A',
'#FFFFFF',
'#ABABAB',
'#000000',
'#A4A4A4',
'#FFFFFF',
'#810081'
]),
# below sourced from https://www.flagcolorcodes.com/flags/pride
# goto f"https://www.flagcolorcodes.com/{preset}" for info
# todo: sane sorting
'autosexual': ColorProfile([
'#99D9EA',
'#7F7F7F'
]),
'intergender': ColorProfile([
# todo: use weighted spacing
'#900DC2',
'#900DC2',
'#FFE54F',
'#900DC2',
'#900DC2',
]),
'greygender': ColorProfile([
'#B3B3B3',
'#B3B3B3',
'#FFFFFF',
'#062383',
'#062383',
'#FFFFFF',
'#535353',
'#535353',
]),
'akiosexual': ColorProfile([
'#F9485E',
'#FEA06A',
'#FEF44C',
'#FFFFFF',
'#000000',
]),
# bigender sourced from https://www.flagcolorcodes.com/bigender
'bigender': ColorProfile([
'#C479A2',
'#EDA5CD',
'#D6C7E8',
'#FFFFFF',
'#D6C7E8',
'#9AC7E8',
'#6D82D1',
]),
# demigender yellow sourced from https://lgbtqia.fandom.com/f/p/4400000000000041031
# other colors sourced from demiboy and demigirl flags
'demigender': ColorProfile([
'#7F7F7F',
'#C4C4C4',
'#FBFF75',
'#FFFFFF',
'#FBFF75',
'#C4C4C4',
'#7F7F7F',
]),
# demiboy sourced from https://www.flagcolorcodes.com/demiboy
'demiboy': ColorProfile([
'#7F7F7F',
'#C4C4C4',
'#9DD7EA',
'#FFFFFF',
'#9DD7EA',
'#C4C4C4',
'#7F7F7F',
]),
# demigirl sourced from https://www.flagcolorcodes.com/demigirl
'demigirl': ColorProfile([
'#7F7F7F',
'#C4C4C4',
'#FDADC8',
'#FFFFFF',
'#FDADC8',
'#C4C4C4',
'#7F7F7F',
]),
'transmasculine': ColorProfile([
'#FF8ABD',
'#CDF5FE',
'#9AEBFF',
'#74DFFF',
'#9AEBFF',
'#CDF5FE',
'#FF8ABD',
]),
# transfeminine used colorpicker to source from https://www.deviantart.com/pride-flags/art/Trans-Woman-Transfeminine-1-543925985
# linked from https://gender.fandom.com/wiki/Transfeminine
'transfeminine': ColorProfile([
'#73DEFF',
'#FFE2EE',
'#FFB5D6',
'#FF8DC0',
'#FFB5D6',
'#FFE2EE',
'#73DEFF',
]),
# genderfaun sourced from https://www.flagcolorcodes.com/genderfaun
'genderfaun': ColorProfile([
'#FCD689',
'#FFF09B',
'#FAF9CD',
'#FFFFFF',
'#8EDED9',
'#8CACDE',
'#9782EC',
]),
'demifaun': ColorProfile([
'#7F7F7F',
'#7F7F7F',
'#C6C6C6',
'#C6C6C6',
'#FCC688',
'#FFF19C',
'#FFFFFF',
'#8DE0D5',
'#9682EC',
'#C6C6C6',
'#C6C6C6',
'#7F7F7F',
'#7F7F7F',
]),
# genderfae sourced from https://www.flagcolorcodes.com/genderfae
'genderfae': ColorProfile([
'#97C3A5',
'#C3DEAE',
'#F9FACD',
'#FFFFFF',
'#FCA2C4',
'#DB8AE4',
'#A97EDD',
]),
    # demifae used colorpicker to source from https://www.deviantart.com/pride-flags/art/Demifae-870194777
'demifae': ColorProfile([
'#7F7F7F',
'#7F7F7F',
'#C5C5C5',
'#C5C5C5',
'#97C3A4',
'#C4DEAE',
'#FFFFFF',
'#FCA2C5',
'#AB7EDF',
'#C5C5C5',
'#C5C5C5',
'#7F7F7F',
'#7F7F7F',
]),
'neutrois': ColorProfile([
'#FFFFFF',
'#1F9F00',
'#000000'
]),
'biromantic1': ColorProfile([
'#8869A5',
'#D8A7D8',
'#FFFFFF',
'#FDB18D',
'#151638',
]),
'biromantic2': ColorProfile([
'#740194',
'#AEB1AA',
'#FFFFFF',
'#AEB1AA',
'#740194',
]),
'autoromantic': ColorProfile([ # symbol interpreted
'#99D9EA',
'#99D9EA',
'#3DA542',
'#7F7F7F',
'#7F7F7F',
]),
# i didn't expect this one to work. cool!
'boyflux2': ColorProfile(ColorProfile([
'#E48AE4',
'#9A81B4',
'#55BFAB',
'#FFFFFF',
'#A8A8A8',
'#81D5EF',
'#69ABE5',
'#5276D4',
]).with_weights([1, 1, 1, 1, 1, 5, 5, 5])),
"finsexual": ColorProfile([
"#B18EDF",
"#D7B1E2",
"#F7CDE9",
"#F39FCE",
"#EA7BB3",
]),
'unlabeled1': ColorProfile([
'#EAF8E4',
'#FDFDFB',
'#E1EFF7',
'#F4E2C4'
]),
'unlabeled2': ColorProfile([
'#250548',
'#FFFFFF',
'#F7DCDA',
'#EC9BEE',
'#9541FA',
'#7D2557'
]),
'pangender': ColorProfile([
'#FFF798',
'#FEDDCD',
'#FFEBFB',
'#FFFFFF',
'#FFEBFB',
'#FEDDCD',
'#FFF798',
]),
'gendernonconforming1': ColorProfile(
ColorProfile([
'#50284d',
'#96467b',
'#5c96f7',
'#ffe6f7',
'#5c96f7',
'#96467b',
'#50284d'
]).with_weights([
4,1,1,1,1,1,4
])
),
'gendernonconforming2': ColorProfile([
'#50284d',
'#96467b',
'#5c96f7',
'#ffe6f7',
'#5c96f7',
'#96467b',
'#50284d'
]),
'femboy': ColorProfile([
"#d260a5",
"#e4afcd",
"#fefefe",
"#57cef8",
"#fefefe",
"#e4afcd",
"#d260a5"
]),
'tomboy': ColorProfile([
"#2f3fb9",
"#613a03",
"#fefefe",
"#f1a9b7",
"#fefefe",
"#613a03",
"#2f3fb9"
]),
'gynesexual': ColorProfile([
"#F4A9B7",
"#903F2B",
"#5B953B",
]),
'androsexual': ColorProfile([
"#01CCFF",
"#603524",
"#B799DE",
]),
# gendervoid and related flags sourced from: https://gender.fandom.com/wiki/Gendervoid
'gendervoid' : ColorProfile([
"#081149",
"#4B484B",
"#000000",
"#4B484B",
"#081149"
]),
'voidgirl' : ColorProfile([
"#180827",
"#7A5A8B",
"#E09BED",
"#7A5A8B",
"#180827"
]),
'voidboy' : ColorProfile([
"#0B130C",
"#547655",
"#66B969",
"#547655",
"#0B130C"
]),
# used https://twitter.com/foxbrained/status/1667621855518236674/photo/1 as source and colorpicked
'nonhuman-unity' : ColorProfile([
"#177B49",
"#FFFFFF",
"#593C90"
]),
# Meme flags
'beiyang': ColorProfile([
'#DF1B12',
'#FFC600',
'#01639D',
'#FFFFFF',
'#000000',
]),
'burger': ColorProfile([
'#F3A26A',
'#498701',
'#FD1C13',
'#7D3829',
'#F3A26A',
]),
}
/LabExT_pkg-2.2.0.tar.gz/LabExT_pkg-2.2.0/docs/gui_wizard_control.md
# Wizard Widget
This page describes the tkinter GUI widget called "Wizard", which we use inside LabExT for various setup dialogs.
Check out this page if you are interested in building small GUI add-ons for LabExT yourself.
## How to use the widget:
### 1. Basic settings
Create a class that inherits from `Wizard`. Call the `Wizard` constructor to set the basic settings.
```python
from LabExT.View.Controls.Wizard import Wizard
class MyWizard(Wizard):
def __init__(self, parent):
super().__init__(
parent, # required
width=800, # Default: 640
height=600, # Default: 480
with_sidebar=True, # Default: True
with_error=True, # Default: True
on_cancel=self._cancel, # not required
on_finish=self._save, # not required
next_button_label="Next Step" # Default: "Next Step"
previous_button_label="Previous Step", # Default: Previous Step
cancel_button_label="Cancel and Close", # Default: Cancel
finish_button_label="Finish and Save" # Default: Finish
)
...
```
#### Explanation of the settings
- `width` and `height` sets the dimension of the wizard window.
- `with_sidebar` activates the step overview of the wizard. A frame with a width of 200 is created to the right of the content, displaying the titles of all steps and highlighting the current step.
- `with_error` activates the error display function of the wizard. When the wizard method `set_error("Error: ...")` is called, the error is displayed in red above the buttons.
- `on_cancel` is the callback function that is called when the user closes the window or clicks Cancel. This method is **blocking**. The method is expected to return a bool. If the return value is True, the window is closed, otherwise it remains open.
- `on_finish` is the callback method that is called when the user clicks on Finish. This method is **blocking**. The method is expected to return a bool. If the return value is True, the window is closed, otherwise it remains open.
- `next_button_label`, `previous_button_label`, `cancel_button_label` and `finish_button_label ` are used to change the button labels.
**Note:** The `Wizard` class itself inherits from the Tkinter class `Toplevel`. Therefore, well-known functions such as `title` are available on `self`.
### 2. Define new step
To add new steps to the wizard, the method `add_step` is used. It is recommended to define the steps in the constructor of your wizard class.
```python
self.connection_step = self.add_step(
builder=self._connection_step_builder, # required
title="Stage Connection", # Default: None
on_next=self._on_next, # not required
on_previous=self._on_previous, # not required
on_reload=self._on_reload, # not required
previous_step_enabled=True, # Default: True
next_step_enabled=True, # Default: True
finish_step_enabled=False # Default: False
)
```
#### Explanation of the settings
- `builder` is the routine that builds the wizard step, i.e. defines all Tkinter objects. A Tkinter `Frame`-object is passed to the method as the first argument. All elements should use this frame as parent. The builder method is called every time the step is displayed or the wizard is manually reloaded, i.e. it is possible to render the step conditionally based on current state.
- `title` defines an optional title for the sidebar, if this has been activated.
- `on_next` is the callback method when the user clicks on "Next step". This method is **blocking**. The method is expected to return a bool. If the return value is True, the next step is loaded, otherwise not.
- `on_previous ` is the callback method when the user clicks on "Previous step". This method is **blocking**. The method is expected to return a bool. If the return value is True, the previous step is loaded, otherwise not.
- `on_reload` is called every time the step is "built", i.e. the builder method is called. This happens during step changes or when the `__reload__` method is called manually. Example use: checking the current wizard state for errors:
```python
def _check_assignment(self):
if is_stage_assignment_valid(self._current_stage_assignment):
self.current_step.next_step_enabled = True
self.set_error("")
else:
self.current_step.next_step_enabled = False
self.set_error("Please assign at least one stage and do not select a stage twice.")
```
- `next_step_enabled ` activates the next step button. Note: This property can also be changed after the step creation (see code above).
- `previous_step_enabled ` activates the previous step button. Note: This property can also be changed after the step creation (see code above).
- `finish_step_enabled` activates the finish button. Note: This property can also be changed after the step creation (see code above).
### 3. Define step sequence
Use the `next_step` and `previous_step` properties of the steps to define the order. Note: The order can also be changed on the fly.
```python
# Connect Steps
self.first_step.next_step = self.second_step
self.second_step.previous_step = self.first_step
self.second_step.next_step = self.third_step
self.third_step.previous_step = self.third_step
```
### 4. Define first step
To start the wizard, the first step must be defined.
```python
self.current_step = self.first_step
```
### Miscellaneous
- Use `__reload__` to reload the wizard. The method calls the builder again and updates all button and sidebar states.
- Use the `set_error(str)` method to indicate an error. Note: To reset the error, use `set_error("")`
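### Putting it together

A minimal two-step wizard might look like this. This is a sketch only: `GreeterWizard` and its builder methods are hypothetical, while the `Wizard` and step API calls are exactly those documented above.

```python
import tkinter as tk
from LabExT.View.Controls.Wizard import Wizard

class GreeterWizard(Wizard):
    def __init__(self, parent):
        super().__init__(parent, width=500, height=300)
        self.title("Greeter Wizard")
        # Define the steps...
        self.name_step = self.add_step(builder=self._name_builder, title="Name")
        self.done_step = self.add_step(
            builder=self._done_builder, title="Summary", finish_step_enabled=True
        )
        # ...wire them together and pick the entry point.
        self.name_step.next_step = self.done_step
        self.done_step.previous_step = self.name_step
        self.current_step = self.name_step

    def _name_builder(self, frame):
        tk.Label(frame, text="Who should be greeted?").pack()

    def _done_builder(self, frame):
        tk.Label(frame, text="Click Finish to close the wizard.").pack()
```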
### Screenshots



| PypiClean |
# /OCR-Sycophant-0.0.4.tar.gz/OCR-Sycophant-0.0.4/ocr_sycophant/cli.py
import click
import csv
import os
import glob
from multiprocessing import Pool
from ocr_sycophant.model import NoiseModel
from ocr_sycophant.encoder import Encoder
from ocr_sycophant.utils import get_dataset
@click.group()
def cli():
"""OCR Simple Noise Evaluator"""
def _predict(args):
model, file = args
try:
sentence, clean_score = model.predict_filepath(file, batch_size=16, verbose=False)
except Exception:
print(Exception)
print(file)
return file, .0
return file, clean_score
@cli.command("predict")
@click.argument("model", type=click.Path(exists=True, file_okay=True, dir_okay=False))
@click.argument("files", type=click.Path(exists=True, file_okay=True, dir_okay=True), nargs=-1)
@click.option("--verbose", is_flag=True, default=False)
@click.option("--logs", default=None, type=click.File(mode="w"))
@click.option("-w", "--workers", default=1, type=int)
def predict(model, files, verbose, logs, workers):
def gen(f):
for file in f:
if os.path.isdir(file):
yield from glob.glob(os.path.join(file, "**", "*.txt"), recursive=True)
else:
yield file
click.secho(click.style(f"Loading model at {model}"))
model = NoiseModel.load(model)
click.secho(click.style(f"-> Loaded", fg="green"))
click.secho(click.style(f"Testing {len(list(gen(files)))} files"))
def color(score):
if score >= 0.80:
return "green"
else:
return "red"
def gen_with_models(f):
for i in gen(f):
yield model, i
if logs:
writer = csv.writer(logs)
writer.writerow(["path", "score"])
with Pool(processes=workers) as pool:
# print same numbers in arbitrary order
for file, clean_score in pool.imap_unordered(_predict, gen_with_models(files)):
click.secho(click.style(f"---> {file} has {clean_score*100:.2f}% clean lines", fg=color(clean_score)))
if logs:
writer.writerow([file, f"{clean_score*100:.2f}"])
@cli.command("train")
@click.argument("trainfile", type=click.Path(exists=True, file_okay=True, dir_okay=False))
@click.argument("savepath", type=click.Path(file_okay=True, dir_okay=False))
@click.option("--testfile", default=None, type=click.Path(exists=True, file_okay=True, dir_okay=False),
help="Use specific testfile")
@click.option("--html", default=None, type=click.Path(file_okay=True, dir_okay=False),
help="Save the errors to HTML")
@click.option("--keep-best", default=False, is_flag=True,
help="Keep a single model (best performing one)")
def train(trainfile, savepath, testfile, html, keep_best):
"""Train a model with TRAINFILE and save it at SAVEPATH"""
model = NoiseModel(encoder=Encoder())
if testfile:
trainfile = (trainfile, testfile)
(train, train_enc), (test, test_enc) = get_dataset(trainfile, encoder=model.encoder)
click.secho(click.style(f"Training {len(model.models)} submodels"))
model.fit(train_enc)
click.secho(click.style("--> Done.", fg="green"))
click.secho(click.style("Testing"))
scores, errors = model.test(*test_enc, test)
click.secho(click.style(f"--> Accuracy: {list(scores.values())[0]*100:.2f}", fg="green"))
if keep_best:
best, best_model = 0, None
best_errors = []
for submodel in model.models:
out, errs = model._test_algo(submodel, *test_enc, raw=test)
score = list(out.values())[0]
if score > best:
best = score
best_model = submodel
best_errors = errs
click.secho(f"Best model: {type(best_model).__name__} ({100*best:.2f})")
model.models = [best_model]
errors = best_errors
if html:
with open(html, "w") as f:
body = """<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>OCR Noise Results</title>
</head>
<body>
{}
</body>
</html>"""
f.write(body.format("\n".join(model.errors_to_html(errors, "Multi"))))
click.secho(click.style("Saving"))
model.save(savepath)
click.secho(click.style("--> Done.", fg="green"))
if __name__ == "__main__":
    cli()
/Banzai-NGS-0.0.5.tar.gz/Banzai-NGS-0.0.5/README.rst
Banzai
======
The Banzai Microbial Genomics Pipeline is currently in closed beta. Please
`star`_ or `follow`_ this repository to receive updates when it's
publicly available.
.. _`star`: https://github.com/mscook/Banzai-MicrobialGenomics-Pipeline/star
.. _`follow`: https://github.com/mscook/Banzai-MicrobialGenomics-Pipeline/watchers
Installation
------------
See Install.rst_
.. _Install.rst: https://github.com/mscook/Banzai-MicrobialGenomics-Pipeline/blob/master/Install.rst
About
-----
Banzai is a Microbial Genomics Next Generation Sequencing (NGS) Pipeline Tool
developed within `Dr Scott Beatson's Group`_ at the University of Queensland.
.. _`Dr Scott Beatson's Group`: http://smms-steel.biosci.uq.edu.au
Banzai inherits it's name from the `Banzai 'Pipeline' surf break`_.
.. _`Banzai 'Pipeline' surf break`: http://en.wikipedia.org/wiki/Banzai_Pipeline
Banzai aims to simplify the analysis of large NGS datasets. It has been
specifically designed to distribute the workload over internal and external
High Performance Computing (HPC) resources as well as locally available
workstations.
Core Features
-------------
Banzai allows for:
* data validation/quality control of NGS reads,
* de novo assembly of NGS reads,
* mapping of NGS reads to reference genomes,
* ordering of the contigs or scaffolds from draft assemblies against
reference genomes,
* phylogenomics tasks (whole genome alignment, SNP based, recombination
detection)
* annotating draft assemblies,
* enrichment of the annotation of draft assemblies,
* performing common utility tasks like the creation of BLAST databases,
file format conversion and more.
Banzai supports the NGS platforms that are predominantly used in Microbial
Genomics projects.
The platforms include:
* `Illumina`_ (single end (SE), paired end (PE) and mate paired (MP)
reads),
* `454`_ (SE and PE reads),
* `ABI SOLiD`_ colorspace reads and
* `Pacific Biosciences`_ reads (under development)
.. _`Illumina`: http://www.illumina.com/technology/sequencing_technology.ilmn
.. _`454`: http://www.454.com/
.. _`ABI SOLiD`: http://www.appliedbiosystems.com.au/
.. _`Pacific Biosciences`: http://www.pacificbiosciences.com/
**Banzai is by default geared towards 100 bp Illumina Paired End reads.**
Philosophy
----------
Banzai (in most cases) does not provide new NGS algorithms; it harnesses the
power of published and tested NGS tools. Put simply, Banzai simplifies,
automates and distributes computational workloads, which are the typical
bottleneck in the analysis of large NGS datasets.
Developers
----------
The following have contributed significantly to Banzai:
* Mitchell Stanton-Cook (lead developer)
* Elizabeth Skippington (development of phylogenomics section)
* Nico Petty (design & testing)
* Nouri Ben Zakour (design and testing)
* Scott Beatson (design)
// /Flask-CKEditor-0.4.6.tar.gz/Flask-CKEditor-0.4.6/flask_ckeditor/static/standard/plugins/a11yhelp/dialogs/lang/ku.js
/*
Copyright (c) 2003-2020, CKSource - Frederico Knabben. All rights reserved.
For licensing, see LICENSE.md or https://ckeditor.com/legal/ckeditor-oss-license
*/
CKEDITOR.plugins.setLang("a11yhelp","ku",{title:"ڕێنمای لەبەردەستدابوون",contents:"پێکهاتەی یارمەتی. کلیك ESC بۆ داخستنی ئەم دیالۆگه.",legend:[{name:"گشتی",items:[{name:"تووڵامرازی دەستكاریكەر",legend:"کلیك ${toolbarFocus} بۆ ڕابەری تووڵامراز. بۆ گواستنەوەی پێشوو داهاتووی گرووپی تووڵامرازی داگرتنی کلیلی TAB لەگەڵ SHIFT+TAB. بۆ گواستنەوەی پێشوو داهاتووی دووگمەی تووڵامرازی لەڕێی کلیلی تیری دەستی ڕاست یان کلیلی تیری دەستی چەپ. کلیکی کلیلی SPACE یان ENTER بۆ چالاککردنی دووگمەی تووڵامراز."},{name:"دیالۆگی دەستكاریكەر",
legend:"لەناوەوەی دیالۆگ, کلیکی کلیلی TAB بۆ ڕابەری دیالۆگێکی تر, داگرتنی کلیلی SHIFT + TAB بۆ گواستنەوەی بۆ دیالۆگی پێشووتر, کلیكی کلیلی ENTER بۆ ڕازیکردنی دیالۆگەکە, کلیكی کلیلی ESC بۆ هەڵوەشاندنەوەی دیالۆگەکە. بۆ دیالۆگی بازدەری (تابی) زیاتر, کلیكی کلیلی ALT + F10 بۆ ڕابهری لیستی بازدهرهکان، یان کلیكی کلیلی TAB. بۆچوونه بازدهری تابی پێشوو یان دوواتر کلیلی تیری دەستی ڕاست یان چەپ بکە."},{name:"پێڕستی سەرنووسەر",legend:"کلیك ${contextMenu} یان دوگمەی لیسته(Menu) بۆ کردنەوەی لیستەی دەق. بۆ چوونە هەڵبژاردەیەکی تر له لیسته کلیکی کلیلی TAB یان کلیلی تیری ڕوو لەخوارەوه بۆ چوون بۆ هەڵبژاردەی پێشوو کلیکی کلیلی SHIFT+TAB یان کلیلی تیری ڕوو له سەرەوە. داگرتنی کلیلی SPACE یان ENTER بۆ هەڵبژاردنی هەڵبژاردەی لیسته. بۆ کردنەوەی لقی ژێر لیسته لەهەڵبژاردەی لیستە کلیکی کلیلی SPACE یان ENTER یان کلیلی تیری دەستی ڕاست. بۆ گەڕانەوه بۆ سەرەوەی لیسته کلیکی کلیلی ESC یان کلیلی تیری دەستی چەپ. بۆ داخستنی لیستە کلیكی کلیلی ESC بکە."},
{name:"لیستی سنووقی سەرنووسەر",legend:"لەناو سنوقی لیست, چۆن بۆ هەڵنبژاردەی لیستێکی تر کلیکی کلیلی TAB یان کلیلی تیری ڕوو لەخوار. چوون بۆ هەڵبژاردەی لیستی پێشوو کلیکی کلیلی SHIFT+TAB یان کلیلی تیری ڕوو لەسەرەوه. کلیکی کلیلی SPACE یان ENTER بۆ دیاریکردنی هەڵبژاردەی لیست. کلیکی کلیلی ESC بۆ داخستنی سنوقی لیست."},{name:"تووڵامرازی توخم",legend:"کلیك ${elementsPathFocus} بۆ ڕابەری تووڵامرازی توخمەکان. چوون بۆ دوگمەی توخمێکی تر کلیکی کلیلی TAB یان کلیلی تیری دەستی ڕاست. چوون بۆ دوگمەی توخمی پێشوو کلیلی SHIFT+TAB یان کلیکی کلیلی تیری دەستی چەپ. داگرتنی کلیلی SPACE یان ENTER بۆ دیاریکردنی توخمەکه لەسەرنووسه."}]},
{name:"فەرمانەکان",items:[{name:"پووچکردنەوەی فەرمان",legend:"کلیك ${undo}"},{name:"هەڵگەڕانەوەی فەرمان",legend:"کلیك ${redo}"},{name:"فەرمانی دەقی قەڵەو",legend:"کلیك ${bold}"},{name:"فەرمانی دەقی لار",legend:"کلیك ${italic}"},{name:"فەرمانی ژێرهێڵ",legend:"کلیك ${underline}"},{name:"فەرمانی بهستەر",legend:"کلیك ${link}"},{name:"شاردەنەوەی تووڵامراز",legend:"کلیك ${toolbarCollapse}"},{name:"چوونەناو سەرنجدانی پێشوی فەرمانی بۆشایی",legend:"کلیک ${accessPreviousSpace} to access the closest unreachable focus space before the caret, for example: two adjacent HR elements. Repeat the key combination to reach distant focus spaces."},
{name:"چوونەناو سەرنجدانی داهاتووی فەرمانی بۆشایی",legend:"کلیک ${accessNextSpace} to access the closest unreachable focus space after the caret, for example: two adjacent HR elements. Repeat the key combination to reach distant focus spaces."},{name:"دەستپێگەیشتنی یارمەتی",legend:"کلیك ${a11yHelp}"},{name:"لکاندنی وەك دەقی ڕوون",legend:"کلیکی ${pastetext}",legendEdge:"کلیکی ${pastetext}، شوێنکەوتکراوە بە ${paste}"}]}],tab:"تاب",pause:"پشوو",capslock:"قفڵدانی پیتی گەورە",escape:"چوونە دەرەوە",pageUp:"پەڕە بەرەوسەر",
pageDown:"پەڕە بەرەوخوار",leftArrow:"تیری دەستی چەپ",upArrow:"تیری بەرەوسەر",rightArrow:"تیری دەستی ڕاست",downArrow:"تیری بەرەوخوار",insert:"خستنە ناو",leftWindowKey:"پەنجەرەی چەپ",rightWindowKey:"پەنجەرەی ڕاست",selectKey:"هەڵبژێرە",numpad0:"Numpad 0",numpad1:"1",numpad2:"2",numpad3:"3",numpad4:"4",numpad5:"5",numpad6:"6",numpad7:"7",numpad8:"8",numpad9:"9",multiply:"*",add:"+",subtract:"-",decimalPoint:".",divide:"/",f1:"F1",f2:"F2",f3:"F3",f4:"F4",f5:"F5",f6:"F6",f7:"F7",f8:"F8",f9:"F9",f10:"F10",
f11:"F11",f12:"F12",numLock:"قفڵدانی ژمارە",scrollLock:"قفڵدانی هێڵی هاتووچۆپێکردن",semiColon:";",equalSign:"\x3d",comma:",",dash:"-",period:".",forwardSlash:"/",graveAccent:"`",openBracket:"[",backSlash:"\\\\",closeBracket:"}",singleQuote:"'"}); | PypiClean |
/FamcyDev-0.3.71-py3-none-any.whl/Famcy/bower_components/bootstrap/js/src/tab.js | import {
defineJQueryPlugin,
getElementFromSelector,
isDisabled,
reflow
} from './util/index'
import EventHandler from './dom/event-handler'
import SelectorEngine from './dom/selector-engine'
import BaseComponent from './base-component'
/**
* ------------------------------------------------------------------------
* Constants
* ------------------------------------------------------------------------
*/
const NAME = 'tab'
const DATA_KEY = 'bs.tab'
const EVENT_KEY = `.${DATA_KEY}`
const DATA_API_KEY = '.data-api'
const EVENT_HIDE = `hide${EVENT_KEY}`
const EVENT_HIDDEN = `hidden${EVENT_KEY}`
const EVENT_SHOW = `show${EVENT_KEY}`
const EVENT_SHOWN = `shown${EVENT_KEY}`
const EVENT_CLICK_DATA_API = `click${EVENT_KEY}${DATA_API_KEY}`
const CLASS_NAME_DROPDOWN_MENU = 'dropdown-menu'
const CLASS_NAME_ACTIVE = 'active'
const CLASS_NAME_FADE = 'fade'
const CLASS_NAME_SHOW = 'show'
const SELECTOR_DROPDOWN = '.dropdown'
const SELECTOR_NAV_LIST_GROUP = '.nav, .list-group'
const SELECTOR_ACTIVE = '.active'
const SELECTOR_ACTIVE_UL = ':scope > li > .active'
const SELECTOR_DATA_TOGGLE = '[data-bs-toggle="tab"], [data-bs-toggle="pill"], [data-bs-toggle="list"]'
const SELECTOR_DROPDOWN_TOGGLE = '.dropdown-toggle'
const SELECTOR_DROPDOWN_ACTIVE_CHILD = ':scope > .dropdown-menu .active'
/**
* ------------------------------------------------------------------------
* Class Definition
* ------------------------------------------------------------------------
*/
class Tab extends BaseComponent {
// Getters
static get NAME() {
return NAME
}
// Public
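  // Activate this tab trigger: fire hide/show events on the outgoing and
  // incoming tabs, swap the active classes, and update the ARIA state.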
show() {
if ((this._element.parentNode &&
this._element.parentNode.nodeType === Node.ELEMENT_NODE &&
this._element.classList.contains(CLASS_NAME_ACTIVE))) {
return
}
let previous
const target = getElementFromSelector(this._element)
const listElement = this._element.closest(SELECTOR_NAV_LIST_GROUP)
if (listElement) {
const itemSelector = listElement.nodeName === 'UL' || listElement.nodeName === 'OL' ? SELECTOR_ACTIVE_UL : SELECTOR_ACTIVE
previous = SelectorEngine.find(itemSelector, listElement)
previous = previous[previous.length - 1]
}
const hideEvent = previous ?
EventHandler.trigger(previous, EVENT_HIDE, {
relatedTarget: this._element
}) :
null
const showEvent = EventHandler.trigger(this._element, EVENT_SHOW, {
relatedTarget: previous
})
if (showEvent.defaultPrevented || (hideEvent !== null && hideEvent.defaultPrevented)) {
return
}
this._activate(this._element, listElement)
const complete = () => {
EventHandler.trigger(previous, EVENT_HIDDEN, {
relatedTarget: this._element
})
EventHandler.trigger(this._element, EVENT_SHOWN, {
relatedTarget: previous
})
}
if (target) {
this._activate(target, target.parentNode, complete)
} else {
complete()
}
}
// Private
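  // Deactivate the currently active sibling (if any), then activate `element`,
  // waiting for the CSS fade transition when one is in progress.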
_activate(element, container, callback) {
const activeElements = container && (container.nodeName === 'UL' || container.nodeName === 'OL') ?
SelectorEngine.find(SELECTOR_ACTIVE_UL, container) :
SelectorEngine.children(container, SELECTOR_ACTIVE)
const active = activeElements[0]
const isTransitioning = callback && (active && active.classList.contains(CLASS_NAME_FADE))
const complete = () => this._transitionComplete(element, active, callback)
if (active && isTransitioning) {
active.classList.remove(CLASS_NAME_SHOW)
this._queueCallback(complete, element, true)
} else {
complete()
}
}
_transitionComplete(element, active, callback) {
if (active) {
active.classList.remove(CLASS_NAME_ACTIVE)
const dropdownChild = SelectorEngine.findOne(SELECTOR_DROPDOWN_ACTIVE_CHILD, active.parentNode)
if (dropdownChild) {
dropdownChild.classList.remove(CLASS_NAME_ACTIVE)
}
if (active.getAttribute('role') === 'tab') {
active.setAttribute('aria-selected', false)
}
}
element.classList.add(CLASS_NAME_ACTIVE)
if (element.getAttribute('role') === 'tab') {
element.setAttribute('aria-selected', true)
}
reflow(element)
if (element.classList.contains(CLASS_NAME_FADE)) {
element.classList.add(CLASS_NAME_SHOW)
}
let parent = element.parentNode
if (parent && parent.nodeName === 'LI') {
parent = parent.parentNode
}
if (parent && parent.classList.contains(CLASS_NAME_DROPDOWN_MENU)) {
const dropdownElement = element.closest(SELECTOR_DROPDOWN)
if (dropdownElement) {
SelectorEngine.find(SELECTOR_DROPDOWN_TOGGLE, dropdownElement)
.forEach(dropdown => dropdown.classList.add(CLASS_NAME_ACTIVE))
}
element.setAttribute('aria-expanded', true)
}
if (callback) {
callback()
}
}
// Static
static jQueryInterface(config) {
return this.each(function () {
const data = Tab.getOrCreateInstance(this)
if (typeof config === 'string') {
if (typeof data[config] === 'undefined') {
throw new TypeError(`No method named "${config}"`)
}
data[config]()
}
})
}
}
/**
* ------------------------------------------------------------------------
* Data Api implementation
* ------------------------------------------------------------------------
*/
EventHandler.on(document, EVENT_CLICK_DATA_API, SELECTOR_DATA_TOGGLE, function (event) {
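  // Anchor-based triggers would navigate on click; suppress that before toggling.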
if (['A', 'AREA'].includes(this.tagName)) {
event.preventDefault()
}
if (isDisabled(this)) {
return
}
const data = Tab.getOrCreateInstance(this)
data.show()
})
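// Usage sketch (illustrative, not part of the original file): a tab can also be
// activated programmatically through the class API; the selector below is a
// hypothetical example.
//
//   const triggerEl = document.querySelector('#myTab button[data-bs-target="#profile"]')
//   Tab.getOrCreateInstance(triggerEl).show()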
/**
* ------------------------------------------------------------------------
* jQuery
* ------------------------------------------------------------------------
* add .Tab to jQuery only if jQuery is present
*/
defineJQueryPlugin(Tab)
export default Tab
/ORCHISM.1.0.tar.gz/ORCHISM.1.0/code/ORCHIDEE_AR5_CFG.py | #
# ------------------------------------------------------------------------------
# This source code is governed by the CeCILL licence
#
#*******************************************************************************
# --- Dictionary initialisation ---
# ---------------------------------
config_defaut = {}
# --- Default PATH initialisation ---
# -----------------------------------
# Note: these parameters are not written to run.def; they are used by the
# script that launches the simulations to define the input/output files
config_defaut['pathFORCAGE'] = '/home/satellites7/cbacour/ORCHIDEE/forcage/'
config_defaut['pathRESTART'] = '/home/satellites7/cbacour/ORCHIDEE/startfiles/'
config_defaut['pathOUT'] = '/home/satellites7/cbacour/ORCHIDEE/outputs/'
# --- Assign the default values ---
# ---------------------------------
# -- Verbosity / printing --
config_defaut['BAVARD'] = '1'
config_defaut['DEBUG_INFO'] = 'n'
config_defaut['LONGPRINT'] = 'n'
config_defaut['ALMA_OUTPUT'] = 'y'
config_defaut['SECHIBA_RESET_TIME'] = 'n'
# -- Input/output files --
config_defaut['VEGETATION_FILE'] = config_defaut['pathFORCAGE']+'carteveg5km.nc'
config_defaut['SLOWPROC_VEGET_OLD_INTERPOL'] = 'n'
config_defaut['SOILALB_FILE'] = config_defaut['pathFORCAGE']+'soils_param.nc'
config_defaut['SOILTYPE_FILE'] = config_defaut['pathFORCAGE']+'soils_param.nc'
config_defaut['REFTEMP_FILE'] = config_defaut['pathFORCAGE']+'reftemp.nc'
config_defaut['FORCING_FILE'] = config_defaut['pathFORCAGE']+'WG_cru.nc'
config_defaut['RESTART_FILEIN'] = 'NONE'
config_defaut['RESTART_FILEOUT'] = 'driver_rest_out.nc'
config_defaut['SECHIBA_RESTART_IN'] = 'NONE'
config_defaut['SECHIBA_REST_OUT'] = config_defaut['pathRESTART']+'sechiba_rest_out.nc'
config_defaut['STOMATE_RESTART_FILEIN'] = 'NONE'
config_defaut['STOMATE_RESTART_FILEOUT'] = config_defaut['pathRESTART']+'stomate_rest_out.nc'
config_defaut['STOMATE_FORCING_NAME'] = 'NONE'
#stomate_forcing.nc#
config_defaut['STOMATE_FORCING_MEMSIZE'] = '50'
config_defaut['STOMATE_CFORCING_NAME'] = 'NONE'
#stomate_Cforcing.nc#
config_defaut['ORCHIDEE_WATCHOUT'] = 'n'
config_defaut['WATCHOUT_FILE'] = 'NONE'
#config_defaut['pathOUT']+'orchidee_watchout.nc'
config_defaut['DT_WATCHOUT'] = '1800'
config_defaut['STOMATE_WATCHOUT'] = 'n'
config_defaut['OUTPUT_FILE'] = config_defaut['pathOUT']+'sechiba_hist_out.nc'
config_defaut['SECHIBA_HISTFILE2'] = 'FALSE'
config_defaut['SECHIBA_OUTPUT_FILE2'] = 'NONE'
#config_defaut['pathOUT']+'sechiba_hist2_out.nc'
config_defaut['STOMATE_OUTPUT_FILE'] = config_defaut['pathOUT']+'stomate_hist_out.nc'
config_defaut['SECHIBA_HISTLEVEL'] = '5'
config_defaut['SECHIBA_HISTLEVEL2'] = '1'
config_defaut['STOMATE_HISTLEVEL'] = '4'
config_defaut['WRITE_STEP'] = '86400.0'
config_defaut['WRITE_STEP2'] = '1800.0'
config_defaut['STOMATE_HIST_DT'] = '1.'
# -- Optimisation of some ORCHIDEE parameters --
config_defaut['NLITT'] ='2'
config_defaut['IS_PHENO_CONTI'] ='y'
config_defaut['IS_FAPAR_TRICK_TBS_C3G'] ='FALSE'
config_defaut['FAPAR_COMPUTATION'] = 'black_sky_daily'
config_defaut['IS_EXT_COEFF_CONSTANT'] ='TRUE'
config_defaut['OPTIMIZATION_ORCHIDEE'] ='n'
config_defaut['OPTIMIZATION_FILEIN_PARAS'] = 'NONE'
config_defaut['OPTIMIZATION_FILEOUT_PARAS'] = 'NONE'
config_defaut['OPTIMIZATION_FILEOUT_FLUXES'] = 'NONE'
config_defaut['OPTIMIZATION_FILEOUT_EXTCOEFF'] = 'y'
config_defaut['OPTIMIZATION_FILEOUT_NEE'] = 'n'
config_defaut['OPTIMIZATION_FILEOUT_NEET'] = 'y'
config_defaut['OPTIMIZATION_FILEOUT_QH'] = 'y'
config_defaut['OPTIMIZATION_FILEOUT_QLE'] = 'y'
config_defaut['OPTIMIZATION_FILEOUT_RN'] = 'y'
config_defaut['OPTIMIZATION_FILEOUT_FAPAR'] = 'n'
config_defaut['OPTIMIZATION_FILEOUT_FAPART'] = 'y'
config_defaut['OPTIMIZATION_FILEOUT_ABOBMT'] = 'y'
config_defaut['OPTIMIZATION_FILEOUT_WOODBMT'] = 'y'
config_defaut['OPTIMIZATION_FILEOUT_GPPT'] = 'y'
config_defaut['OPTIMIZATION_FILEOUT_RESP_HT'] = 'y'
config_defaut['OPTIMIZATION_FILEOUT_RESP_GT'] = 'y'
config_defaut['OPTIMIZATION_FILEOUT_RESP_MT'] = 'y'
config_defaut['OPTIMIZATION_FILEOUT_RESP_H'] = 'n'
config_defaut['OPTIMIZATION_FILEOUT_RESP_TER'] = 'n'
config_defaut['OPTIMIZATION_FILEOUT_RESP_TERT'] = 'y'
# -- Site coordinates --
config_defaut['LIMIT_WEST'] = '-180.'
config_defaut['LIMIT_EAST'] = '180.'
config_defaut['LIMIT_NORTH'] = '90.'
config_defaut['LIMIT_SOUTH'] = '-90.'
# -- Simulation characteristics --
config_defaut['RELAXATION'] = 'n'
config_defaut['RELAX_A'] = '1000.0'
config_defaut['HEIGHT_LEV1'] = '2.0'
config_defaut['HEIGHT_LEVW'] = '10.0'
# -- Weather generator --
config_defaut['ALLOW_WEATHERGEN'] = 'n'
config_defaut['MERID_RES'] = '2.'
config_defaut['ZONAL_RES'] = '2.'
config_defaut['IPPREC'] = '0'
config_defaut['NO_INTER'] = 'y'
config_defaut['INTER_LIN'] = 'n'
config_defaut['WEATHGEN_PRECIP_EXACT'] = 'n'
config_defaut['DT_WEATHGEN'] = '1800.'
config_defaut['NETRAD_CONS'] = 'y'
config_defaut['DUMP_WEATHER'] = 'n'
config_defaut['DUMP_WEATHER_FILE'] = 'weather_dump.nc'
config_defaut['DUMP_WEATHER_GATHERED'] = 'y'
config_defaut['ECCENTRICITY'] = '0.016724'
config_defaut['PERIHELIE'] = '102.04'
config_defaut['OBLIQUITY'] = ' 23.446'
# -- Simulation length --
config_defaut['TIME_LENGTH'] = 'default'
config_defaut['SPLIT_DT'] = '1'
config_defaut['FORCING_RAD_CENTER'] = 'n'
config_defaut['TIME_SKIP'] = 'default'
config_defaut['FORCESOIL_STEP_PER_YEAR'] = '12'
config_defaut['FORCESOIL_NB_YEAR'] = '1'
config_defaut['SPRED_PREC'] = '1'
# -- Various flags to activate/deactivate --
config_defaut['STOMATE_OK_STOMATE'] = 'y'
config_defaut['STOMATE_OK_DGVM'] = 'n'
config_defaut['STOMATE_OK_CO2'] = 'y'
config_defaut['FORCE_CO2_VEG'] = 'TRUE'
# -- Atmospheric CO2 --
config_defaut['ATM_CO2'] = '350.'
config_defaut['ATM_CO2_FILE'] = '/home/satellites7/cbacour/ORCHIDEE/forcage/atm_co2_1200_2006.nc'
config_defaut['YEAR_ATMCO2_START'] = '-1'
config_defaut['STOMATE_DIAGPT'] = '1'
config_defaut['LPJ_GAP_CONST_MORT'] = 'y'
config_defaut['FIRE_DISABLE'] = 'y'
#-- new restart options since version 1.9.3
config_defaut['SOILCAP'] = 'n'
config_defaut['SOILFLX'] = 'n'
config_defaut['SHUMDIAG'] = 'n'
config_defaut['RUNOFF'] = 'n'
config_defaut['DRAINAGE'] = 'n'
config_defaut['RAERO'] = 'n'
config_defaut['QSATT'] = 'n'
config_defaut['CDRAG'] = 'n'
config_defaut['EVAPOT_CORR'] = 'n'
config_defaut['TEMP_SOL_NEW'] = 'n'
config_defaut['DSS'] = 'n'
config_defaut['HDRY'] = 'n'
config_defaut['CGRND'] = 'n'
config_defaut['DGRND'] = 'n'
config_defaut['Z1'] = 'n'
config_defaut['PCAPA'] = 'n'
config_defaut['PCAPA_EN'] = 'n'
config_defaut['PKAPPA'] = 'n'
config_defaut['ZDZ1'] = 'n'
config_defaut['ZDZ2'] = 'n'
config_defaut['TEMP_SOL_BEG'] = 'n'
# -- Parameters describing the surface (vegetation + soil) --
config_defaut['IMPOSE_VEG'] = 'n'
config_defaut['SECHIBA_VEG__01'] = '0.2'
config_defaut['SECHIBA_VEG__02'] = '0.0'
config_defaut['SECHIBA_VEG__03'] = '0.0'
config_defaut['SECHIBA_VEG__04'] = '0.0'
config_defaut['SECHIBA_VEG__05'] = '0.0'
config_defaut['SECHIBA_VEG__06'] = '0.0'
config_defaut['SECHIBA_VEG__07'] = '0.0'
config_defaut['SECHIBA_VEG__08'] = '0.0'
config_defaut['SECHIBA_VEG__09'] = '0.0'
config_defaut['SECHIBA_VEG__10'] = '0.8'
config_defaut['SECHIBA_VEG__11'] = '0.0'
config_defaut['SECHIBA_VEG__12'] = '0.0'
config_defaut['SECHIBA_VEG__13'] = '0.0'
config_defaut['SECHIBA_VEGMAX__01'] = '0.2'
config_defaut['SECHIBA_VEGMAX__02'] = '0.0'
config_defaut['SECHIBA_VEGMAX__03'] = '0.0'
config_defaut['SECHIBA_VEGMAX__04'] = '0.0'
config_defaut['SECHIBA_VEGMAX__05'] = '0.0'
config_defaut['SECHIBA_VEGMAX__06'] = '0.0'
config_defaut['SECHIBA_VEGMAX__07'] = '0.0'
config_defaut['SECHIBA_VEGMAX__08'] = '0.0'
config_defaut['SECHIBA_VEGMAX__09'] = '0.0'
config_defaut['SECHIBA_VEGMAX__10'] = '0.8'
config_defaut['SECHIBA_VEGMAX__11'] = '0.0'
config_defaut['SECHIBA_VEGMAX__12'] = '0.0'
config_defaut['SECHIBA_VEGMAX__13'] = '0.0'
config_defaut['SECHIBA_LAI__01'] = '0'
config_defaut['SECHIBA_LAI__02'] = '8'
config_defaut['SECHIBA_LAI__03'] = '8'
config_defaut['SECHIBA_LAI__04'] = '4'
config_defaut['SECHIBA_LAI__05'] = '4.5'
config_defaut['SECHIBA_LAI__06'] = '4.5'
config_defaut['SECHIBA_LAI__07'] = '4'
config_defaut['SECHIBA_LAI__08'] = '4.5'
config_defaut['SECHIBA_LAI__09'] = '4'
config_defaut['SECHIBA_LAI__10'] = '2'
config_defaut['SECHIBA_LAI__11'] = '2'
config_defaut['SECHIBA_LAI__12'] = '2'
config_defaut['SECHIBA_LAI__13'] = '2'
config_defaut['SLOWPROC_HEIGHT__01'] = '0.'
config_defaut['SLOWPROC_HEIGHT__02'] = '50.'
config_defaut['SLOWPROC_HEIGHT__03'] = '50.'
config_defaut['SLOWPROC_HEIGHT__04'] = '30.'
config_defaut['SLOWPROC_HEIGHT__05'] = '30.'
config_defaut['SLOWPROC_HEIGHT__06'] = '30.'
config_defaut['SLOWPROC_HEIGHT__07'] = '20.'
config_defaut['SLOWPROC_HEIGHT__08'] = '20.'
config_defaut['SLOWPROC_HEIGHT__09'] = '20.'
config_defaut['SLOWPROC_HEIGHT__10'] = '.2'
config_defaut['SLOWPROC_HEIGHT__11'] = '.2'
config_defaut['SLOWPROC_HEIGHT__12'] = '.4'
config_defaut['SLOWPROC_HEIGHT__13'] = '.4'
config_defaut['SOIL_FRACTIONS__01'] = '0.28'
config_defaut['SOIL_FRACTIONS__02'] = '0.52'
config_defaut['SOIL_FRACTIONS__03'] = '0.20'
config_defaut['SLOWPROC_LAI_TEMPDIAG'] = '280.'
config_defaut['SECHIBA_ZCANOP'] = '0.5'
config_defaut['SECHIBA_FRAC_NOBIO'] = '0.0'
config_defaut['CLAY_FRACTION'] = '0.2'
config_defaut['IMPOSE_AZE'] = 'n'
config_defaut['CONDVEG_EMIS'] = '1.0'
config_defaut['CONDVEG_ALBVIS'] = '0.25'
config_defaut['CONDVEG_ALBNIR'] = '0.25'
config_defaut['Z0CDRAG_AVE'] = 'y'
config_defaut['CONDVEG_Z0'] = '0.15'
config_defaut['ROUGHHEIGHT'] = '0.0'
config_defaut['CONDVEG_SNOWA'] = 'default'
config_defaut['ALB_BARE_MODEL'] = 'FALSE'
config_defaut['HYDROL_SNOW'] = '0.0'
config_defaut['HYDROL_SNOWAGE'] = '0.0'
config_defaut['HYDROL_SNOW_NOBIO'] = '0.0'
config_defaut['HYDROL_SNOW_NOBIO_AGE'] = '0.0'
config_defaut['HYDROL_HDRY'] = '0.0'
config_defaut['HYDROL_HUMR'] = '1.0'
config_defaut['HYDROL_SOIL_DEPTH'] = '2.0'
config_defaut['HYDROL_HUMCSTE'] = '5., .8, .8, 1., .8, .8, 1., 1., .8, 4., 4., 4., 4.'
config_defaut['HYDROL_BQSB'] = 'default'
config_defaut['HYDROL_GQSB'] = ' 0.0'
config_defaut['HYDROL_DSG'] = '0.0'
config_defaut['HYDROL_DSP'] = 'default'
config_defaut['HYDROL_QSV'] = '0.0'
config_defaut['HYDROL_MOISTURE_CONTENT'] = '0.3'
config_defaut['US_INIT'] = '0.0'
config_defaut['FREE_DRAIN_COEF'] = '1.0, 1.0, 1.0'
config_defaut['EVAPNU_SOIL'] = '0.0'
config_defaut['ENERBIL_TSURF'] = '280.'
config_defaut['ENERBIL_EVAPOT'] = '0.0'
config_defaut['THERMOSOIL_TPRO'] = '280.'
config_defaut['DIFFUCO_LEAFCI'] = '233.'
config_defaut['CDRAG_FROM_GCM'] = 'n'
config_defaut['RVEG_PFT'] = '1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.'
config_defaut['SECHIBA_QSINT'] = '0.1'
# -- LAI --
config_defaut['LAI_MAP'] = 'n'
config_defaut['LAI_FILE'] = 'lai2D.nc'
config_defaut['SLOWPROC_LAI_OLD_INTERPOL'] = 'n'
# -- Land Use --
config_defaut['LAND_USE'] = 'n'
config_defaut['VEGET_YEAR'] = '133'
config_defaut['VEGET_REINIT'] = 'n'
config_defaut['VEGET_LENGTH'] = '1Y'
config_defaut['VEGET_UPDATE'] = '1Y'
config_defaut['LAND_COVER_CHANGE'] = 'n'
# -- Agriculture --
config_defaut['AGRICULTURE'] = 'y'
config_defaut['HARVEST_AGRI'] = 'n' #'y'
config_defaut['HERBIVORES'] = 'n'
config_defaut['TREAT_EXPANSION'] = 'n'
# -- Time steps --
config_defaut['SECHIBA_DAY'] = '0.0'
config_defaut['DT_SLOW'] = '86400.'
# -- Hydrology --
config_defaut['CHECK_WATERBAL'] = 'n'
config_defaut['HYDROL_CWRR'] = 'n'
config_defaut['CHECK_CWRR'] = 'n'
config_defaut['HYDROL_OK_HDIFF'] = 'n'
config_defaut['HYDROL_TAU_HDIFF'] = '86400.'
config_defaut['PERCENT_THROUGHFALL'] = '30.'
config_defaut['PERCENT_THROUGHFALL_PFT'] = '30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30.'
config_defaut['RIVER_ROUTING'] = 'n'
config_defaut['ROUTING_FILE'] = 'routing.nc'
config_defaut['ROUTING_TIMESTEP'] = '86400'
config_defaut['ROUTING_RIVERS'] = '50'
config_defaut['DO_IRRIGATION'] = 'n'
config_defaut['IRRIGATION_FILE'] = 'irrigated.nc'
config_defaut['DO_FLOODPLAINS'] = 'n'
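# Usage sketch (illustrative; the launcher script that consumes this module is
# not part of this file): the defaults above are meant to fill the
# 'modele_rundef' template below through Python '%' mapping substitution, e.g.
#
#   rundef_text = modele_rundef % config_defaut
#   with open('run.def', 'w') as handle:
#       handle.write(rundef_text)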
# Template of the run.def file read by ORCHIDEE
# ---------------------------------------------
modele_rundef = """
#
#**************************************************************************
# List of the ORCHIDEE parameters
#**************************************************************************
#
#
#**************************************************************************
# LIST OF OPTIONS NOT ACTIVATED in this simulation
#**************************************************************************
#
#
#**************************************************************************
# Handling of the messages printed during the ORCHIDEE run
#**************************************************************************
# Verbosity level of the model
# (the larger the number, the more verbose the model)
BAVARD = %(BAVARD)s
# default = 1
# Flag for the debugging information
# This option prints the debugging information
# without recompiling the code.
DEBUG_INFO = %(DEBUG_INFO)s
# default = n
# ORCHIDEE will print more messages.
# This prints many more messages about the progress of the
# program.
LONGPRINT = %(LONGPRINT)s
# default = n
#---------------------------------------------------------------------
# Indicates whether the outputs must follow the ALMA convention
# If this option is activated, the model outputs will follow the
# convention of the ALMA project. This is recommended for writing
# ORCHIDEE output data.
ALMA_OUTPUT = %(ALMA_OUTPUT)s
# default = n
# Override the time variable given by the SECHIBA forcing
# This option uses the restart time given by the GCM
# instead of the one given in the SECHIBA restart file.
# This flag makes it possible to loop several times over the same year
SECHIBA_reset_time = %(SECHIBA_RESET_TIME)s
# default = n
#**************************************************************************
# Input files / forcings / restart / outputs
#**************************************************************************
# Miscellaneous files:
#---------------------------------------------------------------------
# Name of the vegetation file
# If !IMPOSE_VEG
# If LAND_USE
# default = pft_new.nc
# The name of the vegetation map file (in PFTs)
# must be given here.
# If !LAND_USE
# default = ../surfmap/carteveg5km.nc
# This is the name of the file opened to read the
# vegetation map. Usually, SECHIBA runs with
# a 5km x 5km map derived from the IGBP one. We assume
# a classification into 87 vegetation types.
# It is Olson's classification as modified by Viovy.
VEGETATION_FILE = %(VEGETATION_FILE)s
# Flag to use old ""interpolation"" of vegetation map.
# IF NOT IMPOSE_VEG and NOT LAND_USE
# If you want to recover the old (ie orchidee_1_2 branch)
# ""interpolation"" of vegetation map.
# default = n
SLOWPROC_VEGET_OLD_INTERPOL = %(SLOWPROC_VEGET_OLD_INTERPOL)s
# Name of the bare soil albedo file.
# If !IMPOSE_AZE
# This file is used to read the soil types from which the
# bare soil albedos are derived. This file has a 1x1 degree
# resolution and is based on the soil colours defined by Wilson and Henderson.
SOILALB_FILE = %(SOILALB_FILE)s
# default = ../surfmap/soils_param.nc
# Name of the soil types file
# If !IMPOSE_VEG
# This is the name of the file opened to read the soil types.
# The data from this file will be interpolated onto the model grid.
# The aim is to obtain the "sand loam" and "clay" fractions for each
# grid box. This information is used for the soil hydrology and
# the respiration.
SOILTYPE_FILE = %(SOILTYPE_FILE)s
# default = ../surfmap/soils_param.nc
# Name of the reference temperature file
# This file is used to read the reference surface
# temperature. The data are interpolated onto the
# model grid. The aim is to obtain a temperature both
# to correctly initialise the corresponding prognostic
# variables of the model (ok_dgvm = TRUE) and
# to impose it as a boundary condition (ok_dgvm = FALSE).
REFTEMP_FILE = %(REFTEMP_FILE)s
# default = reftemp.nc
# Forcing file
# Name of the forcing file
# Used to read the data for the dim0 model.
# The format of this file is compatible with the NetCDF
# and COADS standards. Cabauw.nc, islscp_for.nc, WG_cru.nc, islscp2_for_1982.nc
FORCING_FILE = %(FORCING_FILE)s
# default = islscp_for.nc
# reading and writing of the driver restart files
#---------------------------------------------------------------------
# Name of the forcing file for the initial conditions.
# This file will be opened by the launcher to extract the input
# data for the model. This file must be compatible with the NetCDF standard,
# but may not be fully compatible with the COADS standard.
# NONE means that no forcing file is read.
RESTART_FILEIN = %(RESTART_FILEIN)s
# default = NONE
# Name of the restart file that will be created by the launcher
RESTART_FILEOUT = %(RESTART_FILEOUT)s
# default = driver_rest_out.nc
# reading and writing of the SECHIBA restart files:
#---------------------------------------------------------------------
# Name of the restart initial conditions for SECHIBA
# This is the name of the file opened to retrieve the initial
# values of all SECHIBA input variables.
SECHIBA_restart_in = %(SECHIBA_RESTART_IN)s
# default = NONE
# Name of the restart file created by SECHIBA
# This is the name of the file created to write out from SECHIBA
# the initial values of all SECHIBA input variables.
SECHIBA_rest_out = %(SECHIBA_REST_OUT)s
# default = sechiba_rest_out.nc
# reading and writing of the STOMATE restart files:
#---------------------------------------------------------------------
# Name of the restart file to READ the STOMATE initial conditions
# If STOMATE_OK_STOMATE || STOMATE_WATCHOUT
# This is the name of the file opened to retrieve the initial
# values of all STOMATE input variables.
STOMATE_RESTART_FILEIN = %(STOMATE_RESTART_FILEIN)s
# default = NONE
# Name of the restart file created by STOMATE
# If STOMATE_OK_STOMATE || STOMATE_WATCHOUT
# This is the name of the file created to write out from STOMATE
# the initial values of all STOMATE input variables.
STOMATE_RESTART_FILEOUT = %(STOMATE_RESTART_FILEOUT)s
# default = stomate_restart.nc
# reading and writing of the TESTSTOMATE and FORCESOIL restart files
# (equilibration of the soil carbon):
#---------------------------------------------------------------------
# Name of the STOMATE forcing file
STOMATE_FORCING_NAME = %(STOMATE_FORCING_NAME)s
# default = NONE
# Size of the STOMATE memory (in MB)
# This size determines how many forcing variables
# are kept in memory.
# It sets a trade-off between memory and the
# frequency of disk accesses.
STOMATE_FORCING_MEMSIZE = %(STOMATE_FORCING_MEMSIZE)s
# default = 50
# Name of the carbon forcing file in STOMATE
# Name passed to STOMATE to read the input carbon data
STOMATE_CFORCING_NAME = %(STOMATE_CFORCING_NAME)s
# default = NONE
# writing of the forcing files (SECHIBA then STOMATE):
#---------------------------------------------------------------------
# ORCHIDEE writes its output forcings to this file.
# This flag forces the writing of a file (cf WATCHOUT_FILE)
# containing the surface forcing variables.
ORCHIDEE_WATCHOUT = %(ORCHIDEE_WATCHOUT)s
# default = n
# Name of the ORCHIDEE forcing file
# If ORCHIDEE_WATCHOUT
# This file has exactly the same format as an off-line forcing file
# and can be used to force ORCHIDEE as input (RESTART_FILEIN).
WATCHOUT_FILE = %(WATCHOUT_FILE)s
# default = orchidee_watchout.nc
# The watchout is written at this frequency
# If ORCHIDEE_WATCHOUT
# Gives the writing frequency of the watchout file.
DT_WATCHOUT = %(DT_WATCHOUT)s
# default = dt
# STOMATE performs a minimum of tasks.
# With this option, STOMATE reads and writes its start files
# and keeps a backup of the biometeorological variables.
# This is useful when STOMATE_OK_STOMATE is set to n and you wish to
# activate STOMATE later. In that case, this first computation builds
# the long-term biometeorological data.
STOMATE_WATCHOUT = %(STOMATE_WATCHOUT)s
# default = n
# writing of the output files (SECHIBA then STOMATE):
#---------------------------------------------------------------------
# Name of the file to which the history data are written
# This file is created by the model to save the output
# history. It fully follows the COADS and NetCDF conventions.
# It is generated by the hist package of IOIPSL.
OUTPUT_FILE = %(OUTPUT_FILE)s
# default = cabauw_out.nc
# Flag to save the second SECHIBA history file (at high frequency?)
# This flag uses the second SECHIBA save to write the outputs at
# high (or low) frequency. This output is therefore optional
# and is not activated by default.
SECHIBA_HISTFILE2 = %(SECHIBA_HISTFILE2)s
# default = FALSE
# Name of the second output file
# If SECHIBA_HISTFILE2
# This file is the second output file written by the model.
SECHIBA_OUTPUT_FILE2 = %(SECHIBA_OUTPUT_FILE2)s
# default = sechiba_out_2.nc
# Name of the STOMATE history file
# The format of this file is compatible with the NetCDF
# and COADS standards.
STOMATE_OUTPUT_FILE = %(STOMATE_OUTPUT_FILE)s
# default = stomate_history.nc
# writing levels for the output files (number of variables):
#---------------------------------------------------------------------
# Writing level for SECHIBA (between 0 and 10)
# Chooses the list of variables written to the SECHIBA history file.
# Values range from 0 (nothing is written) to 10 (everything is written).
SECHIBA_HISTLEVEL = %(SECHIBA_HISTLEVEL)s
# default = 5
# Writing level for the second SECHIBA output (between 0 and 10)
# If SECHIBA_HISTFILE2
# Chooses the list of variables written to the SECHIBA history file.
# Values range from 0 (nothing is written) to 10 (everything is written).
# Level 1 contains only the ORCHIDEE outputs.
SECHIBA_HISTLEVEL2 = %(SECHIBA_HISTLEVEL2)s
# default = 1
# Level of the history outputs for STOMATE (0..10)
# 0: nothing is written; 10: everything is written
STOMATE_HISTLEVEL = %(STOMATE_HISTLEVEL)s
# default = 10
# writing frequencies of the history files (in seconds for SECHIBA and
# in days for STOMATE)
#---------------------------------------------------------------------
# Writing frequency of the output files (for SECHIBA, in seconds):
# This does not change the frequency of the operations on the data (e.g. the averages).
WRITE_STEP = %(WRITE_STEP)s
# default = 86400.0
# Writing frequency in seconds of the second outputs
# If SECHIBA_HISTFILE2
# The variables of the second SECHIBA output that the model
# will write in netCDF format if the SECHIBA_HISTFILE2 flag is TRUE.
WRITE_STEP2 = %(WRITE_STEP2)s
# default = 1800.0
# Time step of the STOMATE history saves (d)
# For STOMATE, this is in days
# Beware, this variable must be larger than DT_SLOW.
STOMATE_HIST_DT = %(STOMATE_HIST_DT)s
# default = 10.
#****************************************************************************************
# Optimisation of some ORCHIDEE parameters
#****************************************************************************************
# Number of levels for the litter: metabolic, structural, and woody (optional)
# default : 2
NLITT = %(NLITT)s
# Activates the phenology scheme made continuous in time
# implementation Diego Santaren
IS_PHENO_CONTI = %(IS_PHENO_CONTI)s
# Activates the increase of the C3G fraction in winter, for the fAPAR
# computation, when the cover is composed of PFT6+PFT10
# default = FALSE
IS_FAPAR_TRICK_TBS_C3G = %(IS_FAPAR_TRICK_TBS_C3G)s
# Computes the BLACK SKY or WHITE SKY fAPAR
# Black Sky: fAPAR for the SZA at 10:00 local time
# White Sky: fAPAR integrated over the day
# default = black_sky
FAPAR_COMPUTATION = %(FAPAR_COMPUTATION)s
# Activates the computation of the extinction coefficient as a function
# of the solar angle and of the canopy structure
# default = True <=> ext_coeff = 0.5
IS_EXT_COEFF_CONSTANT = %(IS_EXT_COEFF_CONSTANT)s
# Activates the 'optimisable parameters' module of ORCHIDEE
# (y | n)
OPTIMIZATION_ORCHIDEE = %(OPTIMIZATION_ORCHIDEE)s
# Input NetCDF file of the optimisable parameters
OPTIMIZATION_FILEIN_PARAS = %(OPTIMIZATION_FILEIN_PARAS)s
# Output NetCDF file of the optimisable parameters.
# >>> Ends the execution of ORCHIDEE
OPTIMIZATION_FILEOUT_PARAS = %(OPTIMIZATION_FILEOUT_PARAS)s
# Output NetCDF file of the optimised fluxes (H,LE,Rn,CO2)
OPTIMIZATION_FILEOUT_FLUXES = %(OPTIMIZATION_FILEOUT_FLUXES)s
# Are the fluxes included (y/n) in the output file (y by default)?
OPTIMIZATION_FILEOUT_EXTCOEFF = %(OPTIMIZATION_FILEOUT_EXTCOEFF)s
OPTIMIZATION_FILEOUT_NEE = %(OPTIMIZATION_FILEOUT_NEE)s
OPTIMIZATION_FILEOUT_NEET = %(OPTIMIZATION_FILEOUT_NEET)s
OPTIMIZATION_FILEOUT_QH = %(OPTIMIZATION_FILEOUT_QH)s
OPTIMIZATION_FILEOUT_QLE = %(OPTIMIZATION_FILEOUT_QLE)s
OPTIMIZATION_FILEOUT_RN = %(OPTIMIZATION_FILEOUT_RN)s
OPTIMIZATION_FILEOUT_FAPAR = %(OPTIMIZATION_FILEOUT_FAPAR)s
OPTIMIZATION_FILEOUT_FAPART = %(OPTIMIZATION_FILEOUT_FAPART)s
OPTIMIZATION_FILEOUT_ABOBMT = %(OPTIMIZATION_FILEOUT_ABOBMT)s
OPTIMIZATION_FILEOUT_WOODBMT = %(OPTIMIZATION_FILEOUT_WOODBMT)s
OPTIMIZATION_FILEOUT_GPPT = %(OPTIMIZATION_FILEOUT_GPPT)s
OPTIMIZATION_FILEOUT_RESP_HT = %(OPTIMIZATION_FILEOUT_RESP_HT)s
OPTIMIZATION_FILEOUT_RESP_GT = %(OPTIMIZATION_FILEOUT_RESP_GT)s
OPTIMIZATION_FILEOUT_RESP_MT = %(OPTIMIZATION_FILEOUT_RESP_MT)s
OPTIMIZATION_FILEOUT_RESP_H = %(OPTIMIZATION_FILEOUT_RESP_H)s
OPTIMIZATION_FILEOUT_RESP_TER = %(OPTIMIZATION_FILEOUT_RESP_TER)s
OPTIMIZATION_FILEOUT_RESP_TERT = %(OPTIMIZATION_FILEOUT_RESP_TERT)s
#**************************************************************************
# Site coordinates
#**************************************************************************
# The model will use the smaller of this region and
# the one given by the forcing file.
# Western limit of the region
# The western limit of the region must be between -180. and +180. degrees.
LIMIT_WEST = %(LIMIT_WEST)s
# default = -180.
# Eastern limit of the region
# The eastern limit of the region must be between -180. and +180. degrees.
LIMIT_EAST = %(LIMIT_EAST)s
# default = 180.
# Northern limit of the region
# The northern limit of the region must be between -90. and +90. degrees.
LIMIT_NORTH = %(LIMIT_NORTH)s
# default = 90.
# Southern limit of the region
# The southern limit of the region must be between -90. and +90. degrees.
LIMIT_SOUTH = %(LIMIT_SOUTH)s
# default = -90.
#**************************************************************************
# Simulation characteristics
#**************************************************************************
# Forcing method
# Uses the method in which the first level of atmospheric
# data is not directly forced by observations, but relaxes
# towards these observations with a time constant.
# For now, the method tends to smooth the diurnal cycle too much
# and introduces a time shift. A more complete method is needed.
RELAXATION = %(RELAXATION)s
# default = n
# Time constant for the relaxation method.
# The time constant associated with the relaxation
# of the atmospheric data. To avoid too much ?????
# the value must be greater than 1000.0
RELAX_A = %(RELAX_A)s
# default = 1000.0
# Height at which T and Q are measured.
# The atmospheric variables (temperature and
# specific humidity) are measured at a given height.
# This height is needed to compute the turbulent transfer
# coefficients correctly. Look at the description of the
# forcing data to set the correct value.
HEIGHT_LEV1 = %(HEIGHT_LEV1)s
# default = 2.0
# Height at which the wind is given.
# This height is needed to compute the turbulent
# transfer coefficients correctly.
HEIGHT_LEVW = %(HEIGHT_LEVW)s
# default = 10.0
#---------------------------------------------------------------------
# Weather generator:
#---------------------------------------------------------------------
# Weather generator
# This option triggers the computation of weather data
# when there is not enough data in the forcing file
# for the temporal resolution of the model.
ALLOW_WEATHERGEN = %(ALLOW_WEATHERGEN)s
# default = n
# North-South resolution
# If the ALLOW_WEATHERGEN option is activated,
# gives the North-South resolution used, in degrees.
MERID_RES = %(MERID_RES)s
# default = 2.
# East-West resolution
# If the ALLOW_WEATHERGEN option is activated,
# gives the East-West resolution used, in degrees.
ZONAL_RES = %(ZONAL_RES)s
# default = 2.
# Mode of the weather generator
# If ALLOW_WEATHERGEN
# If this option is 1, the monthly mean quantities
# are used for every day; if it is 0, a random
# number generator is used to create the daily
# data from the monthly data.
IPPREC = %(IPPREC)s
# default = 0
# Interpolation or not IF the splitting is greater than 1
# Chooses whether you want a linear interpolation or not.
NO_INTER = %(NO_INTER)s
INTER_LIN = %(INTER_LIN)s
# default:
# NO_INTER = y
# INTER_LIN = n
# Exact monthly precipitation
# If ALLOW_WEATHERGEN
# If this option is activated, the monthly precipitation
# obtained with the random generator is corrected in order
# to preserve the monthly mean value.
# In that case, there is a constant number of precipitation days
# for each month. The amount of precipitation for those
# days is constant.
WEATHGEN_PRECIP_EXACT = %(WEATHGEN_PRECIP_EXACT)s
# default = n
# Calling frequency of the weather generator
# This sets the time step (in seconds) between two
# calls of the weather generator. It must be larger
# than the SECHIBA time step.
DT_WEATHGEN = %(DT_WEATHGEN)s
# default = 1800.
# Conservation of the net radiation for the forcing.
# When the interpolation (INTER_LIN = y) is used, the net radiation
# given by the forcing is not conserved.
# This can be avoided by setting y for this option.
# This option is not used for the short-wave if the forcing
# time step is larger than one hour.
# It no longer makes sense to try to reconstruct a diurnal cycle and,
# at the same time, solar radiation that is conservative in time.
NETRAD_CONS = %(NETRAD_CONS)s
# default = y
# writing of the generator results to a forcing file
# This option activates the saving of the generated weather to a
# forcing file. It works correctly as a restart file
# but not as an initial condition (in that case, the first time step
# is slightly wrong)
DUMP_WEATHER = %(DUMP_WEATHER)s
# default = n
# Name of the weather forcing file
# If DUMP_WEATHER
DUMP_WEATHER_FILE = %(DUMP_WEATHER_FILE)s
# default = 'weather_dump.nc'
# Compression of the weather data
# This option activates the saving of the weather generator
# only for the land points (gathered mode)
# If DUMP_WEATHER
DUMP_WEATHER_GATHERED = %(DUMP_WEATHER_GATHERED)s
# default = y
# Orbital parameters
# Eccentricity effect
# Use the predefined value
# If ALLOW_WEATHERGEN
ECCENTRICITY = %(ECCENTRICITY)s
# default = 0.016724
# Longitude of the perihelion
# Use the predefined value
# If ALLOW_WEATHERGEN
PERIHELIE = %(PERIHELIE)s
# default = 102.04
# obliquity
# Use the predefined value
# If ALLOW_WEATHERGEN
OBLIQUITY = %(OBLIQUITY)s
# default = 23.446
#**************************************************************************
# simulation length:
#---------------------------------------------------------------------
# Length of the simulation in time.
# Length of the simulation. By default, the full length of the forcing.
# The FORMAT of this time can be:
# n  : time step n in the forcing file.
# nS : n seconds after the first time step in the file
# nD : n days after the first time step
# nM : n months after the first time step (365-day years)
# nY : n years after the first time step (365-day years)
# Or combinations:
# nYmM: n years and m months
TIME_LENGTH = %(TIME_LENGTH)s
# default = depends on the length and the number of time steps given by the
# forcing file.
# splitting of the time step:
#---------------------------------------------------------------------
# Splits the time step given by the forcing
# This value divides the time step given by the forcing file.
# In principle, it can be used for computations in explicit mode,
# but it is strongly recommended to use it only in implicit mode
# so that the atmospheric forcings have a regular evolution.
SPLIT_DT = %(SPLIT_DT)s
# default = 12
# Are the meteorological forcing data centered or not?
# Are the meteo fields provided every dt or every dt+dt/2?
# default = n
FORCING_RAD_CENTER = %(FORCING_RAD_CENTER)s
# Time offset of the start with respect to the forcing data.
# This time is the offset, relative to the starting point of the
# forcing file, that the model should take.
# If a restart file is used, its date is taken.
# The FORMAT of this time can be:
# n  : time step n in the forcing file.
# nS : n seconds after the first time step in the file
# nD : n days after the first time step
# nM : n months after the first time step (365-day years)
# nY : n years after the first time step (365-day years)
# Or combinations:
# nYmM: n years and m months
TIME_SKIP = %(TIME_SKIP)s
# default = 0
# Splitting of one year for the carbon convergence algorithm.
FORCESOIL_STEP_PER_YEAR = %(FORCESOIL_STEP_PER_YEAR)s
# default = 12
# Number of years saved in the forcing file for
# the carbon convergence algorithm.
FORCESOIL_NB_YEAR = %(FORCESOIL_NB_YEAR)s
# default = 1
# Use of the precipitation
# Gives the number of times the precipitation is used
# during the splitting of the forcing time step.
# This is only used when the forcing time step is split (SPLIT_DT).
# If a number larger than SPLIT_DT is given, it is set to that value.
SPRED_PREC = %(SPRED_PREC)s
# default = 1
#---------------------------------------------------------------------
# flags to activate depending on the case:
#---------------------------------------------------------------------
# STOMATE activated
STOMATE_OK_STOMATE = %(STOMATE_OK_STOMATE)s
# default = n
# DGVM activated
# Activates the dynamic vegetation
STOMATE_OK_DGVM = %(STOMATE_OK_DGVM)s
# default = n
# CO2 activated
# Activates the photosynthesis
STOMATE_OK_CO2 = %(STOMATE_OK_CO2)s
# default = n
# Logical flag to force the value of the atmospheric CO2 for the vegetation.
# If this flag is TRUE, the following parameter ATM_CO2 gives
# the value used by ORCHIDEE for the atmospheric CO2.
# This flag is only used in coupled mode.
FORCE_CO2_VEG = %(FORCE_CO2_VEG)s
# default = FALSE
# Atmospheric CO2
# ---------------------------------------------------------------------------------------
# Value of the prescribed atmospheric CO2.
# If FORCE_CO2_VEG (only in coupled mode)
# This value gives the prescribed atmospheric CO2.
# For pre-industrial simulations, the value is 286.2.
# For the year 1990 the value is 348.
ATM_CO2 = %(ATM_CO2)s
# default = 350.
# NetCDF file of values for the atmospheric CO2.
# If !=NONE, the values will be read from this file and will vary yearly,
# rather than being set to the constant ATM_CO2 value defined above
ATM_CO2_FILE = %(ATM_CO2_FILE)s
# Starting year of the vector of atmospheric CO2 values to read from the
# ATM_CO2_FILE file
# -1 by default => automatic handling according to the meteorological forcing years
YEAR_ATMCO2_START = %(YEAR_ATMCO2_START)s
# Index of the grid point for the on-line diagnostics
# Gives the longitude and latitude of the land point whose index
# is given by this parameter.
STOMATE_DIAGPT = %(STOMATE_DIAGPT)s
# default = 1
# Constant tree mortality
# If this option is activated, a constant tree mortality is
# imposed. Otherwise, the mortality is a function of the vigour
# of the trees (as in LPJ).
LPJ_GAP_CONST_MORT = %(LPJ_GAP_CONST_MORT)s
# default = y
# No fires
# With this variable, the amount of CO2 produced by a fire
# can be estimated or not.
FIRE_DISABLE = %(FIRE_DISABLE)s
# default = n
#
#**************************************************************************
# New options for the restarts starting from version 1.9.3
#**************************************************************************
#
## sechiba
soilcap=%(SOILCAP)s
soilflx=%(SOILFLX)s
shumdiag=%(SHUMDIAG)s
runoff=%(RUNOFF)s
drainage=%(DRAINAGE)s
## diffuco
raero=%(RAERO)s
qsatt=%(QSATT)s
cdrag=%(CDRAG)s
## enerbil
evapot_corr=%(EVAPOT_CORR)s
temp_sol_new=%(TEMP_SOL_NEW)s
## hydrolc
dss=%(DSS)s
hdry=%(HDRY)s
## thermosoil
cgrnd=%(CGRND)s
dgrnd=%(DGRND)s
z1=%(Z1)s
pcapa=%(PCAPA)s
pcapa_en=%(PCAPA_EN)s
pkappa=%(PKAPPA)s
zdz1=%(ZDZ1)s
zdz2=%(ZDZ2)s
temp_sol_beg=%(TEMP_SOL_BEG)s
# parameters describing the surface (vegetation + soil):
#---------------------------------------------------------------------
#
# Should the vegetation be imposed?
# This option imposes the vegetation distribution
# when working on a single point. Over the globe, it makes
# no sense to impose the same vegetation everywhere.
IMPOSE_VEG = %(IMPOSE_VEG)s
# default = n
# To be removed from the code!!! computation imposed by the lai (cf slowproc_veget) and veget_max
# Distribution of the vegetation within the grid box
# If IMPOSE_VEG
# Prescribed parameters for the vegetation in the 0-dim cases.
# The vegetation fractions (PFTs) are read from the restart file
# or imposed by these values.
# Fraction of VEGET_MAX (thus parameterised by the LAI).
SECHIBA_VEG__01 = %(SECHIBA_VEG__01)s
SECHIBA_VEG__02 = %(SECHIBA_VEG__02)s
SECHIBA_VEG__03 = %(SECHIBA_VEG__03)s
SECHIBA_VEG__04 = %(SECHIBA_VEG__04)s
SECHIBA_VEG__05 = %(SECHIBA_VEG__05)s
SECHIBA_VEG__06 = %(SECHIBA_VEG__06)s
SECHIBA_VEG__07 = %(SECHIBA_VEG__07)s
SECHIBA_VEG__08 = %(SECHIBA_VEG__08)s
SECHIBA_VEG__09 = %(SECHIBA_VEG__09)s
SECHIBA_VEG__10 = %(SECHIBA_VEG__10)s
SECHIBA_VEG__11 = %(SECHIBA_VEG__11)s
SECHIBA_VEG__12 = %(SECHIBA_VEG__12)s
SECHIBA_VEG__13 = %(SECHIBA_VEG__13)s
# default = 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8, 0.0, 0.0, 0.0
# Distribution of the maximum vegetation within the grid box
# If IMPOSE_VEG
# Prescribed parameters for the vegetation in the 0-dim cases.
# The maximum vegetation fractions (PFTs) are read from the restart file
# or imposed by these values.
SECHIBA_VEGMAX__01 = %(SECHIBA_VEGMAX__01)s
SECHIBA_VEGMAX__02 = %(SECHIBA_VEGMAX__02)s
SECHIBA_VEGMAX__03 = %(SECHIBA_VEGMAX__03)s
SECHIBA_VEGMAX__04 = %(SECHIBA_VEGMAX__04)s
SECHIBA_VEGMAX__05 = %(SECHIBA_VEGMAX__05)s
SECHIBA_VEGMAX__06 = %(SECHIBA_VEGMAX__06)s
SECHIBA_VEGMAX__07 = %(SECHIBA_VEGMAX__07)s
SECHIBA_VEGMAX__08 = %(SECHIBA_VEGMAX__08)s
SECHIBA_VEGMAX__09 = %(SECHIBA_VEGMAX__09)s
SECHIBA_VEGMAX__10 = %(SECHIBA_VEGMAX__10)s
SECHIBA_VEGMAX__11 = %(SECHIBA_VEGMAX__11)s
SECHIBA_VEGMAX__12 = %(SECHIBA_VEGMAX__12)s
SECHIBA_VEGMAX__13 = %(SECHIBA_VEGMAX__13)s
# default = 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8, 0.0, 0.0, 0.0
# LAI distribution for all vegetation types (0-dim)
# If IMPOSE_VEG
# This is the maximum LAI used in the 0-D cases. These values are
# used if they are not in the restart file.
# The new LAI values are nevertheless computed at the end of the
# first day. These values are needed if the model stops before the end
# of the day without going through the procedures computing them,
# in order to obtain the new surface conditions.
SECHIBA_LAI__01 = %(SECHIBA_LAI__01)s
SECHIBA_LAI__02 = %(SECHIBA_LAI__02)s
SECHIBA_LAI__03 = %(SECHIBA_LAI__03)s
SECHIBA_LAI__04 = %(SECHIBA_LAI__04)s
SECHIBA_LAI__05 = %(SECHIBA_LAI__05)s
SECHIBA_LAI__06 = %(SECHIBA_LAI__06)s
SECHIBA_LAI__07 = %(SECHIBA_LAI__07)s
SECHIBA_LAI__08 = %(SECHIBA_LAI__08)s
SECHIBA_LAI__09 = %(SECHIBA_LAI__09)s
SECHIBA_LAI__10 = %(SECHIBA_LAI__10)s
SECHIBA_LAI__11 = %(SECHIBA_LAI__11)s
SECHIBA_LAI__12 = %(SECHIBA_LAI__12)s
SECHIBA_LAI__13 = %(SECHIBA_LAI__13)s
# default = 0., 8., 8., 4., 4.5, 4.5, 4., 4.5, 4., 2., 2., 2., 2.
# Height for all vegetation types (0-dim)
# If IMPOSE_VEG
# This is the height used in the 0-D cases. These values are
# used if they are not in the restart file.
# The new height values are nevertheless computed at the end of the
# first day. These values are needed if the model stops before the end
# of the day without going through the procedures computing them,
# in order to obtain the new surface conditions.
SLOWPROC_HEIGHT__01 = %(SLOWPROC_HEIGHT__01)s
SLOWPROC_HEIGHT__02 = %(SLOWPROC_HEIGHT__02)s
SLOWPROC_HEIGHT__03 = %(SLOWPROC_HEIGHT__03)s
SLOWPROC_HEIGHT__04 = %(SLOWPROC_HEIGHT__04)s
SLOWPROC_HEIGHT__05 = %(SLOWPROC_HEIGHT__05)s
SLOWPROC_HEIGHT__06 = %(SLOWPROC_HEIGHT__06)s
SLOWPROC_HEIGHT__07 = %(SLOWPROC_HEIGHT__07)s
SLOWPROC_HEIGHT__08 = %(SLOWPROC_HEIGHT__08)s
SLOWPROC_HEIGHT__09 = %(SLOWPROC_HEIGHT__09)s
SLOWPROC_HEIGHT__10 = %(SLOWPROC_HEIGHT__10)s
SLOWPROC_HEIGHT__11 = %(SLOWPROC_HEIGHT__11)s
SLOWPROC_HEIGHT__12 = %(SLOWPROC_HEIGHT__12)s
SLOWPROC_HEIGHT__13 = %(SLOWPROC_HEIGHT__13)s
# default = 0., 30., 30., 20., 20., 20., 15., 15., 15., .5, .6, 1.0, 1.0
# Fraction of the 3 soil types (0-dim mode)
# If IMPOSE_VEG
# Determines the fraction of the 3 soil types
# in the grid box, in the order: sand loam and clay.
SOIL_FRACTIONS__01 = %(SOIL_FRACTIONS__01)s
SOIL_FRACTIONS__02 = %(SOIL_FRACTIONS__02)s
SOIL_FRACTIONS__03 = %(SOIL_FRACTIONS__03)s
# default = 0.28, 0.52, 0.20
# Temperature used for the LAI initialisation
# If there is no LAI in the restart file,
# this temperature is used for the initial LAI.
SLOWPROC_LAI_TEMPDIAG = %(SLOWPROC_LAI_TEMPDIAG)s
# default = 280.
# Soil level (in m) used for the canopy computations
# If STOMATE is not activated
# The temperature at a soil depth is used to
# determine the LAI when STOMATE is not activated.
SECHIBA_ZCANOP = %(SECHIBA_ZCANOP)s
# default = 0.5
# Fraction of the other surface types in the grid box (0-D)
# If IMPOSE_VEG
# Gives the fraction of ice, lakes, etc... if it is not given
# in the restart file. For now, there is only
# ice.
# Q: keep this as long as there is only ice. Problem with setvar?????
SECHIBA_FRAC_NOBIO = %(SECHIBA_FRAC_NOBIO)s
# default = 0.0
# Clay fraction (0-D)
# If IMPOSE_VEG
# Determines the clay fraction in the grid box
CLAY_FRACTION = %(CLAY_FRACTION)s
# default = 0.2
# The surface parameters must be given.
# This option imposes the surface parameters
# (albedo, roughness, emissivity). It is mostly used
# for single-point simulations. Over the globe, it makes
# no sense to impose the same parameters everywhere.
IMPOSE_AZE = %(IMPOSE_AZE)s
# default = n
# Emission of long-wave radiation.
# If IMPOSE_AZE
# The surface emissivity is used to compute the surface emissions _LE_ ??
# in single-point computations. The values must be
# between 0.97 and 1. The GCM uses 0.98.
CONDVEG_EMIS = %(CONDVEG_EMIS)s
# default = 1.0
# Surface albedo in the visible range.
# If IMPOSE_AZE
# The surface albedo in the visible wavelength range
# for single-point tests.
# Look into a forcing file to impose a correct value.
CONDVEG_ALBVIS = %(CONDVEG_ALBVIS)s
# default = 0.25
# Surface albedo in the near-infrared range.
# If IMPOSE_AZE
# The surface albedo in the near-infrared wavelength range
# for single-point tests.
# Look into a forcing file to impose a correct value.
CONDVEG_ALBNIR = %(CONDVEG_ALBNIR)s
# default = 0.25
# Averaging method for the surface roughness z0
# If this flag is set to 'y', then the __neutral Cdrag__?? is averaged
# rather than log(z0). It is preferable to use this first
# method.
Z0CDRAG_AVE = %(Z0CDRAG_AVE)s
# default = y
# Surface roughness z0 (m)
# If IMPOSE_AZE
# The surface roughness for single-point tests.
# Look into a forcing file to impose a correct value.
CONDVEG_Z0 = %(CONDVEG_Z0)s
# default = 0.15
# Height to add to the height of the first level (in m)
# If IMPOSE_AZE
# ORCHIDEE assumes that the height of the atmospheric level is equal
# to the zero level of the wind. Thus, to take into account the
# roughness due to vegetation, it must be corrected by a fraction of
# the vegetation height. This is called the roughness height.
ROUGHHEIGHT = %(ROUGHHEIGHT)s
# default = 0.0
# Name of the soil albedo file
# The snow albedo used in SECHIBA
# With this option, the snow albedo can be imposed.
# When the default value is taken, the snow albedo
# model developed by Chalita in 1993 is used.
CONDVEG_SNOWA = %(CONDVEG_SNOWA)s
# default = Chalita model.
# Switches between the formulations of the bare soil albedo computation.
# If this parameter is TRUE, the old model is used. The bare soil
# albedo will then depend on the soil moisture. If it is set to FALSE,
# the bare soil albedo will only be a function of the soil colour.
ALB_BARE_MODEL = %(ALB_BARE_MODEL)s
# default = FALSE
# Initial snow mass if not in the restart file.
# The initial value of the snow mass when there is no
# restart file.
HYDROL_SNOW = %(HYDROL_SNOW)s
# default = 0.0
# Initial snow age if not in the restart file.
# The initial value of the snow age when there is no
# restart file.
HYDROL_SNOWAGE = %(HYDROL_SNOWAGE)s
# default = 0.0
# Initial snow amount on ice, lakes, etc ...
# The initial value of the snow amount on ice and lakes
# when there is no restart file.
HYDROL_SNOW_NOBIO = %(HYDROL_SNOW_NOBIO)s
# default = 0.0
# Initial snow age on ice, lakes ...
# The initial value of the snow age on ice and lakes
# when there is no restart file.
HYDROL_SNOW_NOBIO_AGE = %(HYDROL_SNOW_NOBIO_AGE)s
# default = 0.0
# Initial dry soil height in the ORCHIDEE Tags 1.3 to 1.5.
# The initial value of the dry soil height when there is no
# restart file.
HYDROL_HDRY = %(HYDROL_HDRY)s
# default = 0.0
# Initial soil moisture stress
# This is the initial value of the soil moisture stress
# if it is not in the restart file.
HYDROL_HUMR = %(HYDROL_HUMR)s
# default = 1.0
# Total depth of the soil reservoir
HYDROL_SOIL_DEPTH = %(HYDROL_SOIL_DEPTH)s
# default = 2.
# Root depth
HYDROL_HUMCSTE = %(HYDROL_HUMCSTE)s
# default = 5., .8, .8, 1., .8, .8, 1., 1., .8, 4., 4., 4., 4.
# Initial deep soil moisture
# This is the initial value of the deep soil moisture
# if it is not in the restart file.
# The default value is a saturated soil.
HYDROL_BQSB = %(HYDROL_BQSB)s
# default = Maximum quantity of water (Kg/M3) * Total depth of soil reservoir = 150. * 2
# Initial surface soil moisture
# This is the initial value of the surface soil moisture
# if it is not in the restart file.
HYDROL_GQSB = %(HYDROL_GQSB)s
# default = 0.0
# Initial depth of the surface reservoir
# This is the initial value of the depth of the surface reservoir
# if it is not in the restart file.
HYDROL_DSG = %(HYDROL_DSG)s
# default = 0.0
# Initial dry-out above the surface reservoir
# This is the initial value of the dry-out above the surface reservoir
# if it is not in the restart file.
# The default value is computed from the previous quantities.
# It should be correct in most cases.
HYDROL_DSP = %(HYDROL_DSP)s
# default = Total depth of soil reservoir - HYDROL_BQSB / Maximum quantity of water (Kg/M3) = 0.0
# Initial quantity of water in the canopy
# if it is not in the restart file.
HYDROL_QSV = %(HYDROL_QSV)s
# default = 0.0
# Soil moisture on each grid box and level
# The initial value of mc if it is not in the
# restart file.
HYDROL_MOISTURE_CONTENT = %(HYDROL_MOISTURE_CONTENT)s
# default = 0.3
# US_NVM_NSTM_NSLM
# The initial value of the relative humidity
# if it is not in the restart file.
US_INIT = %(US_INIT)s
# default = 0.0
# Coefficients of the free subsoil drainage
# Gives the values of the free drainage coefficients.
FREE_DRAIN_COEF = %(FREE_DRAIN_COEF)s
# default = 1.0, 1.0, 1.0
# bare soil evaporation for each soil type
# if it is not in the restart file.
EVAPNU_SOIL = %(EVAPNU_SOIL)s
# default = 0.0
# Initial surface temperature
# if it is not in the restart file.
ENERBIL_TSURF = %(ENERBIL_TSURF)s
# default = 280.
# Initial soil evaporation potential
# if it is not in the restart file.
ENERBIL_EVAPOT = %(ENERBIL_EVAPOT)s
# default = 0.0
# Initial soil temperature profile if not in the restart
# The initial value of the soil temperature profile. This should only
# be used when restarting the model. Only one value is taken here,
# as the temperature is assumed constant along the column.
THERMOSOIL_TPRO = %(THERMOSOIL_TPRO)s
# default = 280.
# Initial CO2 level in the leaves
# if it is not in the restart file.
DIFFUCO_LEAFCI = %(DIFFUCO_LEAFCI)s
# default = 233.
# Conservation of the GCM cdrag.
# Set this parameter to .TRUE. if you wish to keep the q_cdrag computed by the GCM.
# Conservation of the GCM cdrag coefficient for the computation of the latent and sensible heat fluxes.
# TRUE if q_cdrag is zero at initialisation (and FALSE for off-line computations).
CDRAG_FROM_GCM = %(CDRAG_FROM_GCM)s
# default = IF q_cdrag == 0 ldq_cdrag_from_gcm = .FALSE. ELSE .TRUE.
# Artificial knob to tune the increase or decrease of the canopy resistance.
# Added by Nathalie - 28 March 2006 - on the advice of Frederic Hourdin.
# Per PFT.
RVEG_PFT = %(RVEG_PFT)s
# default = 1.
# Interception reservoir coefficient.
# This coefficient gives the amount of LAI transformed into the size of the
# interception reservoir ???? for slowproc_derivvar or stomate. ????
SECHIBA_QSINT = %(SECHIBA_QSINT)s
# default = 0.1
#**************************************************************************
# LAI
#**************************************************************************
# Reading of the LAI map
# Enables the reading of an LAI map
# If n => LAI imposed by the model between LAI_min and LAI_max,
# following the type_of_lai array (in constantes_veg.f90)
# - mean : lai(ji,jv) = undemi * (llaimax(jv) + llaimin(jv))
# - inter : llaimin(jv) + tempfunc(stempdiag(ji,lcanop)) * (llaimax(jv) - llaimin(jv))
# the map is not read if there is only a single point ??
LAI_MAP = %(LAI_MAP)s
# default = n
# Name of the LAI file
# If LAI_MAP
# This is the name of the file opened for reading the
# LAI map. Usually, SECHIBA runs with a
# 5km x 5km map derived from the one of Nicolas Viovy.
LAI_FILE = %(LAI_FILE)s
# default = ../surfmap/lai2D.nc
# Flag to use the old "interpolation" of the LAI.
# If LAI_MAP
# If you wish to reproduce results obtained with the old "interpolation"
# of the LAI map, enable this flag.
SLOWPROC_LAI_OLD_INTERPOL = %(SLOWPROC_LAI_OLD_INTERPOL)s
# default = n
#**************************************************************************
#**************************************************************************
# LAND_USE
#**************************************************************************
# Reading of a vegetation map for land use
# Modifies the proportions of the different PFTs
LAND_USE = %(LAND_USE)s
# default = n
# Year of the vegetation map read for land use
# Offset in years for reading the land-use map
# For a vegetation file with a single year and no time axis,
# set VEGET_YEAR=0 here.
# The default value is 133 to designate the year 1982
# (since 1982 - 1850 + 1 = 133)
# If LAND_USE
VEGET_YEAR = %(VEGET_YEAR)s
# default = 133
# This logical indicates that a new LAND USE map will be used (since version 1.9.5).
# This parameter is used to bypass the veget_year counter present in
# the SECHIBA restart file and to reinitialize it to a new value given
# by the VEGET_YEAR parameter.
# It should therefore only be used when changing the LAND USE file.
# If LAND_USE
VEGET_REINIT = %(VEGET_REINIT)s
# default = n
# Update frequency of the vegetation map (up to version 1.9)
# The veget data will be updated with this time step
# If LAND_USE
VEGET_LENGTH = %(VEGET_LENGTH)s
# default = 1Y
# Update frequency of the vegetation map (from version 2.0 onward)
# The veget data will be updated with this time step
# If LAND_USE
VEGET_UPDATE = %(VEGET_UPDATE)s
# default = 1Y
# Land use and deforestation
# Takes into account the changes due to land use,
# notably the impact of deforestation.
# If LAND_USE
LAND_COVER_CHANGE = %(LAND_COVER_CHANGE)s
# default = n
#**************************************************************************
# Do we compute agriculture?
# Determines whether agriculture is computed.
AGRICULTURE = %(AGRICULTURE)s
# default = y
# Harvest model for the agricultural PFTs.
# Handles the reorganization of biomass after harvest for agriculture.
# Changes the daily turnover.
HARVEST_AGRI = %(HARVEST_AGRI)s
# default = y
# Do we model herbivores?
# This option enables the modelling of the effects of herbivores.
HERBIVORES = %(HERBIVORES)s
# default = n
# Can the expansion of PFTs cross grid cells?
# Enabling this option allows PFT expansions
# to spread across grid cells.
TREAT_EXPANSION = %(TREAT_EXPANSION)s
# default = n
#**************************************************************************
# Simulation day counter
# This variable is used by the processes that must be computed
# only once per day.
SECHIBA_DAY = %(SECHIBA_DAY)s
# default = 0.0
# Time step of STOMATE and the other slow processes
# Time step (in s) for updating the vegetation cover,
# the LAI, etc. It is also the time step of STOMATE.
DT_SLOW = %(DT_SLOW)s
# default = un_jour = 86400.
#**************************************************************************
# Flag to check the water balance
# This parameter enables the check of
# the water balance between two time steps.
CHECK_WATERBAL = %(CHECK_WATERBAL)s
# default = n
# Enables the use of Patricia De Rosnay's multi-layer hydrology model
# This option enables the use of the 11-layer model for vertical hydrology.
# This modelling uses a vertical diffusion scheme adapted from Patricia De Rosnay's CWRR.
# Otherwise, the standard so-called Choisnel hydrology is used.
HYDROL_CWRR = %(HYDROL_CWRR)s
# default = n
# Checks the water balance for the CWRR model.
# This parameter enables a detailed check of the water
# balance between each time step for the CWRR hydrology model.
CHECK_CWRR = %(CHECK_CWRR)s
# default = n
# Do horizontal diffusion?
# If TRUE, then water can diffuse in the horizontal directions
# between the water reservoirs of the PFTs.
HYDROL_OK_HDIFF = %(HYDROL_OK_HDIFF)s
# default = n
# Latency time (in s) for the horizontal diffusion of water
# If HYDROL_OK_HDIFF
# Defines the speed of horizontal diffusion between the
# water reservoirs of the PFTs. An infinite value means
# that there is no diffusion.
HYDROL_TAU_HDIFF = %(HYDROL_TAU_HDIFF)s
# default = 86400.
# Percentage of precipitation that is not intercepted by the canopy (TAG 1.6 only).
# During a rain event, PERCENT_THROUGHFALL percent of the rain goes directly
# to the ground without being intercepted by the leaves of the vegetation.
PERCENT_THROUGHFALL = %(PERCENT_THROUGHFALL)s
# default = 30.
# Per-PFT percentage of precipitation that is not intercepted by the canopy (from TAG 1.8 onward).
# During a rain event, PERCENT_THROUGHFALL_PFT percent of the rain goes directly
# to the ground without being intercepted by the leaves of the vegetation, for each PFT.
PERCENT_THROUGHFALL_PFT = %(PERCENT_THROUGHFALL_PFT)s
# default = 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30., 30.
# Option for river routing
# This option enables the run-off and the drainage of water
# to the oceans and to the __underground rivers__ ??
RIVER_ROUTING = %(RIVER_ROUTING)s
# default = n
# Name of the file containing the routing information.
# The file allows the routing module to read the high-resolution
# grid of the basins and the flow directions from one
# grid cell to the next.
ROUTING_FILE = %(ROUTING_FILE)s
# default = routing.nc
# Time step of the routing
# If RIVER_ROUTING
# Gives the time step in seconds for the routing scheme.
# This number must be a multiple of the ORCHIDEE time step.
# One day is a good value.
ROUTING_TIMESTEP = %(ROUTING_TIMESTEP)s
# default = 86400
# Number of rivers
# If RIVER_ROUTING
# This parameter gives the number of rivers within the large basins.
# These rivers will be treated separately and will not diffuse together
# towards the coasts and the oceans.
ROUTING_RIVERS = %(ROUTING_RIVERS)s
# default = 50
# Should the irrigation fluxes be computed?
# When routing is enabled, the irrigation fluxes are computed.
# This is done with a simple hypothesis: we want a correct map
# of the irrigated areas, and we have a simple function that
# estimates the irrigation needs.
DO_IRRIGATION = %(DO_IRRIGATION)s
# default = n
# Name of the file containing the map of the irrigated areas.
# If IRRIGATE
# The name of the file that is opened to read a field
# with the area in m^2 of the irrigated zones within
# each 0.5 by 0.5 degree grid cell. The current map
# is the one used by the "Center for Environmental Systems Research"
# in Kassel (1995).
IRRIGATION_FILE = %(IRRIGATION_FILE)s
# default = irrigated.nc
# Should floodplain inundation be computed?
# This flag forces the model to take floodplain inundation into account.
# And the water
DO_FLOODPLAINS = %(DO_FLOODPLAINS)s
# default = n
#**************************************************************************
""" | PypiClean |
// File: /Flask_AdminLTE3-1.0.9-py3-none-any.whl/flask_adminlte3/static/plugins/codemirror/addon/hint/sql-hint.js
(function(mod) {
if (typeof exports == "object" && typeof module == "object") // CommonJS
mod(require("../../lib/codemirror"), require("../../mode/sql/sql"));
else if (typeof define == "function" && define.amd) // AMD
define(["../../lib/codemirror", "../../mode/sql/sql"], mod);
else // Plain browser env
mod(CodeMirror);
})(function(CodeMirror) {
"use strict";
var tables;
var defaultTable;
var keywords;
var identifierQuote;
var CONS = {
QUERY_DIV: ";",
ALIAS_KEYWORD: "AS"
};
var Pos = CodeMirror.Pos, cmpPos = CodeMirror.cmpPos;
function isArray(val) { return Object.prototype.toString.call(val) == "[object Array]" }
function getKeywords(editor) {
var mode = editor.doc.modeOption;
if (mode === "sql") mode = "text/x-sql";
return CodeMirror.resolveMode(mode).keywords;
}
function getIdentifierQuote(editor) {
var mode = editor.doc.modeOption;
if (mode === "sql") mode = "text/x-sql";
return CodeMirror.resolveMode(mode).identifierQuote || "`";
}
function getText(item) {
return typeof item == "string" ? item : item.text;
}
function wrapTable(name, value) {
if (isArray(value)) value = {columns: value}
if (!value.text) value.text = name
return value
}
function parseTables(input) {
var result = {}
if (isArray(input)) {
for (var i = input.length - 1; i >= 0; i--) {
var item = input[i]
result[getText(item).toUpperCase()] = wrapTable(getText(item), item)
}
} else if (input) {
for (var name in input)
result[name.toUpperCase()] = wrapTable(name, input[name])
}
return result
}
function getTable(name) {
return tables[name.toUpperCase()]
}
function shallowClone(object) {
var result = {};
for (var key in object) if (object.hasOwnProperty(key))
result[key] = object[key];
return result;
}
function match(string, word) {
var len = string.length;
var sub = getText(word).substr(0, len);
return string.toUpperCase() === sub.toUpperCase();
}
function addMatches(result, search, wordlist, formatter) {
if (isArray(wordlist)) {
for (var i = 0; i < wordlist.length; i++)
if (match(search, wordlist[i])) result.push(formatter(wordlist[i]))
} else {
for (var word in wordlist) if (wordlist.hasOwnProperty(word)) {
var val = wordlist[word]
if (!val || val === true)
val = word
else
val = val.displayText ? {text: val.text, displayText: val.displayText} : val.text
if (match(search, val)) result.push(formatter(val))
}
}
}
function cleanName(name) {
    // Strip a preceding dot (.) and the identifierQuote characters from the name
if (name.charAt(0) == ".") {
name = name.substr(1);
}
// replace duplicated identifierQuotes with single identifierQuotes
// and remove single identifierQuotes
var nameParts = name.split(identifierQuote+identifierQuote);
for (var i = 0; i < nameParts.length; i++)
nameParts[i] = nameParts[i].replace(new RegExp(identifierQuote,"g"), "");
return nameParts.join(identifierQuote);
}
function insertIdentifierQuotes(name) {
var nameParts = getText(name).split(".");
for (var i = 0; i < nameParts.length; i++)
nameParts[i] = identifierQuote +
// duplicate identifierQuotes
nameParts[i].replace(new RegExp(identifierQuote,"g"), identifierQuote+identifierQuote) +
identifierQuote;
var escaped = nameParts.join(".");
if (typeof name == "string") return escaped;
name = shallowClone(name);
name.text = escaped;
return name;
}
function nameCompletion(cur, token, result, editor) {
// Try to complete table, column names and return start position of completion
var useIdentifierQuotes = false;
var nameParts = [];
var start = token.start;
var cont = true;
while (cont) {
cont = (token.string.charAt(0) == ".");
useIdentifierQuotes = useIdentifierQuotes || (token.string.charAt(0) == identifierQuote);
start = token.start;
nameParts.unshift(cleanName(token.string));
token = editor.getTokenAt(Pos(cur.line, token.start));
if (token.string == ".") {
cont = true;
token = editor.getTokenAt(Pos(cur.line, token.start));
}
}
// Try to complete table names
var string = nameParts.join(".");
addMatches(result, string, tables, function(w) {
return useIdentifierQuotes ? insertIdentifierQuotes(w) : w;
});
// Try to complete columns from defaultTable
addMatches(result, string, defaultTable, function(w) {
return useIdentifierQuotes ? insertIdentifierQuotes(w) : w;
});
// Try to complete columns
string = nameParts.pop();
var table = nameParts.join(".");
var alias = false;
var aliasTable = table;
// Check if table is available. If not, find table by Alias
if (!getTable(table)) {
var oldTable = table;
table = findTableByAlias(table, editor);
if (table !== oldTable) alias = true;
}
var columns = getTable(table);
if (columns && columns.columns)
columns = columns.columns;
if (columns) {
addMatches(result, string, columns, function(w) {
var tableInsert = table;
if (alias == true) tableInsert = aliasTable;
if (typeof w == "string") {
w = tableInsert + "." + w;
} else {
w = shallowClone(w);
w.text = tableInsert + "." + w.text;
}
return useIdentifierQuotes ? insertIdentifierQuotes(w) : w;
});
}
return start;
}
function eachWord(lineText, f) {
var words = lineText.split(/\s+/)
for (var i = 0; i < words.length; i++)
if (words[i]) f(words[i].replace(/[`,;]/g, ''))
}
function findTableByAlias(alias, editor) {
var doc = editor.doc;
var fullQuery = doc.getValue();
var aliasUpperCase = alias.toUpperCase();
var previousWord = "";
var table = "";
var separator = [];
var validRange = {
start: Pos(0, 0),
end: Pos(editor.lastLine(), editor.getLineHandle(editor.lastLine()).length)
};
//add separator
var indexOfSeparator = fullQuery.indexOf(CONS.QUERY_DIV);
while(indexOfSeparator != -1) {
separator.push(doc.posFromIndex(indexOfSeparator));
indexOfSeparator = fullQuery.indexOf(CONS.QUERY_DIV, indexOfSeparator+1);
}
separator.unshift(Pos(0, 0));
separator.push(Pos(editor.lastLine(), editor.getLineHandle(editor.lastLine()).text.length));
//find valid range
var prevItem = null;
var current = editor.getCursor()
for (var i = 0; i < separator.length; i++) {
if ((prevItem == null || cmpPos(current, prevItem) > 0) && cmpPos(current, separator[i]) <= 0) {
validRange = {start: prevItem, end: separator[i]};
break;
}
prevItem = separator[i];
}
if (validRange.start) {
var query = doc.getRange(validRange.start, validRange.end, false);
for (var i = 0; i < query.length; i++) {
var lineText = query[i];
eachWord(lineText, function(word) {
var wordUpperCase = word.toUpperCase();
if (wordUpperCase === aliasUpperCase && getTable(previousWord))
table = previousWord;
if (wordUpperCase !== CONS.ALIAS_KEYWORD)
previousWord = word;
});
if (table) break;
}
}
return table;
}
CodeMirror.registerHelper("hint", "sql", function(editor, options) {
tables = parseTables(options && options.tables)
var defaultTableName = options && options.defaultTable;
var disableKeywords = options && options.disableKeywords;
defaultTable = defaultTableName && getTable(defaultTableName);
keywords = getKeywords(editor);
identifierQuote = getIdentifierQuote(editor);
if (defaultTableName && !defaultTable)
defaultTable = findTableByAlias(defaultTableName, editor);
defaultTable = defaultTable || [];
if (defaultTable.columns)
defaultTable = defaultTable.columns;
var cur = editor.getCursor();
var result = [];
var token = editor.getTokenAt(cur), start, end, search;
if (token.end > cur.ch) {
token.end = cur.ch;
token.string = token.string.slice(0, cur.ch - token.start);
}
if (token.string.match(/^[.`"'\w@][\w$#]*$/g)) {
search = token.string;
start = token.start;
end = token.end;
} else {
start = end = cur.ch;
search = "";
}
if (search.charAt(0) == "." || search.charAt(0) == identifierQuote) {
start = nameCompletion(cur, token, result, editor);
} else {
var objectOrClass = function(w, className) {
if (typeof w === "object") {
w.className = className;
} else {
w = { text: w, className: className };
}
return w;
};
addMatches(result, search, defaultTable, function(w) {
return objectOrClass(w, "CodeMirror-hint-table CodeMirror-hint-default-table");
});
addMatches(
result,
search,
tables, function(w) {
return objectOrClass(w, "CodeMirror-hint-table");
}
);
if (!disableKeywords)
addMatches(result, search, keywords, function(w) {
return objectOrClass(w.toUpperCase(), "CodeMirror-hint-keyword");
});
}
return {list: result, from: Pos(cur.line, start), to: Pos(cur.line, end)};
});
}); | PypiClean |
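
// Usage sketch (assumes the companion show-hint addon is loaded as well): the
// helper registered above reads options.tables, options.defaultTable and
// options.disableKeywords, so a caller can trigger SQL completion like this:
//
//   editor.showHint({
//     hint: CodeMirror.hint.sql,
//     tables: {
//       users:  ["id", "name", "email"],
//       orders: {columns: ["id", "user_id"], text: "orders"}
//     },
//     defaultTable: "users"  // its columns are offered without a table prefix
//   });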
# File: /DendroPy_calver-2023.330.2-py3-none-any.whl/dendropy/legacy/seqmodel.py
##############################################################################
## DendroPy Phylogenetic Computing Library.
##
## Copyright 2010-2015 Jeet Sukumaran and Mark T. Holder.
## All rights reserved.
##
## See "LICENSE.rst" for terms and conditions of usage.
##
## If you use this work or any portion thereof in published work,
## please cite it as:
##
## Sukumaran, J. and M. T. Holder. 2010. DendroPy: a Python library
## for phylogenetic computing. Bioinformatics 26: 1569-1571.
##
##############################################################################
"""
DEPRECATED IN DENDROPY 4: USE `dendropy.model.discrete`.
"""
from dendropy.model import discrete
from dendropy.utility import deprecate
class SeqModel(discrete.DiscreteCharacterEvolutionModel):
def __init__(self, state_alphabet, rng=None):
deprecate.dendropy_deprecation_warning(
preamble="Deprecated since DendroPy 4: The 'dendropy.seqmodel.SeqModel' class has moved to 'dendropy.model.discrete.DiscreteCharacterEvolutionModel'.",
old_construct="from dendropy import seqmodel\nm = seqmodel.SeqModel(...)",
new_construct="from dendropy.model import discrete\nm = discrete.DiscreteCharacterEvolutionModel(...)")
discrete.DiscreteCharacterEvolutionModel.__init__(
self,
state_alphabet=state_alphabet,
rng=rng)
class Hky85SeqModel(discrete.Hky85):
def __init__(self, kappa=1.0, base_freqs=None, state_alphabet=None, rng=None):
deprecate.dendropy_deprecation_warning(
preamble="Deprecated since DendroPy 4: The 'dendropy.seqmodel.Hky85SeqModel' class has moved to 'dendropy.model.discrete.Hky85'.",
old_construct="from dendropy import seqmodel\nm = seqmodel.NucleotideSeqModel(...)",
new_construct="from dendropy.model import discrete\ndiscrete.Hky85(...)")
discrete.Hky85.__init__(
self,
kappa=kappa,
base_freqs=base_freqs,
state_alphabet=state_alphabet,
rng=rng)
class Jc69SeqModel(discrete.Jc69):
def __init__(self, state_alphabet=None, rng=None):
deprecate.dendropy_deprecation_warning(
preamble="Deprecated since DendroPy 4: The 'dendropy.seqmodel.Jc69SeqModel' class has moved to 'dendropy.model.discrete.Jc69'.",
old_construct="from dendropy import seqmodel\nm = seqmodel.NucleotideSeqModel(...)",
new_construct="from dendropy.model import discrete\ndiscrete.Jc69(...)")
discrete.Jc69.__init__(
self,
state_alphabet=state_alphabet,
rng=rng) | PypiClean |
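
# Migration sketch, taken directly from the deprecation messages above; the
# legacy spelling on the left maps to the current one on the right:
#
#   from dendropy import seqmodel      ->  from dendropy.model import discrete
#   seqmodel.SeqModel(alphabet)        ->  discrete.DiscreteCharacterEvolutionModel(alphabet)
#   seqmodel.Hky85SeqModel(kappa=2.0)  ->  discrete.Hky85(kappa=2.0)
#   seqmodel.Jc69SeqModel()            ->  discrete.Jc69()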
# File: /GB2260-v2-0.2.1.tar.gz/gb2260_v2/data/curated/revision_201705.py
from __future__ import unicode_literals
name = '201705'
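# division_schema maps six-digit GB/T 2260 division codes to names; the first
# two digits identify the province, the middle two the prefecture, and the
# last two the county (00 marks the higher-level unit itself).
# Lookup sketch: division_schema['110000'] -> '北京市'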
division_schema = {
'110000': '北京市',
'110101': '东城区',
'110102': '西城区',
'110105': '朝阳区',
'110106': '丰台区',
'110107': '石景山区',
'110108': '海淀区',
'110109': '门头沟区',
'110111': '房山区',
'110112': '通州区',
'110113': '顺义区',
'110114': '昌平区',
'110115': '大兴区',
'110116': '怀柔区',
'110117': '平谷区',
'110118': '密云区',
'110119': '延庆区',
'120000': '天津市',
'120101': '和平区',
'120102': '河东区',
'120103': '河西区',
'120104': '南开区',
'120105': '河北区',
'120106': '红桥区',
'120110': '东丽区',
'120111': '西青区',
'120112': '津南区',
'120113': '北辰区',
'120114': '武清区',
'120115': '宝坻区',
'120116': '滨海新区',
'120117': '宁河区',
'120118': '静海区',
'120119': '蓟州区',
'130000': '河北省',
'130100': '石家庄市',
'130102': '长安区',
'130104': '桥西区',
'130105': '新华区',
'130107': '井陉矿区',
'130108': '裕华区',
'130109': '藁城区',
'130110': '鹿泉区',
'130111': '栾城区',
'130121': '井陉县',
'130123': '正定县',
'130125': '行唐县',
'130126': '灵寿县',
'130127': '高邑县',
'130128': '深泽县',
'130129': '赞皇县',
'130130': '无极县',
'130131': '平山县',
'130132': '元氏县',
'130133': '赵县',
'130181': '辛集市',
'130183': '晋州市',
'130184': '新乐市',
'130200': '唐山市',
'130202': '路南区',
'130203': '路北区',
'130204': '古冶区',
'130205': '开平区',
'130207': '丰南区',
'130208': '丰润区',
'130209': '曹妃甸区',
'130223': '滦县',
'130224': '滦南县',
'130225': '乐亭县',
'130227': '迁西县',
'130229': '玉田县',
'130281': '遵化市',
'130283': '迁安市',
'130300': '秦皇岛市',
'130302': '海港区',
'130303': '山海关区',
'130304': '北戴河区',
'130306': '抚宁区',
'130321': '青龙满族自治县',
'130322': '昌黎县',
'130324': '卢龙县',
'130400': '邯郸市',
'130402': '邯山区',
'130403': '丛台区',
'130404': '复兴区',
'130406': '峰峰矿区',
'130407': '肥乡区',
'130408': '永年区',
'130423': '临漳县',
'130424': '成安县',
'130425': '大名县',
'130426': '涉县',
'130427': '磁县',
'130430': '邱县',
'130431': '鸡泽县',
'130432': '广平县',
'130433': '馆陶县',
'130434': '魏县',
'130435': '曲周县',
'130481': '武安市',
'130500': '邢台市',
'130502': '桥东区',
'130503': '桥西区',
'130521': '邢台县',
'130522': '临城县',
'130523': '内丘县',
'130524': '柏乡县',
'130525': '隆尧县',
'130526': '任县',
'130527': '南和县',
'130528': '宁晋县',
'130529': '巨鹿县',
'130530': '新河县',
'130531': '广宗县',
'130532': '平乡县',
'130533': '威县',
'130534': '清河县',
'130535': '临西县',
'130581': '南宫市',
'130582': '沙河市',
'130600': '保定市',
'130602': '竞秀区',
'130606': '莲池区',
'130607': '满城区',
'130608': '清苑区',
'130609': '徐水区',
'130623': '涞水县',
'130624': '阜平县',
'130626': '定兴县',
'130627': '唐县',
'130628': '高阳县',
'130629': '容城县',
'130630': '涞源县',
'130631': '望都县',
'130632': '安新县',
'130633': '易县',
'130634': '曲阳县',
'130635': '蠡县',
'130636': '顺平县',
'130637': '博野县',
'130638': '雄县',
'130681': '涿州市',
'130682': '定州市',
'130683': '安国市',
'130684': '高碑店市',
'130700': '张家口市',
'130702': '桥东区',
'130703': '桥西区',
'130705': '宣化区',
'130706': '下花园区',
'130708': '万全区',
'130709': '崇礼区',
'130722': '张北县',
'130723': '康保县',
'130724': '沽源县',
'130725': '尚义县',
'130726': '蔚县',
'130727': '阳原县',
'130728': '怀安县',
'130730': '怀来县',
'130731': '涿鹿县',
'130732': '赤城县',
'130800': '承德市',
'130802': '双桥区',
'130803': '双滦区',
'130804': '鹰手营子矿区',
'130821': '承德县',
'130822': '兴隆县',
'130824': '滦平县',
'130825': '隆化县',
'130826': '丰宁满族自治县',
'130827': '宽城满族自治县',
'130828': '围场满族蒙古族自治县',
'130881': '平泉市',
'130900': '沧州市',
'130902': '新华区',
'130903': '运河区',
'130921': '沧县',
'130922': '青县',
'130923': '东光县',
'130924': '海兴县',
'130925': '盐山县',
'130926': '肃宁县',
'130927': '南皮县',
'130928': '吴桥县',
'130929': '献县',
'130930': '孟村回族自治县',
'130981': '泊头市',
'130982': '任丘市',
'130983': '黄骅市',
'130984': '河间市',
'131000': '廊坊市',
'131002': '安次区',
'131003': '广阳区',
'131022': '固安县',
'131023': '永清县',
'131024': '香河县',
'131025': '大城县',
'131026': '文安县',
'131028': '大厂回族自治县',
'131081': '霸州市',
'131082': '三河市',
'131100': '衡水市',
'131102': '桃城区',
'131103': '冀州区',
'131121': '枣强县',
'131122': '武邑县',
'131123': '武强县',
'131124': '饶阳县',
'131125': '安平县',
'131126': '故城县',
'131127': '景县',
'131128': '阜城县',
'131182': '深州市',
'140000': '山西省',
'140100': '太原市',
'140105': '小店区',
'140106': '迎泽区',
'140107': '杏花岭区',
'140108': '尖草坪区',
'140109': '万柏林区',
'140110': '晋源区',
'140121': '清徐县',
'140122': '阳曲县',
'140123': '娄烦县',
'140181': '古交市',
'140200': '大同市',
'140202': '城区',
'140203': '矿区',
'140211': '南郊区',
'140212': '新荣区',
'140221': '阳高县',
'140222': '天镇县',
'140223': '广灵县',
'140224': '灵丘县',
'140225': '浑源县',
'140226': '左云县',
'140227': '大同县',
'140300': '阳泉市',
'140302': '城区',
'140303': '矿区',
'140311': '郊区',
'140321': '平定县',
'140322': '盂县',
'140400': '长治市',
'140402': '城区',
'140411': '郊区',
'140421': '长治县',
'140423': '襄垣县',
'140424': '屯留县',
'140425': '平顺县',
'140426': '黎城县',
'140427': '壶关县',
'140428': '长子县',
'140429': '武乡县',
'140430': '沁县',
'140431': '沁源县',
'140481': '潞城市',
'140500': '晋城市',
'140502': '城区',
'140521': '沁水县',
'140522': '阳城县',
'140524': '陵川县',
'140525': '泽州县',
'140581': '高平市',
'140600': '朔州市',
'140602': '朔城区',
'140603': '平鲁区',
'140621': '山阴县',
'140622': '应县',
'140623': '右玉县',
'140624': '怀仁县',
'140700': '晋中市',
'140702': '榆次区',
'140721': '榆社县',
'140722': '左权县',
'140723': '和顺县',
'140724': '昔阳县',
'140725': '寿阳县',
'140726': '太谷县',
'140727': '祁县',
'140728': '平遥县',
'140729': '灵石县',
'140781': '介休市',
'140800': '运城市',
'140802': '盐湖区',
'140821': '临猗县',
'140822': '万荣县',
'140823': '闻喜县',
'140824': '稷山县',
'140825': '新绛县',
'140826': '绛县',
'140827': '垣曲县',
'140828': '夏县',
'140829': '平陆县',
'140830': '芮城县',
'140881': '永济市',
'140882': '河津市',
'140900': '忻州市',
'140902': '忻府区',
'140921': '定襄县',
'140922': '五台县',
'140923': '代县',
'140924': '繁峙县',
'140925': '宁武县',
'140926': '静乐县',
'140927': '神池县',
'140928': '五寨县',
'140929': '岢岚县',
'140930': '河曲县',
'140931': '保德县',
'140932': '偏关县',
'140981': '原平市',
'141000': '临汾市',
'141002': '尧都区',
'141021': '曲沃县',
'141022': '翼城县',
'141023': '襄汾县',
'141024': '洪洞县',
'141025': '古县',
'141026': '安泽县',
'141027': '浮山县',
'141028': '吉县',
'141029': '乡宁县',
'141030': '大宁县',
'141031': '隰县',
'141032': '永和县',
'141033': '蒲县',
'141034': '汾西县',
'141081': '侯马市',
'141082': '霍州市',
'141100': '吕梁市',
'141102': '离石区',
'141121': '文水县',
'141122': '交城县',
'141123': '兴县',
'141124': '临县',
'141125': '柳林县',
'141126': '石楼县',
'141127': '岚县',
'141128': '方山县',
'141129': '中阳县',
'141130': '交口县',
'141181': '孝义市',
'141182': '汾阳市',
'150000': '内蒙古自治区',
'150100': '呼和浩特市',
'150102': '新城区',
'150103': '回民区',
'150104': '玉泉区',
'150105': '赛罕区',
'150121': '土默特左旗',
'150122': '托克托县',
'150123': '和林格尔县',
'150124': '清水河县',
'150125': '武川县',
'150200': '包头市',
'150202': '东河区',
'150203': '昆都仑区',
'150204': '青山区',
'150205': '石拐区',
'150206': '白云鄂博矿区',
'150207': '九原区',
'150221': '土默特右旗',
'150222': '固阳县',
'150223': '达尔罕茂明安联合旗',
'150300': '乌海市',
'150302': '海勃湾区',
'150303': '海南区',
'150304': '乌达区',
'150400': '赤峰市',
'150402': '红山区',
'150403': '元宝山区',
'150404': '松山区',
'150421': '阿鲁科尔沁旗',
'150422': '巴林左旗',
'150423': '巴林右旗',
'150424': '林西县',
'150425': '克什克腾旗',
'150426': '翁牛特旗',
'150428': '喀喇沁旗',
'150429': '宁城县',
'150430': '敖汉旗',
'150500': '通辽市',
'150502': '科尔沁区',
'150521': '科尔沁左翼中旗',
'150522': '科尔沁左翼后旗',
'150523': '开鲁县',
'150524': '库伦旗',
'150525': '奈曼旗',
'150526': '扎鲁特旗',
'150581': '霍林郭勒市',
'150600': '鄂尔多斯市',
'150602': '东胜区',
'150603': '康巴什区',
'150621': '达拉特旗',
'150622': '准格尔旗',
'150623': '鄂托克前旗',
'150624': '鄂托克旗',
'150625': '杭锦旗',
'150626': '乌审旗',
'150627': '伊金霍洛旗',
'150700': '呼伦贝尔市',
'150702': '海拉尔区',
'150703': '扎赉诺尔区',
'150721': '阿荣旗',
'150722': '莫力达瓦达斡尔族自治旗',
'150723': '鄂伦春自治旗',
'150724': '鄂温克族自治旗',
'150725': '陈巴尔虎旗',
'150726': '新巴尔虎左旗',
'150727': '新巴尔虎右旗',
'150781': '满洲里市',
'150782': '牙克石市',
'150783': '扎兰屯市',
'150784': '额尔古纳市',
'150785': '根河市',
'150800': '巴彦淖尔市',
'150802': '临河区',
'150821': '五原县',
'150822': '磴口县',
'150823': '乌拉特前旗',
'150824': '乌拉特中旗',
'150825': '乌拉特后旗',
'150826': '杭锦后旗',
'150900': '乌兰察布市',
'150902': '集宁区',
'150921': '卓资县',
'150922': '化德县',
'150923': '商都县',
'150924': '兴和县',
'150925': '凉城县',
'150926': '察哈尔右翼前旗',
'150927': '察哈尔右翼中旗',
'150928': '察哈尔右翼后旗',
'150929': '四子王旗',
'150981': '丰镇市',
'152200': '兴安盟',
'152201': '乌兰浩特市',
'152202': '阿尔山市',
'152221': '科尔沁右翼前旗',
'152222': '科尔沁右翼中旗',
'152223': '扎赉特旗',
'152224': '突泉县',
'152500': '锡林郭勒盟',
'152501': '二连浩特市',
'152502': '锡林浩特市',
'152522': '阿巴嘎旗',
'152523': '苏尼特左旗',
'152524': '苏尼特右旗',
'152525': '东乌珠穆沁旗',
'152526': '西乌珠穆沁旗',
'152527': '太仆寺旗',
'152528': '镶黄旗',
'152529': '正镶白旗',
'152530': '正蓝旗',
'152531': '多伦县',
'152900': '阿拉善盟',
'152921': '阿拉善左旗',
'152922': '阿拉善右旗',
'152923': '额济纳旗',
'210000': '辽宁省',
'210100': '沈阳市',
'210102': '和平区',
'210103': '沈河区',
'210104': '大东区',
'210105': '皇姑区',
'210106': '铁西区',
'210111': '苏家屯区',
'210112': '浑南区',
'210113': '沈北新区',
'210114': '于洪区',
'210115': '辽中区',
'210123': '康平县',
'210124': '法库县',
'210181': '新民市',
'210200': '大连市',
'210202': '中山区',
'210203': '西岗区',
'210204': '沙河口区',
'210211': '甘井子区',
'210212': '旅顺口区',
'210213': '金州区',
'210214': '普兰店区',
'210224': '长海县',
'210281': '瓦房店市',
'210283': '庄河市',
'210300': '鞍山市',
'210302': '铁东区',
'210303': '铁西区',
'210304': '立山区',
'210311': '千山区',
'210321': '台安县',
'210323': '岫岩满族自治县',
'210381': '海城市',
'210400': '抚顺市',
'210402': '新抚区',
'210403': '东洲区',
'210404': '望花区',
'210411': '顺城区',
'210421': '抚顺县',
'210422': '新宾满族自治县',
'210423': '清原满族自治县',
'210500': '本溪市',
'210502': '平山区',
'210503': '溪湖区',
'210504': '明山区',
'210505': '南芬区',
'210521': '本溪满族自治县',
'210522': '桓仁满族自治县',
'210600': '丹东市',
'210602': '元宝区',
'210603': '振兴区',
'210604': '振安区',
'210624': '宽甸满族自治县',
'210681': '东港市',
'210682': '凤城市',
'210700': '锦州市',
'210702': '古塔区',
'210703': '凌河区',
'210711': '太和区',
'210726': '黑山县',
'210727': '义县',
'210781': '凌海市',
'210782': '北镇市',
'210800': '营口市',
'210802': '站前区',
'210803': '西市区',
'210804': '鲅鱼圈区',
'210811': '老边区',
'210881': '盖州市',
'210882': '大石桥市',
'210900': '阜新市',
'210902': '海州区',
'210903': '新邱区',
'210904': '太平区',
'210905': '清河门区',
'210911': '细河区',
'210921': '阜新蒙古族自治县',
'210922': '彰武县',
'211000': '辽阳市',
'211002': '白塔区',
'211003': '文圣区',
'211004': '宏伟区',
'211005': '弓长岭区',
'211011': '太子河区',
'211021': '辽阳县',
'211081': '灯塔市',
'211100': '盘锦市',
'211102': '双台子区',
'211103': '兴隆台区',
'211104': '大洼区',
'211122': '盘山县',
'211200': '铁岭市',
'211202': '银州区',
'211204': '清河区',
'211221': '铁岭县',
'211223': '西丰县',
'211224': '昌图县',
'211281': '调兵山市',
'211282': '开原市',
'211300': '朝阳市',
'211302': '双塔区',
'211303': '龙城区',
'211321': '朝阳县',
'211322': '建平县',
'211324': '喀喇沁左翼蒙古族自治县',
'211381': '北票市',
'211382': '凌源市',
'211400': '葫芦岛市',
'211402': '连山区',
'211403': '龙港区',
'211404': '南票区',
'211421': '绥中县',
'211422': '建昌县',
'211481': '兴城市',
'220000': '吉林省',
'220100': '长春市',
'220102': '南关区',
'220103': '宽城区',
'220104': '朝阳区',
'220105': '二道区',
'220106': '绿园区',
'220112': '双阳区',
'220113': '九台区',
'220122': '农安县',
'220182': '榆树市',
'220183': '德惠市',
'220200': '吉林市',
'220202': '昌邑区',
'220203': '龙潭区',
'220204': '船营区',
'220211': '丰满区',
'220221': '永吉县',
'220281': '蛟河市',
'220282': '桦甸市',
'220283': '舒兰市',
'220284': '磐石市',
'220300': '四平市',
'220302': '铁西区',
'220303': '铁东区',
'220322': '梨树县',
'220323': '伊通满族自治县',
'220381': '公主岭市',
'220382': '双辽市',
'220400': '辽源市',
'220402': '龙山区',
'220403': '西安区',
'220421': '东丰县',
'220422': '东辽县',
'220500': '通化市',
'220502': '东昌区',
'220503': '二道江区',
'220521': '通化县',
'220523': '辉南县',
'220524': '柳河县',
'220581': '梅河口市',
'220582': '集安市',
'220600': '白山市',
'220602': '浑江区',
'220605': '江源区',
'220621': '抚松县',
'220622': '靖宇县',
'220623': '长白朝鲜族自治县',
'220681': '临江市',
'220700': '松原市',
'220702': '宁江区',
'220721': '前郭尔罗斯蒙古族自治县',
'220722': '长岭县',
'220723': '乾安县',
'220781': '扶余市',
'220800': '白城市',
'220802': '洮北区',
'220821': '镇赉县',
'220822': '通榆县',
'220881': '洮南市',
'220882': '大安市',
'222400': '延边朝鲜族自治州',
'222401': '延吉市',
'222402': '图们市',
'222403': '敦化市',
'222404': '珲春市',
'222405': '龙井市',
'222406': '和龙市',
'222424': '汪清县',
'222426': '安图县',
'230000': '黑龙江省',
'230100': '哈尔滨市',
'230102': '道里区',
'230103': '南岗区',
'230104': '道外区',
'230108': '平房区',
'230109': '松北区',
'230110': '香坊区',
'230111': '呼兰区',
'230112': '阿城区',
'230113': '双城区',
'230123': '依兰县',
'230124': '方正县',
'230125': '宾县',
'230126': '巴彦县',
'230127': '木兰县',
'230128': '通河县',
'230129': '延寿县',
'230183': '尚志市',
'230184': '五常市',
'230200': '齐齐哈尔市',
'230202': '龙沙区',
'230203': '建华区',
'230204': '铁锋区',
'230205': '昂昂溪区',
'230206': '富拉尔基区',
'230207': '碾子山区',
'230208': '梅里斯达斡尔族区',
'230221': '龙江县',
'230223': '依安县',
'230224': '泰来县',
'230225': '甘南县',
'230227': '富裕县',
'230229': '克山县',
'230230': '克东县',
'230231': '拜泉县',
'230281': '讷河市',
'230300': '鸡西市',
'230302': '鸡冠区',
'230303': '恒山区',
'230304': '滴道区',
'230305': '梨树区',
'230306': '城子河区',
'230307': '麻山区',
'230321': '鸡东县',
'230381': '虎林市',
'230382': '密山市',
'230400': '鹤岗市',
'230402': '向阳区',
'230403': '工农区',
'230404': '南山区',
'230405': '兴安区',
'230406': '东山区',
'230407': '兴山区',
'230421': '萝北县',
'230422': '绥滨县',
'230500': '双鸭山市',
'230502': '尖山区',
'230503': '岭东区',
'230505': '四方台区',
'230506': '宝山区',
'230521': '集贤县',
'230522': '友谊县',
'230523': '宝清县',
'230524': '饶河县',
'230600': '大庆市',
'230602': '萨尔图区',
'230603': '龙凤区',
'230604': '让胡路区',
'230605': '红岗区',
'230606': '大同区',
'230621': '肇州县',
'230622': '肇源县',
'230623': '林甸县',
'230624': '杜尔伯特蒙古族自治县',
'230700': '伊春市',
'230702': '伊春区',
'230703': '南岔区',
'230704': '友好区',
'230705': '西林区',
'230706': '翠峦区',
'230707': '新青区',
'230708': '美溪区',
'230709': '金山屯区',
'230710': '五营区',
'230711': '乌马河区',
'230712': '汤旺河区',
'230713': '带岭区',
'230714': '乌伊岭区',
'230715': '红星区',
'230716': '上甘岭区',
'230722': '嘉荫县',
'230781': '铁力市',
'230800': '佳木斯市',
'230803': '向阳区',
'230804': '前进区',
'230805': '东风区',
'230811': '郊区',
'230822': '桦南县',
'230826': '桦川县',
'230828': '汤原县',
'230881': '同江市',
'230882': '富锦市',
'230883': '抚远市',
'230900': '七台河市',
'230902': '新兴区',
'230903': '桃山区',
'230904': '茄子河区',
'230921': '勃利县',
'231000': '牡丹江市',
'231002': '东安区',
'231003': '阳明区',
'231004': '爱民区',
'231005': '西安区',
'231025': '林口县',
'231081': '绥芬河市',
'231083': '海林市',
'231084': '宁安市',
'231085': '穆棱市',
'231086': '东宁市',
'231100': '黑河市',
'231102': '爱辉区',
'231121': '嫩江县',
'231123': '逊克县',
'231124': '孙吴县',
'231181': '北安市',
'231182': '五大连池市',
'231200': '绥化市',
'231202': '北林区',
'231221': '望奎县',
'231222': '兰西县',
'231223': '青冈县',
'231224': '庆安县',
'231225': '明水县',
'231226': '绥棱县',
'231281': '安达市',
'231282': '肇东市',
'231283': '海伦市',
'232700': '大兴安岭地区',
'232721': '呼玛县',
'232722': '塔河县',
'232723': '漠河县',
'310000': '上海市',
'310101': '黄浦区',
'310104': '徐汇区',
'310105': '长宁区',
'310106': '静安区',
'310107': '普陀区',
'310109': '虹口区',
'310110': '杨浦区',
'310112': '闵行区',
'310113': '宝山区',
'310114': '嘉定区',
'310115': '浦东新区',
'310116': '金山区',
'310117': '松江区',
'310118': '青浦区',
'310120': '奉贤区',
'310151': '崇明区',
'320000': '江苏省',
'320100': '南京市',
'320102': '玄武区',
'320104': '秦淮区',
'320105': '建邺区',
'320106': '鼓楼区',
'320111': '浦口区',
'320113': '栖霞区',
'320114': '雨花台区',
'320115': '江宁区',
'320116': '六合区',
'320117': '溧水区',
'320118': '高淳区',
'320200': '无锡市',
'320205': '锡山区',
'320206': '惠山区',
'320211': '滨湖区',
'320213': '梁溪区',
'320214': '新吴区',
'320281': '江阴市',
'320282': '宜兴市',
'320300': '徐州市',
'320302': '鼓楼区',
'320303': '云龙区',
'320305': '贾汪区',
'320311': '泉山区',
'320312': '铜山区',
'320321': '丰县',
'320322': '沛县',
'320324': '睢宁县',
'320381': '新沂市',
'320382': '邳州市',
'320400': '常州市',
'320402': '天宁区',
'320404': '钟楼区',
'320411': '新北区',
'320412': '武进区',
'320413': '金坛区',
'320481': '溧阳市',
'320500': '苏州市',
'320505': '虎丘区',
'320506': '吴中区',
'320507': '相城区',
'320508': '姑苏区',
'320509': '吴江区',
'320581': '常熟市',
'320582': '张家港市',
'320583': '昆山市',
'320585': '太仓市',
'320600': '南通市',
'320602': '崇川区',
'320611': '港闸区',
'320612': '通州区',
'320621': '海安县',
'320623': '如东县',
'320681': '启东市',
'320682': '如皋市',
'320684': '海门市',
'320700': '连云港市',
'320703': '连云区',
'320706': '海州区',
'320707': '赣榆区',
'320722': '东海县',
'320723': '灌云县',
'320724': '灌南县',
'320800': '淮安市',
'320803': '淮安区',
'320804': '淮阴区',
'320812': '清江浦区',
'320813': '洪泽区',
'320826': '涟水县',
'320830': '盱眙县',
'320831': '金湖县',
'320900': '盐城市',
'320902': '亭湖区',
'320903': '盐都区',
'320904': '大丰区',
'320921': '响水县',
'320922': '滨海县',
'320923': '阜宁县',
'320924': '射阳县',
'320925': '建湖县',
'320981': '东台市',
'321000': '扬州市',
'321002': '广陵区',
'321003': '邗江区',
'321012': '江都区',
'321023': '宝应县',
'321081': '仪征市',
'321084': '高邮市',
'321100': '镇江市',
'321102': '京口区',
'321111': '润州区',
'321112': '丹徒区',
'321181': '丹阳市',
'321182': '扬中市',
'321183': '句容市',
'321200': '泰州市',
'321202': '海陵区',
'321203': '高港区',
'321204': '姜堰区',
'321281': '兴化市',
'321282': '靖江市',
'321283': '泰兴市',
'321300': '宿迁市',
'321302': '宿城区',
'321311': '宿豫区',
'321322': '沭阳县',
'321323': '泗阳县',
'321324': '泗洪县',
'330000': '浙江省',
'330100': '杭州市',
'330102': '上城区',
'330103': '下城区',
'330104': '江干区',
'330105': '拱墅区',
'330106': '西湖区',
'330108': '滨江区',
'330109': '萧山区',
'330110': '余杭区',
'330111': '富阳区',
'330122': '桐庐县',
'330127': '淳安县',
'330182': '建德市',
'330185': '临安市',
'330200': '宁波市',
'330203': '海曙区',
'330205': '江北区',
'330206': '北仑区',
'330211': '镇海区',
'330212': '鄞州区',
'330213': '奉化区',
'330225': '象山县',
'330226': '宁海县',
'330281': '余姚市',
'330282': '慈溪市',
'330300': '温州市',
'330302': '鹿城区',
'330303': '龙湾区',
'330304': '瓯海区',
'330305': '洞头区',
'330324': '永嘉县',
'330326': '平阳县',
'330327': '苍南县',
'330328': '文成县',
'330329': '泰顺县',
'330381': '瑞安市',
'330382': '乐清市',
'330400': '嘉兴市',
'330402': '南湖区',
'330411': '秀洲区',
'330421': '嘉善县',
'330424': '海盐县',
'330481': '海宁市',
'330482': '平湖市',
'330483': '桐乡市',
'330500': '湖州市',
'330502': '吴兴区',
'330503': '南浔区',
'330521': '德清县',
'330522': '长兴县',
'330523': '安吉县',
'330600': '绍兴市',
'330602': '越城区',
'330603': '柯桥区',
'330604': '上虞区',
'330624': '新昌县',
'330681': '诸暨市',
'330683': '嵊州市',
'330700': '金华市',
'330702': '婺城区',
'330703': '金东区',
'330723': '武义县',
'330726': '浦江县',
'330727': '磐安县',
'330781': '兰溪市',
'330782': '义乌市',
'330783': '东阳市',
'330784': '永康市',
'330800': '衢州市',
'330802': '柯城区',
'330803': '衢江区',
'330822': '常山县',
'330824': '开化县',
'330825': '龙游县',
'330881': '江山市',
'330900': '舟山市',
'330902': '定海区',
'330903': '普陀区',
'330921': '岱山县',
'330922': '嵊泗县',
'331000': '台州市',
'331002': '椒江区',
'331003': '黄岩区',
'331004': '路桥区',
'331021': '玉环县',
'331022': '三门县',
'331023': '天台县',
'331024': '仙居县',
'331081': '温岭市',
'331082': '临海市',
'331100': '丽水市',
'331102': '莲都区',
'331121': '青田县',
'331122': '缙云县',
'331123': '遂昌县',
'331124': '松阳县',
'331125': '云和县',
'331126': '庆元县',
'331127': '景宁畲族自治县',
'331181': '龙泉市',
'340000': '安徽省',
'340100': '合肥市',
'340102': '瑶海区',
'340103': '庐阳区',
'340104': '蜀山区',
'340111': '包河区',
'340121': '长丰县',
'340122': '肥东县',
'340123': '肥西县',
'340124': '庐江县',
'340181': '巢湖市',
'340200': '芜湖市',
'340202': '镜湖区',
'340203': '弋江区',
'340207': '鸠江区',
'340208': '三山区',
'340221': '芜湖县',
'340222': '繁昌县',
'340223': '南陵县',
'340225': '无为县',
'340300': '蚌埠市',
'340302': '龙子湖区',
'340303': '蚌山区',
'340304': '禹会区',
'340311': '淮上区',
'340321': '怀远县',
'340322': '五河县',
'340323': '固镇县',
'340400': '淮南市',
'340402': '大通区',
'340403': '田家庵区',
'340404': '谢家集区',
'340405': '八公山区',
'340406': '潘集区',
'340421': '凤台县',
'340422': '寿县',
'340500': '马鞍山市',
'340503': '花山区',
'340504': '雨山区',
'340506': '博望区',
'340521': '当涂县',
'340522': '含山县',
'340523': '和县',
'340600': '淮北市',
'340602': '杜集区',
'340603': '相山区',
'340604': '烈山区',
'340621': '濉溪县',
'340700': '铜陵市',
'340705': '铜官区',
'340706': '义安区',
'340711': '郊区',
'340722': '枞阳县',
'340800': '安庆市',
'340802': '迎江区',
'340803': '大观区',
'340811': '宜秀区',
'340822': '怀宁县',
'340824': '潜山县',
'340825': '太湖县',
'340826': '宿松县',
'340827': '望江县',
'340828': '岳西县',
'340881': '桐城市',
'341000': '黄山市',
'341002': '屯溪区',
'341003': '黄山区',
'341004': '徽州区',
'341021': '歙县',
'341022': '休宁县',
'341023': '黟县',
'341024': '祁门县',
'341100': '滁州市',
'341102': '琅琊区',
'341103': '南谯区',
'341122': '来安县',
'341124': '全椒县',
'341125': '定远县',
'341126': '凤阳县',
'341181': '天长市',
'341182': '明光市',
'341200': '阜阳市',
'341202': '颍州区',
'341203': '颍东区',
'341204': '颍泉区',
'341221': '临泉县',
'341222': '太和县',
'341225': '阜南县',
'341226': '颍上县',
'341282': '界首市',
'341300': '宿州市',
'341302': '埇桥区',
'341321': '砀山县',
'341322': '萧县',
'341323': '灵璧县',
'341324': '泗县',
'341500': '六安市',
'341502': '金安区',
'341503': '裕安区',
'341504': '叶集区',
'341522': '霍邱县',
'341523': '舒城县',
'341524': '金寨县',
'341525': '霍山县',
'341600': '亳州市',
'341602': '谯城区',
'341621': '涡阳县',
'341622': '蒙城县',
'341623': '利辛县',
'341700': '池州市',
'341702': '贵池区',
'341721': '东至县',
'341722': '石台县',
'341723': '青阳县',
'341800': '宣城市',
'341802': '宣州区',
'341821': '郎溪县',
'341822': '广德县',
'341823': '泾县',
'341824': '绩溪县',
'341825': '旌德县',
'341881': '宁国市',
'350000': '福建省',
'350100': '福州市',
'350102': '鼓楼区',
'350103': '台江区',
'350104': '仓山区',
'350105': '马尾区',
'350111': '晋安区',
'350121': '闽侯县',
'350122': '连江县',
'350123': '罗源县',
'350124': '闽清县',
'350125': '永泰县',
'350128': '平潭县',
'350181': '福清市',
'350182': '长乐市',
'350200': '厦门市',
'350203': '思明区',
'350205': '海沧区',
'350206': '湖里区',
'350211': '集美区',
'350212': '同安区',
'350213': '翔安区',
'350300': '莆田市',
'350302': '城厢区',
'350303': '涵江区',
'350304': '荔城区',
'350305': '秀屿区',
'350322': '仙游县',
'350400': '三明市',
'350402': '梅列区',
'350403': '三元区',
'350421': '明溪县',
'350423': '清流县',
'350424': '宁化县',
'350425': '大田县',
'350426': '尤溪县',
'350427': '沙县',
'350428': '将乐县',
'350429': '泰宁县',
'350430': '建宁县',
'350481': '永安市',
'350500': '泉州市',
'350502': '鲤城区',
'350503': '丰泽区',
'350504': '洛江区',
'350505': '泉港区',
'350521': '惠安县',
'350524': '安溪县',
'350525': '永春县',
'350526': '德化县',
'350527': '金门县',
'350581': '石狮市',
'350582': '晋江市',
'350583': '南安市',
'350600': '漳州市',
'350602': '芗城区',
'350603': '龙文区',
'350622': '云霄县',
'350623': '漳浦县',
'350624': '诏安县',
'350625': '长泰县',
'350626': '东山县',
'350627': '南靖县',
'350628': '平和县',
'350629': '华安县',
'350681': '龙海市',
'350700': '南平市',
'350702': '延平区',
'350703': '建阳区',
'350721': '顺昌县',
'350722': '浦城县',
'350723': '光泽县',
'350724': '松溪县',
'350725': '政和县',
'350781': '邵武市',
'350782': '武夷山市',
'350783': '建瓯市',
'350800': '龙岩市',
'350802': '新罗区',
'350803': '永定区',
'350821': '长汀县',
'350823': '上杭县',
'350824': '武平县',
'350825': '连城县',
'350881': '漳平市',
'350900': '宁德市',
'350902': '蕉城区',
'350921': '霞浦县',
'350922': '古田县',
'350923': '屏南县',
'350924': '寿宁县',
'350925': '周宁县',
'350926': '柘荣县',
'350981': '福安市',
'350982': '福鼎市',
'360000': '江西省',
'360100': '南昌市',
'360102': '东湖区',
'360103': '西湖区',
'360104': '青云谱区',
'360105': '湾里区',
'360111': '青山湖区',
'360112': '新建区',
'360121': '南昌县',
'360123': '安义县',
'360124': '进贤县',
'360200': '景德镇市',
'360202': '昌江区',
'360203': '珠山区',
'360222': '浮梁县',
'360281': '乐平市',
'360300': '萍乡市',
'360302': '安源区',
'360313': '湘东区',
'360321': '莲花县',
'360322': '上栗县',
'360323': '芦溪县',
'360400': '九江市',
'360402': '濂溪区',
'360403': '浔阳区',
'360421': '九江县',
'360423': '武宁县',
'360424': '修水县',
'360425': '永修县',
'360426': '德安县',
'360428': '都昌县',
'360429': '湖口县',
'360430': '彭泽县',
'360481': '瑞昌市',
'360482': '共青城市',
'360483': '庐山市',
'360500': '新余市',
'360502': '渝水区',
'360521': '分宜县',
'360600': '鹰潭市',
'360602': '月湖区',
'360622': '余江县',
'360681': '贵溪市',
'360700': '赣州市',
'360702': '章贡区',
'360703': '南康区',
'360704': '赣县区',
'360722': '信丰县',
'360723': '大余县',
'360724': '上犹县',
'360725': '崇义县',
'360726': '安远县',
'360727': '龙南县',
'360728': '定南县',
'360729': '全南县',
'360730': '宁都县',
'360731': '于都县',
'360732': '兴国县',
'360733': '会昌县',
'360734': '寻乌县',
'360735': '石城县',
'360781': '瑞金市',
'360800': '吉安市',
'360802': '吉州区',
'360803': '青原区',
'360821': '吉安县',
'360822': '吉水县',
'360823': '峡江县',
'360824': '新干县',
'360825': '永丰县',
'360826': '泰和县',
'360827': '遂川县',
'360828': '万安县',
'360829': '安福县',
'360830': '永新县',
'360881': '井冈山市',
'360900': '宜春市',
'360902': '袁州区',
'360921': '奉新县',
'360922': '万载县',
'360923': '上高县',
'360924': '宜丰县',
'360925': '靖安县',
'360926': '铜鼓县',
'360981': '丰城市',
'360982': '樟树市',
'360983': '高安市',
'361000': '抚州市',
'361002': '临川区',
'361003': '东乡区',
'361021': '南城县',
'361022': '黎川县',
'361023': '南丰县',
'361024': '崇仁县',
'361025': '乐安县',
'361026': '宜黄县',
'361027': '金溪县',
'361028': '资溪县',
'361030': '广昌县',
'361100': '上饶市',
'361102': '信州区',
'361103': '广丰区',
'361121': '上饶县',
'361123': '玉山县',
'361124': '铅山县',
'361125': '横峰县',
'361126': '弋阳县',
'361127': '余干县',
'361128': '鄱阳县',
'361129': '万年县',
'361130': '婺源县',
'361181': '德兴市',
'370000': '山东省',
'370100': '济南市',
'370102': '历下区',
'370103': '市中区',
'370104': '槐荫区',
'370105': '天桥区',
'370112': '历城区',
'370113': '长清区',
'370114': '章丘区',
'370124': '平阴县',
'370125': '济阳县',
'370126': '商河县',
'370200': '青岛市',
'370202': '市南区',
'370203': '市北区',
'370211': '黄岛区',
'370212': '崂山区',
'370213': '李沧区',
'370214': '城阳区',
'370281': '胶州市',
'370282': '即墨市',
'370283': '平度市',
'370285': '莱西市',
'370300': '淄博市',
'370302': '淄川区',
'370303': '张店区',
'370304': '博山区',
'370305': '临淄区',
'370306': '周村区',
'370321': '桓台县',
'370322': '高青县',
'370323': '沂源县',
'370400': '枣庄市',
'370402': '市中区',
'370403': '薛城区',
'370404': '峄城区',
'370405': '台儿庄区',
'370406': '山亭区',
'370481': '滕州市',
'370500': '东营市',
'370502': '东营区',
'370503': '河口区',
'370505': '垦利区',
'370522': '利津县',
'370523': '广饶县',
'370600': '烟台市',
'370602': '芝罘区',
'370611': '福山区',
'370612': '牟平区',
'370613': '莱山区',
'370634': '长岛县',
'370681': '龙口市',
'370682': '莱阳市',
'370683': '莱州市',
'370684': '蓬莱市',
'370685': '招远市',
'370686': '栖霞市',
'370687': '海阳市',
'370700': '潍坊市',
'370702': '潍城区',
'370703': '寒亭区',
'370704': '坊子区',
'370705': '奎文区',
'370724': '临朐县',
'370725': '昌乐县',
'370781': '青州市',
'370782': '诸城市',
'370783': '寿光市',
'370784': '安丘市',
'370785': '高密市',
'370786': '昌邑市',
'370800': '济宁市',
'370811': '任城区',
'370812': '兖州区',
'370826': '微山县',
'370827': '鱼台县',
'370828': '金乡县',
'370829': '嘉祥县',
'370830': '汶上县',
'370831': '泗水县',
'370832': '梁山县',
'370881': '曲阜市',
'370883': '邹城市',
'370900': '泰安市',
'370902': '泰山区',
'370911': '岱岳区',
'370921': '宁阳县',
'370923': '东平县',
'370982': '新泰市',
'370983': '肥城市',
'371000': '威海市',
'371002': '环翠区',
'371003': '文登区',
'371082': '荣成市',
'371083': '乳山市',
'371100': '日照市',
'371102': '东港区',
'371103': '岚山区',
'371121': '五莲县',
'371122': '莒县',
'371200': '莱芜市',
'371202': '莱城区',
'371203': '钢城区',
'371300': '临沂市',
'371302': '兰山区',
'371311': '罗庄区',
'371312': '河东区',
'371321': '沂南县',
'371322': '郯城县',
'371323': '沂水县',
'371324': '兰陵县',
'371325': '费县',
'371326': '平邑县',
'371327': '莒南县',
'371328': '蒙阴县',
'371329': '临沭县',
'371400': '德州市',
'371402': '德城区',
'371403': '陵城区',
'371422': '宁津县',
'371423': '庆云县',
'371424': '临邑县',
'371425': '齐河县',
'371426': '平原县',
'371427': '夏津县',
'371428': '武城县',
'371481': '乐陵市',
'371482': '禹城市',
'371500': '聊城市',
'371502': '东昌府区',
'371521': '阳谷县',
'371522': '莘县',
'371523': '茌平县',
'371524': '东阿县',
'371525': '冠县',
'371526': '高唐县',
'371581': '临清市',
'371600': '滨州市',
'371602': '滨城区',
'371603': '沾化区',
'371621': '惠民县',
'371622': '阳信县',
'371623': '无棣县',
'371625': '博兴县',
'371626': '邹平县',
'371700': '菏泽市',
'371702': '牡丹区',
'371703': '定陶区',
'371721': '曹县',
'371722': '单县',
'371723': '成武县',
'371724': '巨野县',
'371725': '郓城县',
'371726': '鄄城县',
'371728': '东明县',
'410000': '河南省',
'410100': '郑州市',
'410102': '中原区',
'410103': '二七区',
'410104': '管城回族区',
'410105': '金水区',
'410106': '上街区',
'410108': '惠济区',
'410122': '中牟县',
'410181': '巩义市',
'410182': '荥阳市',
'410183': '新密市',
'410184': '新郑市',
'410185': '登封市',
'410200': '开封市',
'410202': '龙亭区',
'410203': '顺河回族区',
'410204': '鼓楼区',
'410205': '禹王台区',
'410212': '祥符区',
'410221': '杞县',
'410222': '通许县',
'410223': '尉氏县',
'410225': '兰考县',
'410300': '洛阳市',
'410302': '老城区',
'410303': '西工区',
'410304': '瀍河回族区',
'410305': '涧西区',
'410306': '吉利区',
'410311': '洛龙区',
'410322': '孟津县',
'410323': '新安县',
'410324': '栾川县',
'410325': '嵩县',
'410326': '汝阳县',
'410327': '宜阳县',
'410328': '洛宁县',
'410329': '伊川县',
'410381': '偃师市',
'410400': '平顶山市',
'410402': '新华区',
'410403': '卫东区',
'410404': '石龙区',
'410411': '湛河区',
'410421': '宝丰县',
'410422': '叶县',
'410423': '鲁山县',
'410425': '郏县',
'410481': '舞钢市',
'410482': '汝州市',
'410500': '安阳市',
'410502': '文峰区',
'410503': '北关区',
'410505': '殷都区',
'410506': '龙安区',
'410522': '安阳县',
'410523': '汤阴县',
'410526': '滑县',
'410527': '内黄县',
'410581': '林州市',
'410600': '鹤壁市',
'410602': '鹤山区',
'410603': '山城区',
'410611': '淇滨区',
'410621': '浚县',
'410622': '淇县',
'410700': '新乡市',
'410702': '红旗区',
'410703': '卫滨区',
'410704': '凤泉区',
'410711': '牧野区',
'410721': '新乡县',
'410724': '获嘉县',
'410725': '原阳县',
'410726': '延津县',
'410727': '封丘县',
'410728': '长垣县',
'410781': '卫辉市',
'410782': '辉县市',
'410800': '焦作市',
'410802': '解放区',
'410803': '中站区',
'410804': '马村区',
'410811': '山阳区',
'410821': '修武县',
'410822': '博爱县',
'410823': '武陟县',
'410825': '温县',
'410882': '沁阳市',
'410883': '孟州市',
'410900': '濮阳市',
'410902': '华龙区',
'410922': '清丰县',
'410923': '南乐县',
'410926': '范县',
'410927': '台前县',
'410928': '濮阳县',
'411000': '许昌市',
'411002': '魏都区',
'411003': '建安区',
'411024': '鄢陵县',
'411025': '襄城县',
'411081': '禹州市',
'411082': '长葛市',
'411100': '漯河市',
'411102': '源汇区',
'411103': '郾城区',
'411104': '召陵区',
'411121': '舞阳县',
'411122': '临颍县',
'411200': '三门峡市',
'411202': '湖滨区',
'411203': '陕州区',
'411221': '渑池县',
'411224': '卢氏县',
'411281': '义马市',
'411282': '灵宝市',
'411300': '南阳市',
'411302': '宛城区',
'411303': '卧龙区',
'411321': '南召县',
'411322': '方城县',
'411323': '西峡县',
'411324': '镇平县',
'411325': '内乡县',
'411326': '淅川县',
'411327': '社旗县',
'411328': '唐河县',
'411329': '新野县',
'411330': '桐柏县',
'411381': '邓州市',
'411400': '商丘市',
'411402': '梁园区',
'411403': '睢阳区',
'411421': '民权县',
'411422': '睢县',
'411423': '宁陵县',
'411424': '柘城县',
'411425': '虞城县',
'411426': '夏邑县',
'411481': '永城市',
'411500': '信阳市',
'411502': '浉河区',
'411503': '平桥区',
'411521': '罗山县',
'411522': '光山县',
'411523': '新县',
'411524': '商城县',
'411525': '固始县',
'411526': '潢川县',
'411527': '淮滨县',
'411528': '息县',
'411600': '周口市',
'411602': '川汇区',
'411621': '扶沟县',
'411622': '西华县',
'411623': '商水县',
'411624': '沈丘县',
'411625': '郸城县',
'411626': '淮阳县',
'411627': '太康县',
'411628': '鹿邑县',
'411681': '项城市',
'411700': '驻马店市',
'411702': '驿城区',
'411721': '西平县',
'411722': '上蔡县',
'411723': '平舆县',
'411724': '正阳县',
'411725': '确山县',
'411726': '泌阳县',
'411727': '汝南县',
'411728': '遂平县',
'411729': '新蔡县',
'419001': '济源市',
'420000': '湖北省',
'420100': '武汉市',
'420102': '江岸区',
'420103': '江汉区',
'420104': '硚口区',
'420105': '汉阳区',
'420106': '武昌区',
'420107': '青山区',
'420111': '洪山区',
'420112': '东西湖区',
'420113': '汉南区',
'420114': '蔡甸区',
'420115': '江夏区',
'420116': '黄陂区',
'420117': '新洲区',
'420200': '黄石市',
'420202': '黄石港区',
'420203': '西塞山区',
'420204': '下陆区',
'420205': '铁山区',
'420222': '阳新县',
'420281': '大冶市',
'420300': '十堰市',
'420302': '茅箭区',
'420303': '张湾区',
'420304': '郧阳区',
'420322': '郧西县',
'420323': '竹山县',
'420324': '竹溪县',
'420325': '房县',
'420381': '丹江口市',
'420500': '宜昌市',
'420502': '西陵区',
'420503': '伍家岗区',
'420504': '点军区',
'420505': '猇亭区',
'420506': '夷陵区',
'420525': '远安县',
'420526': '兴山县',
'420527': '秭归县',
'420528': '长阳土家族自治县',
'420529': '五峰土家族自治县',
'420581': '宜都市',
'420582': '当阳市',
'420583': '枝江市',
'420600': '襄阳市',
'420602': '襄城区',
'420606': '樊城区',
'420607': '襄州区',
'420624': '南漳县',
'420625': '谷城县',
'420626': '保康县',
'420682': '老河口市',
'420683': '枣阳市',
'420684': '宜城市',
'420700': '鄂州市',
'420702': '梁子湖区',
'420703': '华容区',
'420704': '鄂城区',
'420800': '荆门市',
'420802': '东宝区',
'420804': '掇刀区',
'420821': '京山县',
'420822': '沙洋县',
'420881': '钟祥市',
'420900': '孝感市',
'420902': '孝南区',
'420921': '孝昌县',
'420922': '大悟县',
'420923': '云梦县',
'420981': '应城市',
'420982': '安陆市',
'420984': '汉川市',
'421000': '荆州市',
'421002': '沙市区',
'421003': '荆州区',
'421022': '公安县',
'421023': '监利县',
'421024': '江陵县',
'421081': '石首市',
'421083': '洪湖市',
'421087': '松滋市',
'421100': '黄冈市',
'421102': '黄州区',
'421121': '团风县',
'421122': '红安县',
'421123': '罗田县',
'421124': '英山县',
'421125': '浠水县',
'421126': '蕲春县',
'421127': '黄梅县',
'421181': '麻城市',
'421182': '武穴市',
'421200': '咸宁市',
'421202': '咸安区',
'421221': '嘉鱼县',
'421222': '通城县',
'421223': '崇阳县',
'421224': '通山县',
'421281': '赤壁市',
'421300': '随州市',
'421303': '曾都区',
'421321': '随县',
'421381': '广水市',
'422800': '恩施土家族苗族自治州',
'422801': '恩施市',
'422802': '利川市',
'422822': '建始县',
'422823': '巴东县',
'422825': '宣恩县',
'422826': '咸丰县',
'422827': '来凤县',
'422828': '鹤峰县',
'429004': '仙桃市',
'429005': '潜江市',
'429006': '天门市',
'429021': '神农架林区',
'430000': '湖南省',
'430100': '长沙市',
'430102': '芙蓉区',
'430103': '天心区',
'430104': '岳麓区',
'430105': '开福区',
'430111': '雨花区',
'430112': '望城区',
'430121': '长沙县',
'430124': '宁乡县',
'430181': '浏阳市',
'430200': '株洲市',
'430202': '荷塘区',
'430203': '芦淞区',
'430204': '石峰区',
'430211': '天元区',
'430221': '株洲县',
'430223': '攸县',
'430224': '茶陵县',
'430225': '炎陵县',
'430281': '醴陵市',
'430300': '湘潭市',
'430302': '雨湖区',
'430304': '岳塘区',
'430321': '湘潭县',
'430381': '湘乡市',
'430382': '韶山市',
'430400': '衡阳市',
'430405': '珠晖区',
'430406': '雁峰区',
'430407': '石鼓区',
'430408': '蒸湘区',
'430412': '南岳区',
'430421': '衡阳县',
'430422': '衡南县',
'430423': '衡山县',
'430424': '衡东县',
'430426': '祁东县',
'430481': '耒阳市',
'430482': '常宁市',
'430500': '邵阳市',
'430502': '双清区',
'430503': '大祥区',
'430511': '北塔区',
'430521': '邵东县',
'430522': '新邵县',
'430523': '邵阳县',
'430524': '隆回县',
'430525': '洞口县',
'430527': '绥宁县',
'430528': '新宁县',
'430529': '城步苗族自治县',
'430581': '武冈市',
'430600': '岳阳市',
'430602': '岳阳楼区',
'430603': '云溪区',
'430611': '君山区',
'430621': '岳阳县',
'430623': '华容县',
'430624': '湘阴县',
'430626': '平江县',
'430681': '汨罗市',
'430682': '临湘市',
'430700': '常德市',
'430702': '武陵区',
'430703': '鼎城区',
'430721': '安乡县',
'430722': '汉寿县',
'430723': '澧县',
'430724': '临澧县',
'430725': '桃源县',
'430726': '石门县',
'430781': '津市市',
'430800': '张家界市',
'430802': '永定区',
'430811': '武陵源区',
'430821': '慈利县',
'430822': '桑植县',
'430900': '益阳市',
'430902': '资阳区',
'430903': '赫山区',
'430921': '南县',
'430922': '桃江县',
'430923': '安化县',
'430981': '沅江市',
'431000': '郴州市',
'431002': '北湖区',
'431003': '苏仙区',
'431021': '桂阳县',
'431022': '宜章县',
'431023': '永兴县',
'431024': '嘉禾县',
'431025': '临武县',
'431026': '汝城县',
'431027': '桂东县',
'431028': '安仁县',
'431081': '资兴市',
'431100': '永州市',
'431102': '零陵区',
'431103': '冷水滩区',
'431121': '祁阳县',
'431122': '东安县',
'431123': '双牌县',
'431124': '道县',
'431125': '江永县',
'431126': '宁远县',
'431127': '蓝山县',
'431128': '新田县',
'431129': '江华瑶族自治县',
'431200': '怀化市',
'431202': '鹤城区',
'431221': '中方县',
'431222': '沅陵县',
'431223': '辰溪县',
'431224': '溆浦县',
'431225': '会同县',
'431226': '麻阳苗族自治县',
'431227': '新晃侗族自治县',
'431228': '芷江侗族自治县',
'431229': '靖州苗族侗族自治县',
'431230': '通道侗族自治县',
'431281': '洪江市',
'431300': '娄底市',
'431302': '娄星区',
'431321': '双峰县',
'431322': '新化县',
'431381': '冷水江市',
'431382': '涟源市',
'433100': '湘西土家族苗族自治州',
'433101': '吉首市',
'433122': '泸溪县',
'433123': '凤凰县',
'433124': '花垣县',
'433125': '保靖县',
'433126': '古丈县',
'433127': '永顺县',
'433130': '龙山县',
'440000': '广东省',
'440100': '广州市',
'440103': '荔湾区',
'440104': '越秀区',
'440105': '海珠区',
'440106': '天河区',
'440111': '白云区',
'440112': '黄埔区',
'440113': '番禺区',
'440114': '花都区',
'440115': '南沙区',
'440117': '从化区',
'440118': '增城区',
'440200': '韶关市',
'440203': '武江区',
'440204': '浈江区',
'440205': '曲江区',
'440222': '始兴县',
'440224': '仁化县',
'440229': '翁源县',
'440232': '乳源瑶族自治县',
'440233': '新丰县',
'440281': '乐昌市',
'440282': '南雄市',
'440300': '深圳市',
'440303': '罗湖区',
'440304': '福田区',
'440305': '南山区',
'440306': '宝安区',
'440307': '龙岗区',
'440308': '盐田区',
'440309': '龙华区',
'440310': '坪山区',
'440400': '珠海市',
'440402': '香洲区',
'440403': '斗门区',
'440404': '金湾区',
'440500': '汕头市',
'440507': '龙湖区',
'440511': '金平区',
'440512': '濠江区',
'440513': '潮阳区',
'440514': '潮南区',
'440515': '澄海区',
'440523': '南澳县',
'440600': '佛山市',
'440604': '禅城区',
'440605': '南海区',
'440606': '顺德区',
'440607': '三水区',
'440608': '高明区',
'440700': '江门市',
'440703': '蓬江区',
'440704': '江海区',
'440705': '新会区',
'440781': '台山市',
'440783': '开平市',
'440784': '鹤山市',
'440785': '恩平市',
'440800': '湛江市',
'440802': '赤坎区',
'440803': '霞山区',
'440804': '坡头区',
'440811': '麻章区',
'440823': '遂溪县',
'440825': '徐闻县',
'440881': '廉江市',
'440882': '雷州市',
'440883': '吴川市',
'440900': '茂名市',
'440902': '茂南区',
'440904': '电白区',
'440981': '高州市',
'440982': '化州市',
'440983': '信宜市',
'441200': '肇庆市',
'441202': '端州区',
'441203': '鼎湖区',
'441204': '高要区',
'441223': '广宁县',
'441224': '怀集县',
'441225': '封开县',
'441226': '德庆县',
'441284': '四会市',
'441300': '惠州市',
'441302': '惠城区',
'441303': '惠阳区',
'441322': '博罗县',
'441323': '惠东县',
'441324': '龙门县',
'441400': '梅州市',
'441402': '梅江区',
'441403': '梅县区',
'441422': '大埔县',
'441423': '丰顺县',
'441424': '五华县',
'441426': '平远县',
'441427': '蕉岭县',
'441481': '兴宁市',
'441500': '汕尾市',
'441502': '城区',
'441521': '海丰县',
'441523': '陆河县',
'441581': '陆丰市',
'441600': '河源市',
'441602': '源城区',
'441621': '紫金县',
'441622': '龙川县',
'441623': '连平县',
'441624': '和平县',
'441625': '东源县',
'441700': '阳江市',
'441702': '江城区',
'441704': '阳东区',
'441721': '阳西县',
'441781': '阳春市',
'441800': '清远市',
'441802': '清城区',
'441803': '清新区',
'441821': '佛冈县',
'441823': '阳山县',
'441825': '连山壮族瑶族自治县',
'441826': '连南瑶族自治县',
'441881': '英德市',
'441882': '连州市',
'441900': '东莞市',
'442000': '中山市',
'445100': '潮州市',
'445102': '湘桥区',
'445103': '潮安区',
'445122': '饶平县',
'445200': '揭阳市',
'445202': '榕城区',
'445203': '揭东区',
'445222': '揭西县',
'445224': '惠来县',
'445281': '普宁市',
'445300': '云浮市',
'445302': '云城区',
'445303': '云安区',
'445321': '新兴县',
'445322': '郁南县',
'445381': '罗定市',
'450000': '广西壮族自治区',
'450100': '南宁市',
'450102': '兴宁区',
'450103': '青秀区',
'450105': '江南区',
'450107': '西乡塘区',
'450108': '良庆区',
'450109': '邕宁区',
'450110': '武鸣区',
'450123': '隆安县',
'450124': '马山县',
'450125': '上林县',
'450126': '宾阳县',
'450127': '横县',
'450200': '柳州市',
'450202': '城中区',
'450203': '鱼峰区',
'450204': '柳南区',
'450205': '柳北区',
'450206': '柳江区',
'450222': '柳城县',
'450223': '鹿寨县',
'450224': '融安县',
'450225': '融水苗族自治县',
'450226': '三江侗族自治县',
'450300': '桂林市',
'450302': '秀峰区',
'450303': '叠彩区',
'450304': '象山区',
'450305': '七星区',
'450311': '雁山区',
'450312': '临桂区',
'450321': '阳朔县',
'450323': '灵川县',
'450324': '全州县',
'450325': '兴安县',
'450326': '永福县',
'450327': '灌阳县',
'450328': '龙胜各族自治县',
'450329': '资源县',
'450330': '平乐县',
'450331': '荔浦县',
'450332': '恭城瑶族自治县',
'450400': '梧州市',
'450403': '万秀区',
'450405': '长洲区',
'450406': '龙圩区',
'450421': '苍梧县',
'450422': '藤县',
'450423': '蒙山县',
'450481': '岑溪市',
'450500': '北海市',
'450502': '海城区',
'450503': '银海区',
'450512': '铁山港区',
'450521': '合浦县',
'450600': '防城港市',
'450602': '港口区',
'450603': '防城区',
'450621': '上思县',
'450681': '东兴市',
'450700': '钦州市',
'450702': '钦南区',
'450703': '钦北区',
'450721': '灵山县',
'450722': '浦北县',
'450800': '贵港市',
'450802': '港北区',
'450803': '港南区',
'450804': '覃塘区',
'450821': '平南县',
'450881': '桂平市',
'450900': '玉林市',
'450902': '玉州区',
'450903': '福绵区',
'450921': '容县',
'450922': '陆川县',
'450923': '博白县',
'450924': '兴业县',
'450981': '北流市',
'451000': '百色市',
'451002': '右江区',
'451021': '田阳县',
'451022': '田东县',
'451023': '平果县',
'451024': '德保县',
'451026': '那坡县',
'451027': '凌云县',
'451028': '乐业县',
'451029': '田林县',
'451030': '西林县',
'451031': '隆林各族自治县',
'451081': '靖西市',
'451100': '贺州市',
'451102': '八步区',
'451103': '平桂区',
'451121': '昭平县',
'451122': '钟山县',
'451123': '富川瑶族自治县',
'451200': '河池市',
'451202': '金城江区',
'451203': '宜州区',
'451221': '南丹县',
'451222': '天峨县',
'451223': '凤山县',
'451224': '东兰县',
'451225': '罗城仫佬族自治县',
'451226': '环江毛南族自治县',
'451227': '巴马瑶族自治县',
'451228': '都安瑶族自治县',
'451229': '大化瑶族自治县',
'451300': '来宾市',
'451302': '兴宾区',
'451321': '忻城县',
'451322': '象州县',
'451323': '武宣县',
'451324': '金秀瑶族自治县',
'451381': '合山市',
'451400': '崇左市',
'451402': '江州区',
'451421': '扶绥县',
'451422': '宁明县',
'451423': '龙州县',
'451424': '大新县',
'451425': '天等县',
'451481': '凭祥市',
'460000': '海南省',
'460100': '海口市',
'460105': '秀英区',
'460106': '龙华区',
'460107': '琼山区',
'460108': '美兰区',
'460200': '三亚市',
'460202': '海棠区',
'460203': '吉阳区',
'460204': '天涯区',
'460205': '崖州区',
'460300': '三沙市',
'460400': '儋州市',
'469001': '五指山市',
'469002': '琼海市',
'469005': '文昌市',
'469006': '万宁市',
'469007': '东方市',
'469021': '定安县',
'469022': '屯昌县',
'469023': '澄迈县',
'469024': '临高县',
'469025': '白沙黎族自治县',
'469026': '昌江黎族自治县',
'469027': '乐东黎族自治县',
'469028': '陵水黎族自治县',
'469029': '保亭黎族苗族自治县',
'469030': '琼中黎族苗族自治县',
'500000': '重庆市',
'500101': '万州区',
'500102': '涪陵区',
'500103': '渝中区',
'500104': '大渡口区',
'500105': '江北区',
'500106': '沙坪坝区',
'500107': '九龙坡区',
'500108': '南岸区',
'500109': '北碚区',
'500110': '綦江区',
'500111': '大足区',
'500112': '渝北区',
'500113': '巴南区',
'500114': '黔江区',
'500115': '长寿区',
'500116': '江津区',
'500117': '合川区',
'500118': '永川区',
'500119': '南川区',
'500120': '璧山区',
'500151': '铜梁区',
'500152': '潼南区',
'500153': '荣昌区',
'500154': '开州区',
'500155': '梁平区',
'500156': '武隆区',
'500229': '城口县',
'500230': '丰都县',
'500231': '垫江县',
'500233': '忠县',
'500235': '云阳县',
'500236': '奉节县',
'500237': '巫山县',
'500238': '巫溪县',
'500240': '石柱土家族自治县',
'500241': '秀山土家族苗族自治县',
'500242': '酉阳土家族苗族自治县',
'500243': '彭水苗族土家族自治县',
'510000': '四川省',
'510100': '成都市',
'510104': '锦江区',
'510105': '青羊区',
'510106': '金牛区',
'510107': '武侯区',
'510108': '成华区',
'510112': '龙泉驿区',
'510113': '青白江区',
'510114': '新都区',
'510115': '温江区',
'510116': '双流区',
'510117': '郫都区',
'510121': '金堂县',
'510129': '大邑县',
'510131': '蒲江县',
'510132': '新津县',
'510181': '都江堰市',
'510182': '彭州市',
'510183': '邛崃市',
'510184': '崇州市',
'510185': '简阳市',
'510300': '自贡市',
'510302': '自流井区',
'510303': '贡井区',
'510304': '大安区',
'510311': '沿滩区',
'510321': '荣县',
'510322': '富顺县',
'510400': '攀枝花市',
'510402': '东区',
'510403': '西区',
'510411': '仁和区',
'510421': '米易县',
'510422': '盐边县',
'510500': '泸州市',
'510502': '江阳区',
'510503': '纳溪区',
'510504': '龙马潭区',
'510521': '泸县',
'510522': '合江县',
'510524': '叙永县',
'510525': '古蔺县',
'510600': '德阳市',
'510603': '旌阳区',
'510623': '中江县',
'510626': '罗江县',
'510681': '广汉市',
'510682': '什邡市',
'510683': '绵竹市',
'510700': '绵阳市',
'510703': '涪城区',
'510704': '游仙区',
'510705': '安州区',
'510722': '三台县',
'510723': '盐亭县',
'510725': '梓潼县',
'510726': '北川羌族自治县',
'510727': '平武县',
'510781': '江油市',
'510800': '广元市',
'510802': '利州区',
'510811': '昭化区',
'510812': '朝天区',
'510821': '旺苍县',
'510822': '青川县',
'510823': '剑阁县',
'510824': '苍溪县',
'510900': '遂宁市',
'510903': '船山区',
'510904': '安居区',
'510921': '蓬溪县',
'510922': '射洪县',
'510923': '大英县',
'511000': '内江市',
'511002': '市中区',
'511011': '东兴区',
'511024': '威远县',
'511025': '资中县',
'511028': '隆昌县',
'511100': '乐山市',
'511102': '市中区',
'511111': '沙湾区',
'511112': '五通桥区',
'511113': '金口河区',
'511123': '犍为县',
'511124': '井研县',
'511126': '夹江县',
'511129': '沐川县',
'511132': '峨边彝族自治县',
'511133': '马边彝族自治县',
'511181': '峨眉山市',
'511300': '南充市',
'511302': '顺庆区',
'511303': '高坪区',
'511304': '嘉陵区',
'511321': '南部县',
'511322': '营山县',
'511323': '蓬安县',
'511324': '仪陇县',
'511325': '西充县',
'511381': '阆中市',
'511400': '眉山市',
'511402': '东坡区',
'511403': '彭山区',
'511421': '仁寿县',
'511423': '洪雅县',
'511424': '丹棱县',
'511425': '青神县',
'511500': '宜宾市',
'511502': '翠屏区',
'511503': '南溪区',
'511521': '宜宾县',
'511523': '江安县',
'511524': '长宁县',
'511525': '高县',
'511526': '珙县',
'511527': '筠连县',
'511528': '兴文县',
'511529': '屏山县',
'511600': '广安市',
'511602': '广安区',
'511603': '前锋区',
'511621': '岳池县',
'511622': '武胜县',
'511623': '邻水县',
'511681': '华蓥市',
'511700': '达州市',
'511702': '通川区',
'511703': '达川区',
'511722': '宣汉县',
'511723': '开江县',
'511724': '大竹县',
'511725': '渠县',
'511781': '万源市',
'511800': '雅安市',
'511802': '雨城区',
'511803': '名山区',
'511822': '荥经县',
'511823': '汉源县',
'511824': '石棉县',
'511825': '天全县',
'511826': '芦山县',
'511827': '宝兴县',
'511900': '巴中市',
'511902': '巴州区',
'511903': '恩阳区',
'511921': '通江县',
'511922': '南江县',
'511923': '平昌县',
'512000': '资阳市',
'512002': '雁江区',
'512021': '安岳县',
'512022': '乐至县',
'513200': '阿坝藏族羌族自治州',
'513201': '马尔康市',
'513221': '汶川县',
'513222': '理县',
'513223': '茂县',
'513224': '松潘县',
'513225': '九寨沟县',
'513226': '金川县',
'513227': '小金县',
'513228': '黑水县',
'513230': '壤塘县',
'513231': '阿坝县',
'513232': '若尔盖县',
'513233': '红原县',
'513300': '甘孜藏族自治州',
'513301': '康定市',
'513322': '泸定县',
'513323': '丹巴县',
'513324': '九龙县',
'513325': '雅江县',
'513326': '道孚县',
'513327': '炉霍县',
'513328': '甘孜县',
'513329': '新龙县',
'513330': '德格县',
'513331': '白玉县',
'513332': '石渠县',
'513333': '色达县',
'513334': '理塘县',
'513335': '巴塘县',
'513336': '乡城县',
'513337': '稻城县',
'513338': '得荣县',
'513400': '凉山彝族自治州',
'513401': '西昌市',
'513422': '木里藏族自治县',
'513423': '盐源县',
'513424': '德昌县',
'513425': '会理县',
'513426': '会东县',
'513427': '宁南县',
'513428': '普格县',
'513429': '布拖县',
'513430': '金阳县',
'513431': '昭觉县',
'513432': '喜德县',
'513433': '冕宁县',
'513434': '越西县',
'513435': '甘洛县',
'513436': '美姑县',
'513437': '雷波县',
'520000': '贵州省',
'520100': '贵阳市',
'520102': '南明区',
'520103': '云岩区',
'520111': '花溪区',
'520112': '乌当区',
'520113': '白云区',
'520115': '观山湖区',
'520121': '开阳县',
'520122': '息烽县',
'520123': '修文县',
'520181': '清镇市',
'520200': '六盘水市',
'520201': '钟山区',
'520203': '六枝特区',
'520221': '水城县',
'520222': '盘县',
'520300': '遵义市',
'520302': '红花岗区',
'520303': '汇川区',
'520304': '播州区',
'520322': '桐梓县',
'520323': '绥阳县',
'520324': '正安县',
'520325': '道真仡佬族苗族自治县',
'520326': '务川仡佬族苗族自治县',
'520327': '凤冈县',
'520328': '湄潭县',
'520329': '余庆县',
'520330': '习水县',
'520381': '赤水市',
'520382': '仁怀市',
'520400': '安顺市',
'520402': '西秀区',
'520403': '平坝区',
'520422': '普定县',
'520423': '镇宁布依族苗族自治县',
'520424': '关岭布依族苗族自治县',
'520425': '紫云苗族布依族自治县',
'520500': '毕节市',
'520502': '七星关区',
'520521': '大方县',
'520522': '黔西县',
'520523': '金沙县',
'520524': '织金县',
'520525': '纳雍县',
'520526': '威宁彝族回族苗族自治县',
'520527': '赫章县',
'520600': '铜仁市',
'520602': '碧江区',
'520603': '万山区',
'520621': '江口县',
'520622': '玉屏侗族自治县',
'520623': '石阡县',
'520624': '思南县',
'520625': '印江土家族苗族自治县',
'520626': '德江县',
'520627': '沿河土家族自治县',
'520628': '松桃苗族自治县',
'522300': '黔西南布依族苗族自治州',
'522301': '兴义市',
'522322': '兴仁县',
'522323': '普安县',
'522324': '晴隆县',
'522325': '贞丰县',
'522326': '望谟县',
'522327': '册亨县',
'522328': '安龙县',
'522600': '黔东南苗族侗族自治州',
'522601': '凯里市',
'522622': '黄平县',
'522623': '施秉县',
'522624': '三穗县',
'522625': '镇远县',
'522626': '岑巩县',
'522627': '天柱县',
'522628': '锦屏县',
'522629': '剑河县',
'522630': '台江县',
'522631': '黎平县',
'522632': '榕江县',
'522633': '从江县',
'522634': '雷山县',
'522635': '麻江县',
'522636': '丹寨县',
'522700': '黔南布依族苗族自治州',
'522701': '都匀市',
'522702': '福泉市',
'522722': '荔波县',
'522723': '贵定县',
'522725': '瓮安县',
'522726': '独山县',
'522727': '平塘县',
'522728': '罗甸县',
'522729': '长顺县',
'522730': '龙里县',
'522731': '惠水县',
'522732': '三都水族自治县',
'530000': '云南省',
'530100': '昆明市',
'530102': '五华区',
'530103': '盘龙区',
'530111': '官渡区',
'530112': '西山区',
'530113': '东川区',
'530114': '呈贡区',
'530115': '晋宁区',
'530124': '富民县',
'530125': '宜良县',
'530126': '石林彝族自治县',
'530127': '嵩明县',
'530128': '禄劝彝族苗族自治县',
'530129': '寻甸回族彝族自治县',
'530181': '安宁市',
'530300': '曲靖市',
'530302': '麒麟区',
'530303': '沾益区',
'530321': '马龙县',
'530322': '陆良县',
'530323': '师宗县',
'530324': '罗平县',
'530325': '富源县',
'530326': '会泽县',
'530381': '宣威市',
'530400': '玉溪市',
'530402': '红塔区',
'530403': '江川区',
'530422': '澄江县',
'530423': '通海县',
'530424': '华宁县',
'530425': '易门县',
'530426': '峨山彝族自治县',
'530427': '新平彝族傣族自治县',
'530428': '元江哈尼族彝族傣族自治县',
'530500': '保山市',
'530502': '隆阳区',
'530521': '施甸县',
'530523': '龙陵县',
'530524': '昌宁县',
'530581': '腾冲市',
'530600': '昭通市',
'530602': '昭阳区',
'530621': '鲁甸县',
'530622': '巧家县',
'530623': '盐津县',
'530624': '大关县',
'530625': '永善县',
'530626': '绥江县',
'530627': '镇雄县',
'530628': '彝良县',
'530629': '威信县',
'530630': '水富县',
'530700': '丽江市',
'530702': '古城区',
'530721': '玉龙纳西族自治县',
'530722': '永胜县',
'530723': '华坪县',
'530724': '宁蒗彝族自治县',
'530800': '普洱市',
'530802': '思茅区',
'530821': '宁洱哈尼族彝族自治县',
'530822': '墨江哈尼族自治县',
'530823': '景东彝族自治县',
'530824': '景谷傣族彝族自治县',
'530825': '镇沅彝族哈尼族拉祜族自治县',
'530826': '江城哈尼族彝族自治县',
'530827': '孟连傣族拉祜族佤族自治县',
'530828': '澜沧拉祜族自治县',
'530829': '西盟佤族自治县',
'530900': '临沧市',
'530902': '临翔区',
'530921': '凤庆县',
'530922': '云县',
'530923': '永德县',
'530924': '镇康县',
'530925': '双江拉祜族佤族布朗族傣族自治县',
'530926': '耿马傣族佤族自治县',
'530927': '沧源佤族自治县',
'532300': '楚雄彝族自治州',
'532301': '楚雄市',
'532322': '双柏县',
'532323': '牟定县',
'532324': '南华县',
'532325': '姚安县',
'532326': '大姚县',
'532327': '永仁县',
'532328': '元谋县',
'532329': '武定县',
'532331': '禄丰县',
'532500': '红河哈尼族彝族自治州',
'532501': '个旧市',
'532502': '开远市',
'532503': '蒙自市',
'532504': '弥勒市',
'532523': '屏边苗族自治县',
'532524': '建水县',
'532525': '石屏县',
'532527': '泸西县',
'532528': '元阳县',
'532529': '红河县',
'532530': '金平苗族瑶族傣族自治县',
'532531': '绿春县',
'532532': '河口瑶族自治县',
'532600': '文山壮族苗族自治州',
'532601': '文山市',
'532622': '砚山县',
'532623': '西畴县',
'532624': '麻栗坡县',
'532625': '马关县',
'532626': '丘北县',
'532627': '广南县',
'532628': '富宁县',
'532800': '西双版纳傣族自治州',
'532801': '景洪市',
'532822': '勐海县',
'532823': '勐腊县',
'532900': '大理白族自治州',
'532901': '大理市',
'532922': '漾濞彝族自治县',
'532923': '祥云县',
'532924': '宾川县',
'532925': '弥渡县',
'532926': '南涧彝族自治县',
'532927': '巍山彝族回族自治县',
'532928': '永平县',
'532929': '云龙县',
'532930': '洱源县',
'532931': '剑川县',
'532932': '鹤庆县',
'533100': '德宏傣族景颇族自治州',
'533102': '瑞丽市',
'533103': '芒市',
'533122': '梁河县',
'533123': '盈江县',
'533124': '陇川县',
'533300': '怒江傈僳族自治州',
'533301': '泸水市',
'533323': '福贡县',
'533324': '贡山独龙族怒族自治县',
'533325': '兰坪白族普米族自治县',
'533400': '迪庆藏族自治州',
'533401': '香格里拉市',
'533422': '德钦县',
'533423': '维西傈僳族自治县',
'540000': '西藏自治区',
'540100': '拉萨市',
'540102': '城关区',
'540103': '堆龙德庆区',
'540121': '林周县',
'540122': '当雄县',
'540123': '尼木县',
'540124': '曲水县',
'540126': '达孜县',
'540127': '墨竹工卡县',
'540200': '日喀则市',
'540202': '桑珠孜区',
'540221': '南木林县',
'540222': '江孜县',
'540223': '定日县',
'540224': '萨迦县',
'540225': '拉孜县',
'540226': '昂仁县',
'540227': '谢通门县',
'540228': '白朗县',
'540229': '仁布县',
'540230': '康马县',
'540231': '定结县',
'540232': '仲巴县',
'540233': '亚东县',
'540234': '吉隆县',
'540235': '聂拉木县',
'540236': '萨嘎县',
'540237': '岗巴县',
'540300': '昌都市',
'540302': '卡若区',
'540321': '江达县',
'540322': '贡觉县',
'540323': '类乌齐县',
'540324': '丁青县',
'540325': '察雅县',
'540326': '八宿县',
'540327': '左贡县',
'540328': '芒康县',
'540329': '洛隆县',
'540330': '边坝县',
'540400': '林芝市',
'540402': '巴宜区',
'540421': '工布江达县',
'540422': '米林县',
'540423': '墨脱县',
'540424': '波密县',
'540425': '察隅县',
'540426': '朗县',
'540500': '山南市',
'540502': '乃东区',
'540521': '扎囊县',
'540522': '贡嘎县',
'540523': '桑日县',
'540524': '琼结县',
'540525': '曲松县',
'540526': '措美县',
'540527': '洛扎县',
'540528': '加查县',
'540529': '隆子县',
'540530': '错那县',
'540531': '浪卡子县',
'542400': '那曲地区',
'542421': '那曲县',
'542422': '嘉黎县',
'542423': '比如县',
'542424': '聂荣县',
'542425': '安多县',
'542426': '申扎县',
'542427': '索县',
'542428': '班戈县',
'542429': '巴青县',
'542430': '尼玛县',
'542431': '双湖县',
'542500': '阿里地区',
'542521': '普兰县',
'542522': '札达县',
'542523': '噶尔县',
'542524': '日土县',
'542525': '革吉县',
'542526': '改则县',
'542527': '措勤县',
'610000': '陕西省',
'610100': '西安市',
'610102': '新城区',
'610103': '碑林区',
'610104': '莲湖区',
'610111': '灞桥区',
'610112': '未央区',
'610113': '雁塔区',
'610114': '阎良区',
'610115': '临潼区',
'610116': '长安区',
'610117': '高陵区',
'610118': '鄠邑区',
'610122': '蓝田县',
'610124': '周至县',
'610200': '铜川市',
'610202': '王益区',
'610203': '印台区',
'610204': '耀州区',
'610222': '宜君县',
'610300': '宝鸡市',
'610302': '渭滨区',
'610303': '金台区',
'610304': '陈仓区',
'610322': '凤翔县',
'610323': '岐山县',
'610324': '扶风县',
'610326': '眉县',
'610327': '陇县',
'610328': '千阳县',
'610329': '麟游县',
'610330': '凤县',
'610331': '太白县',
'610400': '咸阳市',
'610402': '秦都区',
'610403': '杨陵区',
'610404': '渭城区',
'610422': '三原县',
'610423': '泾阳县',
'610424': '乾县',
'610425': '礼泉县',
'610426': '永寿县',
'610427': '彬县',
'610428': '长武县',
'610429': '旬邑县',
'610430': '淳化县',
'610431': '武功县',
'610481': '兴平市',
'610500': '渭南市',
'610502': '临渭区',
'610503': '华州区',
'610522': '潼关县',
'610523': '大荔县',
'610524': '合阳县',
'610525': '澄城县',
'610526': '蒲城县',
'610527': '白水县',
'610528': '富平县',
'610581': '韩城市',
'610582': '华阴市',
'610600': '延安市',
'610602': '宝塔区',
'610603': '安塞区',
'610621': '延长县',
'610622': '延川县',
'610623': '子长县',
'610625': '志丹县',
'610626': '吴起县',
'610627': '甘泉县',
'610628': '富县',
'610629': '洛川县',
'610630': '宜川县',
'610631': '黄龙县',
'610632': '黄陵县',
'610700': '汉中市',
'610702': '汉台区',
'610721': '南郑县',
'610722': '城固县',
'610723': '洋县',
'610724': '西乡县',
'610725': '勉县',
'610726': '宁强县',
'610727': '略阳县',
'610728': '镇巴县',
'610729': '留坝县',
'610730': '佛坪县',
'610800': '榆林市',
'610802': '榆阳区',
'610803': '横山区',
'610822': '府谷县',
'610824': '靖边县',
'610825': '定边县',
'610826': '绥德县',
'610827': '米脂县',
'610828': '佳县',
'610829': '吴堡县',
'610830': '清涧县',
'610831': '子洲县',
'610881': '神木市',
'610900': '安康市',
'610902': '汉滨区',
'610921': '汉阴县',
'610922': '石泉县',
'610923': '宁陕县',
'610924': '紫阳县',
'610925': '岚皋县',
'610926': '平利县',
'610927': '镇坪县',
'610928': '旬阳县',
'610929': '白河县',
'611000': '商洛市',
'611002': '商州区',
'611021': '洛南县',
'611022': '丹凤县',
'611023': '商南县',
'611024': '山阳县',
'611025': '镇安县',
'611026': '柞水县',
'620000': '甘肃省',
'620100': '兰州市',
'620102': '城关区',
'620103': '七里河区',
'620104': '西固区',
'620105': '安宁区',
'620111': '红古区',
'620121': '永登县',
'620122': '皋兰县',
'620123': '榆中县',
'620200': '嘉峪关市',
'620300': '金昌市',
'620302': '金川区',
'620321': '永昌县',
'620400': '白银市',
'620402': '白银区',
'620403': '平川区',
'620421': '靖远县',
'620422': '会宁县',
'620423': '景泰县',
'620500': '天水市',
'620502': '秦州区',
'620503': '麦积区',
'620521': '清水县',
'620522': '秦安县',
'620523': '甘谷县',
'620524': '武山县',
'620525': '张家川回族自治县',
'620600': '武威市',
'620602': '凉州区',
'620621': '民勤县',
'620622': '古浪县',
'620623': '天祝藏族自治县',
'620700': '张掖市',
'620702': '甘州区',
'620721': '肃南裕固族自治县',
'620722': '民乐县',
'620723': '临泽县',
'620724': '高台县',
'620725': '山丹县',
'620800': '平凉市',
'620802': '崆峒区',
'620821': '泾川县',
'620822': '灵台县',
'620823': '崇信县',
'620824': '华亭县',
'620825': '庄浪县',
'620826': '静宁县',
'620900': '酒泉市',
'620902': '肃州区',
'620921': '金塔县',
'620922': '瓜州县',
'620923': '肃北蒙古族自治县',
'620924': '阿克塞哈萨克族自治县',
'620981': '玉门市',
'620982': '敦煌市',
'621000': '庆阳市',
'621002': '西峰区',
'621021': '庆城县',
'621022': '环县',
'621023': '华池县',
'621024': '合水县',
'621025': '正宁县',
'621026': '宁县',
'621027': '镇原县',
'621100': '定西市',
'621102': '安定区',
'621121': '通渭县',
'621122': '陇西县',
'621123': '渭源县',
'621124': '临洮县',
'621125': '漳县',
'621126': '岷县',
'621200': '陇南市',
'621202': '武都区',
'621221': '成县',
'621222': '文县',
'621223': '宕昌县',
'621224': '康县',
'621225': '西和县',
'621226': '礼县',
'621227': '徽县',
'621228': '两当县',
'622900': '临夏回族自治州',
'622901': '临夏市',
'622921': '临夏县',
'622922': '康乐县',
'622923': '永靖县',
'622924': '广河县',
'622925': '和政县',
'622926': '东乡族自治县',
'622927': '积石山保安族东乡族撒拉族自治县',
'623000': '甘南藏族自治州',
'623001': '合作市',
'623021': '临潭县',
'623022': '卓尼县',
'623023': '舟曲县',
'623024': '迭部县',
'623025': '玛曲县',
'623026': '碌曲县',
'623027': '夏河县',
'630000': '青海省',
'630100': '西宁市',
'630102': '城东区',
'630103': '城中区',
'630104': '城西区',
'630105': '城北区',
'630121': '大通回族土族自治县',
'630122': '湟中县',
'630123': '湟源县',
'630200': '海东市',
'630202': '乐都区',
'630203': '平安区',
'630222': '民和回族土族自治县',
'630223': '互助土族自治县',
'630224': '化隆回族自治县',
'630225': '循化撒拉族自治县',
'632200': '海北藏族自治州',
'632221': '门源回族自治县',
'632222': '祁连县',
'632223': '海晏县',
'632224': '刚察县',
'632300': '黄南藏族自治州',
'632321': '同仁县',
'632322': '尖扎县',
'632323': '泽库县',
'632324': '河南蒙古族自治县',
'632500': '海南藏族自治州',
'632521': '共和县',
'632522': '同德县',
'632523': '贵德县',
'632524': '兴海县',
'632525': '贵南县',
'632600': '果洛藏族自治州',
'632621': '玛沁县',
'632622': '班玛县',
'632623': '甘德县',
'632624': '达日县',
'632625': '久治县',
'632626': '玛多县',
'632700': '玉树藏族自治州',
'632701': '玉树市',
'632722': '杂多县',
'632723': '称多县',
'632724': '治多县',
'632725': '囊谦县',
'632726': '曲麻莱县',
'632800': '海西蒙古族藏族自治州',
'632801': '格尔木市',
'632802': '德令哈市',
'632821': '乌兰县',
'632822': '都兰县',
'632823': '天峻县',
'640000': '宁夏回族自治区',
'640100': '银川市',
'640104': '兴庆区',
'640105': '西夏区',
'640106': '金凤区',
'640121': '永宁县',
'640122': '贺兰县',
'640181': '灵武市',
'640200': '石嘴山市',
'640202': '大武口区',
'640205': '惠农区',
'640221': '平罗县',
'640300': '吴忠市',
'640302': '利通区',
'640303': '红寺堡区',
'640323': '盐池县',
'640324': '同心县',
'640381': '青铜峡市',
'640400': '固原市',
'640402': '原州区',
'640422': '西吉县',
'640423': '隆德县',
'640424': '泾源县',
'640425': '彭阳县',
'640500': '中卫市',
'640502': '沙坡头区',
'640521': '中宁县',
'640522': '海原县',
'650000': '新疆维吾尔自治区',
'650100': '乌鲁木齐市',
'650102': '天山区',
'650103': '沙依巴克区',
'650104': '新市区',
'650105': '水磨沟区',
'650106': '头屯河区',
'650107': '达坂城区',
'650109': '米东区',
'650121': '乌鲁木齐县',
'650200': '克拉玛依市',
'650202': '独山子区',
'650203': '克拉玛依区',
'650204': '白碱滩区',
'650205': '乌尔禾区',
'650400': '吐鲁番市',
'650402': '高昌区',
'650421': '鄯善县',
'650422': '托克逊县',
'650500': '哈密市',
'650502': '伊州区',
'650521': '巴里坤哈萨克自治县',
'650522': '伊吾县',
'652300': '昌吉回族自治州',
'652301': '昌吉市',
'652302': '阜康市',
'652323': '呼图壁县',
'652324': '玛纳斯县',
'652325': '奇台县',
'652327': '吉木萨尔县',
'652328': '木垒哈萨克自治县',
'652700': '博尔塔拉蒙古自治州',
'652701': '博乐市',
'652702': '阿拉山口市',
'652722': '精河县',
'652723': '温泉县',
'652800': '巴音郭楞蒙古自治州',
'652801': '库尔勒市',
'652822': '轮台县',
'652823': '尉犁县',
'652824': '若羌县',
'652825': '且末县',
'652826': '焉耆回族自治县',
'652827': '和静县',
'652828': '和硕县',
'652829': '博湖县',
'652900': '阿克苏地区',
'652901': '阿克苏市',
'652922': '温宿县',
'652923': '库车县',
'652924': '沙雅县',
'652925': '新和县',
'652926': '拜城县',
'652927': '乌什县',
'652928': '阿瓦提县',
'652929': '柯坪县',
'653000': '克孜勒苏柯尔克孜自治州',
'653001': '阿图什市',
'653022': '阿克陶县',
'653023': '阿合奇县',
'653024': '乌恰县',
'653100': '喀什地区',
'653101': '喀什市',
'653121': '疏附县',
'653122': '疏勒县',
'653123': '英吉沙县',
'653124': '泽普县',
'653125': '莎车县',
'653126': '叶城县',
'653127': '麦盖提县',
'653128': '岳普湖县',
'653129': '伽师县',
'653130': '巴楚县',
'653131': '塔什库尔干塔吉克自治县',
'653200': '和田地区',
'653201': '和田市',
'653221': '和田县',
'653222': '墨玉县',
'653223': '皮山县',
'653224': '洛浦县',
'653225': '策勒县',
'653226': '于田县',
'653227': '民丰县',
'654000': '伊犁哈萨克自治州',
'654002': '伊宁市',
'654003': '奎屯市',
'654004': '霍尔果斯市',
'654021': '伊宁县',
'654022': '察布查尔锡伯自治县',
'654023': '霍城县',
'654024': '巩留县',
'654025': '新源县',
'654026': '昭苏县',
'654027': '特克斯县',
'654028': '尼勒克县',
'654200': '塔城地区',
'654201': '塔城市',
'654202': '乌苏市',
'654221': '额敏县',
'654223': '沙湾县',
'654224': '托里县',
'654225': '裕民县',
'654226': '和布克赛尔蒙古自治县',
'654300': '阿勒泰地区',
'654301': '阿勒泰市',
'654321': '布尔津县',
'654322': '富蕴县',
'654323': '福海县',
'654324': '哈巴河县',
'654325': '青河县',
'654326': '吉木乃县',
'659001': '石河子市',
'659002': '阿拉尔市',
'659003': '图木舒克市',
'659004': '五家渠市',
'659005': '北屯市',
'659006': '铁门关市',
'659007': '双河市',
'659008': '可克达拉市',
'659009': '昆玉市',
'710000': '台湾省',
'810000': '香港特别行政区',
'820000': '澳门特别行政区',
}
define(['aloha/jquery'], function(jQuery) {
var $ = jQuery;
/*
* jsTree 1.0-rc3
* http://jstree.com/
*
* Copyright (c) 2010 Ivan Bozhanov (vakata.com)
*
* Licensed same as jquery - under the terms of either the MIT License or the GPL Version 2 License
* http://www.opensource.org/licenses/mit-license.php
* http://www.gnu.org/licenses/gpl.html
*
 * $Date: 2011-02-09 01:17:14 +0200 (Wed, 09 Feb 2011) $
* $Revision: 236 $
*/
/*jslint browser: true, onevar: true, undef: true, bitwise: true, strict: true */
/*global window : false, clearInterval: false, clearTimeout: false, document: false, setInterval: false, setTimeout: false, jQuery: false, navigator: false, XSLTProcessor: false, DOMParser: false, XMLSerializer: false*/
// top wrapper to prevent multiple inclusion (is this OK?)
(function () { if(jQuery && jQuery.jstree) { return; }
var is_ie6 = false, is_ie7 = false, is_ff2 = false;
/*
* jsTree core
*/
(function ($) {
// Common functions not related to jsTree
// decided to move them to a `vakata` "namespace"
$.vakata = {};
// CSS related functions
$.vakata.css = {
get_css : function(rule_name, delete_flag, sheet) {
rule_name = rule_name.toLowerCase();
var css_rules = sheet.cssRules || sheet.rules,
j = 0;
do {
if(css_rules.length && j > css_rules.length + 5) { return false; }
if(css_rules[j].selectorText && css_rules[j].selectorText.toLowerCase() == rule_name) {
if(delete_flag === true) {
if(sheet.removeRule) { sheet.removeRule(j); }
if(sheet.deleteRule) { sheet.deleteRule(j); }
return true;
}
else { return css_rules[j]; }
}
}
while (css_rules[++j]);
return false;
},
add_css : function(rule_name, sheet) {
if($.jstree.css.get_css(rule_name, false, sheet)) { return false; }
if(sheet.insertRule) { sheet.insertRule(rule_name + ' { }', 0); } else { sheet.addRule(rule_name, null, 0); }
return $.vakata.css.get_css(rule_name);
},
remove_css : function(rule_name, sheet) {
return $.vakata.css.get_css(rule_name, true, sheet);
},
add_sheet : function(opts) {
var tmp = false, is_new = true;
if(opts.str) {
if(opts.title) { tmp = $("style[id='" + opts.title + "-stylesheet']")[0]; }
if(tmp) { is_new = false; }
else {
tmp = document.createElement("style");
tmp.setAttribute('type',"text/css");
if(opts.title) { tmp.setAttribute("id", opts.title + "-stylesheet"); }
}
if(tmp.styleSheet) {
if(is_new) {
document.getElementsByTagName("head")[0].appendChild(tmp);
tmp.styleSheet.cssText = opts.str;
}
else {
tmp.styleSheet.cssText = tmp.styleSheet.cssText + " " + opts.str;
}
}
else {
tmp.appendChild(document.createTextNode(opts.str));
document.getElementsByTagName("head")[0].appendChild(tmp);
}
return tmp.sheet || tmp.styleSheet;
}
if(opts.url) {
if(document.createStyleSheet) {
try { tmp = document.createStyleSheet(opts.url); } catch (e) { }
}
else {
tmp = document.createElement('link');
tmp.rel = 'stylesheet';
tmp.type = 'text/css';
tmp.media = "all";
tmp.href = opts.url;
document.getElementsByTagName("head")[0].appendChild(tmp);
return tmp.styleSheet;
}
}
}
};
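	// Usage sketch for the helper above (selector and rule text are illustrative,
	// not from the original docs): add_sheet returns the created sheet object, so
	//   var sheet = $.vakata.css.add_sheet({ str : ".demo a { color:red; }", title : "demo" });
	//   $.vakata.css.get_css(".demo a", false, sheet);  // look the rule up again
	//   $.vakata.css.remove_css(".demo a", sheet);      // or delete it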
// private variables
var instances = [], // instance array (used by $.jstree.reference/create/focused)
focused_instance = -1, // the index in the instance array of the currently focused instance
plugins = {}, // list of included plugins
prepared_move = {}; // for the move_node function
// jQuery plugin wrapper (thanks to jquery UI widget function)
$.fn.jstree = function (settings) {
var isMethodCall = (typeof settings == 'string'), // is this a method call like $().jstree("open_node")
args = Array.prototype.slice.call(arguments, 1),
returnValue = this;
// if a method call execute the method on all selected instances
if(isMethodCall) {
if(settings.substring(0, 1) == '_') { return returnValue; }
this.each(function() {
var instance = instances[$.data(this, "jstree-instance-id")],
methodValue = (instance && $.isFunction(instance[settings])) ? instance[settings].apply(instance, args) : instance;
if(typeof methodValue !== "undefined" && (settings.indexOf("is_") === 0 || (methodValue !== true && methodValue !== false))) { returnValue = methodValue; return false; }
});
}
else {
this.each(function() {
// extend settings and allow for multiple hashes and $.data
var instance_id = $.data(this, "jstree-instance-id"),
a = [],
b = settings ? $.extend({}, true, settings) : {},
c = $(this),
s = false,
t = [];
a = a.concat(args);
if(c.data("jstree")) { a.push(c.data("jstree")); }
b = a.length ? $.extend.apply(null, [true, b].concat(a)) : b;
// if an instance already exists, destroy it first
if(typeof instance_id !== "undefined" && instances[instance_id]) { instances[instance_id].destroy(); }
// push a new empty object to the instances array
instance_id = parseInt(instances.push({}),10) - 1;
// store the jstree instance id to the container element
$.data(this, "jstree-instance-id", instance_id);
// clean up all plugins
b.plugins = $.isArray(b.plugins) ? b.plugins : $.jstree.defaults.plugins.slice();
b.plugins.unshift("core");
// only unique plugins
b.plugins = b.plugins.sort().join(",,").replace(/(,|^)([^,]+)(,,\2)+(,|$)/g,"$1$2$4").replace(/,,+/g,",").replace(/,$/,"").split(",");
// extend defaults with passed data
s = $.extend(true, {}, $.jstree.defaults, b);
s.plugins = b.plugins;
$.each(plugins, function (i, val) {
if($.inArray(i, s.plugins) === -1) { s[i] = null; delete s[i]; }
else { t.push(i); }
});
s.plugins = t;
// push the new object to the instances array (at the same time set the default classes to the container) and init
instances[instance_id] = new $.jstree._instance(instance_id, $(this).addClass("jstree jstree-" + instance_id), s);
// init all activated plugins for this instance
$.each(instances[instance_id]._get_settings().plugins, function (i, val) { instances[instance_id].data[val] = {}; });
$.each(instances[instance_id]._get_settings().plugins, function (i, val) { if(plugins[val]) { plugins[val].__init.apply(instances[instance_id]); } });
// initialize the instance
setTimeout(function() { instances[instance_id].init(); }, 0);
});
}
// return the jquery selection (or if it was a method call that returned a value - the returned value)
return returnValue;
};
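	// Illustrative usage of the wrapper above (element id and node id are hypothetical):
	//   $("#tree").jstree({ plugins : ["ui"], core : { animation : 250 } }); // create an instance
	//   $("#tree").jstree("open_node", "#some_node");                        // later: a method call routed to it
	// Method names starting with "_" are private and return the selection unchanged (see the guard above).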
// object to store exposed functions and objects
$.jstree = {
defaults : {
plugins : []
},
_focused : function () { return instances[focused_instance] || null; },
_reference : function (needle) {
// get by instance id
if(instances[needle]) { return instances[needle]; }
			// get by DOM (if still no luck, return null)
var o = $(needle);
if(!o.length && typeof needle === "string") { o = $("#" + needle); }
if(!o.length) { return null; }
return instances[o.closest(".jstree").data("jstree-instance-id")] || null;
},
_instance : function (index, container, settings) {
// for plugins to store data in
this.data = { core : {} };
this.get_settings = function () { return $.extend(true, {}, settings); };
this._get_settings = function () { return settings; };
this.get_index = function () { return index; };
this.get_container = function () { return container; };
this.get_container_ul = function () { return container.children("ul:eq(0)"); };
this._set_settings = function (s) {
settings = $.extend(true, {}, settings, s);
};
},
_fn : { },
plugin : function (pname, pdata) {
pdata = $.extend({}, {
__init : $.noop,
__destroy : $.noop,
_fn : {},
defaults : false
}, pdata);
plugins[pname] = pdata;
$.jstree.defaults[pname] = pdata.defaults;
$.each(pdata._fn, function (i, val) {
val.plugin = pname;
val.old = $.jstree._fn[i];
$.jstree._fn[i] = function () {
var rslt,
func = val,
args = Array.prototype.slice.call(arguments),
evnt = new $.Event("before.jstree"),
rlbk = false;
if(this.data.core.locked === true && i !== "unlock" && i !== "is_locked") { return; }
// Check if function belongs to the included plugins of this instance
do {
if(func && func.plugin && $.inArray(func.plugin, this._get_settings().plugins) !== -1) { break; }
func = func.old;
} while(func);
if(!func) { return; }
// context and function to trigger events, then finally call the function
if(i.indexOf("_") === 0) {
rslt = func.apply(this, args);
}
else {
rslt = this.get_container().triggerHandler(evnt, { "func" : i, "inst" : this, "args" : args, "plugin" : func.plugin });
if(rslt === false) { return; }
if(typeof rslt !== "undefined") { args = rslt; }
rslt = func.apply(
$.extend({}, this, {
__callback : function (data) {
this.get_container().triggerHandler( i + '.jstree', { "inst" : this, "args" : args, "rslt" : data, "rlbk" : rlbk });
},
__rollback : function () {
rlbk = this.get_rollback();
return rlbk;
},
__call_old : function (replace_arguments) {
return func.old.apply(this, (replace_arguments ? Array.prototype.slice.call(arguments, 1) : args ) );
}
}), args);
}
// return the result
return rslt;
};
$.jstree._fn[i].old = val.old;
$.jstree._fn[i].plugin = pname;
});
},
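		// Minimal registration sketch for the plugin() hook above (the plugin name
		// and its members are made up for illustration):
		//   $.jstree.plugin("noop", {
		//       __init   : function () { /* bind events on this.get_container() here */ },
		//       defaults : { enabled : true },
		//       _fn      : {
		//           noop_state : function () { return this._get_settings().noop.enabled; }
		//       }
		//   });
		// Each function in _fn is wrapped so that a cancelable "before.jstree" event
		// fires first and a "noop_state.jstree" event fires with the result.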
rollback : function (rb) {
if(rb) {
if(!$.isArray(rb)) { rb = [ rb ]; }
$.each(rb, function (i, val) {
instances[val.i].set_rollback(val.h, val.d);
});
}
}
};
// set the prototype for all instances
$.jstree._fn = $.jstree._instance.prototype = {};
// load the css when DOM is ready
$(function() {
// code is copied from jQuery ($.browser is deprecated + there is a bug in IE)
var u = navigator.userAgent.toLowerCase(),
v = (u.match( /.+?(?:rv|it|ra|ie)[\/: ]([\d.]+)/ ) || [0,'0'])[1],
css_string = '' +
'.jstree ul, .jstree li { display:block; margin:0 0 0 0; padding:0 0 0 0; list-style-type:none; } ' +
'.jstree li { display:block; min-height:18px; line-height:18px; white-space:nowrap; margin-left:18px; min-width:18px; } ' +
'.jstree-rtl li { margin-left:0; margin-right:18px; } ' +
'.jstree > ul > li { margin-left:0px; } ' +
'.jstree-rtl > ul > li { margin-right:0px; } ' +
'.jstree ins { display:inline-block; text-decoration:none; width:18px; height:18px; margin:0 0 0 0; padding:0; } ' +
'.jstree a { display:inline-block; line-height:16px; height:16px; color:black; white-space:nowrap; text-decoration:none; padding:1px 2px; margin:0; } ' +
'.jstree a:focus { outline: none; } ' +
'.jstree a > ins { height:16px; width:16px; } ' +
'.jstree a > .jstree-icon { margin-right:3px; } ' +
'.jstree-rtl a > .jstree-icon { margin-left:3px; margin-right:0; } ' +
'li.jstree-open > ul { display:block; } ' +
'li.jstree-closed > ul { display:none; } ';
// Correct IE 6 (does not support the > CSS selector)
if(/msie/.test(u) && parseInt(v, 10) == 6) {
is_ie6 = true;
// fix image flicker and lack of caching
try {
document.execCommand("BackgroundImageCache", false, true);
} catch (err) { }
css_string += '' +
'.jstree li { height:18px; margin-left:0; margin-right:0; } ' +
'.jstree li li { margin-left:18px; } ' +
'.jstree-rtl li li { margin-left:0px; margin-right:18px; } ' +
'li.jstree-open ul { display:block; } ' +
'li.jstree-closed ul { display:none !important; } ' +
'.jstree li a { display:inline; border-width:0 !important; padding:0px 2px !important; } ' +
'.jstree li a ins { height:16px; width:16px; margin-right:3px; } ' +
'.jstree-rtl li a ins { margin-right:0px; margin-left:3px; } ';
}
		// Correct IE 7 (shifts anchor nodes on hover)
if(/msie/.test(u) && parseInt(v, 10) == 7) {
is_ie7 = true;
css_string += '.jstree li a { border-width:0 !important; padding:0px 2px !important; } ';
}
// correct ff2 lack of display:inline-block
		if(!/compatible/.test(u) && /mozilla/.test(u) && parseFloat(v) < 1.9) {
is_ff2 = true;
css_string += '' +
'.jstree ins { display:-moz-inline-box; } ' +
'.jstree li { line-height:12px; } ' + // WHY??
'.jstree a { display:-moz-inline-box; } ' +
'.jstree .jstree-no-icons .jstree-checkbox { display:-moz-inline-stack !important; } ';
/* this shouldn't be here as it is theme specific */
}
// the default stylesheet
$.vakata.css.add_sheet({ str : css_string, title : "jstree" });
});
// core functions (open, close, create, update, delete)
$.jstree.plugin("core", {
__init : function () {
this.data.core.locked = false;
this.data.core.to_open = this.get_settings().core.initially_open;
this.data.core.to_load = this.get_settings().core.initially_load;
},
defaults : {
html_titles : false,
animation : 500,
initially_open : [],
initially_load : [],
open_parents : true,
notify_plugins : true,
rtl : false,
load_open : false,
strings : {
loading : "Loading ...",
new_node : "New node",
multiple_selection : "Multiple selection"
}
},
_fn : {
init : function () {
this.set_focus();
if(this._get_settings().core.rtl) {
this.get_container().addClass("jstree-rtl").css("direction", "rtl");
}
this.get_container().html("<ul><li class='jstree-last jstree-leaf'><ins> </ins><a class='jstree-loading' href='#'><ins class='jstree-icon'> </ins>" + this._get_string("loading") + "</a></li></ul>");
this.data.core.li_height = this.get_container_ul().find("li.jstree-closed, li.jstree-leaf").eq(0).height() || 18;
this.get_container()
.delegate("li > ins", "click.jstree", $.proxy(function (event) {
var trgt = $(event.target);
if(trgt.is("ins") && event.pageY - trgt.offset().top < this.data.core.li_height) { this.toggle_node(trgt); }
}, this))
.bind("mousedown.jstree", $.proxy(function () {
this.set_focus(); // This used to be setTimeout(set_focus,0) - why?
}, this))
.bind("dblclick.jstree", function (event) {
var sel;
if(document.selection && document.selection.empty) { document.selection.empty(); }
else {
if(window.getSelection) {
sel = window.getSelection();
try {
sel.removeAllRanges();
sel.collapse();
} catch (err) { }
}
}
});
if(this._get_settings().core.notify_plugins) {
this.get_container()
.bind("load_node.jstree", $.proxy(function (e, data) {
var o = this._get_node(data.rslt.obj),
t = this;
if(o === -1) { o = this.get_container_ul(); }
if(!o.length) { return; }
o.find("li").each(function () {
var th = $(this);
if(th.data("jstree")) {
$.each(th.data("jstree"), function (plugin, values) {
if(t.data[plugin] && $.isFunction(t["_" + plugin + "_notify"])) {
t["_" + plugin + "_notify"].call(t, th, values);
}
});
}
});
}, this));
}
if(this._get_settings().core.load_open) {
this.get_container()
.bind("load_node.jstree", $.proxy(function (e, data) {
var o = this._get_node(data.rslt.obj),
t = this;
if(o === -1) { o = this.get_container_ul(); }
if(!o.length) { return; }
o.find("li.jstree-open:not(:has(ul))").each(function () {
t.load_node(this, $.noop, $.noop);
});
}, this));
}
this.__callback();
this.load_node(-1, function () { this.loaded(); this.reload_nodes(); });
},
destroy : function () {
var i,
n = this.get_index(),
s = this._get_settings(),
_this = this;
$.each(s.plugins, function (i, val) {
try { plugins[val].__destroy.apply(_this); } catch(err) { }
});
this.__callback();
// set focus to another instance if this one is focused
if(this.is_focused()) {
for(i in instances) {
if(instances.hasOwnProperty(i) && i != n) {
instances[i].set_focus();
break;
}
}
}
// if no other instance found
if(n === focused_instance) { focused_instance = -1; }
				// remove all traces of jstree in the DOM (only the ones set using jstree*) and clean up all events
this.get_container()
.unbind(".jstree")
.undelegate(".jstree")
.removeData("jstree-instance-id")
.find("[class^='jstree']")
.andSelf()
.attr("class", function () { return this.className.replace(/jstree[^ ]*|$/ig,''); });
$(document)
.unbind(".jstree-" + n)
.undelegate(".jstree-" + n);
// remove the actual data
instances[n] = null;
delete instances[n];
},
_core_notify : function (n, data) {
if(data.opened) {
this.open_node(n, false, true);
}
},
lock : function () {
this.data.core.locked = true;
this.get_container().children("ul").addClass("jstree-locked").css("opacity","0.7");
this.__callback({});
},
unlock : function () {
this.data.core.locked = false;
this.get_container().children("ul").removeClass("jstree-locked").css("opacity","1");
this.__callback({});
},
is_locked : function () { return this.data.core.locked; },
save_opened : function () {
var _this = this;
this.data.core.to_open = [];
this.get_container_ul().find("li.jstree-open").each(function () {
if(this.id) { _this.data.core.to_open.push("#" + this.id.toString().replace(/^#/,"").replace(/\\\//g,"/").replace(/\//g,"\\\/").replace(/\\\./g,".").replace(/\./g,"\\.").replace(/\:/g,"\\:")); }
});
this.__callback(_this.data.core.to_open);
},
save_loaded : function () { },
reload_nodes : function (is_callback) {
var _this = this,
done = true,
current = [],
remaining = [];
if(!is_callback) {
this.data.core.reopen = false;
this.data.core.refreshing = true;
this.data.core.to_open = $.map($.makeArray(this.data.core.to_open), function (n) { return "#" + n.toString().replace(/^#/,"").replace(/\\\//g,"/").replace(/\//g,"\\\/").replace(/\\\./g,".").replace(/\./g,"\\.").replace(/\:/g,"\\:"); });
this.data.core.to_load = $.map($.makeArray(this.data.core.to_load), function (n) { return "#" + n.toString().replace(/^#/,"").replace(/\\\//g,"/").replace(/\//g,"\\\/").replace(/\\\./g,".").replace(/\./g,"\\.").replace(/\:/g,"\\:"); });
if(this.data.core.to_open.length) {
this.data.core.to_load = this.data.core.to_load.concat(this.data.core.to_open);
}
}
if(this.data.core.to_load.length) {
$.each(this.data.core.to_load, function (i, val) {
if(val == "#") { return true; }
if($(val).length) { current.push(val); }
else { remaining.push(val); }
});
if(current.length) {
this.data.core.to_load = remaining;
$.each(current, function (i, val) {
if(!_this._is_loaded(val)) {
_this.load_node(val, function () { _this.reload_nodes(true); }, function () { _this.reload_nodes(true); });
done = false;
}
});
}
}
if(this.data.core.to_open.length) {
$.each(this.data.core.to_open, function (i, val) {
_this.open_node(val, false, true);
});
}
if(done) {
					// TODO: find a more elegant approach to synchronizing returning requests
if(this.data.core.reopen) { clearTimeout(this.data.core.reopen); }
this.data.core.reopen = setTimeout(function () { _this.__callback({}, _this); }, 50);
this.data.core.refreshing = false;
this.reopen();
}
},
reopen : function () {
var _this = this;
if(this.data.core.to_open.length) {
$.each(this.data.core.to_open, function (i, val) {
_this.open_node(val, false, true);
});
}
this.__callback({});
},
refresh : function (obj) {
var _this = this;
this.save_opened();
if(!obj) { obj = -1; }
obj = this._get_node(obj);
if(!obj) { obj = -1; }
if(obj !== -1) { obj.children("UL").remove(); }
else { this.get_container_ul().empty(); }
this.load_node(obj, function () { _this.__callback({ "obj" : obj}); _this.reload_nodes(); });
},
// Dummy function to fire after the first load (so that there is a jstree.loaded event)
loaded : function () {
this.__callback();
},
// deal with focus
set_focus : function () {
if(this.is_focused()) { return; }
var f = $.jstree._focused();
if(f) { f.unset_focus(); }
this.get_container().addClass("jstree-focused");
focused_instance = this.get_index();
this.__callback();
},
is_focused : function () {
return focused_instance == this.get_index();
},
unset_focus : function () {
if(this.is_focused()) {
this.get_container().removeClass("jstree-focused");
focused_instance = -1;
}
this.__callback();
},
// traverse
_get_node : function (obj) {
var $obj = $(obj, this.get_container());
if($obj.is(".jstree") || obj == -1) { return -1; }
$obj = $obj.closest("li", this.get_container());
return $obj.length ? $obj : false;
},
_get_next : function (obj, strict) {
obj = this._get_node(obj);
if(obj === -1) { return this.get_container().find("> ul > li:first-child"); }
if(!obj.length) { return false; }
if(strict) { return (obj.nextAll("li").size() > 0) ? obj.nextAll("li:eq(0)") : false; }
if(obj.hasClass("jstree-open")) { return obj.find("li:eq(0)"); }
else if(obj.nextAll("li").size() > 0) { return obj.nextAll("li:eq(0)"); }
else { return obj.parentsUntil(".jstree","li").next("li").eq(0); }
},
_get_prev : function (obj, strict) {
obj = this._get_node(obj);
if(obj === -1) { return this.get_container().find("> ul > li:last-child"); }
if(!obj.length) { return false; }
if(strict) { return (obj.prevAll("li").length > 0) ? obj.prevAll("li:eq(0)") : false; }
if(obj.prev("li").length) {
obj = obj.prev("li").eq(0);
while(obj.hasClass("jstree-open")) { obj = obj.children("ul:eq(0)").children("li:last"); }
return obj;
}
else { var o = obj.parentsUntil(".jstree","li:eq(0)"); return o.length ? o : false; }
},
_get_parent : function (obj) {
obj = this._get_node(obj);
if(obj == -1 || !obj.length) { return false; }
var o = obj.parentsUntil(".jstree", "li:eq(0)");
return o.length ? o : -1;
},
_get_children : function (obj) {
obj = this._get_node(obj);
if(obj === -1) { return this.get_container().children("ul:eq(0)").children("li"); }
if(!obj.length) { return false; }
return obj.children("ul:eq(0)").children("li");
},
get_path : function (obj, id_mode) {
var p = [],
_this = this;
obj = this._get_node(obj);
if(obj === -1 || !obj || !obj.length) { return false; }
obj.parentsUntil(".jstree", "li").each(function () {
p.push( id_mode ? this.id : _this.get_text(this) );
});
p.reverse();
p.push( id_mode ? obj.attr("id") : this.get_text(obj) );
return p;
},
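			// Example for get_path (markup is hypothetical): with <li id="root"> containing
			// <li id="child">, get_path("#child") yields the node titles ["Root", "Child"],
			// while get_path("#child", true) yields the ids ["root", "child"].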
// string functions
_get_string : function (key) {
return this._get_settings().core.strings[key] || key;
},
is_open : function (obj) { obj = this._get_node(obj); return obj && obj !== -1 && obj.hasClass("jstree-open"); },
is_closed : function (obj) { obj = this._get_node(obj); return obj && obj !== -1 && obj.hasClass("jstree-closed"); },
is_leaf : function (obj) { obj = this._get_node(obj); return obj && obj !== -1 && obj.hasClass("jstree-leaf"); },
correct_state : function (obj) {
obj = this._get_node(obj);
if(!obj || obj === -1) { return false; }
obj.removeClass("jstree-closed jstree-open").addClass("jstree-leaf").children("ul").remove();
this.__callback({ "obj" : obj });
},
// open/close
open_node : function (obj, callback, skip_animation) {
obj = this._get_node(obj);
if(!obj.length) { return false; }
if(!obj.hasClass("jstree-closed")) { if(callback) { callback.call(); } return false; }
var s = skip_animation || is_ie6 ? 0 : this._get_settings().core.animation,
t = this;
if(!this._is_loaded(obj)) {
obj.children("a").addClass("jstree-loading");
this.load_node(obj, function () { t.open_node(obj, callback, skip_animation); }, callback);
}
else {
if(this._get_settings().core.open_parents) {
obj.parentsUntil(".jstree",".jstree-closed").each(function () {
t.open_node(this, false, true);
});
}
if(s) { obj.children("ul").css("display","none"); }
obj.removeClass("jstree-closed").addClass("jstree-open").children("a").removeClass("jstree-loading");
if(s) { obj.children("ul").stop(true, true).slideDown(s, function () { this.style.display = ""; t.after_open(obj); }); }
else { t.after_open(obj); }
this.__callback({ "obj" : obj });
if(callback) { callback.call(); }
}
},
after_open : function (obj) { this.__callback({ "obj" : obj }); },
close_node : function (obj, skip_animation) {
obj = this._get_node(obj);
var s = skip_animation || is_ie6 ? 0 : this._get_settings().core.animation,
t = this;
if(!obj.length || !obj.hasClass("jstree-open")) { return false; }
if(s) { obj.children("ul").attr("style","display:block !important"); }
obj.removeClass("jstree-open").addClass("jstree-closed");
if(s) { obj.children("ul").stop(true, true).slideUp(s, function () { this.style.display = ""; t.after_close(obj); }); }
else { t.after_close(obj); }
this.__callback({ "obj" : obj });
},
after_close : function (obj) { this.__callback({ "obj" : obj }); },
toggle_node : function (obj) {
obj = this._get_node(obj);
if(obj.hasClass("jstree-closed")) { return this.open_node(obj); }
if(obj.hasClass("jstree-open")) { return this.close_node(obj); }
},
open_all : function (obj, do_animation, original_obj) {
obj = obj ? this._get_node(obj) : -1;
if(!obj || obj === -1) { obj = this.get_container_ul(); }
if(original_obj) {
obj = obj.find("li.jstree-closed");
}
else {
original_obj = obj;
if(obj.is(".jstree-closed")) { obj = obj.find("li.jstree-closed").andSelf(); }
else { obj = obj.find("li.jstree-closed"); }
}
var _this = this;
obj.each(function () {
var __this = this;
if(!_this._is_loaded(this)) { _this.open_node(this, function() { _this.open_all(__this, do_animation, original_obj); }, !do_animation); }
else { _this.open_node(this, false, !do_animation); }
});
				// so that the callback is fired AFTER all nodes are open
if(original_obj.find('li.jstree-closed').length === 0) { this.__callback({ "obj" : original_obj }); }
},
close_all : function (obj, do_animation) {
var _this = this;
obj = obj ? this._get_node(obj) : this.get_container();
if(!obj || obj === -1) { obj = this.get_container_ul(); }
obj.find("li.jstree-open").andSelf().each(function () { _this.close_node(this, !do_animation); });
this.__callback({ "obj" : obj });
},
clean_node : function (obj) {
obj = obj && obj != -1 ? $(obj) : this.get_container_ul();
obj = obj.is("li") ? obj.find("li").andSelf() : obj.find("li");
obj.removeClass("jstree-last")
.filter("li:last-child").addClass("jstree-last").end()
.filter(":has(li)")
.not(".jstree-open").removeClass("jstree-leaf").addClass("jstree-closed");
obj.not(".jstree-open, .jstree-closed").addClass("jstree-leaf").children("ul").remove();
this.__callback({ "obj" : obj });
},
// rollback
get_rollback : function () {
this.__callback();
return { i : this.get_index(), h : this.get_container().children("ul").clone(true), d : this.data };
},
set_rollback : function (html, data) {
this.get_container().empty().append(html);
this.data = data;
this.__callback();
},
// Dummy functions to be overwritten by any datastore plugin included
load_node : function (obj, s_call, e_call) { this.__callback({ "obj" : obj }); },
_is_loaded : function (obj) { return true; },
// Basic operations: create
create_node : function (obj, position, js, callback, is_loaded) {
obj = this._get_node(obj);
position = typeof position === "undefined" ? "last" : position;
var d = $("<li />"),
s = this._get_settings().core,
tmp;
if(obj !== -1 && !obj.length) { return false; }
if(!is_loaded && !this._is_loaded(obj)) { this.load_node(obj, function () { this.create_node(obj, position, js, callback, true); }); return false; }
this.__rollback();
if(typeof js === "string") { js = { "data" : js }; }
if(!js) { js = {}; }
if(js.attr) { d.attr(js.attr); }
if(js.metadata) { d.data(js.metadata); }
if(js.state) { d.addClass("jstree-" + js.state); }
if(!js.data) { js.data = this._get_string("new_node"); }
if(!$.isArray(js.data)) { tmp = js.data; js.data = []; js.data.push(tmp); }
$.each(js.data, function (i, m) {
tmp = $("<a />");
if($.isFunction(m)) { m = m.call(this, js); }
if(typeof m == "string") { tmp.attr('href','#')[ s.html_titles ? "html" : "text" ](m); }
else {
if(!m.attr) { m.attr = {}; }
if(!m.attr.href) { m.attr.href = '#'; }
tmp.attr(m.attr)[ s.html_titles ? "html" : "text" ](m.title);
if(m.language) { tmp.addClass(m.language); }
}
tmp.prepend("<ins class='jstree-icon'> </ins>");
if(m.icon) {
if(m.icon.indexOf("/") === -1) { tmp.children("ins").addClass(m.icon); }
else { tmp.children("ins").css("background","url('" + m.icon + "') center center no-repeat"); }
}
d.append(tmp);
});
d.prepend("<ins class='jstree-icon'> </ins>");
if(obj === -1) {
obj = this.get_container();
if(position === "before") { position = "first"; }
if(position === "after") { position = "last"; }
}
switch(position) {
case "before": obj.before(d); tmp = this._get_parent(obj); break;
case "after" : obj.after(d); tmp = this._get_parent(obj); break;
case "inside":
case "first" :
if(!obj.children("ul").length) { obj.append("<ul />"); }
obj.children("ul").prepend(d);
tmp = obj;
break;
case "last":
if(!obj.children("ul").length) { obj.append("<ul />"); }
obj.children("ul").append(d);
tmp = obj;
break;
default:
if(!obj.children("ul").length) { obj.append("<ul />"); }
if(!position) { position = 0; }
tmp = obj.children("ul").children("li").eq(position);
if(tmp.length) { tmp.before(d); }
else { obj.children("ul").append(d); }
tmp = obj;
break;
}
if(tmp === -1 || tmp.get(0) === this.get_container().get(0)) { tmp = -1; }
this.clean_node(tmp);
this.__callback({ "obj" : d, "parent" : tmp });
if(callback) { callback.call(this, d); }
return d;
},
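			// Sketch of the object form accepted by create_node above (ids and titles are illustrative):
			//   inst.create_node(-1, "last", {
			//       data  : { title : "New child", icon : "folder" }, // or just a string
			//       attr  : { id : "node_new" },                      // attributes for the <li>
			//       state : "closed"                                  // or "open"
			//   }, function (new_li) { /* new_li is the freshly inserted jQuery node */ });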
// Basic operations: rename (deal with text)
get_text : function (obj) {
obj = this._get_node(obj);
if(!obj.length) { return false; }
var s = this._get_settings().core.html_titles;
obj = obj.children("a:eq(0)");
if(s) {
obj = obj.clone();
obj.children("INS").remove();
return obj.html();
}
else {
obj = obj.contents().filter(function() { return this.nodeType == 3; })[0];
return obj.nodeValue;
}
},
set_text : function (obj, val) {
obj = this._get_node(obj);
if(!obj.length) { return false; }
obj = obj.children("a:eq(0)");
if(this._get_settings().core.html_titles) {
var tmp = obj.children("INS").clone();
obj.html(val).prepend(tmp);
this.__callback({ "obj" : obj, "name" : val });
return true;
}
else {
obj = obj.contents().filter(function() { return this.nodeType == 3; })[0];
this.__callback({ "obj" : obj, "name" : val });
return (obj.nodeValue = val);
}
},
rename_node : function (obj, val) {
obj = this._get_node(obj);
this.__rollback();
if(obj && obj.length && this.set_text.apply(this, Array.prototype.slice.call(arguments))) { this.__callback({ "obj" : obj, "name" : val }); }
},
// Basic operations: deleting nodes
delete_node : function (obj) {
obj = this._get_node(obj);
if(!obj.length) { return false; }
this.__rollback();
var p = this._get_parent(obj), prev = $([]), t = this;
obj.each(function () {
prev = prev.add(t._get_prev(this));
});
obj = obj.detach();
if(p !== -1 && p.find("> ul > li").length === 0) {
p.removeClass("jstree-open jstree-closed").addClass("jstree-leaf");
}
this.clean_node(p);
this.__callback({ "obj" : obj, "prev" : prev, "parent" : p });
return obj;
},
prepare_move : function (o, r, pos, cb, is_cb) {
var p = {};
p.ot = $.jstree._reference(o) || this;
p.o = p.ot._get_node(o);
				p.r = r === -1 ? -1 : this._get_node(r);
p.p = (typeof pos === "undefined" || pos === false) ? "last" : pos; // TODO: move to a setting
if(!is_cb && prepared_move.o && prepared_move.o[0] === p.o[0] && prepared_move.r[0] === p.r[0] && prepared_move.p === p.p) {
this.__callback(prepared_move);
if(cb) { cb.call(this, prepared_move); }
return;
}
p.ot = $.jstree._reference(p.o) || this;
p.rt = $.jstree._reference(p.r) || this; // r === -1 ? p.ot : $.jstree._reference(p.r) || this
if(p.r === -1 || !p.r) {
p.cr = -1;
switch(p.p) {
case "first":
case "before":
case "inside":
p.cp = 0;
break;
case "after":
case "last":
p.cp = p.rt.get_container().find(" > ul > li").length;
break;
default:
p.cp = p.p;
break;
}
}
else {
if(!/^(before|after)$/.test(p.p) && !this._is_loaded(p.r)) {
return this.load_node(p.r, function () { this.prepare_move(o, r, pos, cb, true); });
}
switch(p.p) {
case "before":
p.cp = p.r.index();
p.cr = p.rt._get_parent(p.r);
break;
case "after":
p.cp = p.r.index() + 1;
p.cr = p.rt._get_parent(p.r);
break;
case "inside":
case "first":
p.cp = 0;
p.cr = p.r;
break;
case "last":
p.cp = p.r.find(" > ul > li").length;
p.cr = p.r;
break;
default:
p.cp = p.p;
p.cr = p.r;
break;
}
}
p.np = p.cr == -1 ? p.rt.get_container() : p.cr;
p.op = p.ot._get_parent(p.o);
p.cop = p.o.index();
if(p.op === -1) { p.op = p.ot ? p.ot.get_container() : this.get_container(); }
if(!/^(before|after)$/.test(p.p) && p.op && p.np && p.op[0] === p.np[0] && p.o.index() < p.cp) { p.cp++; }
//if(p.p === "before" && p.op && p.np && p.op[0] === p.np[0] && p.o.index() < p.cp) { p.cp--; }
p.or = p.np.find(" > ul > li:nth-child(" + (p.cp + 1) + ")");
prepared_move = p;
this.__callback(prepared_move);
if(cb) { cb.call(this, prepared_move); }
},
check_move : function () {
var obj = prepared_move, ret = true, r = obj.r === -1 ? this.get_container() : obj.r;
if(!obj || !obj.o || obj.or[0] === obj.o[0]) { return false; }
if(obj.op && obj.np && obj.op[0] === obj.np[0] && obj.cp - 1 === obj.o.index()) { return false; }
obj.o.each(function () {
if(r.parentsUntil(".jstree", "li").andSelf().index(this) !== -1) { ret = false; return false; }
});
return ret;
},
move_node : function (obj, ref, position, is_copy, is_prepared, skip_check) {
if(!is_prepared) {
return this.prepare_move(obj, ref, position, function (p) {
this.move_node(p, false, false, is_copy, true, skip_check);
});
}
if(is_copy) {
prepared_move.cy = true;
}
if(!skip_check && !this.check_move()) { return false; }
this.__rollback();
var o = false;
if(is_copy) {
o = obj.o.clone(true);
o.find("*[id]").andSelf().each(function () {
if(this.id) { this.id = "copy_" + this.id; }
});
}
else { o = obj.o; }
if(obj.or.length) { obj.or.before(o); }
else {
if(!obj.np.children("ul").length) { $("<ul />").appendTo(obj.np); }
obj.np.children("ul:eq(0)").append(o);
}
try {
obj.ot.clean_node(obj.op);
obj.rt.clean_node(obj.np);
if(!obj.op.find("> ul > li").length) {
obj.op.removeClass("jstree-open jstree-closed").addClass("jstree-leaf").children("ul").remove();
}
} catch (e) { }
if(is_copy) {
prepared_move.cy = true;
prepared_move.oc = o;
}
this.__callback(prepared_move);
return prepared_move;
},
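			// Sketch of a move/copy call (node ids are hypothetical):
			//   inst.move_node("#src", "#target", "inside", true); // true = copy; cloned ids get a "copy_" prefix
			// The call is routed through prepare_move/check_move above, so an invalid target
			// (e.g. dropping a node inside its own subtree) is rejected before the DOM changes.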
_get_move : function () { return prepared_move; }
}
});
})(jQuery);
//*/
/*
* jsTree ui plugin
 * This plugin handles selecting/deselecting/hovering/dehovering nodes
*/
(function ($) {
var scrollbar_width, e1, e2;
$(function() {
if (/msie/.test(navigator.userAgent.toLowerCase())) {
e1 = $('<textarea cols="10" rows="2"></textarea>').css({ position: 'absolute', top: -1000, left: 0 }).appendTo('body');
e2 = $('<textarea cols="10" rows="2" style="overflow: hidden;"></textarea>').css({ position: 'absolute', top: -1000, left: 0 }).appendTo('body');
scrollbar_width = e1.width() - e2.width();
e1.add(e2).remove();
}
else {
e1 = $('<div />').css({ width: 100, height: 100, overflow: 'auto', position: 'absolute', top: -1000, left: 0 })
.prependTo('body').append('<div />').find('div').css({ width: '100%', height: 200 });
scrollbar_width = 100 - e1.width();
e1.parent().remove();
}
});
$.jstree.plugin("ui", {
__init : function () {
this.data.ui.selected = $();
this.data.ui.last_selected = false;
this.data.ui.hovered = null;
this.data.ui.to_select = this.get_settings().ui.initially_select;
this.get_container()
.delegate("a", "click.jstree", $.proxy(function (event) {
event.preventDefault();
event.currentTarget.blur();
if(!$(event.currentTarget).hasClass("jstree-loading")) {
this.select_node(event.currentTarget, true, event);
}
}, this))
.delegate("a", "mouseenter.jstree", $.proxy(function (event) {
if(!$(event.currentTarget).hasClass("jstree-loading")) {
this.hover_node(event.target);
}
}, this))
.delegate("a", "mouseleave.jstree", $.proxy(function (event) {
if(!$(event.currentTarget).hasClass("jstree-loading")) {
this.dehover_node(event.target);
}
}, this))
.bind("reopen.jstree", $.proxy(function () {
this.reselect();
}, this))
.bind("get_rollback.jstree", $.proxy(function () {
this.dehover_node();
this.save_selected();
}, this))
.bind("set_rollback.jstree", $.proxy(function () {
this.reselect();
}, this))
.bind("close_node.jstree", $.proxy(function (event, data) {
var s = this._get_settings().ui,
obj = this._get_node(data.rslt.obj),
clk = (obj && obj.length) ? obj.children("ul").find("a.jstree-clicked") : $(),
_this = this;
if(s.selected_parent_close === false || !clk.length) { return; }
clk.each(function () {
_this.deselect_node(this);
if(s.selected_parent_close === "select_parent") { _this.select_node(obj); }
});
}, this))
.bind("delete_node.jstree", $.proxy(function (event, data) {
var s = this._get_settings().ui.select_prev_on_delete,
obj = this._get_node(data.rslt.obj),
clk = (obj && obj.length) ? obj.find("a.jstree-clicked") : [],
_this = this;
clk.each(function () { _this.deselect_node(this); });
if(s && clk.length) {
data.rslt.prev.each(function () {
						if(this.parentNode) { _this.select_node(this); return false; /* if "return false" is removed, all prev nodes will be selected */ }
});
}
}, this))
.bind("move_node.jstree", $.proxy(function (event, data) {
if(data.rslt.cy) {
data.rslt.oc.find("a.jstree-clicked").removeClass("jstree-clicked");
}
}, this));
},
defaults : {
select_limit : -1, // 0, 1, 2 ... or -1 for unlimited
select_multiple_modifier : "ctrl", // on, or ctrl, shift, alt
select_range_modifier : "shift",
selected_parent_close : "select_parent", // false, "deselect", "select_parent"
selected_parent_open : true,
select_prev_on_delete : true,
disable_selecting_children : false,
initially_select : []
},
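		// Example configuration of the defaults above (values are illustrative):
		//   $("#tree").jstree({ plugins : ["ui"], ui : { select_limit : 1, select_multiple_modifier : false } });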
_fn : {
_get_node : function (obj, allow_multiple) {
if(typeof obj === "undefined" || obj === null) { return allow_multiple ? this.data.ui.selected : this.data.ui.last_selected; }
var $obj = $(obj, this.get_container());
if($obj.is(".jstree") || obj == -1) { return -1; }
$obj = $obj.closest("li", this.get_container());
return $obj.length ? $obj : false;
},
_ui_notify : function (n, data) {
if(data.selected) {
this.select_node(n, false);
}
},
save_selected : function () {
var _this = this;
this.data.ui.to_select = [];
this.data.ui.selected.each(function () { if(this.id) { _this.data.ui.to_select.push("#" + this.id.toString().replace(/^#/,"").replace(/\\\//g,"/").replace(/\//g,"\\\/").replace(/\\\./g,".").replace(/\./g,"\\.").replace(/\:/g,"\\:")); } });
this.__callback(this.data.ui.to_select);
},
reselect : function () {
var _this = this,
s = this.data.ui.to_select;
s = $.map($.makeArray(s), function (n) { return "#" + n.toString().replace(/^#/,"").replace(/\\\//g,"/").replace(/\//g,"\\\/").replace(/\\\./g,".").replace(/\./g,"\\.").replace(/\:/g,"\\:"); });
// this.deselect_all(); WHY deselect, breaks plugin state notifier?
$.each(s, function (i, val) { if(val && val !== "#") { _this.select_node(val); } });
this.data.ui.selected = this.data.ui.selected.filter(function () { return this.parentNode; });
this.__callback();
},
refresh : function (obj) {
this.save_selected();
return this.__call_old();
},
hover_node : function (obj) {
obj = this._get_node(obj);
if(!obj.length) { return false; }
//if(this.data.ui.hovered && obj.get(0) === this.data.ui.hovered.get(0)) { return; }
if(!obj.hasClass("jstree-hovered")) { this.dehover_node(); }
this.data.ui.hovered = obj.children("a").addClass("jstree-hovered").parent();
this._fix_scroll(obj);
this.__callback({ "obj" : obj });
},
dehover_node : function () {
var obj = this.data.ui.hovered, p;
if(!obj || !obj.length) { return false; }
p = obj.children("a").removeClass("jstree-hovered").parent();
if(this.data.ui.hovered[0] === p[0]) { this.data.ui.hovered = null; }
this.__callback({ "obj" : obj });
},
select_node : function (obj, check, e) {
obj = this._get_node(obj);
if(obj == -1 || !obj || !obj.length) { return false; }
var s = this._get_settings().ui,
is_multiple = (s.select_multiple_modifier == "on" || (s.select_multiple_modifier !== false && e && e[s.select_multiple_modifier + "Key"])),
is_range = (s.select_range_modifier !== false && e && e[s.select_range_modifier + "Key"] && this.data.ui.last_selected && this.data.ui.last_selected[0] !== obj[0] && this.data.ui.last_selected.parent()[0] === obj.parent()[0]),
is_selected = this.is_selected(obj),
proceed = true,
t = this;
if(check) {
if(s.disable_selecting_children && is_multiple &&
(
(obj.parentsUntil(".jstree","li").children("a.jstree-clicked").length) ||
(obj.children("ul").find("a.jstree-clicked:eq(0)").length)
)
) {
return false;
}
proceed = false;
switch(!0) {
case (is_range):
this.data.ui.last_selected.addClass("jstree-last-selected");
obj = obj[ obj.index() < this.data.ui.last_selected.index() ? "nextUntil" : "prevUntil" ](".jstree-last-selected").andSelf();
if(s.select_limit == -1 || obj.length < s.select_limit) {
this.data.ui.last_selected.removeClass("jstree-last-selected");
this.data.ui.selected.each(function () {
if(this !== t.data.ui.last_selected[0]) { t.deselect_node(this); }
});
is_selected = false;
proceed = true;
}
else {
proceed = false;
}
break;
case (is_selected && !is_multiple):
this.deselect_all();
is_selected = false;
proceed = true;
break;
case (!is_selected && !is_multiple):
if(s.select_limit == -1 || s.select_limit > 0) {
this.deselect_all();
proceed = true;
}
break;
case (is_selected && is_multiple):
this.deselect_node(obj);
break;
case (!is_selected && is_multiple):
if(s.select_limit == -1 || this.data.ui.selected.length + 1 <= s.select_limit) {
proceed = true;
}
break;
}
}
if(proceed && !is_selected) {
if(!is_range) { this.data.ui.last_selected = obj; }
obj.children("a").addClass("jstree-clicked");
if(s.selected_parent_open) {
obj.parents(".jstree-closed").each(function () { t.open_node(this, false, true); });
}
this.data.ui.selected = this.data.ui.selected.add(obj);
this._fix_scroll(obj.eq(0));
this.__callback({ "obj" : obj, "e" : e });
}
},
_fix_scroll : function (obj) {
var c = this.get_container()[0], t;
if(c.scrollHeight > c.offsetHeight) {
obj = this._get_node(obj);
if(!obj || obj === -1 || !obj.length || !obj.is(":visible")) { return; }
t = obj.offset().top - this.get_container().offset().top;
if(t < 0) {
c.scrollTop = c.scrollTop + t - 1;
}
if(t + this.data.core.li_height + (c.scrollWidth > c.offsetWidth ? scrollbar_width : 0) > c.offsetHeight) {
c.scrollTop = c.scrollTop + (t - c.offsetHeight + this.data.core.li_height + 1 + (c.scrollWidth > c.offsetWidth ? scrollbar_width : 0));
}
}
},
deselect_node : function (obj) {
obj = this._get_node(obj);
if(!obj.length) { return false; }
if(this.is_selected(obj)) {
obj.children("a").removeClass("jstree-clicked");
this.data.ui.selected = this.data.ui.selected.not(obj);
if(this.data.ui.last_selected.get(0) === obj.get(0)) { this.data.ui.last_selected = this.data.ui.selected.eq(0); }
this.__callback({ "obj" : obj });
}
},
toggle_select : function (obj) {
obj = this._get_node(obj);
if(!obj.length) { return false; }
if(this.is_selected(obj)) { this.deselect_node(obj); }
else { this.select_node(obj); }
},
is_selected : function (obj) { return this.data.ui.selected.index(this._get_node(obj)) >= 0; },
get_selected : function (context) {
return context ? $(context).find("a.jstree-clicked").parent() : this.data.ui.selected;
},
deselect_all : function (context) {
var ret = context ? $(context).find("a.jstree-clicked").parent() : this.get_container().find("a.jstree-clicked").parent();
ret.children("a.jstree-clicked").removeClass("jstree-clicked");
this.data.ui.selected = $([]);
this.data.ui.last_selected = false;
this.__callback({ "obj" : ret });
}
}
});
// include the selection plugin by default
$.jstree.defaults.plugins.push("ui");
})(jQuery);
//*/
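/*
 * Usage sketch (illustrative only, not part of the library) - configuring the
 * ui plugin's selection behavior. "#demo-tree" and the node ids are
 * hypothetical; the option names are the defaults listed above.
 *
 *   $("#demo-tree").jstree({
 *       "plugins" : [ "themes", "json_data", "ui" ],
 *       "json_data" : { "data" : [ "Node 1", "Node 2" ] },
 *       "ui" : {
 *           "select_limit" : 2,                 // at most two selected nodes
 *           "select_multiple_modifier" : "alt", // hold Alt to multi-select
 *           "initially_select" : [ "#node-1" ]  // pre-select a node by id
 *       }
 *   });
 *   // query the selection later through the instance:
 *   // var sel = $.jstree._reference("#demo-tree").get_selected();
 */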
/*
* jsTree CRRM plugin
* Handles creating/renaming/removing/moving nodes by user interaction.
*/
(function ($) {
$.jstree.plugin("crrm", {
__init : function () {
this.get_container()
.bind("move_node.jstree", $.proxy(function (e, data) {
if(this._get_settings().crrm.move.open_onmove) {
var t = this;
data.rslt.np.parentsUntil(".jstree").andSelf().filter(".jstree-closed").each(function () {
t.open_node(this, false, true);
});
}
}, this));
},
defaults : {
input_width_limit : 200,
move : {
always_copy : false, // false, true or "multitree"
open_onmove : true,
default_position : "last",
check_move : function (m) { return true; }
}
},
_fn : {
_show_input : function (obj, callback) {
obj = this._get_node(obj);
var rtl = this._get_settings().core.rtl,
w = this._get_settings().crrm.input_width_limit,
w1 = obj.children("ins").width(),
w2 = obj.find("> a:visible > ins").width() * obj.find("> a:visible > ins").length,
t = this.get_text(obj),
h1 = $("<div />", { css : { "position" : "absolute", "top" : "-200px", "left" : (rtl ? "0px" : "-1000px"), "visibility" : "hidden" } }).appendTo("body"),
h2 = obj.css("position","relative").append(
$("<input />", {
"value" : t,
"class" : "jstree-rename-input",
// "size" : t.length,
"css" : {
"padding" : "0",
"border" : "1px solid silver",
"position" : "absolute",
"left" : (rtl ? "auto" : (w1 + w2 + 4) + "px"),
"right" : (rtl ? (w1 + w2 + 4) + "px" : "auto"),
"top" : "0px",
"height" : (this.data.core.li_height - 2) + "px",
"lineHeight" : (this.data.core.li_height - 2) + "px",
"width" : "150px" // will be set a bit further down
},
"blur" : $.proxy(function () {
var i = obj.children(".jstree-rename-input"),
v = i.val();
if(v === "") { v = t; }
h1.remove();
i.remove(); // rollback purposes
this.set_text(obj,t); // rollback purposes
this.rename_node(obj, v);
callback.call(this, obj, v, t);
obj.css("position","");
}, this),
"keyup" : function (event) {
var key = event.keyCode || event.which;
if(key == 27) { this.value = t; this.blur(); return; }
else if(key == 13) { this.blur(); return; }
else {
h2.width(Math.min(h1.text("pW" + this.value).width(),w));
}
},
"keypress" : function(event) {
var key = event.keyCode || event.which;
if(key == 13) { return false; }
}
})
).children(".jstree-rename-input");
this.set_text(obj, "");
h1.css({
fontFamily : h2.css('fontFamily') || '',
fontSize : h2.css('fontSize') || '',
fontWeight : h2.css('fontWeight') || '',
fontStyle : h2.css('fontStyle') || '',
fontStretch : h2.css('fontStretch') || '',
fontVariant : h2.css('fontVariant') || '',
letterSpacing : h2.css('letterSpacing') || '',
wordSpacing : h2.css('wordSpacing') || ''
});
h2.width(Math.min(h1.text("pW" + h2[0].value).width(),w))[0].select();
},
rename : function (obj) {
obj = this._get_node(obj);
this.__rollback();
var f = this.__callback;
this._show_input(obj, function (obj, new_name, old_name) {
f.call(this, { "obj" : obj, "new_name" : new_name, "old_name" : old_name });
});
},
create : function (obj, position, js, callback, skip_rename) {
var t, _this = this;
obj = this._get_node(obj);
if(!obj) { obj = -1; }
this.__rollback();
t = this.create_node(obj, position, js, function (t) {
var p = this._get_parent(t),
pos = $(t).index();
if(callback) { callback.call(this, t); }
if(p.length && p.hasClass("jstree-closed")) { this.open_node(p, false, true); }
if(!skip_rename) {
this._show_input(t, function (obj, new_name, old_name) {
_this.__callback({ "obj" : obj, "name" : new_name, "parent" : p, "position" : pos });
});
}
else { _this.__callback({ "obj" : t, "name" : this.get_text(t), "parent" : p, "position" : pos }); }
});
return t;
},
remove : function (obj) {
obj = this._get_node(obj, true);
var p = this._get_parent(obj), prev = this._get_prev(obj);
this.__rollback();
obj = this.delete_node(obj);
if(obj !== false) { this.__callback({ "obj" : obj, "prev" : prev, "parent" : p }); }
},
check_move : function () {
if(!this.__call_old()) { return false; }
var s = this._get_settings().crrm.move;
if(!s.check_move.call(this, this._get_move())) { return false; }
return true;
},
move_node : function (obj, ref, position, is_copy, is_prepared, skip_check) {
var s = this._get_settings().crrm.move;
if(!is_prepared) {
if(typeof position === "undefined") { position = s.default_position; }
if(position === "inside" && !s.default_position.match(/^(before|after)$/)) { position = s.default_position; }
return this.__call_old(true, obj, ref, position, is_copy, false, skip_check);
}
// if the move is already prepared
if(s.always_copy === true || (s.always_copy === "multitree" && obj.rt.get_index() !== obj.ot.get_index() )) {
is_copy = true;
}
this.__call_old(true, obj, ref, position, is_copy, true, skip_check);
},
cut : function (obj) {
obj = this._get_node(obj, true);
if(!obj || !obj.length) { return false; }
this.data.crrm.cp_nodes = false;
this.data.crrm.ct_nodes = obj;
this.__callback({ "obj" : obj });
},
copy : function (obj) {
obj = this._get_node(obj, true);
if(!obj || !obj.length) { return false; }
this.data.crrm.ct_nodes = false;
this.data.crrm.cp_nodes = obj;
this.__callback({ "obj" : obj });
},
paste : function (obj) {
obj = this._get_node(obj);
if(!obj || !obj.length) { return false; }
var nodes = this.data.crrm.ct_nodes ? this.data.crrm.ct_nodes : this.data.crrm.cp_nodes;
if(!this.data.crrm.ct_nodes && !this.data.crrm.cp_nodes) { return false; }
if(this.data.crrm.ct_nodes) { this.move_node(this.data.crrm.ct_nodes, obj); this.data.crrm.ct_nodes = false; }
if(this.data.crrm.cp_nodes) { this.move_node(this.data.crrm.cp_nodes, obj, false, true); }
this.__callback({ "obj" : obj, "nodes" : nodes });
}
}
});
// include the crrm plugin by default
// $.jstree.defaults.plugins.push("crrm");
})(jQuery);
//*/
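/*
 * Usage sketch (illustrative only) - crrm is not enabled by default (note the
 * commented-out push above), so it must be listed explicitly. "#demo-tree"
 * and the node ids are hypothetical.
 *
 *   $("#demo-tree").jstree({
 *       "plugins" : [ "themes", "json_data", "ui", "crrm" ],
 *       "json_data" : { "data" : [ "Node 1", "Node 2" ] },
 *       "crrm" : {
 *           "move" : { "always_copy" : "multitree" } // copy (not move) across trees
 *       }
 *   });
 *   // instance methods added by the plugin:
 *   // var t = $.jstree._reference("#demo-tree");
 *   // t.rename("#node-1"); t.cut("#node-1"); t.paste("#node-2");
 */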
/*
* jsTree themes plugin
* Handles loading and setting themes, as well as detecting path to themes, etc.
*/
(function ($) {
var themes_loaded = [];
// this variable stores the path to the themes folder - if left as false - it will be autodetected
$.jstree._themes = false;
$.jstree.plugin("themes", {
__init : function () {
this.get_container()
.bind("init.jstree", $.proxy(function () {
var s = this._get_settings().themes;
this.data.themes.dots = s.dots;
this.data.themes.icons = s.icons;
this.set_theme(s.theme, s.url);
}, this))
.bind("loaded.jstree", $.proxy(function () {
// bound here too, as simple HTML trees won't honor dots & icons otherwise
if(!this.data.themes.dots) { this.hide_dots(); }
else { this.show_dots(); }
if(!this.data.themes.icons) { this.hide_icons(); }
else { this.show_icons(); }
}, this));
},
defaults : {
theme : "default",
url : false,
dots : true,
icons : true
},
_fn : {
set_theme : function (theme_name, theme_url) {
if(!theme_name) { return false; }
if(!theme_url) { theme_url = $.jstree._themes + theme_name + '/style.css'; }
if($.inArray(theme_url, themes_loaded) == -1) {
$.vakata.css.add_sheet({ "url" : theme_url });
themes_loaded.push(theme_url);
}
if(this.data.themes.theme != theme_name) {
this.get_container().removeClass('jstree-' + this.data.themes.theme);
this.data.themes.theme = theme_name;
}
this.get_container().addClass('jstree-' + theme_name);
if(!this.data.themes.dots) { this.hide_dots(); }
else { this.show_dots(); }
if(!this.data.themes.icons) { this.hide_icons(); }
else { this.show_icons(); }
this.__callback();
},
get_theme : function () { return this.data.themes.theme; },
show_dots : function () { this.data.themes.dots = true; this.get_container().children("ul").removeClass("jstree-no-dots"); },
hide_dots : function () { this.data.themes.dots = false; this.get_container().children("ul").addClass("jstree-no-dots"); },
toggle_dots : function () { if(this.data.themes.dots) { this.hide_dots(); } else { this.show_dots(); } },
show_icons : function () { this.data.themes.icons = true; this.get_container().children("ul").removeClass("jstree-no-icons"); },
hide_icons : function () { this.data.themes.icons = false; this.get_container().children("ul").addClass("jstree-no-icons"); },
toggle_icons: function () { if(this.data.themes.icons) { this.hide_icons(); } else { this.show_icons(); } }
}
});
// autodetect themes path
$(function () {
if($.jstree._themes === false) {
$("script").each(function () {
if(this.src.toString().match(/jquery\.jstree[^\/]*?\.js(\?.*)?$/)) {
$.jstree._themes = this.src.toString().replace(/jquery\.jstree[^\/]*?\.js(\?.*)?$/, "") + 'themes/';
return false;
}
});
}
if($.jstree._themes === false) { $.jstree._themes = "themes/"; }
});
// include the themes plugin by default
$.jstree.defaults.plugins.push("themes");
})(jQuery);
//*/
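/*
 * Usage sketch (illustrative only) - switching themes and toggling dots/icons
 * at runtime. The theme name "apple" is hypothetical; unless a url is given,
 * the stylesheet is resolved as <autodetected themes path>/apple/style.css.
 *
 *   var t = $.jstree._reference("#demo-tree");
 *   t.set_theme("apple"); // loads the stylesheet once, then swaps the container class
 *   t.toggle_dots();
 *   t.toggle_icons();
 */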
/*
* jsTree hotkeys plugin
* Enables keyboard navigation for all tree instances
* Depends on the jstree ui & jquery hotkeys plugins
*/
(function ($) {
var bound = [];
function exec(i, event) {
var f = $.jstree._focused(), tmp;
if(f && f.data && f.data.hotkeys && f.data.hotkeys.enabled) {
tmp = f._get_settings().hotkeys[i];
if(tmp) { return tmp.call(f, event); }
}
}
$.jstree.plugin("hotkeys", {
__init : function () {
if(typeof $.hotkeys === "undefined") { throw "jsTree hotkeys: jQuery hotkeys plugin not included."; }
if(!this.data.ui) { throw "jsTree hotkeys: jsTree UI plugin not included."; }
$.each(this._get_settings().hotkeys, function (i, v) {
if(v !== false && $.inArray(i, bound) == -1) {
$(document).bind("keydown", i, function (event) { return exec(i, event); });
bound.push(i);
}
});
this.get_container()
.bind("lock.jstree", $.proxy(function () {
if(this.data.hotkeys.enabled) { this.data.hotkeys.enabled = false; this.data.hotkeys.revert = true; }
}, this))
.bind("unlock.jstree", $.proxy(function () {
if(this.data.hotkeys.revert) { this.data.hotkeys.enabled = true; }
}, this));
this.enable_hotkeys();
},
defaults : {
"up" : function () {
var o = this.data.ui.hovered || this.data.ui.last_selected || -1;
this.hover_node(this._get_prev(o));
return false;
},
"ctrl+up" : function () {
var o = this.data.ui.hovered || this.data.ui.last_selected || -1;
this.hover_node(this._get_prev(o));
return false;
},
"shift+up" : function () {
var o = this.data.ui.hovered || this.data.ui.last_selected || -1;
this.hover_node(this._get_prev(o));
return false;
},
"down" : function () {
var o = this.data.ui.hovered || this.data.ui.last_selected || -1;
this.hover_node(this._get_next(o));
return false;
},
"ctrl+down" : function () {
var o = this.data.ui.hovered || this.data.ui.last_selected || -1;
this.hover_node(this._get_next(o));
return false;
},
"shift+down" : function () {
var o = this.data.ui.hovered || this.data.ui.last_selected || -1;
this.hover_node(this._get_next(o));
return false;
},
"left" : function () {
var o = this.data.ui.hovered || this.data.ui.last_selected;
if(o) {
if(o.hasClass("jstree-open")) { this.close_node(o); }
else { this.hover_node(this._get_prev(o)); }
}
return false;
},
"ctrl+left" : function () {
var o = this.data.ui.hovered || this.data.ui.last_selected;
if(o) {
if(o.hasClass("jstree-open")) { this.close_node(o); }
else { this.hover_node(this._get_prev(o)); }
}
return false;
},
"shift+left" : function () {
var o = this.data.ui.hovered || this.data.ui.last_selected;
if(o) {
if(o.hasClass("jstree-open")) { this.close_node(o); }
else { this.hover_node(this._get_prev(o)); }
}
return false;
},
"right" : function () {
var o = this.data.ui.hovered || this.data.ui.last_selected;
if(o && o.length) {
if(o.hasClass("jstree-closed")) { this.open_node(o); }
else { this.hover_node(this._get_next(o)); }
}
return false;
},
"ctrl+right" : function () {
var o = this.data.ui.hovered || this.data.ui.last_selected;
if(o && o.length) {
if(o.hasClass("jstree-closed")) { this.open_node(o); }
else { this.hover_node(this._get_next(o)); }
}
return false;
},
"shift+right" : function () {
var o = this.data.ui.hovered || this.data.ui.last_selected;
if(o && o.length) {
if(o.hasClass("jstree-closed")) { this.open_node(o); }
else { this.hover_node(this._get_next(o)); }
}
return false;
},
"space" : function () {
if(this.data.ui.hovered) { this.data.ui.hovered.children("a:eq(0)").click(); }
return false;
},
"ctrl+space" : function (event) {
event.type = "click";
if(this.data.ui.hovered) { this.data.ui.hovered.children("a:eq(0)").trigger(event); }
return false;
},
"shift+space" : function (event) {
event.type = "click";
if(this.data.ui.hovered) { this.data.ui.hovered.children("a:eq(0)").trigger(event); }
return false;
},
"f2" : function () { this.rename(this.data.ui.hovered || this.data.ui.last_selected); },
"del" : function () { this.remove(this.data.ui.hovered || this._get_node(null)); }
},
_fn : {
enable_hotkeys : function () {
this.data.hotkeys.enabled = true;
},
disable_hotkeys : function () {
this.data.hotkeys.enabled = false;
}
}
});
})(jQuery);
//*/
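/*
 * Usage sketch (illustrative only) - overriding individual hotkeys. Any key
 * from the defaults above can be replaced with a custom handler, or disabled
 * by setting it to false (false keys are never bound).
 *
 *   $("#demo-tree").jstree({
 *       "plugins" : [ "themes", "json_data", "ui", "hotkeys" ],
 *       "json_data" : { "data" : [ "Node 1" ] },
 *       "hotkeys" : {
 *           "del" : false, // keep keyboard deletion off
 *           "space" : function () {
 *               if(this.data.ui.hovered) { this.select_node(this.data.ui.hovered); }
 *               return false;
 *           }
 *       }
 *   });
 */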
/*
* jsTree JSON plugin
 * The JSON data store. Data stores are built by overriding the `load_node` and `_is_loaded` functions.
*/
(function ($) {
$.jstree.plugin("json_data", {
__init : function() {
var s = this._get_settings().json_data;
if(s.progressive_unload) {
this.get_container().bind("after_close.jstree", function (e, data) {
data.rslt.obj.children("ul").remove();
});
}
},
defaults : {
// `data` can be a function:
// * accepts two arguments - node being loaded and a callback to pass the result to
// * will be executed in the current tree's scope & ajax won't be supported
data : false,
ajax : false,
correct_state : true,
progressive_render : false,
progressive_unload : false
},
_fn : {
load_node : function (obj, s_call, e_call) { var _this = this; this.load_node_json(obj, function () { _this.__callback({ "obj" : _this._get_node(obj) }); s_call.call(this); }, e_call); },
_is_loaded : function (obj) {
var s = this._get_settings().json_data;
obj = this._get_node(obj);
return obj == -1 || !obj || (!s.ajax && !s.progressive_render && !$.isFunction(s.data)) || obj.is(".jstree-open, .jstree-leaf") || obj.children("ul").children("li").length > 0;
},
refresh : function (obj) {
obj = this._get_node(obj);
var s = this._get_settings().json_data;
if(obj && obj !== -1 && s.progressive_unload && ($.isFunction(s.data) || !!s.ajax)) {
obj.removeData("jstree-children");
}
return this.__call_old();
},
load_node_json : function (obj, s_call, e_call) {
var s = this.get_settings().json_data, d,
error_func = function () {},
success_func = function () {};
obj = this._get_node(obj);
if(obj && obj !== -1 && (s.progressive_render || s.progressive_unload) && !obj.is(".jstree-open, .jstree-leaf") && obj.children("ul").children("li").length === 0 && obj.data("jstree-children")) {
d = this._parse_json(obj.data("jstree-children"), obj);
if(d) {
obj.append(d);
if(!s.progressive_unload) { obj.removeData("jstree-children"); }
}
this.clean_node(obj);
if(s_call) { s_call.call(this); }
return;
}
if(obj && obj !== -1) {
if(obj.data("jstree-is-loading")) { return; }
else { obj.data("jstree-is-loading",true); }
}
switch(!0) {
case (!s.data && !s.ajax): throw "Neither data nor ajax settings supplied.";
// function option added here for easier model integration (also supporting async - see callback)
case ($.isFunction(s.data)):
s.data.call(this, obj, $.proxy(function (d) {
d = this._parse_json(d, obj);
if(!d) {
if(obj === -1 || !obj) {
if(s.correct_state) { this.get_container().children("ul").empty(); }
}
else {
obj.children("a.jstree-loading").removeClass("jstree-loading");
obj.removeData("jstree-is-loading");
if(s.correct_state) { this.correct_state(obj); }
}
if(e_call) { e_call.call(this); }
}
else {
if(obj === -1 || !obj) { this.get_container().children("ul").empty().append(d.children()); }
else { obj.append(d).children("a.jstree-loading").removeClass("jstree-loading"); obj.removeData("jstree-is-loading"); }
this.clean_node(obj);
if(s_call) { s_call.call(this); }
}
}, this));
break;
case (!!s.data && !s.ajax) || (!!s.data && !!s.ajax && (!obj || obj === -1)):
if(!obj || obj == -1) {
d = this._parse_json(s.data, obj);
if(d) {
this.get_container().children("ul").empty().append(d.children());
this.clean_node();
}
else {
if(s.correct_state) { this.get_container().children("ul").empty(); }
}
}
if(s_call) { s_call.call(this); }
break;
case (!s.data && !!s.ajax) || (!!s.data && !!s.ajax && obj && obj !== -1):
error_func = function (x, t, e) {
var ef = this.get_settings().json_data.ajax.error;
if(ef) { ef.call(this, x, t, e); }
if(obj != -1 && obj.length) {
obj.children("a.jstree-loading").removeClass("jstree-loading");
obj.removeData("jstree-is-loading");
if(t === "success" && s.correct_state) { this.correct_state(obj); }
}
else {
if(t === "success" && s.correct_state) { this.get_container().children("ul").empty(); }
}
if(e_call) { e_call.call(this); }
};
success_func = function (d, t, x) {
var sf = this.get_settings().json_data.ajax.success;
if(sf) { d = sf.call(this,d,t,x) || d; }
if(d === "" || (d && d.toString && d.toString().replace(/^[\s\n]+$/,"") === "") || (!$.isArray(d) && !$.isPlainObject(d))) {
return error_func.call(this, x, t, "");
}
d = this._parse_json(d, obj);
if(d) {
if(obj === -1 || !obj) { this.get_container().children("ul").empty().append(d.children()); }
else { obj.append(d).children("a.jstree-loading").removeClass("jstree-loading"); obj.removeData("jstree-is-loading"); }
this.clean_node(obj);
if(s_call) { s_call.call(this); }
}
else {
if(obj === -1 || !obj) {
if(s.correct_state) {
this.get_container().children("ul").empty();
if(s_call) { s_call.call(this); }
}
}
else {
obj.children("a.jstree-loading").removeClass("jstree-loading");
obj.removeData("jstree-is-loading");
if(s.correct_state) {
this.correct_state(obj);
if(s_call) { s_call.call(this); }
}
}
}
};
s.ajax.context = this;
s.ajax.error = error_func;
s.ajax.success = success_func;
if(!s.ajax.dataType) { s.ajax.dataType = "json"; }
if($.isFunction(s.ajax.url)) { s.ajax.url = s.ajax.url.call(this, obj); }
if($.isFunction(s.ajax.data)) { s.ajax.data = s.ajax.data.call(this, obj); }
$.ajax(s.ajax);
break;
}
},
_parse_json : function (js, obj, is_callback) {
var d = false,
p = this._get_settings(),
s = p.json_data,
t = p.core.html_titles,
tmp, i, j, ul1, ul2;
if(!js) { return d; }
if(s.progressive_unload && obj && obj !== -1) {
obj.data("jstree-children", d);
}
if($.isArray(js)) {
d = $();
if(!js.length) { return false; }
for(i = 0, j = js.length; i < j; i++) {
tmp = this._parse_json(js[i], obj, true);
if(tmp.length) { d = d.add(tmp); }
}
}
else {
if(typeof js == "string") { js = { data : js }; }
if(!js.data && js.data !== "") { return d; }
d = $("<li />");
if(js.attr) { d.attr(js.attr); }
if(js.metadata) { d.data(js.metadata); }
if(js.state) { d.addClass("jstree-" + js.state); }
if(!$.isArray(js.data)) { tmp = js.data; js.data = []; js.data.push(tmp); }
$.each(js.data, function (i, m) {
tmp = $("<a />");
if($.isFunction(m)) { m = m.call(this, js); }
if(typeof m == "string") { tmp.attr('href','#')[ t ? "html" : "text" ](m); }
else {
if(!m.attr) { m.attr = {}; }
if(!m.attr.href) { m.attr.href = '#'; }
tmp.attr(m.attr)[ t ? "html" : "text" ](m.title);
if(m.language) { tmp.addClass(m.language); }
}
tmp.prepend("<ins class='jstree-icon'> </ins>");
if(!m.icon && js.icon) { m.icon = js.icon; }
if(m.icon) {
if(m.icon.indexOf("/") === -1) { tmp.children("ins").addClass(m.icon); }
else { tmp.children("ins").css("background","url('" + m.icon + "') center center no-repeat"); }
}
d.append(tmp);
});
d.prepend("<ins class='jstree-icon'> </ins>");
if(js.children) {
if(s.progressive_render && js.state !== "open") {
d.addClass("jstree-closed").data("jstree-children", js.children);
}
else {
if(s.progressive_unload) { d.data("jstree-children", js.children); }
if($.isArray(js.children) && js.children.length) {
tmp = this._parse_json(js.children, obj, true);
if(tmp.length) {
ul2 = $("<ul />");
ul2.append(tmp);
d.append(ul2);
}
}
}
}
}
if(!is_callback) {
ul1 = $("<ul />");
ul1.append(d);
d = ul1;
}
return d;
},
get_json : function (obj, li_attr, a_attr, is_callback) {
var result = [],
s = this._get_settings(),
_this = this,
tmp1, tmp2, li, a, t, lang;
obj = this._get_node(obj);
if(!obj || obj === -1) { obj = this.get_container().find("> ul > li"); }
li_attr = $.isArray(li_attr) ? li_attr : [ "id", "class" ];
if(!is_callback && this.data.types) { li_attr.push(s.types.type_attr); }
a_attr = $.isArray(a_attr) ? a_attr : [ ];
obj.each(function () {
li = $(this);
tmp1 = { data : [] };
if(li_attr.length) { tmp1.attr = { }; }
$.each(li_attr, function (i, v) {
tmp2 = li.attr(v);
if(tmp2 && tmp2.length && tmp2.replace(/jstree[^ ]*/ig,'').length) {
tmp1.attr[v] = (" " + tmp2).replace(/ jstree[^ ]*/ig,'').replace(/\s+$/ig," ").replace(/^ /,"").replace(/ $/,"");
}
});
if(li.hasClass("jstree-open")) { tmp1.state = "open"; }
if(li.hasClass("jstree-closed")) { tmp1.state = "closed"; }
if(li.data()) { tmp1.metadata = li.data(); }
a = li.children("a");
a.each(function () {
t = $(this);
if(
a_attr.length ||
$.inArray("languages", s.plugins) !== -1 ||
t.children("ins").get(0).style.backgroundImage.length ||
(t.children("ins").get(0).className && t.children("ins").get(0).className.replace(/jstree[^ ]*|$/ig,'').length)
) {
lang = false;
if($.inArray("languages", s.plugins) !== -1 && $.isArray(s.languages) && s.languages.length) {
$.each(s.languages, function (l, lv) {
if(t.hasClass(lv)) {
lang = lv;
return false;
}
});
}
tmp2 = { attr : { }, title : _this.get_text(t, lang) };
$.each(a_attr, function (k, z) {
tmp2.attr[z] = (" " + (t.attr(z) || "")).replace(/ jstree[^ ]*/ig,'').replace(/\s+$/ig," ").replace(/^ /,"").replace(/ $/,"");
});
if($.inArray("languages", s.plugins) !== -1 && $.isArray(s.languages) && s.languages.length) {
$.each(s.languages, function (k, z) {
if(t.hasClass(z)) { tmp2.language = z; return true; }
});
}
if(t.children("ins").get(0).className.replace(/jstree[^ ]*|$/ig,'').replace(/^\s+$/ig,"").length) {
tmp2.icon = t.children("ins").get(0).className.replace(/jstree[^ ]*|$/ig,'').replace(/\s+$/ig," ").replace(/^ /,"").replace(/ $/,"");
}
if(t.children("ins").get(0).style.backgroundImage.length) {
tmp2.icon = t.children("ins").get(0).style.backgroundImage.replace("url(","").replace(")","");
}
}
else {
tmp2 = _this.get_text(t);
}
if(a.length > 1) { tmp1.data.push(tmp2); }
else { tmp1.data = tmp2; }
});
li = li.find("> ul > li");
if(li.length) { tmp1.children = _this.get_json(li, li_attr, a_attr, true); }
result.push(tmp1);
});
return result;
}
}
});
})(jQuery);
//*/
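/*
 * Usage sketch (illustrative only) - loading children over ajax. The url
 * "/tree/children" is hypothetical; the "ajax" object is handed to $.ajax,
 * and "url"/"data" may be functions of the node being loaded (-1 for the
 * root load).
 *
 *   $("#demo-tree").jstree({
 *       "plugins" : [ "themes", "json_data", "ui" ],
 *       "json_data" : {
 *           "ajax" : {
 *               "url"  : "/tree/children",
 *               "data" : function (n) { return { "id" : (n && n.attr) ? n.attr("id") : -1 }; }
 *           },
 *           "progressive_render" : true // children kept in data() until first open
 *       }
 *   });
 */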
/*
* jsTree languages plugin
* Adds support for multiple language versions in one tree
 * This basically allows many titles to coexist in one node, with only one of them visible at any given time
* This is useful for maintaining the same structure in many languages (hence the name of the plugin)
*/
(function ($) {
$.jstree.plugin("languages", {
__init : function () { this._load_css(); },
defaults : [],
_fn : {
set_lang : function (i) {
var langs = this._get_settings().languages,
st = false,
selector = ".jstree-" + this.get_index() + ' a';
if(!$.isArray(langs) || langs.length === 0) { return false; }
if($.inArray(i,langs) == -1) {
if(!!langs[i]) { i = langs[i]; }
else { return false; }
}
if(i == this.data.languages.current_language) { return true; }
st = $.vakata.css.get_css(selector + "." + this.data.languages.current_language, false, this.data.languages.language_css);
if(st !== false) { st.style.display = "none"; }
st = $.vakata.css.get_css(selector + "." + i, false, this.data.languages.language_css);
if(st !== false) { st.style.display = ""; }
this.data.languages.current_language = i;
this.__callback(i);
return true;
},
get_lang : function () {
return this.data.languages.current_language;
},
_get_string : function (key, lang) {
var langs = this._get_settings().languages,
s = this._get_settings().core.strings;
if($.isArray(langs) && langs.length) {
lang = (lang && $.inArray(lang,langs) != -1) ? lang : this.data.languages.current_language;
}
if(s[lang] && s[lang][key]) { return s[lang][key]; }
if(s[key]) { return s[key]; }
return key;
},
get_text : function (obj, lang) {
obj = this._get_node(obj) || this.data.ui.last_selected;
if(!obj.size()) { return false; }
var langs = this._get_settings().languages,
s = this._get_settings().core.html_titles;
if($.isArray(langs) && langs.length) {
lang = (lang && $.inArray(lang,langs) != -1) ? lang : this.data.languages.current_language;
obj = obj.children("a." + lang);
}
else { obj = obj.children("a:eq(0)"); }
if(s) {
obj = obj.clone();
obj.children("INS").remove();
return obj.html();
}
else {
obj = obj.contents().filter(function() { return this.nodeType == 3; })[0];
return obj.nodeValue;
}
},
set_text : function (obj, val, lang) {
obj = this._get_node(obj) || this.data.ui.last_selected;
if(!obj.size()) { return false; }
var langs = this._get_settings().languages,
s = this._get_settings().core.html_titles,
tmp;
if($.isArray(langs) && langs.length) {
lang = (lang && $.inArray(lang,langs) != -1) ? lang : this.data.languages.current_language;
obj = obj.children("a." + lang);
}
else { obj = obj.children("a:eq(0)"); }
if(s) {
tmp = obj.children("INS").clone();
obj.html(val).prepend(tmp);
this.__callback({ "obj" : obj, "name" : val, "lang" : lang });
return true;
}
else {
obj = obj.contents().filter(function() { return this.nodeType == 3; })[0];
this.__callback({ "obj" : obj, "name" : val, "lang" : lang });
return (obj.nodeValue = val);
}
},
_load_css : function () {
var langs = this._get_settings().languages,
str = "/* languages css */",
selector = ".jstree-" + this.get_index() + ' a',
ln;
if($.isArray(langs) && langs.length) {
this.data.languages.current_language = langs[0];
for(ln = 0; ln < langs.length; ln++) {
str += selector + "." + langs[ln] + " {";
if(langs[ln] != this.data.languages.current_language) { str += " display:none; "; }
str += " } ";
}
this.data.languages.language_css = $.vakata.css.add_sheet({ 'str' : str, 'title' : "jstree-languages" });
}
},
create_node : function (obj, position, js, callback) {
var t = this.__call_old(true, obj, position, js, function (t) {
var langs = this._get_settings().languages,
a = t.children("a"),
ln;
if($.isArray(langs) && langs.length) {
for(ln = 0; ln < langs.length; ln++) {
if(!a.is("." + langs[ln])) {
t.append(a.eq(0).clone().removeClass(langs.join(" ")).addClass(langs[ln]));
}
}
a.not("." + langs.join(", .")).remove();
}
if(callback) { callback.call(this, t); }
});
return t;
}
}
});
})(jQuery);
//*/
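/*
 * Usage sketch (illustrative only) - two coexisting title sets per node. The
 * settings value is just an array of language (CSS class) names; each title
 * carries a matching "language" class. "en"/"de" and the titles are examples.
 *
 *   $("#demo-tree").jstree({
 *       "plugins" : [ "themes", "json_data", "ui", "languages" ],
 *       "languages" : [ "en", "de" ],
 *       "json_data" : { "data" : [
 *           { "data" : [
 *               { "title" : "Tree", "language" : "en" },
 *               { "title" : "Baum", "language" : "de" }
 *           ] }
 *       ] }
 *   });
 *   // switch at runtime: $.jstree._reference("#demo-tree").set_lang("de");
 */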
/*
* jsTree cookies plugin
* Stores the currently opened/selected nodes in a cookie and then restores them
* Depends on the jquery.cookie plugin
*/
(function ($) {
$.jstree.plugin("cookies", {
__init : function () {
if(typeof $.cookie === "undefined") { throw "jsTree cookie: jQuery cookie plugin not included."; }
var s = this._get_settings().cookies,
tmp;
if(!!s.save_loaded) {
tmp = $.cookie(s.save_loaded);
if(tmp && tmp.length) { this.data.core.to_load = tmp.split(","); }
}
if(!!s.save_opened) {
tmp = $.cookie(s.save_opened);
if(tmp && tmp.length) { this.data.core.to_open = tmp.split(","); }
}
if(!!s.save_selected) {
tmp = $.cookie(s.save_selected);
if(tmp && tmp.length && this.data.ui) { this.data.ui.to_select = tmp.split(","); }
}
this.get_container()
.one( ( this.data.ui ? "reselect" : "reopen" ) + ".jstree", $.proxy(function () {
this.get_container()
.bind("open_node.jstree close_node.jstree select_node.jstree deselect_node.jstree", $.proxy(function (e) {
if(this._get_settings().cookies.auto_save) { this.save_cookie((e.handleObj.namespace + e.handleObj.type).replace("jstree","")); }
}, this));
}, this));
},
defaults : {
save_loaded : "jstree_load",
save_opened : "jstree_open",
save_selected : "jstree_select",
auto_save : true,
cookie_options : {}
},
_fn : {
save_cookie : function (c) {
if(this.data.core.refreshing) { return; }
var s = this._get_settings().cookies;
if(!c) { // if called manually and not by event
if(s.save_loaded) {
this.save_loaded();
$.cookie(s.save_loaded, this.data.core.to_load.join(","), s.cookie_options);
}
if(s.save_opened) {
this.save_opened();
$.cookie(s.save_opened, this.data.core.to_open.join(","), s.cookie_options);
}
if(s.save_selected && this.data.ui) {
this.save_selected();
$.cookie(s.save_selected, this.data.ui.to_select.join(","), s.cookie_options);
}
return;
}
switch(c) {
case "open_node":
case "close_node":
if(!!s.save_opened) {
this.save_opened();
$.cookie(s.save_opened, this.data.core.to_open.join(","), s.cookie_options);
}
if(!!s.save_loaded) {
this.save_loaded();
$.cookie(s.save_loaded, this.data.core.to_load.join(","), s.cookie_options);
}
break;
case "select_node":
case "deselect_node":
if(!!s.save_selected && this.data.ui) {
this.save_selected();
$.cookie(s.save_selected, this.data.ui.to_select.join(","), s.cookie_options);
}
break;
}
}
}
});
// include cookies by default
// $.jstree.defaults.plugins.push("cookies");
})(jQuery);
//*/
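/*
 * Usage sketch (illustrative only) - persisting open/selected state between
 * page loads. The cookie names below are the plugin defaults; cookie_options
 * is passed straight to $.cookie (the values shown are a hypothetical choice).
 *
 *   $("#demo-tree").jstree({
 *       "plugins" : [ "themes", "json_data", "ui", "cookies" ],
 *       "json_data" : { "data" : [ "Node 1" ] },
 *       "cookies" : {
 *           "save_opened"    : "jstree_open",
 *           "save_selected"  : "jstree_select",
 *           "cookie_options" : { "expires" : 7, "path" : "/" }
 *       }
 *   });
 */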
/*
* jsTree sort plugin
* Sorts items alphabetically (or using any other function)
*/
(function ($) {
$.jstree.plugin("sort", {
__init : function () {
this.get_container()
.bind("load_node.jstree", $.proxy(function (e, data) {
var obj = this._get_node(data.rslt.obj);
obj = obj === -1 ? this.get_container().children("ul") : obj.children("ul");
this.sort(obj);
}, this))
.bind("rename_node.jstree create_node.jstree create.jstree", $.proxy(function (e, data) {
this.sort(data.rslt.obj.parent());
}, this))
.bind("move_node.jstree", $.proxy(function (e, data) {
var m = data.rslt.np == -1 ? this.get_container() : data.rslt.np;
this.sort(m.children("ul"));
}, this));
},
defaults : function (a, b) { return this.get_text(a) > this.get_text(b) ? 1 : -1; },
_fn : {
sort : function (obj) {
var s = this._get_settings().sort,
t = this;
obj.append($.makeArray(obj.children("li")).sort($.proxy(s, t)));
obj.find("> li > ul").each(function() { t.sort($(this)); });
this.clean_node(obj);
}
}
});
})(jQuery);
//*/
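/*
 * Usage sketch (illustrative only) - the sort setting is a single compare
 * function (the default above sorts alphabetically by node text). This
 * variant sorts in reverse order; "this" is the tree instance.
 *
 *   $("#demo-tree").jstree({
 *       "plugins" : [ "themes", "json_data", "ui", "sort" ],
 *       "json_data" : { "data" : [ "Beta", "Alpha" ] },
 *       "sort" : function (a, b) {
 *           return this.get_text(a) > this.get_text(b) ? -1 : 1;
 *       }
 *   });
 */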
/*
* jsTree DND plugin
* Drag and drop plugin for moving/copying nodes
*/
(function ($) {
var o = false,
r = false,
m = false,
ml = false,
sli = false,
sti = false,
dir1 = false,
dir2 = false,
last_pos = false;
$.vakata.dnd = {
is_down : false,
is_drag : false,
helper : false,
scroll_spd : 10,
init_x : 0,
init_y : 0,
threshold : 5,
helper_left : 5,
helper_top : 10,
user_data : {},
drag_start : function (e, data, html) {
if($.vakata.dnd.is_drag) { $.vakata.dnd.drag_stop({}); }
try {
e.currentTarget.unselectable = "on";
e.currentTarget.onselectstart = function() { return false; };
if(e.currentTarget.style) { e.currentTarget.style.MozUserSelect = "none"; }
} catch(err) { }
$.vakata.dnd.init_x = e.pageX;
$.vakata.dnd.init_y = e.pageY;
$.vakata.dnd.user_data = data;
$.vakata.dnd.is_down = true;
$.vakata.dnd.helper = $("<div id='vakata-dragged' />").html(html); //.fadeTo(10,0.25);
$(document).bind("mousemove", $.vakata.dnd.drag);
$(document).bind("mouseup", $.vakata.dnd.drag_stop);
return false;
},
drag : function (e) {
if(!$.vakata.dnd.is_down) { return; }
if(!$.vakata.dnd.is_drag) {
if(Math.abs(e.pageX - $.vakata.dnd.init_x) > $.vakata.dnd.threshold || Math.abs(e.pageY - $.vakata.dnd.init_y) > $.vakata.dnd.threshold) {
$.vakata.dnd.helper.appendTo("body");
$.vakata.dnd.is_drag = true;
$(document).triggerHandler("drag_start.vakata", { "event" : e, "data" : $.vakata.dnd.user_data });
}
else { return; }
}
// maybe use a scrolling parent element instead of document?
if(e.type === "mousemove") { // thought of adding scroll in order to move the helper, but mouse poisition is n/a
var d = $(document), t = d.scrollTop(), l = d.scrollLeft();
if(e.pageY - t < 20) {
if(sti && dir1 === "down") { clearInterval(sti); sti = false; }
if(!sti) { dir1 = "up"; sti = setInterval(function () { $(document).scrollTop($(document).scrollTop() - $.vakata.dnd.scroll_spd); }, 150); }
}
else {
if(sti && dir1 === "up") { clearInterval(sti); sti = false; }
}
if($(window).height() - (e.pageY - t) < 20) {
if(sti && dir1 === "up") { clearInterval(sti); sti = false; }
if(!sti) { dir1 = "down"; sti = setInterval(function () { $(document).scrollTop($(document).scrollTop() + $.vakata.dnd.scroll_spd); }, 150); }
}
else {
if(sti && dir1 === "down") { clearInterval(sti); sti = false; }
}
if(e.pageX - l < 20) {
if(sli && dir2 === "right") { clearInterval(sli); sli = false; }
if(!sli) { dir2 = "left"; sli = setInterval(function () { $(document).scrollLeft($(document).scrollLeft() - $.vakata.dnd.scroll_spd); }, 150); }
}
else {
if(sli && dir2 === "left") { clearInterval(sli); sli = false; }
}
if($(window).width() - (e.pageX - l) < 20) {
if(sli && dir2 === "left") { clearInterval(sli); sli = false; }
if(!sli) { dir2 = "right"; sli = setInterval(function () { $(document).scrollLeft($(document).scrollLeft() + $.vakata.dnd.scroll_spd); }, 150); }
}
else {
if(sli && dir2 === "right") { clearInterval(sli); sli = false; }
}
}
$.vakata.dnd.helper.css({ left : (e.pageX + $.vakata.dnd.helper_left) + "px", top : (e.pageY + $.vakata.dnd.helper_top) + "px" });
$(document).triggerHandler("drag.vakata", { "event" : e, "data" : $.vakata.dnd.user_data });
},
drag_stop : function (e) {
if(sli) { clearInterval(sli); }
if(sti) { clearInterval(sti); }
$(document).unbind("mousemove", $.vakata.dnd.drag);
$(document).unbind("mouseup", $.vakata.dnd.drag_stop);
$(document).triggerHandler("drag_stop.vakata", { "event" : e, "data" : $.vakata.dnd.user_data });
$.vakata.dnd.helper.remove();
$.vakata.dnd.init_x = 0;
$.vakata.dnd.init_y = 0;
$.vakata.dnd.user_data = {};
$.vakata.dnd.is_down = false;
$.vakata.dnd.is_drag = false;
}
};
$(function() {
var css_string = '#vakata-dragged { display:block; margin:0 0 0 0; padding:4px 4px 4px 24px; position:absolute; top:-2000px; line-height:16px; z-index:10000; } ';
$.vakata.css.add_sheet({ str : css_string, title : "vakata" });
});
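/*
 * Usage sketch (illustrative only) - $.vakata.dnd is a generic helper and can
 * be used outside of jstree: start a drag from a mousedown handler with
 * arbitrary user data and helper HTML, then observe progress through the
 * "drag_start.vakata", "drag.vakata" and "drag_stop.vakata" document events.
 * ".demo-draggable" is a hypothetical selector.
 *
 *   $(document)
 *       .delegate(".demo-draggable", "mousedown", function (e) {
 *           return $.vakata.dnd.drag_start(e, { "item" : this.id }, "dragging " + this.id);
 *       })
 *       .bind("drag_stop.vakata", function (e, data) {
 *           // data.data is the object passed to drag_start above
 *       });
 */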
$.jstree.plugin("dnd", {
__init : function () {
this.data.dnd = {
active : false,
after : false,
inside : false,
before : false,
off : false,
prepared : false,
w : 0,
to1 : false,
to2 : false,
cof : false,
cw : false,
ch : false,
i1 : false,
i2 : false,
mto : false
};
this.get_container()
.bind("mouseenter.jstree", $.proxy(function (e) {
if($.vakata.dnd.is_drag && $.vakata.dnd.user_data.jstree) {
if(this.data.themes) {
m.attr("class", "jstree-" + this.data.themes.theme);
if(ml) { ml.attr("class", "jstree-" + this.data.themes.theme); }
$.vakata.dnd.helper.attr("class", "jstree-dnd-helper jstree-" + this.data.themes.theme);
}
//if($(e.currentTarget).find("> ul > li").length === 0) {
if(e.currentTarget === e.target && $.vakata.dnd.user_data.obj && $($.vakata.dnd.user_data.obj).length && $($.vakata.dnd.user_data.obj).parents(".jstree:eq(0)")[0] !== e.target) { // node should not be from the same tree
var tr = $.jstree._reference(e.target), dc;
if(tr.data.dnd.foreign) {
dc = tr._get_settings().dnd.drag_check.call(this, { "o" : o, "r" : tr.get_container(), is_root : true });
if(dc === true || dc.inside === true || dc.before === true || dc.after === true) {
$.vakata.dnd.helper.children("ins").attr("class","jstree-ok");
}
}
else {
tr.prepare_move(o, tr.get_container(), "last");
if(tr.check_move()) {
$.vakata.dnd.helper.children("ins").attr("class","jstree-ok");
}
}
}
}
}, this))
.bind("mouseup.jstree", $.proxy(function (e) {
//if($.vakata.dnd.is_drag && $.vakata.dnd.user_data.jstree && $(e.currentTarget).find("> ul > li").length === 0) {
if($.vakata.dnd.is_drag && $.vakata.dnd.user_data.jstree && e.currentTarget === e.target && $.vakata.dnd.user_data.obj && $($.vakata.dnd.user_data.obj).length && $($.vakata.dnd.user_data.obj).parents(".jstree:eq(0)")[0] !== e.target) { // node should not be from the same tree
var tr = $.jstree._reference(e.currentTarget), dc;
if(tr.data.dnd.foreign) {
dc = tr._get_settings().dnd.drag_check.call(this, { "o" : o, "r" : tr.get_container(), is_root : true });
if(dc === true || dc.inside === true || dc.before === true || dc.after === true) {
tr._get_settings().dnd.drag_finish.call(this, { "o" : o, "r" : tr.get_container(), is_root : true });
}
}
else {
tr.move_node(o, tr.get_container(), "last", e[tr._get_settings().dnd.copy_modifier + "Key"]);
}
}
}, this))
.bind("mouseleave.jstree", $.proxy(function (e) {
if(e.relatedTarget && e.relatedTarget.id && e.relatedTarget.id === "jstree-marker-line") {
return false;
}
if($.vakata.dnd.is_drag && $.vakata.dnd.user_data.jstree) {
if(this.data.dnd.i1) { clearInterval(this.data.dnd.i1); }
if(this.data.dnd.i2) { clearInterval(this.data.dnd.i2); }
if(this.data.dnd.to1) { clearTimeout(this.data.dnd.to1); }
if(this.data.dnd.to2) { clearTimeout(this.data.dnd.to2); }
if($.vakata.dnd.helper.children("ins").hasClass("jstree-ok")) {
$.vakata.dnd.helper.children("ins").attr("class","jstree-invalid");
}
}
}, this))
.bind("mousemove.jstree", $.proxy(function (e) {
if($.vakata.dnd.is_drag && $.vakata.dnd.user_data.jstree) {
var cnt = this.get_container()[0];
// Horizontal scroll
if(e.pageX + 24 > this.data.dnd.cof.left + this.data.dnd.cw) {
if(this.data.dnd.i1) { clearInterval(this.data.dnd.i1); }
this.data.dnd.i1 = setInterval($.proxy(function () { this.scrollLeft += $.vakata.dnd.scroll_spd; }, cnt), 100);
}
else if(e.pageX - 24 < this.data.dnd.cof.left) {
if(this.data.dnd.i1) { clearInterval(this.data.dnd.i1); }
this.data.dnd.i1 = setInterval($.proxy(function () { this.scrollLeft -= $.vakata.dnd.scroll_spd; }, cnt), 100);
}
else {
if(this.data.dnd.i1) { clearInterval(this.data.dnd.i1); }
}
// Vertical scroll
if(e.pageY + 24 > this.data.dnd.cof.top + this.data.dnd.ch) {
if(this.data.dnd.i2) { clearInterval(this.data.dnd.i2); }
this.data.dnd.i2 = setInterval($.proxy(function () { this.scrollTop += $.vakata.dnd.scroll_spd; }, cnt), 100);
}
else if(e.pageY - 24 < this.data.dnd.cof.top) {
if(this.data.dnd.i2) { clearInterval(this.data.dnd.i2); }
this.data.dnd.i2 = setInterval($.proxy(function () { this.scrollTop -= $.vakata.dnd.scroll_spd; }, cnt), 100);
}
else {
if(this.data.dnd.i2) { clearInterval(this.data.dnd.i2); }
}
}
}, this))
.bind("scroll.jstree", $.proxy(function (e) {
if($.vakata.dnd.is_drag && $.vakata.dnd.user_data.jstree && m && ml) {
m.hide();
ml.hide();
}
}, this))
.delegate("a", "mousedown.jstree", $.proxy(function (e) {
if(e.which === 1) {
this.start_drag(e.currentTarget, e);
return false;
}
}, this))
.delegate("a", "mouseenter.jstree", $.proxy(function (e) {
if($.vakata.dnd.is_drag && $.vakata.dnd.user_data.jstree) {
this.dnd_enter(e.currentTarget);
}
}, this))
.delegate("a", "mousemove.jstree", $.proxy(function (e) {
if($.vakata.dnd.is_drag && $.vakata.dnd.user_data.jstree) {
if(!r || !r.length || r.children("a")[0] !== e.currentTarget) {
this.dnd_enter(e.currentTarget);
}
if(typeof this.data.dnd.off.top === "undefined") { this.data.dnd.off = $(e.target).offset(); }
this.data.dnd.w = (e.pageY - (this.data.dnd.off.top || 0)) % this.data.core.li_height;
if(this.data.dnd.w < 0) { this.data.dnd.w += this.data.core.li_height; }
this.dnd_show();
}
}, this))
.delegate("a", "mouseleave.jstree", $.proxy(function (e) {
if($.vakata.dnd.is_drag && $.vakata.dnd.user_data.jstree) {
if(e.relatedTarget && e.relatedTarget.id && e.relatedTarget.id === "jstree-marker-line") {
return false;
}
if(m) { m.hide(); }
if(ml) { ml.hide(); }
/*
var ec = $(e.currentTarget).closest("li"),
er = $(e.relatedTarget).closest("li");
if(er[0] !== ec.prev()[0] && er[0] !== ec.next()[0]) {
if(m) { m.hide(); }
if(ml) { ml.hide(); }
}
*/
this.data.dnd.mto = setTimeout(
(function (t) { return function () { t.dnd_leave(e); }; })(this),
0);
}
}, this))
.delegate("a", "mouseup.jstree", $.proxy(function (e) {
if($.vakata.dnd.is_drag && $.vakata.dnd.user_data.jstree) {
this.dnd_finish(e);
}
}, this));
$(document)
.bind("drag_stop.vakata", $.proxy(function () {
if(this.data.dnd.to1) { clearTimeout(this.data.dnd.to1); }
if(this.data.dnd.to2) { clearTimeout(this.data.dnd.to2); }
if(this.data.dnd.i1) { clearInterval(this.data.dnd.i1); }
if(this.data.dnd.i2) { clearInterval(this.data.dnd.i2); }
this.data.dnd.after = false;
this.data.dnd.before = false;
this.data.dnd.inside = false;
this.data.dnd.off = false;
this.data.dnd.prepared = false;
this.data.dnd.w = false;
this.data.dnd.to1 = false;
this.data.dnd.to2 = false;
this.data.dnd.i1 = false;
this.data.dnd.i2 = false;
this.data.dnd.active = false;
this.data.dnd.foreign = false;
if(m) { m.css({ "top" : "-2000px" }); }
if(ml) { ml.css({ "top" : "-2000px" }); }
}, this))
.bind("drag_start.vakata", $.proxy(function (e, data) {
if(data.data.jstree) {
var et = $(data.event.target);
if(et.closest(".jstree").hasClass("jstree-" + this.get_index())) {
this.dnd_enter(et);
}
}
}, this));
/*
.bind("keydown.jstree-" + this.get_index() + " keyup.jstree-" + this.get_index(), $.proxy(function(e) {
if($.vakata.dnd.is_drag && $.vakata.dnd.user_data.jstree && !this.data.dnd.foreign) {
var h = $.vakata.dnd.helper.children("ins");
if(e[this._get_settings().dnd.copy_modifier + "Key"] && h.hasClass("jstree-ok")) {
h.parent().html(h.parent().html().replace(/ \(Copy\)$/, "") + " (Copy)");
}
else {
h.parent().html(h.parent().html().replace(/ \(Copy\)$/, ""));
}
}
}, this)); */
var s = this._get_settings().dnd;
if(s.drag_target) {
$(document)
.delegate(s.drag_target, "mousedown.jstree-" + this.get_index(), $.proxy(function (e) {
o = e.target;
$.vakata.dnd.drag_start(e, { jstree : true, obj : e.target }, "<ins class='jstree-icon'></ins>" + $(e.target).text() );
if(this.data.themes) {
if(m) { m.attr("class", "jstree-" + this.data.themes.theme); }
if(ml) { ml.attr("class", "jstree-" + this.data.themes.theme); }
$.vakata.dnd.helper.attr("class", "jstree-dnd-helper jstree-" + this.data.themes.theme);
}
$.vakata.dnd.helper.children("ins").attr("class","jstree-invalid");
var cnt = this.get_container();
this.data.dnd.cof = cnt.offset();
this.data.dnd.cw = parseInt(cnt.width(),10);
this.data.dnd.ch = parseInt(cnt.height(),10);
this.data.dnd.foreign = true;
e.preventDefault();
}, this));
}
if(s.drop_target) {
$(document)
.delegate(s.drop_target, "mouseenter.jstree-" + this.get_index(), $.proxy(function (e) {
if(this.data.dnd.active && this._get_settings().dnd.drop_check.call(this, { "o" : o, "r" : $(e.target), "e" : e })) {
$.vakata.dnd.helper.children("ins").attr("class","jstree-ok");
}
}, this))
.delegate(s.drop_target, "mouseleave.jstree-" + this.get_index(), $.proxy(function (e) {
if(this.data.dnd.active) {
$.vakata.dnd.helper.children("ins").attr("class","jstree-invalid");
}
}, this))
.delegate(s.drop_target, "mouseup.jstree-" + this.get_index(), $.proxy(function (e) {
if(this.data.dnd.active && $.vakata.dnd.helper.children("ins").hasClass("jstree-ok")) {
this._get_settings().dnd.drop_finish.call(this, { "o" : o, "r" : $(e.target), "e" : e });
}
}, this));
}
},
defaults : {
copy_modifier : "ctrl",
check_timeout : 100,
open_timeout : 500,
drop_target : ".jstree-drop",
drop_check : function (data) { return true; },
drop_finish : $.noop,
drag_target : ".jstree-draggable",
drag_finish : $.noop,
drag_check : function (data) { return { after : false, before : false, inside : true }; }
},
_fn : {
dnd_prepare : function () {
if(!r || !r.length) { return; }
this.data.dnd.off = r.offset();
if(this._get_settings().core.rtl) {
this.data.dnd.off.right = this.data.dnd.off.left + r.width();
}
if(this.data.dnd.foreign) {
var a = this._get_settings().dnd.drag_check.call(this, { "o" : o, "r" : r });
this.data.dnd.after = a.after;
this.data.dnd.before = a.before;
this.data.dnd.inside = a.inside;
this.data.dnd.prepared = true;
return this.dnd_show();
}
this.prepare_move(o, r, "before");
this.data.dnd.before = this.check_move();
this.prepare_move(o, r, "after");
this.data.dnd.after = this.check_move();
if(this._is_loaded(r)) {
this.prepare_move(o, r, "inside");
this.data.dnd.inside = this.check_move();
}
else {
this.data.dnd.inside = false;
}
this.data.dnd.prepared = true;
return this.dnd_show();
},
dnd_show : function () {
if(!this.data.dnd.prepared) { return; }
var o = ["before","inside","after"],
r = false,
rtl = this._get_settings().core.rtl,
pos;
if(this.data.dnd.w < this.data.core.li_height/3) { o = ["before","inside","after"]; }
else if(this.data.dnd.w <= this.data.core.li_height*2/3) {
o = this.data.dnd.w < this.data.core.li_height/2 ? ["inside","before","after"] : ["inside","after","before"];
}
else { o = ["after","inside","before"]; }
$.each(o, $.proxy(function (i, val) {
if(this.data.dnd[val]) {
$.vakata.dnd.helper.children("ins").attr("class","jstree-ok");
r = val;
return false;
}
}, this));
if(r === false) { $.vakata.dnd.helper.children("ins").attr("class","jstree-invalid"); }
pos = rtl ? (this.data.dnd.off.right - 18) : (this.data.dnd.off.left + 10);
switch(r) {
case "before":
m.css({ "left" : pos + "px", "top" : (this.data.dnd.off.top - 6) + "px" }).show();
if(ml) { ml.css({ "left" : (pos + 8) + "px", "top" : (this.data.dnd.off.top - 1) + "px" }).show(); }
break;
case "after":
m.css({ "left" : pos + "px", "top" : (this.data.dnd.off.top + this.data.core.li_height - 6) + "px" }).show();
if(ml) { ml.css({ "left" : (pos + 8) + "px", "top" : (this.data.dnd.off.top + this.data.core.li_height - 1) + "px" }).show(); }
break;
case "inside":
m.css({ "left" : pos + ( rtl ? -4 : 4) + "px", "top" : (this.data.dnd.off.top + this.data.core.li_height/2 - 5) + "px" }).show();
if(ml) { ml.hide(); }
break;
default:
m.hide();
if(ml) { ml.hide(); }
break;
}
last_pos = r;
return r;
},
dnd_open : function () {
this.data.dnd.to2 = false;
this.open_node(r, $.proxy(this.dnd_prepare,this), true);
},
dnd_finish : function (e) {
if(this.data.dnd.foreign) {
if(this.data.dnd.after || this.data.dnd.before || this.data.dnd.inside) {
this._get_settings().dnd.drag_finish.call(this, { "o" : o, "r" : r, "p" : last_pos });
}
}
else {
this.dnd_prepare();
this.move_node(o, r, last_pos, e[this._get_settings().dnd.copy_modifier + "Key"]);
}
o = false;
r = false;
m.hide();
if(ml) { ml.hide(); }
},
dnd_enter : function (obj) {
if(this.data.dnd.mto) {
clearTimeout(this.data.dnd.mto);
this.data.dnd.mto = false;
}
var s = this._get_settings().dnd;
this.data.dnd.prepared = false;
r = this._get_node(obj);
if(s.check_timeout) {
// do the calculations after a minimal timeout (users tend to drag quickly to the desired location)
if(this.data.dnd.to1) { clearTimeout(this.data.dnd.to1); }
this.data.dnd.to1 = setTimeout($.proxy(this.dnd_prepare, this), s.check_timeout);
}
else {
this.dnd_prepare();
}
if(s.open_timeout) {
if(this.data.dnd.to2) { clearTimeout(this.data.dnd.to2); }
if(r && r.length && r.hasClass("jstree-closed")) {
// if the node is closed - open it, then recalculate
this.data.dnd.to2 = setTimeout($.proxy(this.dnd_open, this), s.open_timeout);
}
}
else {
if(r && r.length && r.hasClass("jstree-closed")) {
this.dnd_open();
}
}
},
dnd_leave : function (e) {
this.data.dnd.after = false;
this.data.dnd.before = false;
this.data.dnd.inside = false;
$.vakata.dnd.helper.children("ins").attr("class","jstree-invalid");
m.hide();
if(ml) { ml.hide(); }
if(r && r[0] === e.target.parentNode) {
if(this.data.dnd.to1) {
clearTimeout(this.data.dnd.to1);
this.data.dnd.to1 = false;
}
if(this.data.dnd.to2) {
clearTimeout(this.data.dnd.to2);
this.data.dnd.to2 = false;
}
}
},
start_drag : function (obj, e) {
o = this._get_node(obj);
if(this.data.ui && this.is_selected(o)) { o = this._get_node(null, true); }
var dt = o.length > 1 ? this._get_string("multiple_selection") : this.get_text(o),
cnt = this.get_container();
if(!this._get_settings().core.html_titles) { dt = dt.replace(/</ig,"&lt;").replace(/>/ig,"&gt;"); }
$.vakata.dnd.drag_start(e, { jstree : true, obj : o }, "<ins class='jstree-icon'></ins>" + dt );
if(this.data.themes) {
if(m) { m.attr("class", "jstree-" + this.data.themes.theme); }
if(ml) { ml.attr("class", "jstree-" + this.data.themes.theme); }
$.vakata.dnd.helper.attr("class", "jstree-dnd-helper jstree-" + this.data.themes.theme);
}
this.data.dnd.cof = cnt.offset();
this.data.dnd.cw = parseInt(cnt.width(),10);
this.data.dnd.ch = parseInt(cnt.height(),10);
this.data.dnd.active = true;
}
}
});
$(function() {
var css_string = '' +
'#vakata-dragged ins { display:block; text-decoration:none; width:16px; height:16px; margin:0 0 0 0; padding:0; position:absolute; top:4px; left:4px; ' +
' -moz-border-radius:4px; border-radius:4px; -webkit-border-radius:4px; ' +
'} ' +
'#vakata-dragged .jstree-ok { background:green; } ' +
'#vakata-dragged .jstree-invalid { background:red; } ' +
'#jstree-marker { padding:0; margin:0; font-size:12px; overflow:hidden; height:12px; width:8px; position:absolute; top:-30px; z-index:10001; background-repeat:no-repeat; display:none; background-color:transparent; text-shadow:1px 1px 1px white; color:black; line-height:10px; } ' +
'#jstree-marker-line { padding:0; margin:0; line-height:0%; font-size:1px; overflow:hidden; height:1px; width:100px; position:absolute; top:-30px; z-index:10000; background-repeat:no-repeat; display:none; background-color:#456c43; ' +
' cursor:pointer; border:1px solid #eeeeee; border-left:0; -moz-box-shadow: 0px 0px 2px #666; -webkit-box-shadow: 0px 0px 2px #666; box-shadow: 0px 0px 2px #666; ' +
' -moz-border-radius:1px; border-radius:1px; -webkit-border-radius:1px; ' +
'}' +
'';
$.vakata.css.add_sheet({ str : css_string, title : "jstree" });
m = $("<div />").attr({ id : "jstree-marker" }).hide().html("»")
.bind("mouseleave mouseenter", function (e) {
m.hide();
ml.hide();
e.preventDefault();
e.stopImmediatePropagation();
return false;
})
.appendTo("body");
ml = $("<div />").attr({ id : "jstree-marker-line" }).hide()
.bind("mouseup", function (e) {
if(r && r.length) {
r.children("a").trigger(e);
e.preventDefault();
e.stopImmediatePropagation();
return false;
}
})
.bind("mouseleave", function (e) {
var rt = $(e.relatedTarget);
if(rt.is(".jstree") || rt.closest(".jstree").length === 0) {
if(r && r.length) {
r.children("a").trigger(e);
m.hide();
ml.hide();
e.preventDefault();
e.stopImmediatePropagation();
return false;
}
}
})
.appendTo("body");
$(document).bind("drag_start.vakata", function (e, data) {
if(data.data.jstree) { m.show(); if(ml) { ml.show(); } }
});
$(document).bind("drag_stop.vakata", function (e, data) {
if(data.data.jstree) { m.hide(); if(ml) { ml.hide(); } }
});
});
})(jQuery);
//*/
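/*
 * Usage sketch (illustrative only) - accepting drags from outside the tree.
 * Elements matching drag_target can be dragged in; drag_finish receives the
 * dragged DOM element (data.o), the target node (data.r) and the drop
 * position (data.p). The selectors and the created node are hypothetical.
 *
 *   $("#demo-tree").jstree({
 *       "plugins" : [ "themes", "json_data", "ui", "dnd" ],
 *       "json_data" : { "data" : [ "Node 1" ] },
 *       "dnd" : {
 *           "drag_target" : ".demo-draggable",
 *           "drag_finish" : function (data) {
 *               // e.g. create a node at the drop position
 *               this.create_node(data.r, data.p || "last", { "data" : $(data.o).text() });
 *           }
 *       }
 *   });
 */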
/*
* jsTree checkbox plugin
* Inserts checkboxes in front of every node
* Depends on the ui plugin
* DOES NOT WORK NICELY WITH MULTITREE DRAG'N'DROP
*/
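/*
 * Usage sketch (illustrative only) - a three-state checkbox tree alongside
 * the regular ui selection (override_ui left false). "#demo-tree" is a
 * hypothetical container.
 *
 *   $("#demo-tree").jstree({
 *       "plugins" : [ "themes", "json_data", "ui", "checkbox" ],
 *       "json_data" : { "data" : [ "Node 1", "Node 2" ] },
 *       "checkbox" : { "two_state" : false, "real_checkboxes" : true }
 *   });
 *   // var checked = $.jstree._reference("#demo-tree").get_checked();
 */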
(function ($) {
$.jstree.plugin("checkbox", {
__init : function () {
this.data.checkbox.noui = this._get_settings().checkbox.override_ui;
if(this.data.ui && this.data.checkbox.noui) {
this.select_node = this.deselect_node = this.deselect_all = $.noop;
this.get_selected = this.get_checked;
}
this.get_container()
.bind("open_node.jstree create_node.jstree clean_node.jstree refresh.jstree", $.proxy(function (e, data) {
this._prepare_checkboxes(data.rslt.obj);
}, this))
.bind("loaded.jstree", $.proxy(function (e) {
this._prepare_checkboxes();
}, this))
.delegate( (this.data.ui && this.data.checkbox.noui ? "a" : "ins.jstree-checkbox") , "click.jstree", $.proxy(function (e) {
e.preventDefault();
if(this._get_node(e.target).hasClass("jstree-checked")) { this.uncheck_node(e.target); }
else { this.check_node(e.target); }
if(this.data.ui && this.data.checkbox.noui) {
this.save_selected();
if(this.data.cookies) { this.save_cookie("select_node"); }
}
else {
e.stopImmediatePropagation();
return false;
}
}, this));
},
defaults : {
override_ui : false,
two_state : false,
real_checkboxes : false,
checked_parent_open : true,
real_checkboxes_names : function (n) { return [ ("check_" + (n[0].id || Math.ceil(Math.random() * 10000))) , 1]; }
},
__destroy : function () {
this.get_container()
.find("input.jstree-real-checkbox").removeClass("jstree-real-checkbox").end()
.find("ins.jstree-checkbox").remove();
},
_fn : {
_checkbox_notify : function (n, data) {
if(data.checked) {
this.check_node(n, false);
}
},
_prepare_checkboxes : function (obj) {
obj = !obj || obj == -1 ? this.get_container().find("> ul > li") : this._get_node(obj);
if(obj === false) { return; } // added for removing root nodes
var c, _this = this, t, ts = this._get_settings().checkbox.two_state, rc = this._get_settings().checkbox.real_checkboxes, rcn = this._get_settings().checkbox.real_checkboxes_names;
obj.each(function () {
t = $(this);
c = t.is("li") && (t.hasClass("jstree-checked") || (rc && t.children(":checked").length)) ? "jstree-checked" : "jstree-unchecked";
t.find("li").andSelf().each(function () {
var $t = $(this), nm;
$t.children("a" + (_this.data.languages ? "" : ":eq(0)") ).not(":has(.jstree-checkbox)").prepend("<ins class='jstree-checkbox'> </ins>").parent().not(".jstree-checked, .jstree-unchecked").addClass( ts ? "jstree-unchecked" : c );
if(rc) {
if(!$t.children(":checkbox").length) {
nm = rcn.call(_this, $t);
$t.prepend("<input type='checkbox' class='jstree-real-checkbox' id='" + nm[0] + "' name='" + nm[0] + "' value='" + nm[1] + "' />");
}
else {
$t.children(":checkbox").addClass("jstree-real-checkbox");
}
if(c === "jstree-checked") {
$t.children(":checkbox").attr("checked","checked");
}
}
if(c === "jstree-checked" && !ts) {
$t.find("li").addClass("jstree-checked");
}
});
});
if(!ts) {
if(obj.length === 1 && obj.is("li")) { this._repair_state(obj); }
if(obj.is("li")) { obj.each(function () { _this._repair_state(this); }); }
else { obj.find("> ul > li").each(function () { _this._repair_state(this); }); }
obj.find(".jstree-checked").parent().parent().each(function () { _this._repair_state(this); });
}
},
change_state : function (obj, state) {
obj = this._get_node(obj);
var coll = false, rc = this._get_settings().checkbox.real_checkboxes;
if(!obj || obj === -1) { return false; }
state = (state === false || state === true) ? state : obj.hasClass("jstree-checked");
if(this._get_settings().checkbox.two_state) {
if(state) {
obj.removeClass("jstree-checked").addClass("jstree-unchecked");
if(rc) { obj.children(":checkbox").removeAttr("checked"); }
}
else {
obj.removeClass("jstree-unchecked").addClass("jstree-checked");
if(rc) { obj.children(":checkbox").attr("checked","checked"); }
}
}
else {
if(state) {
coll = obj.find("li").andSelf();
if(!coll.filter(".jstree-checked, .jstree-undetermined").length) { return false; }
coll.removeClass("jstree-checked jstree-undetermined").addClass("jstree-unchecked");
if(rc) { coll.children(":checkbox").removeAttr("checked"); }
}
else {
coll = obj.find("li").andSelf();
if(!coll.filter(".jstree-unchecked, .jstree-undetermined").length) { return false; }
coll.removeClass("jstree-unchecked jstree-undetermined").addClass("jstree-checked");
if(rc) { coll.children(":checkbox").attr("checked","checked"); }
if(this.data.ui) { this.data.ui.last_selected = obj; }
this.data.checkbox.last_selected = obj;
}
obj.parentsUntil(".jstree", "li").each(function () {
var $this = $(this);
if(state) {
if($this.children("ul").children("li.jstree-checked, li.jstree-undetermined").length) {
$this.parentsUntil(".jstree", "li").andSelf().removeClass("jstree-checked jstree-unchecked").addClass("jstree-undetermined");
if(rc) { $this.parentsUntil(".jstree", "li").andSelf().children(":checkbox").removeAttr("checked"); }
return false;
}
else {
$this.removeClass("jstree-checked jstree-undetermined").addClass("jstree-unchecked");
if(rc) { $this.children(":checkbox").removeAttr("checked"); }
}
}
else {
if($this.children("ul").children("li.jstree-unchecked, li.jstree-undetermined").length) {
$this.parentsUntil(".jstree", "li").andSelf().removeClass("jstree-checked jstree-unchecked").addClass("jstree-undetermined");
if(rc) { $this.parentsUntil(".jstree", "li").andSelf().children(":checkbox").removeAttr("checked"); }
return false;
}
else {
$this.removeClass("jstree-unchecked jstree-undetermined").addClass("jstree-checked");
if(rc) { $this.children(":checkbox").attr("checked","checked"); }
}
}
});
}
if(this.data.ui && this.data.checkbox.noui) { this.data.ui.selected = this.get_checked(); }
this.__callback(obj);
return true;
},
check_node : function (obj) {
if(this.change_state(obj, false)) {
obj = this._get_node(obj);
if(this._get_settings().checkbox.checked_parent_open) {
var t = this;
obj.parents(".jstree-closed").each(function () { t.open_node(this, false, true); });
}
this.__callback({ "obj" : obj });
}
},
uncheck_node : function (obj) {
if(this.change_state(obj, true)) { this.__callback({ "obj" : this._get_node(obj) }); }
},
check_all : function () {
var _this = this,
coll = this._get_settings().checkbox.two_state ? this.get_container_ul().find("li") : this.get_container_ul().children("li");
coll.each(function () {
_this.change_state(this, false);
});
this.__callback();
},
uncheck_all : function () {
var _this = this,
coll = this._get_settings().checkbox.two_state ? this.get_container_ul().find("li") : this.get_container_ul().children("li");
coll.each(function () {
_this.change_state(this, true);
});
this.__callback();
},
is_checked : function(obj) {
obj = this._get_node(obj);
return obj.length ? obj.is(".jstree-checked") : false;
},
get_checked : function (obj, get_all) {
obj = !obj || obj === -1 ? this.get_container() : this._get_node(obj);
return get_all || this._get_settings().checkbox.two_state ? obj.find(".jstree-checked") : obj.find("> ul > .jstree-checked, .jstree-undetermined > ul > .jstree-checked");
},
get_unchecked : function (obj, get_all) {
obj = !obj || obj === -1 ? this.get_container() : this._get_node(obj);
return get_all || this._get_settings().checkbox.two_state ? obj.find(".jstree-unchecked") : obj.find("> ul > .jstree-unchecked, .jstree-undetermined > ul > .jstree-unchecked");
},
show_checkboxes : function () { this.get_container().children("ul").removeClass("jstree-no-checkboxes"); },
hide_checkboxes : function () { this.get_container().children("ul").addClass("jstree-no-checkboxes"); },
_repair_state : function (obj) {
obj = this._get_node(obj);
if(!obj.length) { return; }
var rc = this._get_settings().checkbox.real_checkboxes,
a = obj.find("> ul > .jstree-checked").length,
b = obj.find("> ul > .jstree-undetermined").length,
c = obj.find("> ul > li").length;
if(c === 0) { if(obj.hasClass("jstree-undetermined")) { this.change_state(obj, false); } }
else if(a === 0 && b === 0) { this.change_state(obj, true); }
else if(a === c) { this.change_state(obj, false); }
else {
obj.parentsUntil(".jstree","li").andSelf().removeClass("jstree-checked jstree-unchecked").addClass("jstree-undetermined");
if(rc) { obj.parentsUntil(".jstree", "li").andSelf().children(":checkbox").removeAttr("checked"); }
}
},
reselect : function () {
if(this.data.ui && this.data.checkbox.noui) {
var _this = this,
s = this.data.ui.to_select;
s = $.map($.makeArray(s), function (n) { return "#" + n.toString().replace(/^#/,"").replace(/\\\//g,"/").replace(/\//g,"\\\/").replace(/\\\./g,".").replace(/\./g,"\\.").replace(/\:/g,"\\:"); });
this.deselect_all();
$.each(s, function (i, val) { _this.check_node(val); });
this.__callback();
}
else {
this.__call_old();
}
},
save_loaded : function () {
var _this = this;
this.data.core.to_load = [];
this.get_container_ul().find("li.jstree-closed.jstree-undetermined").each(function () {
if(this.id) { _this.data.core.to_load.push("#" + this.id); }
});
}
}
});
$(function() {
var css_string = '.jstree .jstree-real-checkbox { display:none; } ';
$.vakata.css.add_sheet({ str : css_string, title : "jstree" });
});
})(jQuery);
//*/
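/*
 * Usage sketch (illustrative, not part of the library) - wiring up the checkbox
 * plugin. The container id "#demo" and node id "#node_1" are hypothetical placeholders.
 *
 * $("#demo").jstree({
 *     plugins : [ "themes", "html_data", "ui", "checkbox" ],
 *     checkbox : { two_state : false, real_checkboxes : true }
 * });
 * // check a node, then collect every checked LI (get_all = true):
 * $("#demo").jstree("check_node", "#node_1");
 * var checked = $("#demo").jstree("get_checked", -1, true);
 */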
/*
* jsTree XML plugin
 * The XML data store. Datastores are built by overriding the `load_node` and `_is_loaded` functions.
*/
(function ($) {
$.vakata.xslt = function (xml, xsl, callback) {
var rs = "", xm, xs, processor, support;
// TODO: IE9 no XSLTProcessor, no document.recalc
if(document.recalc) {
xm = document.createElement('xml');
xs = document.createElement('xml');
xm.innerHTML = xml;
xs.innerHTML = xsl;
$("body").append(xm).append(xs);
setTimeout( (function (xm, xs, callback) {
return function () {
callback.call(null, xm.transformNode(xs.XMLDocument));
setTimeout( (function (xm, xs) { return function () { $(xm).remove(); $(xs).remove(); }; })(xm, xs), 200);
};
})(xm, xs, callback), 100);
return true;
}
if(typeof window.DOMParser !== "undefined" && typeof window.XMLHttpRequest !== "undefined" && typeof window.XSLTProcessor === "undefined") {
xml = new DOMParser().parseFromString(xml, "text/xml");
xsl = new DOMParser().parseFromString(xsl, "text/xml");
// alert(xml.transformNode());
// callback.call(null, new XMLSerializer().serializeToString(rs));
}
if(typeof window.DOMParser !== "undefined" && typeof window.XMLHttpRequest !== "undefined" && typeof window.XSLTProcessor !== "undefined") {
processor = new XSLTProcessor();
support = $.isFunction(processor.transformDocument) ? (typeof window.XMLSerializer !== "undefined") : true;
if(!support) { return false; }
xml = new DOMParser().parseFromString(xml, "text/xml");
xsl = new DOMParser().parseFromString(xsl, "text/xml");
if($.isFunction(processor.transformDocument)) {
rs = document.implementation.createDocument("", "", null);
processor.transformDocument(xml, xsl, rs, null);
callback.call(null, new XMLSerializer().serializeToString(rs));
return true;
}
else {
processor.importStylesheet(xsl);
rs = processor.transformToFragment(xml, document);
callback.call(null, $("<div />").append(rs).html());
return true;
}
}
return false;
};
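	/*
	 * Usage sketch (illustrative, not part of the library) - $.vakata.xslt picks an
	 * available transform path (document.recalc on old IE, XSLTProcessor elsewhere)
	 * and hands the resulting HTML string to the callback. `xsl.flat` refers to the
	 * stylesheet string defined just below.
	 *
	 * $.vakata.xslt("<root><item id='n1'><content><name>A</name></content></item></root>",
	 *     xsl.flat, function (html) { console.log(html); });
	 */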
var xsl = {
'nest' : '<' + '?xml version="1.0" encoding="utf-8" ?>' +
'<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" >' +
'<xsl:output method="html" encoding="utf-8" omit-xml-declaration="yes" standalone="no" indent="no" media-type="text/html" />' +
'<xsl:template match="/">' +
' <xsl:call-template name="nodes">' +
' <xsl:with-param name="node" select="/root" />' +
' </xsl:call-template>' +
'</xsl:template>' +
'<xsl:template name="nodes">' +
' <xsl:param name="node" />' +
' <ul>' +
' <xsl:for-each select="$node/item">' +
' <xsl:variable name="children" select="count(./item) > 0" />' +
' <li>' +
' <xsl:attribute name="class">' +
' <xsl:if test="position() = last()">jstree-last </xsl:if>' +
' <xsl:choose>' +
' <xsl:when test="@state = \'open\'">jstree-open </xsl:when>' +
' <xsl:when test="$children or @hasChildren or @state = \'closed\'">jstree-closed </xsl:when>' +
' <xsl:otherwise>jstree-leaf </xsl:otherwise>' +
' </xsl:choose>' +
' <xsl:value-of select="@class" />' +
' </xsl:attribute>' +
' <xsl:for-each select="@*">' +
' <xsl:if test="name() != \'class\' and name() != \'state\' and name() != \'hasChildren\'">' +
' <xsl:attribute name="{name()}"><xsl:value-of select="." /></xsl:attribute>' +
' </xsl:if>' +
' </xsl:for-each>' +
' <ins class="jstree-icon"><xsl:text> </xsl:text></ins>' +
' <xsl:for-each select="content/name">' +
' <a>' +
' <xsl:attribute name="href">' +
' <xsl:choose>' +
' <xsl:when test="@href"><xsl:value-of select="@href" /></xsl:when>' +
' <xsl:otherwise>#</xsl:otherwise>' +
' </xsl:choose>' +
' </xsl:attribute>' +
' <xsl:attribute name="class"><xsl:value-of select="@lang" /> <xsl:value-of select="@class" /></xsl:attribute>' +
' <xsl:attribute name="style"><xsl:value-of select="@style" /></xsl:attribute>' +
' <xsl:for-each select="@*">' +
' <xsl:if test="name() != \'style\' and name() != \'class\' and name() != \'href\'">' +
' <xsl:attribute name="{name()}"><xsl:value-of select="." /></xsl:attribute>' +
' </xsl:if>' +
' </xsl:for-each>' +
' <ins>' +
' <xsl:attribute name="class">jstree-icon ' +
' <xsl:if test="string-length(attribute::icon) > 0 and not(contains(@icon,\'/\'))"><xsl:value-of select="@icon" /></xsl:if>' +
' </xsl:attribute>' +
' <xsl:if test="string-length(attribute::icon) > 0 and contains(@icon,\'/\')"><xsl:attribute name="style">background:url(<xsl:value-of select="@icon" />) center center no-repeat;</xsl:attribute></xsl:if>' +
' <xsl:text> </xsl:text>' +
' </ins>' +
' <xsl:copy-of select="./child::node()" />' +
' </a>' +
' </xsl:for-each>' +
' <xsl:if test="$children or @hasChildren"><xsl:call-template name="nodes"><xsl:with-param name="node" select="current()" /></xsl:call-template></xsl:if>' +
' </li>' +
' </xsl:for-each>' +
' </ul>' +
'</xsl:template>' +
'</xsl:stylesheet>',
'flat' : '<' + '?xml version="1.0" encoding="utf-8" ?>' +
'<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" >' +
'<xsl:output method="html" encoding="utf-8" omit-xml-declaration="yes" standalone="no" indent="no" media-type="text/xml" />' +
'<xsl:template match="/">' +
' <ul>' +
' <xsl:for-each select="//item[not(@parent_id) or @parent_id=0 or not(@parent_id = //item/@id)]">' + /* the last `or` may be removed */
' <xsl:call-template name="nodes">' +
' <xsl:with-param name="node" select="." />' +
' <xsl:with-param name="is_last" select="number(position() = last())" />' +
' </xsl:call-template>' +
' </xsl:for-each>' +
' </ul>' +
'</xsl:template>' +
'<xsl:template name="nodes">' +
' <xsl:param name="node" />' +
' <xsl:param name="is_last" />' +
' <xsl:variable name="children" select="count(//item[@parent_id=$node/attribute::id]) > 0" />' +
' <li>' +
' <xsl:attribute name="class">' +
' <xsl:if test="$is_last = true()">jstree-last </xsl:if>' +
' <xsl:choose>' +
' <xsl:when test="@state = \'open\'">jstree-open </xsl:when>' +
' <xsl:when test="$children or @hasChildren or @state = \'closed\'">jstree-closed </xsl:when>' +
' <xsl:otherwise>jstree-leaf </xsl:otherwise>' +
' </xsl:choose>' +
' <xsl:value-of select="@class" />' +
' </xsl:attribute>' +
' <xsl:for-each select="@*">' +
' <xsl:if test="name() != \'parent_id\' and name() != \'hasChildren\' and name() != \'class\' and name() != \'state\'">' +
' <xsl:attribute name="{name()}"><xsl:value-of select="." /></xsl:attribute>' +
' </xsl:if>' +
' </xsl:for-each>' +
' <ins class="jstree-icon"><xsl:text> </xsl:text></ins>' +
' <xsl:for-each select="content/name">' +
' <a>' +
' <xsl:attribute name="href">' +
' <xsl:choose>' +
' <xsl:when test="@href"><xsl:value-of select="@href" /></xsl:when>' +
' <xsl:otherwise>#</xsl:otherwise>' +
' </xsl:choose>' +
' </xsl:attribute>' +
' <xsl:attribute name="class"><xsl:value-of select="@lang" /> <xsl:value-of select="@class" /></xsl:attribute>' +
' <xsl:attribute name="style"><xsl:value-of select="@style" /></xsl:attribute>' +
' <xsl:for-each select="@*">' +
' <xsl:if test="name() != \'style\' and name() != \'class\' and name() != \'href\'">' +
' <xsl:attribute name="{name()}"><xsl:value-of select="." /></xsl:attribute>' +
' </xsl:if>' +
' </xsl:for-each>' +
' <ins>' +
' <xsl:attribute name="class">jstree-icon ' +
' <xsl:if test="string-length(attribute::icon) > 0 and not(contains(@icon,\'/\'))"><xsl:value-of select="@icon" /></xsl:if>' +
' </xsl:attribute>' +
' <xsl:if test="string-length(attribute::icon) > 0 and contains(@icon,\'/\')"><xsl:attribute name="style">background:url(<xsl:value-of select="@icon" />) center center no-repeat;</xsl:attribute></xsl:if>' +
' <xsl:text> </xsl:text>' +
' </ins>' +
' <xsl:copy-of select="./child::node()" />' +
' </a>' +
' </xsl:for-each>' +
' <xsl:if test="$children">' +
' <ul>' +
' <xsl:for-each select="//item[@parent_id=$node/attribute::id]">' +
' <xsl:call-template name="nodes">' +
' <xsl:with-param name="node" select="." />' +
' <xsl:with-param name="is_last" select="number(position() = last())" />' +
' </xsl:call-template>' +
' </xsl:for-each>' +
' </ul>' +
' </xsl:if>' +
' </li>' +
'</xsl:template>' +
'</xsl:stylesheet>'
},
escape_xml = function(string) {
return string
.toString()
.replace(/&/g, '&amp;')
.replace(/</g, '&lt;')
.replace(/>/g, '&gt;')
.replace(/"/g, '&quot;')
.replace(/'/g, '&apos;');
};
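	// A quick sanity check of escape_xml (illustrative only):
	// escape_xml('1 < 2 & "x"') === '1 &lt; 2 &amp; &quot;x&quot;'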
$.jstree.plugin("xml_data", {
defaults : {
data : false,
ajax : false,
xsl : "flat",
clean_node : false,
correct_state : true,
get_skip_empty : false,
get_include_preamble : true
},
_fn : {
load_node : function (obj, s_call, e_call) { var _this = this; this.load_node_xml(obj, function () { _this.__callback({ "obj" : _this._get_node(obj) }); s_call.call(this); }, e_call); },
_is_loaded : function (obj) {
var s = this._get_settings().xml_data;
obj = this._get_node(obj);
return obj == -1 || !obj || (!s.ajax && !$.isFunction(s.data)) || obj.is(".jstree-open, .jstree-leaf") || obj.children("ul").children("li").size() > 0;
},
load_node_xml : function (obj, s_call, e_call) {
var s = this.get_settings().xml_data,
error_func = function () {},
success_func = function () {};
obj = this._get_node(obj);
if(obj && obj !== -1) {
if(obj.data("jstree-is-loading")) { return; }
else { obj.data("jstree-is-loading",true); }
}
switch(!0) {
case (!s.data && !s.ajax): throw "Neither data nor ajax settings supplied.";
case ($.isFunction(s.data)):
s.data.call(this, obj, $.proxy(function (d) {
this.parse_xml(d, $.proxy(function (d) {
if(d) {
d = d.replace(/ ?xmlns="[^"]*"/ig, "");
if(d.length > 10) {
d = $(d);
if(obj === -1 || !obj) { this.get_container().children("ul").empty().append(d.children()); }
else { obj.children("a.jstree-loading").removeClass("jstree-loading"); obj.append(d); obj.removeData("jstree-is-loading"); }
if(s.clean_node) { this.clean_node(obj); }
if(s_call) { s_call.call(this); }
}
else {
if(obj && obj !== -1) {
obj.children("a.jstree-loading").removeClass("jstree-loading");
obj.removeData("jstree-is-loading");
if(s.correct_state) {
this.correct_state(obj);
if(s_call) { s_call.call(this); }
}
}
else {
if(s.correct_state) {
this.get_container().children("ul").empty();
if(s_call) { s_call.call(this); }
}
}
}
}
}, this));
}, this));
break;
case (!!s.data && !s.ajax) || (!!s.data && !!s.ajax && (!obj || obj === -1)):
if(!obj || obj == -1) {
this.parse_xml(s.data, $.proxy(function (d) {
if(d) {
d = d.replace(/ ?xmlns="[^"]*"/ig, "");
if(d.length > 10) {
d = $(d);
this.get_container().children("ul").empty().append(d.children());
if(s.clean_node) { this.clean_node(obj); }
if(s_call) { s_call.call(this); }
}
}
else {
if(s.correct_state) {
this.get_container().children("ul").empty();
if(s_call) { s_call.call(this); }
}
}
}, this));
}
break;
case (!s.data && !!s.ajax) || (!!s.data && !!s.ajax && obj && obj !== -1):
error_func = function (x, t, e) {
var ef = this.get_settings().xml_data.ajax.error;
if(ef) { ef.call(this, x, t, e); }
if(obj !== -1 && obj.length) {
obj.children("a.jstree-loading").removeClass("jstree-loading");
obj.removeData("jstree-is-loading");
if(t === "success" && s.correct_state) { this.correct_state(obj); }
}
else {
if(t === "success" && s.correct_state) { this.get_container().children("ul").empty(); }
}
if(e_call) { e_call.call(this); }
};
success_func = function (d, t, x) {
d = x.responseText;
var sf = this.get_settings().xml_data.ajax.success;
if(sf) { d = sf.call(this,d,t,x) || d; }
if(d === "" || (d && d.toString && d.toString().replace(/^[\s\n]+$/,"") === "")) {
return error_func.call(this, x, t, "");
}
this.parse_xml(d, $.proxy(function (d) {
if(d) {
d = d.replace(/ ?xmlns="[^"]*"/ig, "");
if(d.length > 10) {
d = $(d);
if(obj === -1 || !obj) { this.get_container().children("ul").empty().append(d.children()); }
else { obj.children("a.jstree-loading").removeClass("jstree-loading"); obj.append(d); obj.removeData("jstree-is-loading"); }
if(s.clean_node) { this.clean_node(obj); }
if(s_call) { s_call.call(this); }
}
else {
if(obj && obj !== -1) {
obj.children("a.jstree-loading").removeClass("jstree-loading");
obj.removeData("jstree-is-loading");
if(s.correct_state) {
this.correct_state(obj);
if(s_call) { s_call.call(this); }
}
}
else {
if(s.correct_state) {
this.get_container().children("ul").empty();
if(s_call) { s_call.call(this); }
}
}
}
}
}, this));
};
s.ajax.context = this;
s.ajax.error = error_func;
s.ajax.success = success_func;
if(!s.ajax.dataType) { s.ajax.dataType = "xml"; }
if($.isFunction(s.ajax.url)) { s.ajax.url = s.ajax.url.call(this, obj); }
if($.isFunction(s.ajax.data)) { s.ajax.data = s.ajax.data.call(this, obj); }
$.ajax(s.ajax);
break;
}
},
parse_xml : function (xml, callback) {
var s = this._get_settings().xml_data;
$.vakata.xslt(xml, xsl[s.xsl], callback);
},
get_xml : function (tp, obj, li_attr, a_attr, is_callback) {
var result = "",
s = this._get_settings(),
_this = this,
tmp1, tmp2, li, a, lang;
if(!tp) { tp = "flat"; }
if(!is_callback) { is_callback = 0; }
obj = this._get_node(obj);
if(!obj || obj === -1) { obj = this.get_container().find("> ul > li"); }
li_attr = $.isArray(li_attr) ? li_attr : [ "id", "class" ];
if(!is_callback && this.data.types && $.inArray(s.types.type_attr, li_attr) === -1) { li_attr.push(s.types.type_attr); }
a_attr = $.isArray(a_attr) ? a_attr : [ ];
if(!is_callback) {
if(s.xml_data.get_include_preamble) {
result += '<' + '?xml version="1.0" encoding="UTF-8"?' + '>';
}
result += "<root>";
}
obj.each(function () {
result += "<item";
li = $(this);
$.each(li_attr, function (i, v) {
var t = li.attr(v);
if(!s.xml_data.get_skip_empty || typeof t !== "undefined") {
result += " " + v + "=\"" + escape_xml((" " + (t || "")).replace(/ jstree[^ ]*/ig,'').replace(/\s+$/ig," ").replace(/^ /,"").replace(/ $/,"")) + "\"";
}
});
if(li.hasClass("jstree-open")) { result += " state=\"open\""; }
if(li.hasClass("jstree-closed")) { result += " state=\"closed\""; }
if(tp === "flat") { result += " parent_id=\"" + escape_xml(is_callback) + "\""; }
result += ">";
result += "<content>";
a = li.children("a");
a.each(function () {
tmp1 = $(this);
lang = false;
result += "<name";
if($.inArray("languages", s.plugins) !== -1) {
$.each(s.languages, function (k, z) {
if(tmp1.hasClass(z)) { result += " lang=\"" + escape_xml(z) + "\""; lang = z; return false; }
});
}
if(a_attr.length) {
$.each(a_attr, function (k, z) {
var t = tmp1.attr(z);
if(!s.xml_data.get_skip_empty || typeof t !== "undefined") {
result += " " + z + "=\"" + escape_xml((" " + t || "").replace(/ jstree[^ ]*/ig,'').replace(/\s+$/ig," ").replace(/^ /,"").replace(/ $/,"")) + "\"";
}
});
}
if(tmp1.children("ins").get(0).className.replace(/jstree[^ ]*|$/ig,'').replace(/^\s+$/ig,"").length) {
result += ' icon="' + escape_xml(tmp1.children("ins").get(0).className.replace(/jstree[^ ]*|$/ig,'').replace(/\s+$/ig," ").replace(/^ /,"").replace(/ $/,"")) + '"';
}
if(tmp1.children("ins").get(0).style.backgroundImage.length) {
result += ' icon="' + escape_xml(tmp1.children("ins").get(0).style.backgroundImage.replace("url(","").replace(")","").replace(/'/ig,"").replace(/"/ig,"")) + '"';
}
result += ">";
result += "<![CDATA[" + _this.get_text(tmp1, lang) + "]]>";
result += "</name>";
});
result += "</content>";
tmp2 = li[0].id || true;
li = li.find("> ul > li");
if(li.length) { tmp2 = _this.get_xml(tp, li, li_attr, a_attr, tmp2); }
else { tmp2 = ""; }
if(tp == "nest") { result += tmp2; }
result += "</item>";
if(tp == "flat") { result += tmp2; }
});
if(!is_callback) { result += "</root>"; }
return result;
}
}
});
})(jQuery);
//*/
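/*
 * Usage sketch (illustrative, not part of the library) - the xml_data store with
 * the default "flat" XSL. The container id and URL are hypothetical placeholders;
 * the server is expected to return XML in the format that get_xml/parse_xml use:
 *
 *   <root>
 *     <item id="node_1" parent_id="0"><content><name>Root node</name></content></item>
 *     <item id="node_2" parent_id="node_1"><content><name>Child</name></content></item>
 *   </root>
 *
 * $("#demo").jstree({
 *     plugins : [ "themes", "xml_data", "ui" ],
 *     xml_data : { ajax : { url : "/tree.xml" }, xsl : "flat" }
 * });
 */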
/*
* jsTree search plugin
* Enables both sync and async search on the tree
* DOES NOT WORK WITH JSON PROGRESSIVE RENDER
*/
(function ($) {
$.expr[':'].jstree_contains = function(a,i,m){
return (a.textContent || a.innerText || "").toLowerCase().indexOf(m[3].toLowerCase())>=0;
};
$.expr[':'].jstree_title_contains = function(a,i,m) {
return (a.getAttribute("title") || "").toLowerCase().indexOf(m[3].toLowerCase())>=0;
};
$.jstree.plugin("search", {
__init : function () {
this.data.search.str = "";
this.data.search.result = $();
if(this._get_settings().search.show_only_matches) {
this.get_container()
.bind("search.jstree", function (e, data) {
$(this).children("ul").find("li").hide().removeClass("jstree-last");
data.rslt.nodes.parentsUntil(".jstree").andSelf().show()
.filter("ul").each(function () { $(this).children("li:visible").eq(-1).addClass("jstree-last"); });
})
.bind("clear_search.jstree", function () {
$(this).children("ul").find("li").css("display","").end().end().jstree("clean_node", -1);
});
}
},
defaults : {
ajax : false,
search_method : "jstree_contains", // for case insensitive - jstree_contains
show_only_matches : false
},
_fn : {
search : function (str, skip_async) {
if($.trim(str) === "") { this.clear_search(); return; }
var s = this.get_settings().search,
t = this,
error_func = function () { },
success_func = function () { };
this.data.search.str = str;
if(!skip_async && s.ajax !== false && this.get_container_ul().find("li.jstree-closed:not(:has(ul)):eq(0)").length > 0) {
this.search.supress_callback = true;
error_func = function () { };
success_func = function (d, t, x) {
var sf = this.get_settings().search.ajax.success;
if(sf) { d = sf.call(this,d,t,x) || d; }
this.data.search.to_open = d;
this._search_open();
};
s.ajax.context = this;
s.ajax.error = error_func;
s.ajax.success = success_func;
if($.isFunction(s.ajax.url)) { s.ajax.url = s.ajax.url.call(this, str); }
if($.isFunction(s.ajax.data)) { s.ajax.data = s.ajax.data.call(this, str); }
if(!s.ajax.data) { s.ajax.data = { "search_string" : str }; }
if(!s.ajax.dataType || /^json/.exec(s.ajax.dataType)) { s.ajax.dataType = "json"; }
$.ajax(s.ajax);
return;
}
if(this.data.search.result.length) { this.clear_search(); }
this.data.search.result = this.get_container().find("a" + (this.data.languages ? "." + this.get_lang() : "" ) + ":" + (s.search_method) + "(" + this.data.search.str + ")");
this.data.search.result.addClass("jstree-search").parent().parents(".jstree-closed").each(function () {
t.open_node(this, false, true);
});
this.__callback({ nodes : this.data.search.result, str : str });
},
clear_search : function (str) {
this.data.search.result.removeClass("jstree-search");
this.__callback(this.data.search.result);
this.data.search.result = $();
},
_search_open : function (is_callback) {
var _this = this,
done = true,
current = [],
remaining = [];
if(this.data.search.to_open.length) {
$.each(this.data.search.to_open, function (i, val) {
if(val == "#") { return true; }
if($(val).length && $(val).is(".jstree-closed")) { current.push(val); }
else { remaining.push(val); }
});
if(current.length) {
this.data.search.to_open = remaining;
$.each(current, function (i, val) {
_this.open_node(val, function () { _this._search_open(true); });
});
done = false;
}
}
if(done) { this.search(this.data.search.str, true); }
}
}
});
})(jQuery);
//*/
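/*
 * Usage sketch (illustrative, not part of the library) - synchronous search with
 * show_only_matches; "#demo" is a hypothetical placeholder.
 *
 * $("#demo").jstree({
 *     plugins : [ "themes", "html_data", "search" ],
 *     search : { show_only_matches : true, search_method : "jstree_contains" }
 * });
 * $("#demo").jstree("search", "needle"); // highlights matches and opens their parents
 * $("#demo").jstree("clear_search");
 */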
/*
* jsTree contextmenu plugin
*/
(function ($) {
$.vakata.context = {
hide_on_mouseleave : false,
cnt : $("<div id='vakata-contextmenu' />"),
vis : false,
tgt : false,
par : false,
func : false,
data : false,
rtl : false,
show : function (s, t, x, y, d, p, rtl) {
$.vakata.context.rtl = !!rtl;
var html = $.vakata.context.parse(s), h, w;
if(!html) { return; }
$.vakata.context.vis = true;
$.vakata.context.tgt = t;
$.vakata.context.par = p || t || null;
$.vakata.context.data = d || null;
$.vakata.context.cnt
.html(html)
.css({ "visibility" : "hidden", "display" : "block", "left" : 0, "top" : 0 });
if($.vakata.context.hide_on_mouseleave) {
$.vakata.context.cnt
.one("mouseleave", function(e) { $.vakata.context.hide(); });
}
h = $.vakata.context.cnt.height();
w = $.vakata.context.cnt.width();
if(x + w > $(document).width()) {
x = $(document).width() - (w + 5);
$.vakata.context.cnt.find("li > ul").addClass("right");
}
if(y + h > $(document).height()) {
y = y - (h + t[0].offsetHeight);
$.vakata.context.cnt.find("li > ul").addClass("bottom");
}
$.vakata.context.cnt
.css({ "left" : x, "top" : y })
.find("li:has(ul)")
.bind("mouseenter", function (e) {
var w = $(document).width(),
h = $(document).height(),
ul = $(this).children("ul").show();
if(w !== $(document).width()) { ul.toggleClass("right"); }
if(h !== $(document).height()) { ul.toggleClass("bottom"); }
})
.bind("mouseleave", function (e) {
$(this).children("ul").hide();
})
.end()
.css({ "visibility" : "visible" })
.show();
$(document).triggerHandler("context_show.vakata");
},
hide : function () {
$.vakata.context.vis = false;
$.vakata.context.cnt.attr("class","").css({ "visibility" : "hidden" });
$(document).triggerHandler("context_hide.vakata");
},
parse : function (s, is_callback) {
if(!s) { return false; }
var str = "",
tmp = false,
was_sep = true;
if(!is_callback) { $.vakata.context.func = {}; }
str += "<ul>";
$.each(s, function (i, val) {
if(!val) { return true; }
$.vakata.context.func[i] = val.action;
if(!was_sep && val.separator_before) {
str += "<li class='vakata-separator vakata-separator-before'></li>";
}
was_sep = false;
str += "<li class='" + (val._class || "") + (val._disabled ? " jstree-contextmenu-disabled " : "") + "'><ins ";
if(val.icon && val.icon.indexOf("/") === -1) { str += " class='" + val.icon + "' "; }
if(val.icon && val.icon.indexOf("/") !== -1) { str += " style='background:url(" + val.icon + ") center center no-repeat;' "; }
str += "> </ins><a href='#' rel='" + i + "'>";
if(val.submenu) {
str += "<span style='float:" + ($.vakata.context.rtl ? "left" : "right") + ";'>»</span>";
}
str += val.label + "</a>";
if(val.submenu) {
tmp = $.vakata.context.parse(val.submenu, true);
if(tmp) { str += tmp; }
}
str += "</li>";
if(val.separator_after) {
str += "<li class='vakata-separator vakata-separator-after'></li>";
was_sep = true;
}
});
str = str.replace(/<li class\='vakata-separator vakata-separator-after'\><\/li\>$/,"");
str += "</ul>";
$(document).triggerHandler("context_parse.vakata");
return str.length > 10 ? str : false;
},
exec : function (i) {
if($.isFunction($.vakata.context.func[i])) {
// if is string - eval and call it!
$.vakata.context.func[i].call($.vakata.context.data, $.vakata.context.par);
return true;
}
else { return false; }
}
};
$(function () {
var css_string = '' +
'#vakata-contextmenu { display:block; visibility:hidden; left:0; top:-200px; position:absolute; margin:0; padding:0; min-width:180px; background:#ebebeb; border:1px solid silver; z-index:10000; *width:180px; } ' +
'#vakata-contextmenu ul { min-width:180px; *width:180px; } ' +
'#vakata-contextmenu ul, #vakata-contextmenu li { margin:0; padding:0; list-style-type:none; display:block; } ' +
'#vakata-contextmenu li { line-height:20px; min-height:20px; position:relative; padding:0px; } ' +
'#vakata-contextmenu li a { padding:1px 6px; line-height:17px; display:block; text-decoration:none; margin:1px 1px 0 1px; } ' +
'#vakata-contextmenu li ins { float:left; width:16px; height:16px; text-decoration:none; margin-right:2px; } ' +
'#vakata-contextmenu li a:hover, #vakata-contextmenu li.vakata-hover > a { background:gray; color:white; } ' +
'#vakata-contextmenu li ul { display:none; position:absolute; top:-2px; left:100%; background:#ebebeb; border:1px solid gray; } ' +
'#vakata-contextmenu .right { right:100%; left:auto; } ' +
'#vakata-contextmenu .bottom { bottom:-1px; top:auto; } ' +
'#vakata-contextmenu li.vakata-separator { min-height:0; height:1px; line-height:1px; font-size:1px; overflow:hidden; margin:0 2px; background:silver; /* border-top:1px solid #fefefe; */ padding:0; } ';
$.vakata.css.add_sheet({ str : css_string, title : "vakata" });
$.vakata.context.cnt
.delegate("a","click", function (e) { e.preventDefault(); })
.delegate("a","mouseup", function (e) {
if(!$(this).parent().hasClass("jstree-contextmenu-disabled") && $.vakata.context.exec($(this).attr("rel"))) {
$.vakata.context.hide();
}
else { $(this).blur(); }
})
.delegate("a","mouseover", function () {
$.vakata.context.cnt.find(".vakata-hover").removeClass("vakata-hover");
})
.appendTo("body");
$(document).bind("mousedown", function (e) { if($.vakata.context.vis && !$.contains($.vakata.context.cnt[0], e.target)) { $.vakata.context.hide(); } });
if(typeof $.hotkeys !== "undefined") {
$(document)
.bind("keydown", "up", function (e) {
if($.vakata.context.vis) {
var o = $.vakata.context.cnt.find("ul:visible").last().children(".vakata-hover").removeClass("vakata-hover").prevAll("li:not(.vakata-separator)").first();
if(!o.length) { o = $.vakata.context.cnt.find("ul:visible").last().children("li:not(.vakata-separator)").last(); }
o.addClass("vakata-hover");
e.stopImmediatePropagation();
e.preventDefault();
}
})
.bind("keydown", "down", function (e) {
if($.vakata.context.vis) {
var o = $.vakata.context.cnt.find("ul:visible").last().children(".vakata-hover").removeClass("vakata-hover").nextAll("li:not(.vakata-separator)").first();
if(!o.length) { o = $.vakata.context.cnt.find("ul:visible").last().children("li:not(.vakata-separator)").first(); }
o.addClass("vakata-hover");
e.stopImmediatePropagation();
e.preventDefault();
}
})
.bind("keydown", "right", function (e) {
if($.vakata.context.vis) {
$.vakata.context.cnt.find(".vakata-hover").children("ul").show().children("li:not(.vakata-separator)").removeClass("vakata-hover").first().addClass("vakata-hover");
e.stopImmediatePropagation();
e.preventDefault();
}
})
.bind("keydown", "left", function (e) {
if($.vakata.context.vis) {
$.vakata.context.cnt.find(".vakata-hover").children("ul").hide().children(".vakata-separator").removeClass("vakata-hover");
e.stopImmediatePropagation();
e.preventDefault();
}
})
.bind("keydown", "esc", function (e) {
$.vakata.context.hide();
e.preventDefault();
})
.bind("keydown", "space", function (e) {
$.vakata.context.cnt.find(".vakata-hover").last().children("a").click();
e.preventDefault();
});
}
});
$.jstree.plugin("contextmenu", {
__init : function () {
this.get_container()
.delegate("a", "contextmenu.jstree", $.proxy(function (e) {
e.preventDefault();
if(!$(e.currentTarget).hasClass("jstree-loading")) {
this.show_contextmenu(e.currentTarget, e.pageX, e.pageY);
}
}, this))
.delegate("a", "click.jstree", $.proxy(function (e) {
if(this.data.contextmenu) {
$.vakata.context.hide();
}
}, this))
.bind("destroy.jstree", $.proxy(function () {
// TODO: move this to the destruct method
if(this.data.contextmenu) {
$.vakata.context.hide();
}
}, this));
$(document).bind("context_hide.vakata", $.proxy(function () { this.data.contextmenu = false; }, this));
},
defaults : {
select_node : false, // requires UI plugin
show_at_node : true,
items : { // Could be a function that should return an object like this one
"create" : {
"separator_before" : false,
"separator_after" : true,
"label" : "Create",
"action" : function (obj) { this.create(obj); }
},
"rename" : {
"separator_before" : false,
"separator_after" : false,
"label" : "Rename",
"action" : function (obj) { this.rename(obj); }
},
"remove" : {
"separator_before" : false,
"icon" : false,
"separator_after" : false,
"label" : "Delete",
"action" : function (obj) { if(this.is_selected(obj)) { this.remove(); } else { this.remove(obj); } }
},
"ccp" : {
"separator_before" : true,
"icon" : false,
"separator_after" : false,
"label" : "Edit",
"action" : false,
"submenu" : {
"cut" : {
"separator_before" : false,
"separator_after" : false,
"label" : "Cut",
"action" : function (obj) { this.cut(obj); }
},
"copy" : {
"separator_before" : false,
"icon" : false,
"separator_after" : false,
"label" : "Copy",
"action" : function (obj) { this.copy(obj); }
},
"paste" : {
"separator_before" : false,
"icon" : false,
"separator_after" : false,
"label" : "Paste",
"action" : function (obj) { this.paste(obj); }
}
}
}
}
},
_fn : {
show_contextmenu : function (obj, x, y) {
obj = this._get_node(obj);
var s = this.get_settings().contextmenu,
a = obj.children("a:visible:eq(0)"),
o = false,
i = false;
if(s.select_node && this.data.ui && !this.is_selected(obj)) {
this.deselect_all();
this.select_node(obj, true);
}
if(s.show_at_node || typeof x === "undefined" || typeof y === "undefined") {
o = a.offset();
x = o.left;
y = o.top + this.data.core.li_height;
}
i = obj.data("jstree") && obj.data("jstree").contextmenu ? obj.data("jstree").contextmenu : s.items;
if($.isFunction(i)) { i = i.call(this, obj); }
this.data.contextmenu = true;
$.vakata.context.show(i, a, x, y, this, obj, this._get_settings().core.rtl);
if(this.data.themes) { $.vakata.context.cnt.attr("class", "jstree-" + this.data.themes.theme + "-context"); }
}
}
});
})(jQuery);
//*/
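/*
 * Usage sketch (illustrative, not part of the library) - replacing the default
 * items. `items` may also be a function returning an object of this shape; each
 * action runs with the tree instance as `this` and receives the clicked node.
 *
 * $("#demo").jstree({
 *     plugins : [ "themes", "html_data", "ui", "contextmenu" ],
 *     contextmenu : {
 *         items : {
 *             "open" : {
 *                 "label" : "Open",
 *                 "action" : function (obj) { this.open_node(obj); }
 *             }
 *         }
 *     }
 * });
 */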
/*
* jsTree types plugin
 * Adds support for types of nodes
 * You can set an attribute on each li node that represents its type.
 * Depending on the type setting, the node may get a custom icon and validation rules
*/
(function ($) {
$.jstree.plugin("types", {
__init : function () {
var s = this._get_settings().types;
this.data.types.attach_to = [];
this.get_container()
.bind("init.jstree", $.proxy(function () {
var types = s.types,
attr = s.type_attr,
icons_css = "",
_this = this;
$.each(types, function (i, tp) {
$.each(tp, function (k, v) {
if(!/^(max_depth|max_children|icon|valid_children)$/.test(k)) { _this.data.types.attach_to.push(k); }
});
if(!tp.icon) { return true; }
if( tp.icon.image || tp.icon.position) {
if(i == "default") { icons_css += '.jstree-' + _this.get_index() + ' a > .jstree-icon { '; }
else { icons_css += '.jstree-' + _this.get_index() + ' li[' + attr + '="' + i + '"] > a > .jstree-icon { '; }
if(tp.icon.image) { icons_css += ' background-image:url(' + tp.icon.image + '); '; }
if(tp.icon.position){ icons_css += ' background-position:' + tp.icon.position + '; '; }
else { icons_css += ' background-position:0 0; '; }
icons_css += '} ';
}
});
if(icons_css !== "") { $.vakata.css.add_sheet({ 'str' : icons_css, title : "jstree-types" }); }
}, this))
.bind("before.jstree", $.proxy(function (e, data) {
var s, t,
o = this._get_settings().types.use_data ? this._get_node(data.args[0]) : false,
d = o && o !== -1 && o.length ? o.data("jstree") : false;
if(d && d.types && d.types[data.func] === false) { e.stopImmediatePropagation(); return false; }
if($.inArray(data.func, this.data.types.attach_to) !== -1) {
if(!data.args[0] || (!data.args[0].tagName && !data.args[0].jquery)) { return; }
s = this._get_settings().types.types;
t = this._get_type(data.args[0]);
if(
(
(s[t] && typeof s[t][data.func] !== "undefined") ||
(s["default"] && typeof s["default"][data.func] !== "undefined")
) && this._check(data.func, data.args[0]) === false
) {
e.stopImmediatePropagation();
return false;
}
}
}, this));
if(is_ie6) {
this.get_container()
.bind("load_node.jstree set_type.jstree", $.proxy(function (e, data) {
var r = data && data.rslt && data.rslt.obj && data.rslt.obj !== -1 ? this._get_node(data.rslt.obj).parent() : this.get_container_ul(),
c = false,
s = this._get_settings().types;
$.each(s.types, function (i, tp) {
if(tp.icon && (tp.icon.image || tp.icon.position)) {
c = i === "default" ? r.find("li > a > .jstree-icon") : r.find("li[" + s.type_attr + "='" + i + "'] > a > .jstree-icon");
if(tp.icon.image) { c.css("backgroundImage","url(" + tp.icon.image + ")"); }
c.css("backgroundPosition", tp.icon.position || "0 0");
}
});
}, this));
}
},
defaults : {
// defines maximum number of root nodes (-1 means unlimited, -2 means disable max_children checking)
max_children : -1,
// defines the maximum depth of the tree (-1 means unlimited, -2 means disable max_depth checking)
max_depth : -1,
// defines valid node types for the root nodes
valid_children : "all",
// whether to use $.data
use_data : false,
// where the type is stored (the rel attribute of the LI element)
type_attr : "rel",
// a list of types
types : {
// the default type
"default" : {
"max_children" : -1,
"max_depth" : -1,
"valid_children": "all"
// Bound functions - you can bind any other function here (using boolean or function)
//"select_node" : true
}
}
},
_fn : {
_types_notify : function (n, data) {
if(data.type && this._get_settings().types.use_data) {
this.set_type(data.type, n);
}
},
_get_type : function (obj) {
obj = this._get_node(obj);
return (!obj || !obj.length) ? false : obj.attr(this._get_settings().types.type_attr) || "default";
},
set_type : function (str, obj) {
obj = this._get_node(obj);
var ret = (!obj.length || !str) ? false : obj.attr(this._get_settings().types.type_attr, str);
if(ret) { this.__callback({ obj : obj, type : str}); }
return ret;
},
_check : function (rule, obj, opts) {
obj = this._get_node(obj);
var v = false, t = this._get_type(obj), d = 0, _this = this, s = this._get_settings().types, data = false;
if(obj === -1) {
if(!!s[rule]) { v = s[rule]; }
else { return; }
}
else {
if(t === false) { return; }
data = s.use_data ? obj.data("jstree") : false;
if(data && data.types && typeof data.types[rule] !== "undefined") { v = data.types[rule]; }
else if(!!s.types[t] && typeof s.types[t][rule] !== "undefined") { v = s.types[t][rule]; }
else if(!!s.types["default"] && typeof s.types["default"][rule] !== "undefined") { v = s.types["default"][rule]; }
}
if($.isFunction(v)) { v = v.call(this, obj); }
if(rule === "max_depth" && obj !== -1 && opts !== false && s.max_depth !== -2 && v !== 0) {
// also include the node itself - otherwise, if it is a root node, it is not checked
obj.children("a:eq(0)").parentsUntil(".jstree","li").each(function (i) {
// check if current depth already exceeds global tree depth
if(s.max_depth !== -1 && s.max_depth - (i + 1) <= 0) { v = 0; return false; }
d = (i === 0) ? v : _this._check(rule, this, false);
// check if current node max depth is already matched or exceeded
if(d !== -1 && d - (i + 1) <= 0) { v = 0; return false; }
// otherwise - set the max depth to the current value minus current depth
if(d >= 0 && (d - (i + 1) < v || v < 0) ) { v = d - (i + 1); }
// if a global tree depth is set and it, minus the levels counted so far, is less than `v` (or `v` is unlimited) - lower `v`
if(s.max_depth >= 0 && (s.max_depth - (i + 1) < v || v < 0) ) { v = s.max_depth - (i + 1); }
});
}
return v;
},
check_move : function () {
if(!this.__call_old()) { return false; }
var m = this._get_move(),
s = m.rt._get_settings().types,
mc = m.rt._check("max_children", m.cr),
md = m.rt._check("max_depth", m.cr),
vc = m.rt._check("valid_children", m.cr),
ch = 0, d = 1, t;
if(vc === "none") { return false; }
if($.isArray(vc) && m.ot && m.ot._get_type) {
m.o.each(function () {
if($.inArray(m.ot._get_type(this), vc) === -1) { d = false; return false; }
});
if(d === false) { return false; }
}
if(s.max_children !== -2 && mc !== -1) {
ch = m.cr === -1 ? this.get_container().find("> ul > li").not(m.o).length : m.cr.find("> ul > li").not(m.o).length;
if(ch + m.o.length > mc) { return false; }
}
if(s.max_depth !== -2 && md !== -1) {
d = 0;
if(md === 0) { return false; }
if(typeof m.o.d === "undefined") {
// TODO: deal with progressive rendering and async when checking max_depth (how to know the depth of the moved node)
t = m.o;
while(t.length > 0) {
t = t.find("> ul > li");
d ++;
}
m.o.d = d;
}
if(md - m.o.d < 0) { return false; }
}
return true;
},
create_node : function (obj, position, js, callback, is_loaded, skip_check) {
if(!skip_check && (is_loaded || this._is_loaded(obj))) {
var p = (typeof position == "string" && position.match(/^before|after$/i) && obj !== -1) ? this._get_parent(obj) : this._get_node(obj),
s = this._get_settings().types,
mc = this._check("max_children", p),
md = this._check("max_depth", p),
vc = this._check("valid_children", p),
ch;
if(typeof js === "string") { js = { data : js }; }
if(!js) { js = {}; }
if(vc === "none") { return false; }
if($.isArray(vc)) {
if(!js.attr || !js.attr[s.type_attr]) {
if(!js.attr) { js.attr = {}; }
js.attr[s.type_attr] = vc[0];
}
else {
if($.inArray(js.attr[s.type_attr], vc) === -1) { return false; }
}
}
if(s.max_children !== -2 && mc !== -1) {
ch = p === -1 ? this.get_container().find("> ul > li").length : p.find("> ul > li").length;
if(ch + 1 > mc) { return false; }
}
if(s.max_depth !== -2 && md !== -1 && (md - 1) < 0) { return false; }
}
return this.__call_old(true, obj, position, js, callback, is_loaded, skip_check);
}
}
});
})(jQuery);
//*/
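/*
 * Usage sketch (illustrative, not part of the library) - two node types stored in
 * the default `rel` attribute of each LI; icon paths are hypothetical placeholders.
 *
 * $("#demo").jstree({
 *     plugins : [ "themes", "html_data", "types" ],
 *     types : {
 *         valid_children : [ "folder" ], // only folders may be root nodes
 *         types : {
 *             "folder" : { valid_children : [ "folder", "file" ], icon : { image : "folder.png" } },
 *             "file"   : { valid_children : "none", icon : { image : "file.png" } }
 *         }
 *     }
 * });
 */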
/*
* jsTree HTML plugin
 * The HTML data store. Datastores are built by replacing the `load_node` and `_is_loaded` functions.
*/
(function ($) {
$.jstree.plugin("html_data", {
__init : function () {
// this used to use html() and clean the whitespace, but this way any attached data was lost
this.data.html_data.original_container_html = this.get_container().find(" > ul > li").clone(true);
// remove white space from LI node - otherwise nodes appear a bit to the right
this.data.html_data.original_container_html.find("li").andSelf().contents().filter(function() { return this.nodeType == 3; }).remove();
},
defaults : {
data : false,
ajax : false,
correct_state : true
},
_fn : {
load_node : function (obj, s_call, e_call) { var _this = this; this.load_node_html(obj, function () { _this.__callback({ "obj" : _this._get_node(obj) }); s_call.call(this); }, e_call); },
_is_loaded : function (obj) {
obj = this._get_node(obj);
return obj == -1 || !obj || (!this._get_settings().html_data.ajax && !$.isFunction(this._get_settings().html_data.data)) || obj.is(".jstree-open, .jstree-leaf") || obj.children("ul").children("li").size() > 0;
},
load_node_html : function (obj, s_call, e_call) {
var d,
s = this.get_settings().html_data,
error_func = function () {},
success_func = function () {};
obj = this._get_node(obj);
if(obj && obj !== -1) {
if(obj.data("jstree-is-loading")) { return; }
else { obj.data("jstree-is-loading",true); }
}
switch(!0) {
case ($.isFunction(s.data)):
s.data.call(this, obj, $.proxy(function (d) {
if(d && d !== "" && d.toString && d.toString().replace(/^[\s\n]+$/,"") !== "") {
d = $(d);
if(!d.is("ul")) { d = $("<ul />").append(d); }
if(obj == -1 || !obj) { this.get_container().children("ul").empty().append(d.children()).find("li, a").filter(function () { return !this.firstChild || !this.firstChild.tagName || this.firstChild.tagName !== "INS"; }).prepend("<ins class='jstree-icon'> </ins>").end().filter("a").children("ins:first-child").not(".jstree-icon").addClass("jstree-icon"); }
else { obj.children("a.jstree-loading").removeClass("jstree-loading"); obj.append(d).children("ul").find("li, a").filter(function () { return !this.firstChild || !this.firstChild.tagName || this.firstChild.tagName !== "INS"; }).prepend("<ins class='jstree-icon'> </ins>").end().filter("a").children("ins:first-child").not(".jstree-icon").addClass("jstree-icon"); obj.removeData("jstree-is-loading"); }
this.clean_node(obj);
if(s_call) { s_call.call(this); }
}
else {
if(obj && obj !== -1) {
obj.children("a.jstree-loading").removeClass("jstree-loading");
obj.removeData("jstree-is-loading");
if(s.correct_state) {
this.correct_state(obj);
if(s_call) { s_call.call(this); }
}
}
else {
if(s.correct_state) {
this.get_container().children("ul").empty();
if(s_call) { s_call.call(this); }
}
}
}
}, this));
break;
case (!s.data && !s.ajax):
if(!obj || obj == -1) {
this.get_container()
.children("ul").empty()
.append(this.data.html_data.original_container_html)
.find("li, a").filter(function () { return !this.firstChild || !this.firstChild.tagName || this.firstChild.tagName !== "INS"; }).prepend("<ins class='jstree-icon'> </ins>").end()
.filter("a").children("ins:first-child").not(".jstree-icon").addClass("jstree-icon");
this.clean_node();
}
if(s_call) { s_call.call(this); }
break;
case (!!s.data && !s.ajax) || (!!s.data && !!s.ajax && (!obj || obj === -1)):
if(!obj || obj == -1) {
d = $(s.data);
if(!d.is("ul")) { d = $("<ul />").append(d); }
this.get_container()
.children("ul").empty().append(d.children())
.find("li, a").filter(function () { return !this.firstChild || !this.firstChild.tagName || this.firstChild.tagName !== "INS"; }).prepend("<ins class='jstree-icon'> </ins>").end()
.filter("a").children("ins:first-child").not(".jstree-icon").addClass("jstree-icon");
this.clean_node();
}
if(s_call) { s_call.call(this); }
break;
case (!s.data && !!s.ajax) || (!!s.data && !!s.ajax && obj && obj !== -1):
obj = this._get_node(obj);
error_func = function (x, t, e) {
var ef = this.get_settings().html_data.ajax.error;
if(ef) { ef.call(this, x, t, e); }
if(obj != -1 && obj.length) {
obj.children("a.jstree-loading").removeClass("jstree-loading");
obj.removeData("jstree-is-loading");
if(t === "success" && s.correct_state) { this.correct_state(obj); }
}
else {
if(t === "success" && s.correct_state) { this.get_container().children("ul").empty(); }
}
if(e_call) { e_call.call(this); }
};
success_func = function (d, t, x) {
var sf = this.get_settings().html_data.ajax.success;
if(sf) { d = sf.call(this,d,t,x) || d; }
if(d === "" || (d && d.toString && d.toString().replace(/^[\s\n]+$/,"") === "")) {
return error_func.call(this, x, t, "");
}
if(d) {
d = $(d);
if(!d.is("ul")) { d = $("<ul />").append(d); }
if(obj == -1 || !obj) { this.get_container().children("ul").empty().append(d.children()).find("li, a").filter(function () { return !this.firstChild || !this.firstChild.tagName || this.firstChild.tagName !== "INS"; }).prepend("<ins class='jstree-icon'> </ins>").end().filter("a").children("ins:first-child").not(".jstree-icon").addClass("jstree-icon"); }
else { obj.children("a.jstree-loading").removeClass("jstree-loading"); obj.append(d).children("ul").find("li, a").filter(function () { return !this.firstChild || !this.firstChild.tagName || this.firstChild.tagName !== "INS"; }).prepend("<ins class='jstree-icon'> </ins>").end().filter("a").children("ins:first-child").not(".jstree-icon").addClass("jstree-icon"); obj.removeData("jstree-is-loading"); }
this.clean_node(obj);
if(s_call) { s_call.call(this); }
}
else {
if(obj && obj !== -1) {
obj.children("a.jstree-loading").removeClass("jstree-loading");
obj.removeData("jstree-is-loading");
if(s.correct_state) {
this.correct_state(obj);
if(s_call) { s_call.call(this); }
}
}
else {
if(s.correct_state) {
this.get_container().children("ul").empty();
if(s_call) { s_call.call(this); }
}
}
}
};
s.ajax.context = this;
s.ajax.error = error_func;
s.ajax.success = success_func;
if(!s.ajax.dataType) { s.ajax.dataType = "html"; }
if($.isFunction(s.ajax.url)) { s.ajax.url = s.ajax.url.call(this, obj); }
if($.isFunction(s.ajax.data)) { s.ajax.data = s.ajax.data.call(this, obj); }
$.ajax(s.ajax);
break;
}
}
}
});
// include the HTML data plugin by default
$.jstree.defaults.plugins.push("html_data");
})(jQuery);
//*/
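/*
 * Usage sketch (illustrative, not part of the library) - lazy loading over ajax.
 * The URL is a hypothetical placeholder; the server should return an UL/LI
 * fragment for the node passed to `data` (-1 stands for the tree root).
 *
 * $("#demo").jstree({
 *     plugins : [ "themes", "html_data", "ui" ],
 *     html_data : {
 *         ajax : {
 *             url : "/children.html",
 *             data : function (n) { return { id : n.attr ? n.attr("id") : 0 }; }
 *         }
 *     }
 * });
 */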
/*
* jsTree themeroller plugin
 * Adds support for jQuery UI themes. Include this at the end of your plugins list, and make sure "themes" is not included.
*/
(function ($) {
$.jstree.plugin("themeroller", {
__init : function () {
var s = this._get_settings().themeroller;
this.get_container()
.addClass("ui-widget-content")
.addClass("jstree-themeroller")
.delegate("a","mouseenter.jstree", function (e) {
if(!$(e.currentTarget).hasClass("jstree-loading")) {
$(this).addClass(s.item_h);
}
})
.delegate("a","mouseleave.jstree", function () {
$(this).removeClass(s.item_h);
})
.bind("init.jstree", $.proxy(function (e, data) {
data.inst.get_container().find("> ul > li > .jstree-loading > ins").addClass("ui-icon-refresh");
this._themeroller(data.inst.get_container().find("> ul > li"));
}, this))
.bind("open_node.jstree create_node.jstree", $.proxy(function (e, data) {
this._themeroller(data.rslt.obj);
}, this))
.bind("loaded.jstree refresh.jstree", $.proxy(function (e) {
this._themeroller();
}, this))
.bind("close_node.jstree", $.proxy(function (e, data) {
this._themeroller(data.rslt.obj);
}, this))
.bind("delete_node.jstree", $.proxy(function (e, data) {
this._themeroller(data.rslt.parent);
}, this))
.bind("correct_state.jstree", $.proxy(function (e, data) {
data.rslt.obj
.children("ins.jstree-icon").removeClass(s.opened + " " + s.closed + " ui-icon").end()
.find("> a > ins.ui-icon")
.filter(function() {
return this.className.toString()
.replace(s.item_clsd,"").replace(s.item_open,"").replace(s.item_leaf,"")
.indexOf("ui-icon-") === -1;
}).removeClass(s.item_open + " " + s.item_clsd).addClass(s.item_leaf || "jstree-no-icon");
}, this))
.bind("select_node.jstree", $.proxy(function (e, data) {
data.rslt.obj.children("a").addClass(s.item_a);
}, this))
.bind("deselect_node.jstree deselect_all.jstree", $.proxy(function (e, data) {
this.get_container()
.find("a." + s.item_a).removeClass(s.item_a).end()
.find("a.jstree-clicked").addClass(s.item_a);
}, this))
.bind("dehover_node.jstree", $.proxy(function (e, data) {
data.rslt.obj.children("a").removeClass(s.item_h);
}, this))
.bind("hover_node.jstree", $.proxy(function (e, data) {
this.get_container()
.find("a." + s.item_h).not(data.rslt.obj).removeClass(s.item_h);
data.rslt.obj.children("a").addClass(s.item_h);
}, this))
.bind("move_node.jstree", $.proxy(function (e, data) {
this._themeroller(data.rslt.o);
this._themeroller(data.rslt.op);
}, this));
},
__destroy : function () {
var s = this._get_settings().themeroller,
c = [ "ui-icon" ];
$.each(s, function (i, v) {
v = v.split(" ");
if(v.length) { c = c.concat(v); }
});
this.get_container()
.removeClass("ui-widget-content")
.find("." + c.join(", .")).removeClass(c.join(" "));
},
_fn : {
_themeroller : function (obj) {
var s = this._get_settings().themeroller;
obj = !obj || obj == -1 ? this.get_container_ul() : this._get_node(obj).parent();
obj
.find("li.jstree-closed")
.children("ins.jstree-icon").removeClass(s.opened).addClass("ui-icon " + s.closed).end()
.children("a").addClass(s.item)
.children("ins.jstree-icon").addClass("ui-icon")
.filter(function() {
return this.className.toString()
.replace(s.item_clsd,"").replace(s.item_open,"").replace(s.item_leaf,"")
.indexOf("ui-icon-") === -1;
}).removeClass(s.item_leaf + " " + s.item_open).addClass(s.item_clsd || "jstree-no-icon")
.end()
.end()
.end()
.end()
.find("li.jstree-open")
.children("ins.jstree-icon").removeClass(s.closed).addClass("ui-icon " + s.opened).end()
.children("a").addClass(s.item)
.children("ins.jstree-icon").addClass("ui-icon")
.filter(function() {
return this.className.toString()
.replace(s.item_clsd,"").replace(s.item_open,"").replace(s.item_leaf,"")
.indexOf("ui-icon-") === -1;
}).removeClass(s.item_leaf + " " + s.item_clsd).addClass(s.item_open || "jstree-no-icon")
.end()
.end()
.end()
.end()
.find("li.jstree-leaf")
.children("ins.jstree-icon").removeClass(s.closed + " ui-icon " + s.opened).end()
.children("a").addClass(s.item)
.children("ins.jstree-icon").addClass("ui-icon")
.filter(function() {
return this.className.toString()
.replace(s.item_clsd,"").replace(s.item_open,"").replace(s.item_leaf,"")
.indexOf("ui-icon-") === -1;
}).removeClass(s.item_clsd + " " + s.item_open).addClass(s.item_leaf || "jstree-no-icon");
}
},
defaults : {
"opened" : "ui-icon-triangle-1-se",
"closed" : "ui-icon-triangle-1-e",
"item" : "ui-state-default",
"item_h" : "ui-state-hover",
"item_a" : "ui-state-active",
"item_open" : "ui-icon-folder-open",
"item_clsd" : "ui-icon-folder-collapsed",
"item_leaf" : "ui-icon-document"
}
});
$(function() {
var css_string = '' +
'.jstree-themeroller .ui-icon { overflow:visible; } ' +
'.jstree-themeroller a { padding:0 2px; } ' +
'.jstree-themeroller .jstree-no-icon { display:none; }';
$.vakata.css.add_sheet({ str : css_string, title : "jstree" });
});
})(jQuery);
//*/
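/*
 * Usage sketch (illustrative, not part of the library) - any of the jQuery UI
 * icon/state classes in the defaults above can be overridden; note that the
 * "themes" plugin must NOT be in the list alongside "themeroller".
 *
 * $("#demo").jstree({
 *     plugins : [ "html_data", "ui", "themeroller" ],
 *     themeroller : { "item_leaf" : "ui-icon-document-b" }
 * });
 */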
/*
* jsTree unique plugin
* Forces different names amongst siblings (still a bit experimental)
* NOTE: does not check language versions (it will not be possible to have nodes with the same title, even in different languages)
*/
(function ($) {
$.jstree.plugin("unique", {
__init : function () {
this.get_container()
.bind("before.jstree", $.proxy(function (e, data) {
var nms = [], res = true, p, t;
if(data.func == "move_node") {
// obj, ref, position, is_copy, is_prepared, skip_check
if(data.args[4] === true) {
if(data.args[0].o && data.args[0].o.length) {
data.args[0].o.children("a").each(function () { nms.push($(this).text().replace(/^\s+/g,"")); });
res = this._check_unique(nms, data.args[0].np.find("> ul > li").not(data.args[0].o), "move_node");
}
}
}
if(data.func == "create_node") {
// obj, position, js, callback, is_loaded
if(data.args[4] || this._is_loaded(data.args[0])) {
p = this._get_node(data.args[0]);
if(data.args[1] && (data.args[1] === "before" || data.args[1] === "after")) {
p = this._get_parent(data.args[0]);
if(!p || p === -1) { p = this.get_container(); }
}
if(typeof data.args[2] === "string") { nms.push(data.args[2]); }
else if(!data.args[2] || !data.args[2].data) { nms.push(this._get_string("new_node")); }
else { nms.push(data.args[2].data); }
res = this._check_unique(nms, p.find("> ul > li"), "create_node");
}
}
if(data.func == "rename_node") {
// obj, val
nms.push(data.args[1]);
t = this._get_node(data.args[0]);
p = this._get_parent(t);
if(!p || p === -1) { p = this.get_container(); }
res = this._check_unique(nms, p.find("> ul > li").not(t), "rename_node");
}
if(!res) {
e.stopPropagation();
return false;
}
}, this));
},
defaults : {
error_callback : $.noop
},
_fn : {
_check_unique : function (nms, p, func) {
var cnms = [];
p.children("a").each(function () { cnms.push($(this).text().replace(/^\s+/g,"")); });
if(!cnms.length || !nms.length) { return true; }
cnms = cnms.sort().join(",,").replace(/(,|^)([^,]+)(,,\2)+(,|$)/g,"$1$2$4").replace(/,,+/g,",").replace(/,$/,"").split(",");
if((cnms.length + nms.length) != cnms.concat(nms).sort().join(",,").replace(/(,|^)([^,]+)(,,\2)+(,|$)/g,"$1$2$4").replace(/,,+/g,",").replace(/,$/,"").split(",").length) {
this._get_settings().unique.error_callback.call(null, nms, p, func);
return false;
}
return true;
},
check_move : function () {
if(!this.__call_old()) { return false; }
var p = this._get_move(), nms = [];
if(p.o && p.o.length) {
p.o.children("a").each(function () { nms.push($(this).text().replace(/^\s+/g,"")); });
return this._check_unique(nms, p.np.find("> ul > li").not(p.o), "check_move");
}
return true;
}
}
});
})(jQuery);
//*/
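/*
 * Usage sketch (illustrative, not part of the library) - reporting rejected
 * duplicates. error_callback receives the attempted names, the parent node and
 * the blocked function ("create_node", "rename_node", "move_node" or "check_move").
 *
 * $("#demo").jstree({
 *     plugins : [ "themes", "html_data", "unique" ],
 *     unique : {
 *         error_callback : function (nms, p, func) {
 *             alert("Duplicate name \"" + nms.join(", ") + "\" blocked in " + func);
 *         }
 *     }
 * });
 */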
/*
* jsTree wholerow plugin
* Makes select and hover work on the entire width of the node
* MAY BE HEAVY IN LARGE DOM
*/
(function ($) {
$.jstree.plugin("wholerow", {
__init : function () {
if(!this.data.ui) { throw "jsTree wholerow: jsTree UI plugin not included."; }
this.data.wholerow.last_html = false;
this.data.wholerow.to = false;
this.get_container()
.bind("init.jstree", $.proxy(function (e, data) {
this._get_settings().core.animation = 0;
}, this))
.bind("open_node.jstree create_node.jstree clean_node.jstree loaded.jstree", $.proxy(function (e, data) {
this._prepare_wholerow_span( data && data.rslt && data.rslt.obj ? data.rslt.obj : -1 );
}, this))
.bind("search.jstree clear_search.jstree reopen.jstree after_open.jstree after_close.jstree create_node.jstree delete_node.jstree clean_node.jstree", $.proxy(function (e, data) {
if(this.data.wholerow.to) { clearTimeout(this.data.wholerow.to); }
this.data.wholerow.to = setTimeout( (function (t, o) { return function() { t._prepare_wholerow_ul(o); }; })(this, data && data.rslt && data.rslt.obj ? data.rslt.obj : -1), 0);
}, this))
.bind("deselect_all.jstree", $.proxy(function (e, data) {
this.get_container().find(" > .jstree-wholerow .jstree-clicked").removeClass("jstree-clicked " + (this.data.themeroller ? this._get_settings().themeroller.item_a : "" ));
}, this))
.bind("select_node.jstree deselect_node.jstree ", $.proxy(function (e, data) {
data.rslt.obj.each(function () {
var ref = data.inst.get_container().find(" > .jstree-wholerow li:visible:eq(" + ( parseInt((($(this).offset().top - data.inst.get_container().offset().top + data.inst.get_container()[0].scrollTop) / data.inst.data.core.li_height),10)) + ")");
// ref.children("a")[e.type === "select_node" ? "addClass" : "removeClass"]("jstree-clicked");
ref.children("a").attr("class",data.rslt.obj.children("a").attr("class"));
});
}, this))
.bind("hover_node.jstree dehover_node.jstree", $.proxy(function (e, data) {
this.get_container().find(" > .jstree-wholerow .jstree-hovered").removeClass("jstree-hovered " + (this.data.themeroller ? this._get_settings().themeroller.item_h : "" ));
if(e.type === "hover_node") {
var ref = this.get_container().find(" > .jstree-wholerow li:visible:eq(" + ( parseInt(((data.rslt.obj.offset().top - this.get_container().offset().top + this.get_container()[0].scrollTop) / this.data.core.li_height),10)) + ")");
// ref.children("a").addClass("jstree-hovered");
ref.children("a").attr("class",data.rslt.obj.children(".jstree-hovered").attr("class"));
}
}, this))
.delegate(".jstree-wholerow-span, ins.jstree-icon, li", "click.jstree", function (e) {
var n = $(e.currentTarget);
if(e.target.tagName === "A" || (e.target.tagName === "INS" && n.closest("li").is(".jstree-open, .jstree-closed"))) { return; }
n.closest("li").children("a:visible:eq(0)").click();
e.stopImmediatePropagation();
})
.delegate("li", "mouseover.jstree", $.proxy(function (e) {
e.stopImmediatePropagation();
if($(e.currentTarget).children(".jstree-hovered, .jstree-clicked").length) { return false; }
this.hover_node(e.currentTarget);
return false;
}, this))
.delegate("li", "mouseleave.jstree", $.proxy(function (e) {
if($(e.currentTarget).children("a").hasClass("jstree-hovered").length) { return; }
this.dehover_node(e.currentTarget);
}, this));
if(is_ie7 || is_ie6) {
$.vakata.css.add_sheet({ str : ".jstree-" + this.get_index() + " { position:relative; } ", title : "jstree" });
}
},
defaults : {
},
__destroy : function () {
this.get_container().children(".jstree-wholerow").remove();
this.get_container().find(".jstree-wholerow-span").remove();
},
_fn : {
_prepare_wholerow_span : function (obj) {
obj = !obj || obj == -1 ? this.get_container().find("> ul > li") : this._get_node(obj);
if(obj === false) { return; } // added for removing root nodes
obj.each(function () {
$(this).find("li").andSelf().each(function () {
var $t = $(this);
if($t.children(".jstree-wholerow-span").length) { return true; }
$t.prepend("<span class='jstree-wholerow-span' style='width:" + ($t.parentsUntil(".jstree","li").length * 18) + "px;'> </span>");
});
});
},
_prepare_wholerow_ul : function () {
var o = this.get_container().children("ul").eq(0), h = o.html();
o.addClass("jstree-wholerow-real");
if(this.data.wholerow.last_html !== h) {
this.data.wholerow.last_html = h;
this.get_container().children(".jstree-wholerow").remove();
this.get_container().append(
o.clone().removeClass("jstree-wholerow-real")
.wrapAll("<div class='jstree-wholerow' />").parent()
.width(o.parent()[0].scrollWidth)
.css("top", (o.height() + ( is_ie7 ? 5 : 0)) * -1 )
.find("li[id]").each(function () { this.removeAttribute("id"); }).end()
);
}
}
}
});
$(function() {
var css_string = '' +
'.jstree .jstree-wholerow-real { position:relative; z-index:1; } ' +
'.jstree .jstree-wholerow-real li { cursor:pointer; } ' +
'.jstree .jstree-wholerow-real a { border-left-color:transparent !important; border-right-color:transparent !important; } ' +
'.jstree .jstree-wholerow { position:relative; z-index:0; height:0; } ' +
'.jstree .jstree-wholerow ul, .jstree .jstree-wholerow li { width:100%; } ' +
'.jstree .jstree-wholerow, .jstree .jstree-wholerow ul, .jstree .jstree-wholerow li, .jstree .jstree-wholerow a { margin:0 !important; padding:0 !important; } ' +
'.jstree .jstree-wholerow, .jstree .jstree-wholerow ul, .jstree .jstree-wholerow li { background:transparent !important; }' +
'.jstree .jstree-wholerow ins, .jstree .jstree-wholerow span, .jstree .jstree-wholerow input { display:none !important; }' +
				'.jstree .jstree-wholerow a, .jstree .jstree-wholerow a:hover { text-indent:-9999px !important; width:100%; padding:0 !important; border-right-width:0px !important; border-left-width:0px !important; } ' +
				'.jstree .jstree-wholerow-span { position:absolute; left:0; margin:0; padding:0; height:18px; border-width:0; z-index:0; }';
if(is_ff2) {
css_string += '' +
'.jstree .jstree-wholerow a { display:block; height:18px; margin:0; padding:0; border:0; } ' +
'.jstree .jstree-wholerow-real a { border-color:transparent !important; } ';
}
if(is_ie7 || is_ie6) {
css_string += '' +
'.jstree .jstree-wholerow, .jstree .jstree-wholerow li, .jstree .jstree-wholerow ul, .jstree .jstree-wholerow a { margin:0; padding:0; line-height:18px; } ' +
'.jstree .jstree-wholerow a { display:block; height:18px; line-height:18px; overflow:hidden; } ';
}
$.vakata.css.add_sheet({ str : css_string, title : "jstree" });
});
})(jQuery);
//*/
/*
* jsTree model plugin
 * This plugin makes jsTree use a class model to retrieve data, so trees can be populated dynamically
*/
(function ($) {
var nodeInterface = ["getChildren","getChildrenCount","getAttr","getName","getProps"],
validateInterface = function(obj, inter) {
var valid = true;
obj = obj || {};
inter = [].concat(inter);
$.each(inter, function (i, v) {
if(!$.isFunction(obj[v])) { valid = false; return false; }
});
return valid;
};
$.jstree.plugin("model", {
__init : function () {
if(!this.data.json_data) { throw "jsTree model: jsTree json_data plugin not included."; }
this._get_settings().json_data.data = function (n, b) {
var obj = (n == -1) ? this._get_settings().model.object : n.data("jstree_model");
if(!validateInterface(obj, nodeInterface)) { return b.call(null, false); }
if(this._get_settings().model.async) {
obj.getChildren($.proxy(function (data) {
this.model_done(data, b);
}, this));
}
else {
this.model_done(obj.getChildren(), b);
}
};
},
defaults : {
object : false,
id_prefix : false,
async : false
},
_fn : {
model_done : function (data, callback) {
var ret = [],
s = this._get_settings(),
_this = this;
if(!$.isArray(data)) { data = [data]; }
$.each(data, function (i, nd) {
var r = nd.getProps() || {};
r.attr = nd.getAttr() || {};
if(nd.getChildrenCount()) { r.state = "closed"; }
r.data = nd.getName();
if(!$.isArray(r.data)) { r.data = [r.data]; }
if(_this.data.types && $.isFunction(nd.getType)) {
r.attr[s.types.type_attr] = nd.getType();
}
if(r.attr.id && s.model.id_prefix) { r.attr.id = s.model.id_prefix + r.attr.id; }
if(!r.metadata) { r.metadata = { }; }
r.metadata.jstree_model = nd;
ret.push(r);
});
callback.call(null, ret);
}
}
});
})(jQuery);
//*/
})();
});
# /Italian Tweets Analyzer-2.3/hate_tweet_map/tweets_searcher/SearchTweets.py
import concurrent
import os
from concurrent import futures
import logging
import math
import time
import pandas as pd
from concurrent.futures import Future, as_completed
from datetime import datetime, timezone
from typing import Optional
import requests
import yaml
from tqdm import tqdm
from hate_tweet_map import util
from hate_tweet_map.database import DataBase
class SearchTweets:
"""
"""
def __init__(self, mongodb: DataBase, path_to_cnfg_file: str) -> None:
"""
        This method loads the search parameters from the configuration file, validates them and initializes the class attributes.

        :param path_to_cnfg_file: the path of the YAML configuration file
        :type path_to_cnfg_file: str
        :param mongodb: the database instance where the results of the search are saved
        :type mongodb: DataBase
"""
self.__twitter_users_mentioned = []
self.mongodb = mongodb
self._all = []
self.total_result = 0
self.__multi_user = False
self.__multi_user_mentioned = False
self.__multi_hashtag = False
self.__twitter_users = []
self.__twitter_hashtags = []
self.log = logging.getLogger("SEARCH")
self.log.setLevel(logging.INFO)
logging.basicConfig()
self.response = {}
        # load the configuration file, save the parameters and validate them
with open(path_to_cnfg_file, "r") as ymlfile:
cfg = yaml.safe_load(ymlfile)
check = []
self.__twitter_keyword = cfg['twitter']['search']['keyword']
twitter_user = cfg['twitter']['search']['user']
twitter_user_mentioned = cfg['twitter']['search']['user_mentioned']
twitter_hashtag = cfg['twitter']['search']['hashtag']
        #if not (self.__twitter_keyword or twitter_user):
        #    raise ValueError(
        #        'Set a value for at least one of the two parameters [user], [keyword]')
        #if not (twitter_user or twitter_user_mentioned):
        #    raise ValueError(
        #        'Set a value for both of the parameters [user], [to]')
        #if not (twitter_user or twitter_user_mentioned or self.__twitter_keyword):
        #    raise ValueError('Set a value for at least one of the three parameters [keyword], [user], [user_mentioned]')
if twitter_user:
if "," in str(twitter_user):
self.__twitter_users = twitter_user.split(",")
self.__multi_user = True
else:
self.__twitter_users = [twitter_user]
if twitter_user_mentioned:
if "," in str(twitter_user_mentioned):
self.__twitter_users_mentioned = twitter_user_mentioned.split(",")
self.__multi_user_mentioned = True
else:
self.__twitter_users_mentioned = [twitter_user_mentioned]
if twitter_hashtag:
if "," in str(twitter_hashtag):
self.__twitter_hashtags = twitter_hashtag.split(",")
self.__multi_hashtag = True
else:
self.__twitter_hashtags = [twitter_hashtag]
self.__twitter_lang = cfg['twitter']['search']['lang']
self.__twitter_place_country = cfg['twitter']['search']["geo"]['place_country']
self.__twitter_place = cfg['twitter']['search']["geo"]['place']
self.__twitter_bounding_box = cfg['twitter']['search']["geo"]['bounding_box']
self.__twitter_point_radius_longitude = cfg['twitter']['search']["geo"]['point_radius']['longitude']
self.__twitter_point_radius_latitude = cfg['twitter']['search']["geo"]['point_radius']['latitude']
self.__twitter_point_radius_radius = cfg['twitter']['search']["geo"]['point_radius']['radius']
self.__twitter_start_time = cfg['twitter']['search']['time']['start_time']
self.__twitter_end_time = cfg['twitter']['search']['time']['end_time']
if self.__twitter_point_radius_longitude:
check.append(True)
if self.__twitter_point_radius_radius:
check.append(True)
if self.__twitter_point_radius_latitude:
check.append(True)
        if 0 < check.count(True) < 3:
            raise ValueError(
                'To search using [point_radius] all the following parameters must be set: [latitude], [radius] and [longitude]')
check = []
if self.__twitter_place:
check.append(True)
if self.__twitter_place_country:
check.append(True)
if self.__twitter_bounding_box:
check.append(True)
if self.__twitter_point_radius_longitude:
check.append(True)
        if check.count(True) > 1:
            raise ValueError(
                'Only one of the following parameters may be set: [place], [place_country], [bounding_box], [point_radius]')
self.__twitter_context_annotations = cfg['twitter']['search']['context_annotations']
self.__twitter_all_tweets = cfg['twitter']['search']['all_tweets']
self.__twitter_n_results = cfg['twitter']['search']['n_results']
self.__twitter_barer_token = cfg['twitter']['configuration']['barer_token']
self.__twitter_end_point = cfg['twitter']['configuration']['end_point']
self.__twitter_filter_images = cfg['twitter']['search']['filter_images']
self.__headers = {"Authorization": "Bearer {}".format(self.__twitter_barer_token)}
def __next_page(self, next_token="") -> None:
"""
        Insert into the query the token needed to obtain the next page of the results of the search.
:param next_token: the token obtained from twitter to reach the next page of the search
:type next_token: str, optional
:return: None
"""
if next_token != "":
self.__query["next_token"] = next_token
def __build_query(self, user: str = None, user_mentioned: str = None, hashtag: str = None) -> None:
"""
        This method builds the query to send to Twitter.
:param user: the id or name of the user whose tweets you want, defaults to None
:type user: str, optional
:return: None
"""
# Optional params: start_time,end_time,since_id,until_id,max_results,next_token,
# expansions,tweet.fields,media.fields,poll.fields,place.fields,user.fields
self.__query = {'query': ""}
if self.__twitter_keyword:
self.__query['query'] = str(self.__twitter_keyword)
        if user is not None:
            # note: the Twitter "from:" operator takes no space before the username
            if self.__twitter_keyword:
                self.__query['query'] += " from:" + str(user)
            else:
                self.__query['query'] += "from:" + str(user)
if user_mentioned is not None:
if self.__twitter_keyword:
self.__query['query'] += " @" + str(user_mentioned)
else:
self.__query['query'] += " @" + str(user_mentioned)
if hashtag is not None:
if self.__twitter_keyword:
self.__query['query'] += " #" + str(hashtag)
else:
self.__query['query'] += " #" + str(hashtag)
if self.__twitter_filter_images is True:
self.__query['query'] += " -has:images"
if self.__twitter_lang:
self.__query['query'] += " lang:" + self.__twitter_lang
if self.__twitter_place:
self.__query['query'] += " place:" + self.__twitter_place
if self.__twitter_place_country:
self.__query['query'] += " place_country:" + self.__twitter_place_country
if self.__twitter_all_tweets:
if self.__twitter_context_annotations:
self.__query['max_results'] = str(100)
else:
self.__query['max_results'] = str(500)
# if is specified a number of result to request
elif self.__twitter_n_results:
# if the specified number is greater than 500 set the max_result query field to the max value possible so
# 500.
if self.__twitter_context_annotations:
if self.__twitter_n_results > 100:
self.__query['max_results'] = str(100)
elif self.__twitter_n_results > 500:
self.__query['max_results'] = str(500)
# if the specified number is less than 10 set the max_result field to the min value possible so 10
elif self.__twitter_n_results < 10:
self.__query['max_results'] = str(10)
# else if the value is between 10 and 500 set the max_result field query to the value given
else:
self.__query['max_results'] = str(self.__twitter_n_results)
if self.__twitter_bounding_box:
self.__query['query'] += " bounding_box:" + "[" + self.__twitter_bounding_box + "]"
elif self.__twitter_point_radius_longitude:
self.__query['query'] += " point_radius:" + "[" + str(self.__twitter_point_radius_longitude) + " " + str(
self.__twitter_point_radius_latitude) + " " + self.__twitter_point_radius_radius + "]"
self.__query['place.fields'] = "contained_within,country,country_code,full_name,geo,id,name,place_type"
self.__query['expansions'] = 'author_id,geo.place_id,referenced_tweets.id,referenced_tweets.id.author_id,attachments.media_keys'
self.__query['tweet.fields'] = 'lang,referenced_tweets,public_metrics,entities,created_at,possibly_sensitive,attachments'
self.__query['media.fields'] = 'duration_ms,height,media_key,preview_image_url,public_metrics,type,url,width,alt_text'
self.__query['user.fields'] = 'username,location'
if self.__twitter_context_annotations:
self.__query['tweet.fields'] += ',context_annotations'
if self.__twitter_start_time:
self.__query['start_time'] = str(self.__twitter_start_time)
if self.__twitter_end_time:
self.__query['end_time'] = str(self.__twitter_end_time)
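
    # Illustrative example (added, not in the original source): with keyword
    # "pizza", user "jack" and lang "it", the code above yields
    #   self.__query['query'] == 'pizza from:jack lang:it'
    # alongside the fixed place.fields/expansions/tweet.fields/media.fields/
    # user.fields entries set just above.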
@property
def twitter_lang(self):
return self.__twitter_lang
@property
def twitter_place_country(self):
return self.__twitter_place_country
@property
def twitter_point_radius_radius(self):
return self.__twitter_point_radius_radius
@property
def twitter_point_radius_longitude(self):
return self.__twitter_point_radius_longitude
@property
def twitter_point_radius_latitude(self):
return self.__twitter_point_radius_latitude
@property
def twitter_place(self):
return self.__twitter_place
@property
def twitter_start_time(self):
return self.__twitter_start_time
@property
def twitter_end_time(self):
return self.__twitter_end_time
@property
def twitter_bounding_box(self):
return self.__twitter_bounding_box
@property
def twitter_context_annotation(self):
return self.__twitter_context_annotations
@property
def twitter_n_results(self):
return self.__twitter_n_results
@property
def twitter_all_results(self):
return self.__twitter_all_tweets
@property
def twitter_end_point(self):
return self.__twitter_end_point
@property
def twitter_key_word(self):
return self.__twitter_keyword
@property
def twitter_user(self):
return self.__twitter_users
@property
def twitter_user_mentioned(self):
return self.__twitter_users_mentioned
@property
def twitter_hashtag(self):
return self.__twitter_hashtags
@property
def twitter_filter_images(self):
return self.__twitter_filter_images
def __connect_to_endpoint(self, retried: bool = False) -> dict:
"""
This method sends the request to twitter and return the response.
The possibles status codes in the twitter response are:
- 200: ok,in this case the response is a valid response;
        - 429: rate limit exceeded; this means that either more requests were sent per second than allowed, or more requests were sent in a 15-minute window than allowed. In this case this method waits 1 second and tries to send the request again; if Twitter still replies with a 429 code, it retrieves from the reply the time when the limit will reset and waits until that time before resubmitting the request;
        - 503: service overloaded; this means that Twitter can't respond to our request because there are too many requests to process. In this case this method waits for a minute and then retries sending the request.
- others: in this case the method raises an exception
:param retried: a parameter that indicate if it is the first retry after an error or not, defaults to False
:type retried: bool, optional
:raise Exception: when twitter response with not 200 or 429 status code.
:return: dict that contains the response from twitter
:rtype: dict
"""
        # send the request to twitter, save the response, check that it's ok and, if so, return the response in json format
response = requests.request("GET", self.__twitter_end_point, headers=self.__headers, params=self.__query)
if response.status_code == 200:
t = response.headers.get('date')
self.log.debug("RECEIVED VALID RESPONSE")
return response.json()
# if the response status code is 429 and the value of retried is False wait for 1 second and retry to send the request
if response.status_code == 429 and not retried:
self.log.debug("RETRY")
time.sleep(1)
return self.__connect_to_endpoint(retried=True)
# if the response status code is 429 and the retried value is True it means it is at least the second attempt in a row after receiving a 429 error
elif response.status_code == 429 and retried:
self.log.warning("RATE LIMITS REACHED: WAITING")
# save the current time
now = time.time()
# transform it in utc format
now_date = datetime.fromtimestamp(now, timezone.utc)
# retrieve the time when the rate limit will be reset
reset = float(response.headers.get("x-rate-limit-reset"))
# transform it in utc format
reset_date = datetime.fromtimestamp(reset, timezone.utc)
            # obtain the number of seconds to wait until the rate limit resets
sec_to_reset = (reset_date - now_date).total_seconds()
# print a bar to show the time passing
for i in tqdm(range(0, math.floor(sec_to_reset) + 1), desc="WAITING FOR (in sec)", leave=True, position=0):
time.sleep(1)
return self.__connect_to_endpoint(retried=True)
# if the response is 503 twitter is overloaded, in this case wait for a minute and retry to send the request.
        elif response.status_code == 503:
self.log.warning(
"GET BAD RESPONSE FROM TWITTER: {}: {}. THE SERVICE IS OVERLOADED.".format(response.status_code,
response.text))
self.log.warning("WAITING FOR 1 MINUTE BEFORE RESEND THE REQUEST")
for i in tqdm(range(0, 60), desc="WAITING FOR (in sec)", leave=True):
time.sleep(1)
self.log.warning("RESENDING THE REQUEST")
return self.__connect_to_endpoint()
        # else, for all other status codes, raise an exception
else:
self.log.critical("GET BAD RESPONSE FROM TWITTER: {}: {}".format(response.status_code, response.text))
raise Exception(response.status_code, response.text)
def __make(self, bar) -> None:
"""
        This method sends the request to Twitter, processes the response and saves it.
        After the first request, the number of tweets contained in the response is checked:
        if this number equals the number of results requested in the config file, the method stops sending requests.
        If it is less than the number of results requested, the difference between the two numbers is
        computed and a new request is sent with that difference as its max_results query field, so this method
        continues with the count of results obtained so far updated. Note that if the difference between the number
        of tweets obtained and the number of tweets wanted is greater than 500, the max_results query field for the
        next request is capped at 500; if it is less than 10, it is raised to 10.
        Moreover, if the all_tweets parameter is set to True in the config file, this method keeps resending the
        request, asking for 500 tweets at a time (max_results = 500), until the end of the results is reached.
:param bar:
:type bar:
:return: None
"""
# call the method to send the request to twitter
result_obtained_yet = 0
self.response = self.__connect_to_endpoint()
# while there are tweets in the response
while "meta" in self.response:
self.log.debug("RECEIVED: {} TWEETS".format(self.response['meta']['result_count']))
# save the tweets received
#save_bar = tqdm(desc="Saving", leave=False, position=1)
self.__save()
# update the value of the total result obtained
self.total_result += self.response['meta']['result_count']
bar.update(self.response['meta']['result_count'])
            # check if there is another page for the search performed
if "next_token" in self.response['meta']:
# if there is a next page and all_tweets are set to True to reach all tweets
if self.__twitter_all_tweets:
self.log.debug("ASKING FOR NEXT PAGE")
# set the max_results query field to 500.
if self.__twitter_context_annotations:
self.__query['max_results'] = str(100)
else:
self.__query['max_results'] = str(500)
                # else if all_tweets is False but a specific number of results to reach is set
elif self.__twitter_n_results:
# update the value of the number of tweets obtained yet
result_obtained_yet += int(self.response['meta']['result_count'])
                    # calculate how many tweets still need to be requested
results_to_request = self.__twitter_n_results - result_obtained_yet
# set the right value
if results_to_request <= 0:
return
elif results_to_request < 10:
results_to_request = 10
elif results_to_request > 100 and self.__twitter_context_annotations:
results_to_request = 100
elif results_to_request > 500:
results_to_request = 500
self.log.debug("ASKING FOR: {} TWEETS".format(results_to_request))
self.__query['max_results'] = results_to_request
# retrieve from the response the next token and pass it to the next query
self.__next_page(next_token=self.response["meta"]["next_token"])
# resend the request
self.response = self.__connect_to_endpoint()
# if there is not a next token stop the loop
else:
self.log.debug("NO NEXT TOKEN IN RESPONSE:INTERRUPTING")
bar.close()
break
self.log.debug("THERE ARE NO OTHER PAGE AVAILABLE. ALL TWEETS REACHED")
def search(self) -> int:
"""
        This method starts the search on Twitter: it first builds the query and then sends it to Twitter.
        If several users are set in the config file, for each user it tries to
        retrieve the number of tweets set in the n_results config field; only after reaching this number does it
        perform the search for the next user.
:return: the number of the total tweets saved
:rtype: int
"""
bar1 = None
bar2 = None
bar3 = None
bar4 = None
bar5 = None
bar6 = None
bar7 = None
no_user = True
no_user_mentioned = True
no_hashtag = True
multi_user = False
multi_user_mentioned = False
multi_hashtag = False
one_user = False
        one_user_mentioned = False
one_hashtag = False
bar = None
user_mentioned = False
if len(self.__twitter_users) > 0:
no_user = False
if len(self.__twitter_users) == 1:
one_user = True
else:
multi_user = True
if len(self.__twitter_users_mentioned) > 0:
no_user_mentioned = False
if len(self.__twitter_users_mentioned) == 1:
one_user_mentioned = True
else:
multi_user_mentioned = True
if len(self.__twitter_hashtags) > 0:
no_hashtag = False
if len(self.__twitter_hashtags) == 1:
one_hashtag = True
else:
multi_hashtag = True
if multi_user:
self.log.debug("MULTI-USERS SEARCH")
bar1 = tqdm(total=len(self.__twitter_users), leave=False, position=0, desc="INFO:MULTI-USERS SEARCH:SEARCHING")
elif one_user:
bar1 = tqdm(total=len(self.__twitter_users), leave=False, position=0, desc="INFO:SEARCH:SEARCHING FOR {}".format(self.__twitter_users[0]))
if multi_user_mentioned:
self.log.debug("MULTI-USERS-MENTIONED SEARCH")
bar2 = tqdm(total=len(self.__twitter_users_mentioned), leave=False, position=0, desc="INFO:MULTI-USERS-MENTIONED SEARCH:SEARCHING")
elif one_user_mentioned:
bar2 = tqdm(total=len(self.__twitter_users_mentioned), leave=False, position=0, desc="INFO:SEARCH:SEARCHING TO {}".format(self.__twitter_users_mentioned[0]))
if multi_hashtag:
self.log.debug("MULTI HASHTAG")
bar4 = tqdm(total=len(self.__twitter_hashtags), leave=False, position=0, desc="INFO:MULTI-HASHTAG SEARCH:SEARCHING")
elif one_hashtag:
bar4 = tqdm(total=len(self.__twitter_hashtags), leave=False, position=0, desc="INFO:SEARCH:SEARCHING HASHTAG FOR {}".format(self.__twitter_hashtags[0]))
for us in self.__twitter_users:
if multi_user:
bar1.set_description("INFO:MULTI-USERS SEARCH:SEARCHING FOR: {}".format(us))
self.log.debug("SEARCH FOR: {}".format(us))
self.__build_query(user=us)
if self.__twitter_n_results:
bar = tqdm(total=self.__twitter_n_results, desc="INFO:SEARCH:SEARCHING", leave=False, position=1)
else:
bar = tqdm(desc="INFO:SEARCH:SEARCHING", leave=False, position=1)
self.__make(bar)
bar.close()
bar1.update(1)
for us in self.__twitter_users_mentioned:
if multi_user_mentioned:
bar2.set_description("INFO:MULTI-USERS-MENTIONED SEARCH:SEARCHING TWEETS THAT MENTIONED: {}".format(us))
self.log.debug("SEARCH USER MENTIONED: {}".format(us))
self.__build_query(user_mentioned=us)
if self.__twitter_n_results:
bar3 = tqdm(total=self.__twitter_n_results, desc="INFO:SEARCH:SEARCHING", leave=False, position=1)
else:
bar3 = tqdm(desc="INFO:SEARCH:SEARCHING", leave=False, position=1)
self.__make(bar3)
bar3.close()
bar2.update(1)
for us in self.__twitter_hashtags:
if multi_hashtag:
bar4.set_description("INFO:MULTI-HASHTAG SEARCH:SEARCHING TWEETS FOR HASHTAG: {}".format(us))
self.log.debug("SEARCH HASHTAG: {}".format(us))
self.__build_query(hashtag=us)
if self.__twitter_n_results:
bar5 = tqdm(total=self.__twitter_n_results, desc="INFO:SEARCH:SEARCHING", leave=False, position=1)
else:
bar5 = tqdm(desc="INFO:SEARCH:SEARCHING", leave=False, position=1)
self.__make(bar5)
bar5.close()
bar4.update(1)
if no_user and no_user_mentioned and no_hashtag:
self.__build_query()
if self.__twitter_n_results:
bar = tqdm(total=self.__twitter_n_results, desc="INFO:SEARCH:SEARCHING", leave=False, position=0)
else:
bar = tqdm(desc="INFO:SEARCH:SEARCHING", leave=False, position=0)
self.__make(bar)
#time.sleep(0.1)
#if bar is not None:
# bar.close()
print('\n')
self.log.info('CREATING NECESSARY INDEXES ON DB')
self.log.debug(self.mongodb.create_indexes())
return self.total_result
def __save(self):
"""
        This method is called after a request has been sent to Twitter. When called, this method processes all
        the tweets received in parallel, using multithreading, and then saves all the processed tweets in the database.
        Note that only the tweets not already in the database are processed.
:return: None
"""
self.log.debug("SAVING TWEETS")
with concurrent.futures.ThreadPoolExecutor() as executor:
futures = []
            for tweet in self.response.get('data', []):  # 'data' is absent when a page has zero results
if not self.mongodb.is_in(tweet['id']):
self.log.debug(tweet)
# process each tweet in parallel
                    fut = executor.submit(util.pre_process_tweets_response, tweet, self.response['includes'])
fut.add_done_callback(self.__save_callback)
futures.append(fut)
else:
                    # if the tweet is already in the db, don't save it and adjust the count of tweets saved.
self.total_result -= 1
for job in tqdm(as_completed(futures), total=len(futures), desc="INFO:SEARCH:SAVING", leave=False, position=1):
pass
self.mongodb.save_many(self._all)
        # clear the list populated with the processed tweets.
self._all = []
def __save_callback(self, fut: Future):
        # append the processed tweet to the list
        self._all.append(fut.result())
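
# Minimal usage sketch (added; not part of the original module). The DataBase
# constructor arguments below are assumptions made for illustration --
# check hate_tweet_map.database.DataBase for its real signature.
if __name__ == "__main__":
    db = DataBase("mongodb://localhost:27017", "tweets")  # hypothetical arguments
    searcher = SearchTweets(db, "search_config.yaml")     # path to the YAML config file
    print("tweets saved:", searcher.search())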
# /Hypercorn-noteable-0.11.3/src/hypercorn/typing.py
from __future__ import annotations
from multiprocessing.synchronize import Event as EventType
from typing import Any, Awaitable, Callable, Dict, Iterable, Optional, Tuple, Type, Union
import h2.events
import h11
# Till PEP 544 is accepted
try:
from typing import Literal, Protocol, TypedDict
except ImportError:
from typing_extensions import Literal, Protocol, TypedDict # type: ignore
from .config import Config, Sockets
H11SendableEvent = Union[h11.Data, h11.EndOfMessage, h11.InformationalResponse, h11.Response]
WorkerFunc = Callable[[Config, Optional[Sockets], Optional[EventType]], None]
class ASGIVersions(TypedDict, total=False):
spec_version: str
version: Union[Literal["2.0"], Literal["3.0"]]
class HTTPScope(TypedDict):
type: Literal["http"]
asgi: ASGIVersions
http_version: str
method: str
scheme: str
path: str
raw_path: bytes
query_string: bytes
root_path: str
headers: Iterable[Tuple[bytes, bytes]]
client: Optional[Tuple[str, int]]
server: Optional[Tuple[str, Optional[int]]]
extensions: Dict[str, dict]
class WebsocketScope(TypedDict):
type: Literal["websocket"]
asgi: ASGIVersions
http_version: str
scheme: str
path: str
raw_path: bytes
query_string: bytes
root_path: str
headers: Iterable[Tuple[bytes, bytes]]
client: Optional[Tuple[str, int]]
server: Optional[Tuple[str, Optional[int]]]
subprotocols: Iterable[str]
extensions: Dict[str, dict]
class LifespanScope(TypedDict):
type: Literal["lifespan"]
asgi: ASGIVersions
WWWScope = Union[HTTPScope, WebsocketScope]
Scope = Union[HTTPScope, WebsocketScope, LifespanScope]
class HTTPRequestEvent(TypedDict):
type: Literal["http.request"]
body: bytes
more_body: bool
class HTTPResponseStartEvent(TypedDict):
type: Literal["http.response.start"]
status: int
headers: Iterable[Tuple[bytes, bytes]]
class HTTPResponseBodyEvent(TypedDict):
type: Literal["http.response.body"]
body: bytes
more_body: bool
class HTTPServerPushEvent(TypedDict):
type: Literal["http.response.push"]
path: str
headers: Iterable[Tuple[bytes, bytes]]
class HTTPDisconnectEvent(TypedDict):
type: Literal["http.disconnect"]
class WebsocketConnectEvent(TypedDict):
type: Literal["websocket.connect"]
class WebsocketAcceptEvent(TypedDict):
type: Literal["websocket.accept"]
subprotocol: Optional[str]
headers: Iterable[Tuple[bytes, bytes]]
class WebsocketReceiveEvent(TypedDict):
type: Literal["websocket.receive"]
bytes: Optional[bytes]
text: Optional[str]
class WebsocketSendEvent(TypedDict):
type: Literal["websocket.send"]
bytes: Optional[bytes]
text: Optional[str]
class WebsocketResponseStartEvent(TypedDict):
type: Literal["websocket.http.response.start"]
status: int
headers: Iterable[Tuple[bytes, bytes]]
class WebsocketResponseBodyEvent(TypedDict):
type: Literal["websocket.http.response.body"]
body: bytes
more_body: bool
class WebsocketDisconnectEvent(TypedDict):
type: Literal["websocket.disconnect"]
code: int
class WebsocketCloseEvent(TypedDict):
type: Literal["websocket.close"]
code: int
class LifespanStartupEvent(TypedDict):
type: Literal["lifespan.startup"]
class LifespanShutdownEvent(TypedDict):
type: Literal["lifespan.shutdown"]
class LifespanStartupCompleteEvent(TypedDict):
type: Literal["lifespan.startup.complete"]
class LifespanStartupFailedEvent(TypedDict):
type: Literal["lifespan.startup.failed"]
message: str
class LifespanShutdownCompleteEvent(TypedDict):
type: Literal["lifespan.shutdown.complete"]
class LifespanShutdownFailedEvent(TypedDict):
type: Literal["lifespan.shutdown.failed"]
message: str
ASGIReceiveEvent = Union[
HTTPRequestEvent,
HTTPDisconnectEvent,
WebsocketConnectEvent,
WebsocketReceiveEvent,
WebsocketDisconnectEvent,
LifespanStartupEvent,
LifespanShutdownEvent,
]
ASGISendEvent = Union[
HTTPResponseStartEvent,
HTTPResponseBodyEvent,
HTTPServerPushEvent,
HTTPDisconnectEvent,
WebsocketAcceptEvent,
WebsocketSendEvent,
WebsocketResponseStartEvent,
WebsocketResponseBodyEvent,
WebsocketCloseEvent,
LifespanStartupCompleteEvent,
LifespanStartupFailedEvent,
LifespanShutdownCompleteEvent,
LifespanShutdownFailedEvent,
]
ASGIReceiveCallable = Callable[[], Awaitable[ASGIReceiveEvent]]
ASGISendCallable = Callable[[ASGISendEvent], Awaitable[None]]
class ASGI2Protocol(Protocol):
# Should replace with a Protocol when PEP 544 is accepted.
def __init__(self, scope: Scope) -> None:
...
async def __call__(self, receive: ASGIReceiveCallable, send: ASGISendCallable) -> None:
...
ASGI2Framework = Type[ASGI2Protocol]
ASGI3Framework = Callable[
[
Scope,
ASGIReceiveCallable,
ASGISendCallable,
],
Awaitable[None],
]
ASGIFramework = Union[ASGI2Framework, ASGI3Framework]
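
# Illustrative sketch (added; not part of the original module): a minimal
# ASGI3 application satisfying the ASGI3Framework signature above -- a plain
# coroutine function taking (scope, receive, send).
async def _example_asgi3_app(
    scope: Scope, receive: ASGIReceiveCallable, send: ASGISendCallable
) -> None:
    if scope["type"] == "http":
        await send({"type": "http.response.start", "status": 200, "headers": []})
        await send({"type": "http.response.body", "body": b"ok", "more_body": False})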
class H2SyncStream(Protocol):
scope: dict
def data_received(self, data: bytes) -> None:
...
def ended(self) -> None:
...
def reset(self) -> None:
...
def close(self) -> None:
...
async def handle_request(
self,
event: h2.events.RequestReceived,
scheme: str,
client: Tuple[str, int],
server: Tuple[str, int],
) -> None:
...
class H2AsyncStream(Protocol):
scope: dict
async def data_received(self, data: bytes) -> None:
...
async def ended(self) -> None:
...
async def reset(self) -> None:
...
async def close(self) -> None:
...
async def handle_request(
self,
event: h2.events.RequestReceived,
scheme: str,
client: Tuple[str, int],
server: Tuple[str, int],
) -> None:
...
class Event(Protocol):
def __init__(self) -> None:
...
async def clear(self) -> None:
...
async def set(self) -> None:
...
async def wait(self) -> None:
...
class Context(Protocol):
event_class: Type[Event]
async def spawn_app(
self,
app: ASGIFramework,
config: Config,
scope: Scope,
send: Callable[[Optional[ASGISendEvent]], Awaitable[None]],
) -> Callable[[ASGIReceiveEvent], Awaitable[None]]:
...
def spawn(self, func: Callable, *args: Any) -> None:
...
@staticmethod
async def sleep(wait: Union[float, int]) -> None:
...
@staticmethod
def time() -> float:
...
class ResponseSummary(TypedDict):
status: int
    headers: Iterable[Tuple[bytes, bytes]]
# /Electrum-VTC-2.9.3.3/packages/google/protobuf/internal/more_extensions_dynamic_pb2.py
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
from google.protobuf import descriptor_pb2
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf.internal import more_extensions_pb2 as google_dot_protobuf_dot_internal_dot_more__extensions__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='google/protobuf/internal/more_extensions_dynamic.proto',
package='google.protobuf.internal',
syntax='proto2',
serialized_pb=_b('\n6google/protobuf/internal/more_extensions_dynamic.proto\x12\x18google.protobuf.internal\x1a.google/protobuf/internal/more_extensions.proto\"\x1f\n\x12\x44ynamicMessageType\x12\t\n\x01\x61\x18\x01 \x01(\x05:J\n\x17\x64ynamic_int32_extension\x12).google.protobuf.internal.ExtendedMessage\x18\x64 \x01(\x05:z\n\x19\x64ynamic_message_extension\x12).google.protobuf.internal.ExtendedMessage\x18\x65 \x01(\x0b\x32,.google.protobuf.internal.DynamicMessageType')
,
dependencies=[google_dot_protobuf_dot_internal_dot_more__extensions__pb2.DESCRIPTOR,])
DYNAMIC_INT32_EXTENSION_FIELD_NUMBER = 100
dynamic_int32_extension = _descriptor.FieldDescriptor(
name='dynamic_int32_extension', full_name='google.protobuf.internal.dynamic_int32_extension', index=0,
number=100, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=True, extension_scope=None,
options=None)
DYNAMIC_MESSAGE_EXTENSION_FIELD_NUMBER = 101
dynamic_message_extension = _descriptor.FieldDescriptor(
name='dynamic_message_extension', full_name='google.protobuf.internal.dynamic_message_extension', index=1,
number=101, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=True, extension_scope=None,
options=None)
_DYNAMICMESSAGETYPE = _descriptor.Descriptor(
name='DynamicMessageType',
full_name='google.protobuf.internal.DynamicMessageType',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='a', full_name='google.protobuf.internal.DynamicMessageType.a', index=0,
number=1, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto2',
extension_ranges=[],
oneofs=[
],
serialized_start=132,
serialized_end=163,
)
DESCRIPTOR.message_types_by_name['DynamicMessageType'] = _DYNAMICMESSAGETYPE
DESCRIPTOR.extensions_by_name['dynamic_int32_extension'] = dynamic_int32_extension
DESCRIPTOR.extensions_by_name['dynamic_message_extension'] = dynamic_message_extension
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
DynamicMessageType = _reflection.GeneratedProtocolMessageType('DynamicMessageType', (_message.Message,), dict(
DESCRIPTOR = _DYNAMICMESSAGETYPE,
__module__ = 'google.protobuf.internal.more_extensions_dynamic_pb2'
# @@protoc_insertion_point(class_scope:google.protobuf.internal.DynamicMessageType)
))
_sym_db.RegisterMessage(DynamicMessageType)
google_dot_protobuf_dot_internal_dot_more__extensions__pb2.ExtendedMessage.RegisterExtension(dynamic_int32_extension)
dynamic_message_extension.message_type = _DYNAMICMESSAGETYPE
google_dot_protobuf_dot_internal_dot_more__extensions__pb2.ExtendedMessage.RegisterExtension(dynamic_message_extension)
# @@protoc_insertion_point(module_scope)
# /NeuroTools-0.3.1/examples/matlab_vs_python/smallnet_inlineC.py
# Modules required
from pylab import *  # assumption: this star import was dropped from this copy; it provides ones_like, array, figure, plot, ... used below
import numpy
import numpy.random as random
# Bug fix for numpy version 1.0.4 for numpy.histogram
numpy.lib.function_base.any = numpy.any
# For inline C optimization
from scipy import weave
# For measuring performance
import time
t1 = time.time()
# Excitatory and inhibitory neuron counts
Ne = 1000
Ni = 4
N = Ne+Ni
# Synaptic couplings
Je = 250.0/Ne
Ji = 0.0
# Synaptic couplings (mV)
#S = numpy.zeros((N,N))
#S[:,:Ne] = Je*random.uniform(size=(N,Ne))
#S[:,Ne:] = -Ji*random.uniform(size=(N,Ni))
# Connectivity
#S[:,:Ne][random.uniform(size=(N,Ne))-0.9<=0.0]=0.0
#S[:,Ne:][random.uniform(size=(N,Ni))-0.9<=0.0]=0.0
# 10% Connectivity
targets = []
weights = []
# excitatory
for i in xrange(Ne):
targets.append(random.permutation(numpy.arange(N))[:random.poisson(N*0.1)])
weights.append(Je*ones_like(targets[i]))
# inhibitory
for i in xrange(Ne,N):
targets.append(random.permutation(numpy.arange(N))[:random.poisson(N*0.1)])
weights.append(-Ji*ones_like(targets[i]))
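# Structure note (added): targets[i] is an int array of postsynaptic indices
# for neuron i (about N*0.1 entries on average), and weights[i] holds the
# matching synaptic weights (Je for excitatory sources, -Ji for inhibitory).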
# Statistics of the background external current
#mb = 3.0; sb = 4.0
#mue = mb; sigmae=sb
#sigmai = 0.0
# State variable v, initial value of 0
v = numpy.zeros(N)
# Refractory period state variable
r = numpy.zeros(N)
# storage for intermediate calculations
I = numpy.zeros(N)
Isyn = numpy.zeros(N)
# Spike timings in a list
spikes = [[] for x in xrange(N)]
#print 'mu(nu=5Hz)=%f' % (mb+Ne*Je*.015-leak,)
#print 'mu(nu=100Hz)=%f' % (mb+Ne*Je*.1-leak,)
# total duration of the simulation (ms)
dt = 0.05
duration = 400.0
t = numpy.arange(0.0,duration,dt)
vt = numpy.zeros_like(t)
# This is inline C code
c_code = """
const double mb = 3.0;
const double sb = 4.0;
double mue = mb;
double sigmae = sb;
double sigmai = 0.0;
//double dt = 0.05; // (ms)
double leak = 5.0; // (mV/ms)
double sdt = sqrt(dt);
double reset = 0.0; //(mV)
double refr = 2.5; //(ms)
double threshold = 20.0; //(mv)
double Je = 250.0/Ne;
double Ji = 0.0;
int i,j,k;
// GSL random number generation setup
const gsl_rng_type * T_gsl;
gsl_rng * r_gsl;
gsl_rng_env_setup();
T_gsl = gsl_rng_default;
r_gsl = gsl_rng_alloc (T_gsl);
py::list l;
for(i=0;i<Nt[0];i++) {
// time for a strong external input
if (t(i)>150.0) {
mue = 6.5;
sigmae = 7.5;
}
// time to restore the initial statistics of the external input
if (t(i)>300.0) {
mue = mb;
sigmae = sb;
}
// Noise plus synaptic input from last step
for (j=0;j<Ne;j++) {
I(j)=sdt*sigmae*gsl_ran_gaussian(r_gsl,1.0)+Isyn(j);
//I(j) = 0.0;
Isyn(j)=0.0;
}
for (j=Ne;j<N;j++) {
I(j)=sdt*sigmai*gsl_ran_gaussian(r_gsl,1.0)+Isyn(j);
//I(j)=0.0;
Isyn(j)=0.0;
}
// Euler's method for each neuron
for (j=0;j<N;j++) {
if (v(j)>=threshold) {
l = py::list((PyObject*)(spikes[j]));
l.append(t(i));
for (k=0;k<targets[j].size();k++) {
Isyn(targets[j][k]) += (const double)weights[j][k];
}
v(j) = reset;
r(j) = refr;
}
if(r(j)<0) {
I(j) -= dt*(leak-mue);
v(j) +=I(j);
v(j) = v(j)>=0.0 ? v(j) : 0.0;
}
else {
r(j)-=dt;
}
}
vt(i) = v(0);
}
// Clean-up the GSL random number generator
gsl_rng_free (r_gsl);
l = py::list((PyObject*)spikes[0]);
l.append(3.0);
"""
t2 = time.time()
print 'Elapsed time is ', str(t2-t1), ' seconds.'
t1 = time.time()
weave.inline(c_code, ['v','r','t','vt','dt',
'spikes','I','Isyn','Ne','Ni','N','targets','weights'],
type_converters=weave.converters.blitz,
headers = ["<gsl/gsl_rng.h>", "<gsl/gsl_randist.h>"],
libraries = ["gsl","gslcblas"])
t2 = time.time()
print 'Elapsed time is ', str(t2-t1), ' seconds.'
def myplot():
global firings
t1 = time.time()
figure()
# Membrane potential trace of the zeroeth neuron
subplot(3,1,1)
vt[vt>=20.0]=65.0
plot(t,vt)
ylabel(r'$V-V_{rest}\ \left[\rm{mV}\right]$')
# Raster plot of the spikes of the network
subplot(3,1,2)
myfirings = array(firings)
myfirings_100 = myfirings[myfirings[:,0]<min(100,Ne)]
plot(myfirings_100[:,1],myfirings_100[:,0],'.')
axis([0, duration, 0, min(100,Ne)])
ylabel('Neuron index')
# Mean firing rate of the excitatory population as a function of time
subplot(3,1,3)
# 1 ms resultion of rate histogram
dx = 1.0
x = arange(0,duration,dx)
myfirings_Ne = myfirings[myfirings[:,0]<Ne]
mean_fe,x = numpy.histogram(myfirings_Ne[:,1],x)
plot(x,mean_fe/dx/Ne*1000.0,ls='steps')
ylabel('Hz')
xlabel('time [ms]')
t2 = time.time()
print 'Finished. Elapsed', str(t2-t1), ' seconds.'
#myplot()
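# Sketch (added): myplot() reads a global `firings` array of (neuron, time)
# pairs that this script never builds; it can be reconstructed from the
# per-neuron `spikes` lists before calling it, e.g.:
#     firings = [(i, ts) for i in range(N) for ts in spikes[i]]
#     myplot()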
# /FireDM-2022.4.14/firedm/iconsbase64.py
# APP_ICON 48x48 pixels fire logo
APP_ICON = b'iVBORw0KGgoAAAANSUhEUgAAADAAAAAwCAYAAABXAvmHAAAaoXpUWHRSYXcgcHJvZmlsZSB0eXBl\nIGV4aWYAAHjarZtpdiQ5kqT/4xRzBKwK4DhY35sbzPHnEzjJjMjKmM5+08FKOuk0N4NBRWUBrNz5\nP//7uv/FPys5u1xqs27m+Zd77nHwQ/Off+N9Dz6/7+9f+voTv//2vvv5Q+St9NeRzb6O/34//Jzg\n8zL4qfxyora+/jB//0PPX+dvfzvR14WSRhT5YX+dqH+dKMXPH8LXCcbntrz1Vn+9hXk+r/v7Ttrn\nP6dv8/vd8nXw337PldnbheukGE8KyfM9pvgZQNJ/waXxfhjvzww42fvZ3vv9ayRMyD/Nk/9lVO7v\nVfn5Kfzh/b8VJdnnfccbv0+m/bz+4/uh/PPkuzfFv+Jk/Vz5t/fzCe3vt/P93727uXvP5+5GNqbU\nvm7q+1beTxw4OVV6HzO+Kv8Vfq7vq/PVHOhdlHz75SdfK/QQmfcbcthhhBvOe11hMcQcT6y8xrgo\nlN5rqcYeVwLZIWV9hRtr6mmnRg0X5U28G3/GEt51+7vc4ia334EjY+BkQVBw+vY/8fXHE90ryIfg\n289cMa4oEDIMVU7fOYqChPuNo/Im+Pvr7/9U10QFy5vmxg0OPz+nmCV8YUs4Sq/QiQMLr59eC3V/\nnYAp4tqFwYREBbyFVIIFX2OsITCPjfoMTtRiynFSglBK3Iwy5pSM4rSoa/OZGt6xscTP23AWhSi0\nUaU0PQ1qJWIDPzU3MDRKKrmUYqWWVnoZlixbMbNqIr9RU821VKu1ttrraKnlVpq12pprvY0ee4Ic\nS7dee+u9j8FFB2cefHpwwBgzzjTzLNNmnW32ORbwWXmVZauu5lZfY8edNjyxbdfddt/jhAOUTj7l\n2KmnnX7GBWo33XzLtVtvu/2On6oF9ynrf3z9+6qF76rFVykdWH+qxkdr/T5FEJ0U1YyKxRyoeFUF\nAHRUzXwLOUen0qlmvke6okRGWVScHVQxKkjbx3LDT+3+qtxvdXM5/3/VLX5Xzql0/xOVcyrdHyr3\nn3X7h6ptqc3yyb0KqQ01qT7Rfhxw2ohtSNT+9av7/Y1ZU7brx7420fFbkzHUOWCyZlSJ6aLJuOBu\n4ZZxpjHqcK0nF7KFbife3NdJvWa4N9qkWn2mdjNQMWalxMXtHcvRCmXqnOxs/hzK6polc/vwr87a\neOcwt/XOXc65o3OHSKUUcaUyVt4cBbxzD3PPmUFczDrlGsx80GQPm6NuyhqbHbqj7GDAMlo6xk1k\nTm01dD9h+gBMzGY7FYyAWqscTItUl0/LtZvPG3RX7r+OwgcYRmibXy/TP64VvW7fx9yQed7pxDnn\nru1koweyuTJPiVy2Tj4PgvNgqibqkcrMJaQUALAx+ymAgjt1wlbsAsIJMCytuc6Y2dUNwuuuObVX\nuoiDSKe0+N9EgPs3B85JC/baGG0qZ8dR5phA/ZSwETGaMCROhEoNTXw6NAPvQ91rx2Qj5FjLqav7\nTe2ZtApk5gRmZ3LnIDwjdJ062tlwtkAd93cZv6qYNkTwJpT2i4n5E+lcqEuXm4sK3oiby/GWdXEW\nLgRqCUYan6zrxlPr7FeTSUPGlgEKcx96SZlL3lnypgj8IZxbwloVauOo43ae8/2dO/n+e1rL15SO\nhoXEtHQPta0XINu4aYyb14UzQhywyWGs/Tq6eJQz7+FjtfcNePXBAABmX3OeSJNH/ynoaDf+oZ7u\nv9Xiv9RRePKptjAZHVbRtamKnnVrDav1isuAWbj2VzFHB4eGuKYOHx6j0woknLjFk2kpptozw9lJ\nUdIdkBv0Se1nWvBIx35Dk60v+vxWn9rhcHLBXcwfXT8mnQIbwgLZn7mHGx6O45To9UzMQR+57nlC\nlfei73tEG9CRdJh5kDjnOJAHnz1jFXhg5BnRCxd3W1bb2JiAZdFTBasUiXmA7GnxkGrPZxFZEJTe\nA2NcXbT86brSAVCOydnkcrecOI4PjcrS+pcPnS6C2rGkCjUBBlso4rCeYx/d64YHkUBo2X1BbKfG\niWrBi5nxgpUBqOB0IAx+uYNcT8xtARjbV4DaVWNBTxEDBKHifM7Fjdx2II67zy94mfmPePnT6/8D\nR4iTrzlM6DWUmzYKR5OODKIDCkhJZcNaQAdO7Y4CQM4yaPQ0RRq4bYS0Y4Uv9nZZeYrfEPWR1NTX\nw7Rvog0UQfxgL1tyl9bQVe9kejknlEhlApO9afK6Vpyr57n7BQF9RqOX+XuxM6XFrS016vVug8CG\nfMOMQBsuDpR0rxbzzUCHluR3eVuVlv9R9zENJYL3cYwxUkLrOztoAVcyI/4id0EUVgGbBnVX+IrD\ntgzngiJGKaMCAdwz7Dcph/VVRCald0eoIBd0dGa2cdbk5NZPY5YCBoBp65ELe2R2UslbAxNaTuk3\ndMqf5r7cAlVwQSkYrlcXQR90CHNPH7fpZSSy+upG6Gd4eZk+duuaambzS6K4Gt3kJl5obBObpUVz\nL7wIFFm2R7kfEC7E2csHYWSqP7y6/+qA79eE9MMp3XJTSrtinlQKNR6d24Vq48LUGY21sBdyPZnj\nL8Gq7p0LFkxTjcOoGQFqq+RJcQY2oUeaxAbOg09MlzpFghHodRUUJmKyJjS3sSS4/eRHBzSyC2Ld\nQM4K03IgkIGWmDm01R2ne3VCCvoKsePmkjzfE9ycto8dkJVSiR3bRloAoBMzl8wRlTAMVVvjNIgN\nY4RjbVAjPXtGgedKB9KyVjQUFmRRryHWANHC8Awly2tsaBfRQzCsxYE9hjSP3c8wzoINwr8twK+v\n7ucNzJVEGW+V6sqVAIQ8VcIpE/EHUegJC13GDKMEB3Rs2UYGMzC+omiAeJfRqHi9SmQ1DDT82SZ9\nu/COV6H+5EF0Zgjl3W5xtOczUwui1ushawiNXCYIDmPsAJlfyj0SWOg+L+xw9rZoZIJGBTFtBzfS\nFYmCa66MM8PQQY9Y2/jYq7fwcZ6IOOkInAcmlzHgQLDA+GjcXuHjDu7juoYxHbroJJd2SQm5J9P2\nDBQYidxsLuuTdsbSVJnA0NbCf3i4AFJykkTIATLBqhJD675ny/YyU0weiOOSNSEuHUtNYQ3GWtla\n9kuOJikZxDGe9Wu4/nQnhIRh5ta2hny4JwIEJFuY0AauiywxkK4Y6Ncd8WCCKxK1t3e4XeIRBqXv\nJTEzZGsxbDwTNGYkFAAIW43HWUCZIY4CT2DDzl4jb644S3GFEuBPKaxqNntEUjG4A1SVlPr16HK9\nG3H7qil0SxK7eXfDH5NcJm4iJai2WwRRu5Pt8cNQWjjW3s+49n/5GrbbQK6RK0GMenx9MMW8STTF\njvhpWg4kaplnVFoOkE90ZuAPyT2ItmE/HW6TUlFJoiTOOxvZt0LsuCKGN6k/4gI9Y3SIjjjNQcMf\n8aiVSYos3A0kfxx2ZaM9dDqXh/+g80kS
4IVgtAL055mTGiLY4doEJRQWbTkCmBJDANlYZjcoRuE0\nDIbww6UBFfgGLKJv9GAtXVFVhSALaA93I1pTEmdoKkmGzt0O10QHUd4nS7AfbUFYxbGtTVbyzF5d\nFeEp3eMVFWFQFvSXPgBaqAIfAtrYmjYW9SeoJLiSwK/8BmmTEA6JgT4Cl3Q6Rhu2rgaKZJPuc9tM\n48YwAs1X/hQ5Iqu9DL+1BoyDZMCtMoPFlmf6NlaCQJDFrHmK3aXImbS+0WZxdid6naf1nJ1Z9LSU\nFuGeDNYjAMAUze6zE8pp9S54Q7aLGYJ7BpGyurF7kRSczlS/bM4vfmNKudNLcVolpMPAg4p8WCbI\nR4cvlhFj4muX80R5JIdaYmVJxWDLG1aO60348tm88PA+0AgSBWcf8l0gVeYceoC9LLhIMC0ye7/i\nEKtG614CFjcCZxgtCNDxskjUMbobq1Mlt2OThkHrcF/y3u6/MIuToaBLjIsMRjTjZonDFVeYqwtE\ndIsxorNfSwOdaX93S9U2PpnCd04wcV+gGFHHcsj1YsRR/jIKPqgmtzDjtKEnUo+NGUI6ltioRBCa\nBdJRZDsxYlzaphp7gPk+q6k9B84L9MGQYBMy0GnIx8AVtuspkjLwnTbq2Rh+ZqTDcptmSj20abZB\nD1MUno1Dyen+V5fSHslTNECAHWZCt/oD74B35AqHgOiHFxAh15IR71gZCMkQHNwyscekYCTB5GW1\nur+ISOljBrsWAfBu3H1NzRLBBM4pnH8XUjKmBExD48ZRTp8rCW+GidW0EkH4LJLLD+icQlpVBgOE\nGjOYJztnvCoIQr3aYor5Phz5S75ZeEAUpNgGfr18OGe4ww8sGBmoKfEQyJnvaZrvg9jT6Au6BHgN\nGinEOtGubC20ShPItw0tD9E0vszQ1KFqj51PqrEs7a+gVEQBnP2UxhZ0jZlXRw10t3QFiAD9gKWF\ngcZKv7X5MLWS8qP0Vf+7JI5CthOTmHdQWQCdXY4dhbGHTLiam+GHJE9Z6i4H6Se+jYR5waLgHCG4\no3UWsRwZwElNkDGaaz1lKdr1+f21CiGtLCW3NbXqcdGOeLfWA+gOcl6vDqtRmckA5U0Z/QoJZCyt\n1KleOX/hDGckYqKLmeMlnYDx4WK7WykdO+oEDl+ObC/hYZk04Cj8hBwDJ2AuPQm4waJCMaqlIeJu\nIkeOcDqWi05azkp/20xF65XoR5nMBnYXL+3TMzfikkQR0o7D0Hl0lFmxWJomFKiHgHq5Lf9scrwR\ne0PDBwsB4SMXSgeg05CpT+uFAnpMSzWEBg6qlAEE0nB8xLaDp+CsQzRDwEpAHOHPq2UifL81Hwop\nD9pDbtGRzme453E+UQCZIxrKxETXKMtOBfO5uS3as0qHoEmJHRAtJP60F5jXAlA8pFEUhphViWye\nE3EjtOtxccjFFbWsQiPBpAwmQ02w8f10kloEBYMGNiYpy0ljjxLO35Qa0Q7PSJ2SQaFHSsV/Jmy9\nPCE2Hx+gtTVOJVeWM74JCmulHY/45cLAGk7wa1kjZRwb5Kv1typnuJVds5xhimMiVaS/vHHC3Ohd\n/P5WKBItMrG5GPc2uE1Ej+63OuEt9GNLTLXY6fn8AplR9tNopXW0dMXdqv2D+j95giTc2iV5CybM\nzlIhm1EscupXHjn+cgN/CoDES/LSZR5yebaMpJHqdafJmwB3LiQzEVB33Fv86Q+tg7z+YHALf0ef\n4qCRpY1RKeJZec/pOAIjD0PWtT942l4LhHsQrUTYQAoAiEK9+mefRLk4AxwGFSINCiQ0LUdgGZsI\nr4GjjCJpBeMAW+QXNUEOsUggRbvcWOxEb3QxptCitWZqjBFCsun7XWPbnWEn+kKo5u9jonWEvwZV\nck7DwnE7cRaiFLSVeFvLrFBTXFqH1LqrphFHhLoEZUMbCgUr45PixycFwVtbFH1rbXMrZn4hH3KG\nszrItg8yOZw/SyCqNnfEVpJFaTjsZQJmO41QR7qo2kVhGKMifsjMRi4dPv23T+fd6Bhw0yUYz/KB\ncVO66kGdgrJSWphb2KzTtJyWhneRKjBpxI94tdcCvcGDX2tpVc5ekNJaX4XWDnhLWrUltlJqlPfK\n2MO9DpTAoOUE/O7WisnUaluYWattwhDOI1QlFJQDdSi47QZg1tZaU39LUoGPuPDF/tje9h8BY0sh\n3pJKGPKEv0MUnRJEcWgrFgfrBzywAlBnhB+gK0+mVC4W6LMqiwNipjHzazwxwO4C5AJWA+BJb22k\nLIpFBO7trfwJe5cxIZ8Yeki2EucqBh55APPMN8mmDOWnInfj5dk4q2MSRWJaHo0zRqhR6EjhMSN5\nPTHp9MbKUbEMFLapDTjmRbKyQWcAYrjadz3sLE0DHDcF+XL0h9bBC+GNsUtLS7V+IfVow8eXqD8z\nnTCfLRkOm3DhHf6MVmtDLmppZ2jBK3SME90wF0QZgRl/1u4MwQoEvwW3pFvXaWkRka52bZRaYayI\n2+b76M9BDi3LfuUOxGSvPeFDuDbyqXP0KcwK7ZUdkNEGBRaBlgpa4AVDW//l/pNbtjYJNgwMhhZW\nHiLdQ4WmjhSa7EgWSTY32TcpaF26ZMgH3O5xal/HLOj7EAff4hz4ZvYumZws19Y3DJer4a+gWxmc\n5LDnVheWCKqCPEhIqLGcB6JIC1AHUA01qQW+Vv8HVPuQSxfgkDuRXsyzJ9yICnBtkVHUFsEJ6BJ+\n2IiDuG2+GurwXNVGE6LTVtjXNg1zhgQkJoPIlpM/HZnan1TCoVJHK29B7Zjya2N88H9k8up0nkmF\ngJm/iFvkCHszwTS2oU1fm8S3bPG396OgQagttSID2PVD91ObTdNgJbRwiv+XU1Mo14aToKwlJyUj\nUJeUCp5xxVAB/iVbkN/Km7tK0toipmgPgVX5d15IoxQ5MG06ZSUL7VUjHL+5gKuVOwz6bE4t+yGx\ns8PbQDJmqGnPAlLnNMQrWh15LmlFusOHL/9BMqFpvCTH9+iq3NXBNJ6yca2kRDlxcnUVwMnVS/up\nWh3AAzyFBIuLTsLc/YzBYndYDPK2zfZ0Mf44J+wZtG53UknOG2/xj4aPPs2YgcsgX7WUtNOygjOJ\nQxYa8AaMcSE5hDOY/uDzZP8ifhp9DSTlPbVXUTdmf5UHtYk0zKbFOnxRk1WmAp07HsACFJaL2bh+\nXW1IW212CMN4Tqm13EaGG5snKebWpMgjuUkIoP0X3wohEEgQ8iFH2jvOTaC8L8Folx1C/WotCd5f\njUXSXUTRC0JHvi3PjGCueM8+V5EkSxebtjzJt5S9pTm1SFO0Poe8MtswMKJI/gDZsBTNjSBGXESH\n0UMjBO/RSR2D+Lu0ZSEqAKrgiBFjzHZgpHg1cncwiNFydBk/b01tGoRriBylqBHHHiGL9nE/W9Kn\nnXst8AAxDFTX7uKjbfwkv7nvX5U8QS+g15qz0mpu2pJUVJtZHnwxvyuRrPt6KwdXSfjaZmq5Y6ep\
nHAfLM6IMgPKA0ftofa4SRhmZ9UkTtB0oCUemw4CaVv4GXHMGxXL0CeEmylqL/bWb7+2z0qO1Lkw3\nKp459NvNUcu06A3AhD8EcgCRgToQrmmFu7RaOckJelRAwH/rTcs/2sc/8dfeaYKQolb0aLhNPGGg\nTMtqlL9plxbeHEE7CCDjrRQdZK+vpMdMMD+G69Vy2cAcMdHlW1nUuEgJjRBI2VlbkCRaIKrNXyjb\nK9YGNFU7RktrP4B4tK0nxNDqFXRbiXlllrV414CvSxgwwrJax0NNEXO1d8a94sgrcRKeTdoVpENr\ni0TRC2gZkLYalCPv28QP5urS+lmFLCi4lnmZMvgioMbYquo3Zm0O0p7nzi8MRdRIgXBs8wI10rcW\n1QsMqfWfc4gBeOsM9dz0vRO59fTk343/xkz4ga8VQop2Up9/8q7O/kz+S8uHcAoLrg8LwkxMZZF/\nwhXmhOIT0t5zFXiZ0vUEHuLMUCEAlydvY5tFVjAqQYOJoQlxgUxsSGck7Rhxg1qEx7N4pG2XLvop\nJNhLtbBS5j6+7GihDq7gvnHSZ9pbB8Nv1q3nQUAVQa5TuazNHgAbDgFcOUIf1zId6QihASTaJWla\n4rVgerqsJ5B4/HsGpGiFSr6ZMSXYnW6g23AG0DqNqHUepaOjsPMWCTg9Z/ArVD08hIsk1mlVkkD3\nQLjfCzkMj89EMZda73+5NzqtC1NGtD+pSb42I/Ukh++/rAZzvNdKNgkmapuZTsdSLCmDeqQzIsql\n53K6Fvi18swVORXh51QtGhMRbOEPi56q0aY/pVSjYRmiFjgK/Q1TXkdcMD0DEEQgOBdkROsOZzWL\nemZTCwKbHKrVMiiUWSMcMa+k3smP0TocRNs46IYo8qQATwUvk4axoEPPCBVsbEjaniO9kF89/Xv0\n0JbgEInzOBoxCocv9xGTaSRsLejR7qQ3ZPDEc0PFpJFM4WKcOSMJyjAm2UY9pp4cgnlD5x6uSxAM\ngqYHAChz1wNP0P1G3BFWPCFz2rECRPWlZfef/Hz+FpvdP22kShu12Iv3SAufwrnwGHq+pU4BfMHK\ncRNeBI/Mha+BbKSxIXkg5G0S9yJbradqsFD1jB4PlFeZlU2vR2bXQ6GBMFlN69u4MCiwINk3k24I\n7ginzO9cCUXBBSpGzC+F379OSQhMCROiETCd2sDseepxj10Uw0nryBMcS7evo2VECliX13oRtho3\nOQTAiIlv0DkwYto7TA0WgCRUO/WgGcZAlRXPB2PodNzbVZta0nxbEEmZR0/P6ZERGRz6s04ZiaMN\nSYc3Gwv+U3+gJBgWLcXcRPrVI6RQ3fxOgqV9rZGb1x4Nv876MfFcDu1fHdJub69r4vjxEsMTWr+E\nBavNOc/bcIzlR1DGzMpfNGz4PJ2C0k5spBLtilyorFrbxF3BW/JdACqiHkUNgFmgUD6k8Vc3KAXR\ncRWqTdpLQlX5LKd4blrPVgQ9+ZCV3k2b0kMVbHqYr3wqn6vYcXycVa3DXQ2pXC10ZtDHnKMleqrG\n6xGqpJVRpoZIg/U52y+l6/cYEMXH696RFi7PrsPSaGvowpY42aT9My2jEiO+22FR9P96N9v9x3MG\nZAWKhmRjuUABVvhujBIhqB0YP72nrnLpWkHO3CW1TfgCR78Eia1XvNLjopugVcAuuSeFZiKpQ+w9\nsHXUk3u4BpzR6cwyakBvbVyjHq5IcBkWKNM1WyQ7FJEq2kh23KDUZkDKt5ZPUI6AUB0D+U1rf1ox\nmugKfOEWHVlEQJSLEEZzvd1zqsYP3FN/eU53NbQucaRv9636pVTTK8vrSZf0f8/Q3t40eTetYkVR\nrXx6javqUUVtaqIUhoeXpSD/kb6UN7CoYZNw9iaLyGTJyA3tdpGsl3ZRD/Fjaa+7mzJAb8doqiTG\nV79FPecHC8FKyt1ahMH5b1oBDAytyAe0qWgZQg//cpt6WMv0AHCpmZTjvxa//FuYPVq2RPmhlbbc\n1aISwh9JU+Htd8cuXyyyfyu3x4wAytXwjpMG2l5ruKYn+ZJGo0cA8ayOujyKL/c9F/KemtHWMOqr\n50fePvLLTIycgEaQb6QMSlOPnv882umSUXNNQYyKdhSnZiJeECaBgBYLCCkf91PPTJ89SftsjvN5\nsgSwIiZpFaFpLzvTntSxT7zfIhVqM+pXfKJm4HMGwgXR+xYtoDPZQGkhT4CfqJ3duVUyRFPHogd5\nv5cW/PeP/7C9AcC6+78ISLKVImSj2gAAAYRpQ0NQSUNDIHByb2ZpbGUAAHicfZE9SMNAHMVfU6VS\nqh3sINIhQ3WyICriqFUoQoVQK7TqYHLpFzRpSFJcHAXXgoMfi1UHF2ddHVwFQfADxM3NSdFFSvxf\nUmgR68FxP97de9y9A4RGhWlWzzig6baZTibEbG5VDLxCQBADCCMqM8uYk6QUuo6ve/j4ehfnWd3P\n/Tn61bzFAJ9IPMsM0ybeIJ7etA3O+8QRVpJV4nPiMZMuSPzIdcXjN85FlwWeGTEz6XniCLFY7GCl\ng1nJ1IiniGOqplO+kPVY5bzFWavUWOue/IWhvL6yzHWaUSSxiCVIEKGghjIqsBGnVSfFQpr2E138\nw65fIpdCrjIYORZQhQbZ9YP/we9urcLkhJcUSgC9L47zMQIEdoFm3XG+jx2neQL4n4Erve2vNoCZ\nT9LrbS12BIS3gYvrtqbsAZc7wNCTIZuyK/lpCoUC8H5G35QDBm+B4JrXW2sfpw9AhrpK3QAHh8Bo\nkbLXu7y7r7O3f8+0+vsBGPlyg0eTK1UAABBYaVRYdFhNTDpjb20uYWRvYmUueG1wAAAAAAA8P3hw\nYWNrZXQgYmVnaW49Iu+7vyIgaWQ9Ilc1TTBNcENlaGlIenJlU3pOVGN6a2M5ZCI/Pgo8eDp4bXBt\nZXRhIHhtbG5zOng9ImFkb2JlOm5zOm1ldGEvIiB4OnhtcHRrPSJYTVAgQ29yZSA0LjQuMC1FeGl2\nMiI+CiA8cmRmOlJERiB4bWxuczpyZGY9Imh0dHA6Ly93d3cudzMub3JnLzE5OTkvMDIvMjItcmRm\nLXN5bnRheC1ucyMiPgogIDxyZGY6RGVzY3JpcHRpb24gcmRmOmFib3V0PSIiCiAgICB4bWxuczpp\ncHRjRXh0PSJodHRwOi8vaXB0Yy5vcmcvc3RkL0lwdGM0eG1wRXh0LzIwMDgtMDItMjkvIgogICAg\neG1sbnM6eG1wTU09Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9tbS8iCiAgICB4bWxuczpz\ndEV2dD0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL3NUeXBlL1Jlc291cmNlRXZlbnQjIgog\nICAgeG1sbnM6c3RSZWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJj\nZVJlZiMiCiAgICB4bWxuczpwbHVzPSJodHRwOi8vbnMu
dXNlcGx1cy5vcmcvbGRmL3htcC8xLjAv\nIgogICAgeG1sbnM6R0lNUD0iaHR0cDovL3d3dy5naW1wLm9yZy94bXAvIgogICAgeG1sbnM6ZGM9\nImh0dHA6Ly9wdXJsLm9yZy9kYy9lbGVtZW50cy8xLjEvIgogICAgeG1sbnM6dGlmZj0iaHR0cDov\nL25zLmFkb2JlLmNvbS90aWZmLzEuMC8iCiAgICB4bWxuczp4bXA9Imh0dHA6Ly9ucy5hZG9iZS5j\nb20veGFwLzEuMC8iCiAgIHhtcE1NOkRvY3VtZW50SUQ9InhtcC5kaWQ6OEI4QTVDMDQyMjFFMTFF\nMkIxQTJGQjI2NjJGMUZFRDEiCiAgIHhtcE1NOkluc3RhbmNlSUQ9InhtcC5paWQ6OTc3YTBhMjct\nYzRkNy00MTU3LWJiNWMtNmI0Yjk3M2IwY2E1IgogICB4bXBNTTpPcmlnaW5hbERvY3VtZW50SUQ9\nInhtcC5kaWQ6NDIwOTFlODctZDQwMS00OTBkLWE4YmItYmY3ZmFkMGExNDRhIgogICBHSU1QOkFQ\nST0iMi4wIgogICBHSU1QOlBsYXRmb3JtPSJMaW51eCIKICAgR0lNUDpUaW1lU3RhbXA9IjE2MTI5\nNjY1NzM4MTExMTEiCiAgIEdJTVA6VmVyc2lvbj0iMi4xMC4yMiIKICAgZGM6Rm9ybWF0PSJpbWFn\nZS9wbmciCiAgIHRpZmY6T3JpZW50YXRpb249IjEiCiAgIHhtcDpDcmVhdG9yVG9vbD0iR0lNUCAy\nLjEwIj4KICAgPGlwdGNFeHQ6TG9jYXRpb25DcmVhdGVkPgogICAgPHJkZjpCYWcvPgogICA8L2lw\ndGNFeHQ6TG9jYXRpb25DcmVhdGVkPgogICA8aXB0Y0V4dDpMb2NhdGlvblNob3duPgogICAgPHJk\nZjpCYWcvPgogICA8L2lwdGNFeHQ6TG9jYXRpb25TaG93bj4KICAgPGlwdGNFeHQ6QXJ0d29ya09y\nT2JqZWN0PgogICAgPHJkZjpCYWcvPgogICA8L2lwdGNFeHQ6QXJ0d29ya09yT2JqZWN0PgogICA8\naXB0Y0V4dDpSZWdpc3RyeUlkPgogICAgPHJkZjpCYWcvPgogICA8L2lwdGNFeHQ6UmVnaXN0cnlJ\nZD4KICAgPHhtcE1NOkhpc3Rvcnk+CiAgICA8cmRmOlNlcT4KICAgICA8cmRmOmxpCiAgICAgIHN0\nRXZ0OmFjdGlvbj0ic2F2ZWQiCiAgICAgIHN0RXZ0OmNoYW5nZWQ9Ii8iCiAgICAgIHN0RXZ0Omlu\nc3RhbmNlSUQ9InhtcC5paWQ6Nzc3OWJkZjYtNGVkNS00ZWI2LTg1OWYtYjExNzVhM2M3MTRiIgog\nICAgICBzdEV2dDpzb2Z0d2FyZUFnZW50PSJHaW1wIDIuMTAgKExpbnV4KSIKICAgICAgc3RFdnQ6\nd2hlbj0iKzAyOjAwIi8+CiAgICA8L3JkZjpTZXE+CiAgIDwveG1wTU06SGlzdG9yeT4KICAgPHht\ncE1NOkRlcml2ZWRGcm9tCiAgICBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOjhCOEE1QzAyMjIx\nRTExRTJCMUEyRkIyNjYyRjFGRUQxIgogICAgc3RSZWY6aW5zdGFuY2VJRD0ieG1wLmlpZDo4QjhB\nNUMwMTIyMUUxMUUyQjFBMkZCMjY2MkYxRkVEMSIvPgogICA8cGx1czpJbWFnZVN1cHBsaWVyPgog\nICAgPHJkZjpTZXEvPgogICA8L3BsdXM6SW1hZ2VTdXBwbGllcj4KICAgPHBsdXM6SW1hZ2VDcmVh\ndG9yPgogICAgPHJkZjpTZXEvPgogICA8L3BsdXM6SW1hZ2VDcmVhdG9yPgogICA8cGx1czpDb3B5\ncmlnaHRPd25lcj4KICAgIDxyZGY6U2VxLz4KICAgPC9wbHVzOkNvcHlyaWdodE93bmVyPgogICA8\ncGx1czpMaWNlbnNvcj4KICAgIDxyZGY6U2VxLz4KICAgPC9wbHVzOkxpY2Vuc29yPgogIDwvcmRm\nOkRlc2NyaXB0aW9uPgogPC9yZGY6UkRGPgo8L3g6eG1wbWV0YT4KICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgIAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAK\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIAogICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgIAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICA
gICAgICAgICAgICAgIAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIAogICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nIAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgIAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAgIAo8P3hwYWNrZXQgZW5kPSJ3Ij8+Bodm\nXwAAAAZiS0dEAP8A/wD/oL2nkwAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAAd0SU1FB+UCCg4QDaHN\n0HMAAArfSURBVGjezZprjF3Vdcd/a+2zzzn33rkzd2Y8Hhc7tgkx4MbUSUmRDVVFyKMC2w2BIBKo\n0pQQSoUqRY2Emi+1eKQE8aFqolbpIx/6UPuhJLQkpVIasNKmadWYoDbQFoghNTbGGIw9z3vvOWev\nftjn3pkhTeO3GWlpZq7u3fv/X+u/HnufK5zjnyN3390+8N3v7LK52V3abGTv+ta/3nA66yXnCviz\nH965deHwoU+/9I2/+0goyhEEXGfkQ6e77lkn8MzHdr2z2H/ogfl9P9iJmYhC4kFzf7T/kY//Pf/w\nz6e1vpwt4Lb7pvSZf9p/T3l85jOE4EVAFERAxPCtxlcv2fPvN74lI/D8zdde9MwTz33FFrtbEweS\nGCsJgKY8fSb2OuME/ufWD1y1sH//o64sJiSNoKlBixoigij48ZHZtxyB5z98xfbegZcecxJGB+A1\nMRprAlknMPNcgkiMhnOFvqUIHLztvZf0Dr76mCVhNHoa8tWBkbdVqIfe60Lio/5FwVl36kzsq2cm\nYe9olq8fecRJv+NTI20YnUtLRt8ewQNIJfjM8BmkKXhnW94yBA49/+QDSdXbnKRG2jJGLy1JO7bi\nPS6B1EOaCokXnKu22Z7dyXkn8ManP/CupD97l8+i51sXlbh8RXxiBDJDExBniAsQep3Fr39z2/mP\nwBuH7k1ccD6FfF1AsyXg7oICHQuRQG6QBHABnIEzqu7sTeeVwOzuHVtc6O9wHtx4QFs2BO8v7pNu\n7hFm4hbiQJoxgQdmof/R05XRaRGoDh/4hEmleEMnwvB1f3GfZGNBWFCwpWavbau9T7Sqv3r28W9e\nd94IaOjfiBraqWLDAtyakmRjgQhQvekDKZAzlBDOkIXjv3pOCby4Z3cO0L1v5yaq3gbxhrQGrjfS\nzb0IfqD75YOXgI5Y3HVgZXeH/fHHps8Zgak9374foDx2+AoUkRZD7/sNBZIugZbcIAsrSfj4+iAP\noPKLLz57yzkhMP9bV90ZXj94m+3erWa9TbhlXhYjWVf8iMeT6fJHR+AWS7mgEKrurWedwLGHdl1Y\nzR15iNAbX5Q926CaxBnUnVZHA5LZstofLdlYLGneG6QWI9BaJqNq8We7f/DLl5xVAsnh/V8kFCMo\nhNnX7sUlIpkN9a6dCsRAa3Ox5mu7IFnfh7SWjzfEGzoW4pjtAEHKI9/fOdir+4VrL1p4cPvvnjEC\nc/dd80HrzewQV9fvYvF9IuEWPBFsYki7Al1uoTbDb+oijRAj4GsiqSGdAVkDK64F6P3eB7eUr/3g\n36yYu+aMEbCZV+5B6nf7WEWsmBuXzGJp9CB5qL0PqIAqlqWgCk3w7+hF76dRRqR11NL4GQnFlfbY\nb2Tl8QN/RCgmrOxNm5mcNoHZz73vvZSL23BAw5DmknYlW5IETsApqGBOCHlKaDWx+r1uXYFOVsP3\nD8hoJ0SHhKLRfeE/P2rFwvZIqD/d+/1ffP9pE5CFI78eAde6bcTqIRoHtCEBwJQh4DA+ERO5PpGh\nkFzcXfEZ8eAmQhzwFGzx2O3D8uqgmj3wkP3hHf6UCRz58m1tqsVdoiBN0AbISIgy0BpEo5ZEkGFk\nLM+xLGNAfGDSqHAbussIBCQPyGg94FULl5PUHToNEOa39rp7f+eUCbRe/a/3Y0WOGjoSk1BSw62u\nIInJ69b1EQ9WyhBoaI+CCJa4mNCuBC3BVeiaPjJWgg+1jAJusoIErFpoiAM3WeKmSySB0D/6m8WX\nr/75UyIQenO/gMamM/C8+BgFd0GFeNALCiQLUAhohSWGZY0oKZ8tS+ol0/X96OE0ktBOGUtqYuhk\nSbKxF2WXGLigZffwg6eYA/2fQePJWRo23EQSQ9sBd0EPyQpkpIIQJW95i2Fz8DmW+DcRAGlV6KoS\nSQIkAc0C2g5I0/Ab+0ha71nvhc1vt4d3bjppAoJtREFSllWOOhKpoVNl
9OhYhSQB6zvM5ytmidAc\nW8oDqXuDGDJVxHXTOKVKO5Cs7SNZqF+PnbvOFSkWDlx10rcS4t0UVleb1GKnre94cAZjFaggq0qY\n0xiFZGXRsKyDFTNI6A1nJgAyQ8ZLmE9AwLUDOlEhCSABaQZkRmPEBMwV607+WmWg/7piDAggIO0q\nhheQZoVc1IU0YOoxfL10bLtVq41beAoZ6EzqaHQMKRyIoRMFklcg8QwkeV3t6v00T7OTJ+DCUYx2\nJBGWPKiRwNCbAjRCvdkEImP16aXWiHpCI8f1v7U0pqLQAMuqiNiDqRuuKbktEVBD3fwrJy+hhH1g\nG5CYULEpSVy0VY8Og2YvcfoUGcdkTS1uP4yCZW8jiKD9bw+T3Ij9hZ7UUVlmqSGpWyLQHPvvE05i\n2/OJPHohe5Kk9vBgmkxDtLyKE6eWsc67+L/gQdeATCM6jehqRFYjOoXlN2LpFZiA1be9lhkkJSRV\nbSG61Q32qtAsmWHiun85YQK9H+574I1Hru9Ya/Qb4g1MYnKlIF5qdZT1nK+gDiSJZguITiE6BboK\n0VWITta/V2HNu8BvAuejeQde6mDVZ4Y63yQDyQQarS/Je+5ZOGECYrOHGkee/p6E2esH5dMCiK/A\nl5BWS4DFYaKYfw8mCRJeWgZ6EoY2AToOOo01PwvaBPXgXO31gcU9pK5+2mzs0zWb77d9v3bifSCd\n3vRn4sq1VK/dNRgfrBDMC+YdJIqJxDOXCOgo1rwDssuBg/XKEyDjNegOyFg0HUXcT0N2E0gjRs+X\n4Kvo/UQw5yAo0vCHab99J+Whq1nce9uJR2DHX78izebDg4Yl3rCe1vNPfbumbkkG+XXg1mL5J8E1\noHqqBjuGyCiiA2sj0gYdgfRWTBqAYkmCOR8tSaFMEZ/vZ+yyq4M/dgMc/AuStV88qU7sJ9Y9IKmG\neAAJ0CcOZoNEEx8lIB7z14COQXIZ5J9Bqr0rwcpIPMnLCGgLkRaSXIjppRFBkoNvQtLGklGkaCzQ\n3noL+vznRV78XMhGPy+XPvryST8jK7+2+U+tPPpx6ocSsrZAOmVsNq0G5uPQIiOPMjzpiAObB+mA\nKMJgwF92MEAQEar5+6D/lfq1BBNFygI3c/Q1Ewx6UyTtvbL+U9tF7ilPug+4qU13h2Pf22GhOykK\nzCusquu/GSJNjEZM0hqkoNHrywCLyArww79lBJNWnLpiQqH9w1i6uCo2zOx1cZtv/nHgf/Iwt+3R\nwzo6fbvmapKBINBXSAQoMPOxGWmKSAORHNEckQyRdGjDroxfUb2wHmZKPPoKUs0groelHvLReWm+\n+wZZ8/gLp3ekvPLJv5GRtfdLbnHun0swTcEl2ODyM7z6f4AddL4a7MAGt1koVh1YFpEK0YqQTmPp\n+uOV/7lfkvEn/vGM3ErIFU/9NiM/9ZBl9Uw/5zHXiREBrPwuS/clA7C6DKx7k3wEs0WsenbZRZgS\nZByT1c9W9s6rfOdrT5zRiy257Om7Lb/wThpuUaouLHQxUsyEqvtVrAax0uTH1gnrPw70hrVExJcw\n+YWkf+Xl2eRfPXPWntTbgesuJnz/QePYh8w3pPJrQByafwpt3Fkn7E9YwxYoj98M4SCIK6H1iLH+\n3mziz0/64fcpf9XAjl69JZQHPonT6yvyjeDQ/Ha0cQciyf8DfpFq7rOVlXv/Q0gfLuWSv2yMf+mH\n5/W7ErZ414Z+94V3q/TeIXrZVm38ylbcltUi/gjwcj1fvAy8XHb/9rkk/5PviHx94Uzs/b/gDDIL\ndGS2ewAAAABJRU5ErkJggg==\n'
# 32x30 pixels black
home_icon = b'iVBORw0KGgoAAAANSUhEUgAAACAAAAAeCAYAAABNChwpAAAACXBIWXMAAAd0AAAHdAHxbc2KAAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAAfpJREFUSInVlj1LW1EYx383L6aD\nFPwCXQKi30KwBedOxaUUh0opCkYUt3boKmpoqJLWNGIbLSUR3yYlcwf7CTpUEIeAk+JgXjrcc+Dw\n5Nzk5PbG0gf+Q56X8/uf3JdzIVw8BqrAtVIVmAi5Vs/xGqgDLaEmkBG9Y8ApUAAeRgGft4Clloz+\nX0b+TT/gP5Rk/q2auTVyK1HDD4CU0qGlvix+r0YJ31dgHSmV63Rp1sLA5ywLHQEPLL0DQLmDgWyv\n8IxlkYoCAXjAO/zr7Rkm9gIMvO8FPmNZoGzA48Bno7YuTNj+iS+u8Cn8Z9oc/g4kA+BaH4GY6kmq\nGdmz2A3+AmiIoW8OcJuJOLBt6VkIgj+3wHeAhCNc65NhIgHsinoTeCXhk7S/Xr8a8JgjXGtTmCiJ\nekNtGICnwJ1oKBlwD/8mc4XbTMSBoqjXgWcANVEoqgG983wIuFZemNgS9RrATyNRMOAesPEXcNsj\nKu+jM4A08AH/2dduPSAXAVwrZ5iIAbP4Z0OagMhGCNdyPgum+wDXeilhMZkARl2dhogRFwOeJRdV\ntK2dcGkS0QJOAmrjXebbNhzGQBN4ElBrOMx3dnTfYTPQ6iOv8V8YaP5rA5d9NPDbpWkIuCD4bVbv\nMCs/aEydA4OuToeBY+BKyfxY6cXADf6RWwEe2Qb+AP3yZGXPWUKPAAAAAElFTkSuQmCC\n'
# 32x32 pixels black
download_icon = b'iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAACXBIWXMAAAf0AAAH9AHUXkEZAAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAAgtJREFUWIXF171rFFEUBfDfLhbr\naoylaxD8gCUgGEljk0JSBrUSg/4jlhpsLEJahWBaQbARtNQyRiws7BUSvxuRTUwiOBZvFibjzs4b\nd+MeuLC7b+45d9+c+z5q4jGOS7iIKZzE0XTsO97jDV7gKX5U4O6LSaxgC0lkbOEB2oMIN7GEXxWE\n87GLRTSqirfxdgDhfKzheKz4NL4OUbwbG4J3Sv/5fohni2hlBWuZz028wtmSIp/gc8HYMVwpyV8T\nOmk7P7BUUn03ZvqQz0Ry3MknTop3+zAK6EhfRT1NvIkDfYiHjUO41f1yBJsFle7XDCSp5lgdlwUD\n/m80MVcXHDkqzNZxfoQFTNWFXW1UOFUXTDgqjMOO3i79hBNDEGlhvUBjG74VDCZ4bbAOaaYcRfxf\nCOt/v359bO+eEYsaHpZwr8JyyUMJFv6hgIUI3vtwPeLB35ivID6f5pTxXiN+Kf6JCxHi05F8mzjc\nTVqJSEjwARN9xCfSZ2K4lrOJbfHbcVFnlDk+Gzs4kydYjExO8Mjezqilv8Xm3+01fQ28rEByO5Mb\n4/jsDB7sVQDhTFe0avXqjBu4Ks7xCT7q7yGEo/NGJOGW+BvTOs6ViXfRUu11lMWq3HE8Bg3h9NoZ\nQHhXMFzlq1kWLdxT7dzYEfr8r1bLo8omM4Y5zAo+OS3dz4Xr+Tvhev4cz9IiSvEH5VjROf8eZREA\nAAAASUVORK5CYII=\n'
# log_icon 32x32 pixels black
log_icon = b'iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAACXBIWXMAAA0bAAANGwFDTd31AAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAAYlJREFUWIXl179LVlEcB+DHWpoC\nCXJxqCXXEHEtscagKExtMMsXJ6NFh5YIIqKhMVqLQhocJfwjlAgXh7YIhyKHyl/v69twzoVDRPnC\nOTe0Lxw4957LfT73nsPlXOqps1jEF3zEC/TUZJvELvbwCZ/RxjpOHi2MP8Vj7OAJXuJtHBtE95GC\neAN3Y/8YTsV+GwvYwIVSAabwHF+F+YYbuJSE+I7jJfBnwnxvoD+eexTRNl7hQbxmKTc+HW/cxibO\nJGNpiB/CwhwogVcBKigN8SYZa+TEb6P1C161TfThPL4JT/6wLrxqW8Ki28HVnHhjH3jVWriWE5/a\nJ76HJu7nxG91gO/i8qHBJzvEr/yXeBOjOfGbBwkfOzT46EHCx3Picx3iIznx039Bi+Jw71/iXfgg\nbB7e/QFvCZvL7DUckdcxzMpv8Caul8BhPkJD8bgLq0mAlsxfuLROYFt4yglcFDaP20mAmVI44e8l\nfdVVfw2z6C2Jw/sE3RJ+HM4J01BLLQsr/w6660Kr+glsph+VDg3kNQAAAABJRU5ErkJggg==\n'
# sett_icon 32x32 pixels black
sett_icon = b'iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAACXBIWXMAAAd0AAAHdAHxbc2KAAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAArxJREFUWIWt18+LVmUUB/DPfcdR\nY8x+oDmagqQzJIRpqxYqIkITtWjnRjAMIgRp05/RrhA1kmrjRkGsKEhwZSqVijQLm1HwRzb2Y1Mp\nNlG2OPfCfa/zPvd5m/cLD7z3fb73nHPvPed7zkMeFmAnDuBi+buJF/EDPsAEFmbaTmIYe0vD92rr\nOlbWeMsx3eBM4c35BLIYlxpG6+tzdFDgRII3iaW9nBSJAArcwZIE5xYeYE2Cc1+8oX/m2uwkbnwg\nXmMKq1ucw7VeztsCgCst+zmYTm3WA3ikcb0ITw8ggLHSdt3nSHVR5cAwvhGvc0pk/Si2DCAA+Bq3\ny2DG8RNewGwVwF68PyBnudiPDwshMpPak6nCjziIL3ATf2AdXi6N5tq5gY2EqvWq4eb6WLosR3Ck\nD3sTQ3gHmzIiPoY9mE1w/sZJrMdzGTbvFULtxluIt8sg/8wwCo+Wdle28K538DZmWoiH+nBO5MWB\nFs4dvFVdLJfW8w19OK+wIWHvU6yguxcUQgNWzWHsSaHp/eAxUe9NzIgc+Zdu5Us1pjbJ7ueeYi7S\nqMjeuZ6e/Nqu45ke/6/AZ8oE7WArzmJHwtgr/yOA1xJ723EeOwuh+6tbjM2IMvw90/kyXMbjLbwr\nHZzOMDgq5Hcog7sAn2Q4hzMdHM8gEq/0KJ5IcJ4SJbY90+ZXhWjF38tPtN9wWCTSzyKr1+JVvC4x\n/zVwFZuqkngD72XeOCjsw0f1geQ78a2nxBg1jucH5OyCeOJ1Yij5FZvVBhKizdb1fqHQhm3zdH5J\n5ES9i47gLt1q1Ww2s+Ibzxc3PNzC71Y/2iS2rU3nYCy12ab/v+ieaJvIOZj8JYSp74PJYnEG7IVT\neLZcXyZ4N9XG8H4xhF34Vnc/vyZEp8IyDx9OL2C3qLB5o8BLYnSfFA2siS04h3fF8Tyrhf8HtVWw\nghy4Wb0AAAAASUVORK5CYII=\n'
# 24x20 pixels black
folder_icon = b'iVBORw0KGgoAAAANSUhEUgAAABgAAAAUCAYAAACXtf2DAAAACXBIWXMAAAGaAAABmgF6gyCSAAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAAKpJREFUOI3V1TsKAkEQhOHP2TUx\nE/EwYmBg7lU8rGIkGAkGsgoGGvgKZgMRQV1owYKKBuqnephpGGGBCgfcnlzV50MNNX8R+sqzpoD1\nh4AbxujW7nwK2HwBeOeLPNIKS0xa2KLXqPt7zRPKoHDoJxSBgCK6QRndoIxuUKTAcOoG50BASvLj\niNLlJ4DIEV3/vsFv7qAKBOwKeR0O5C/j9GBofxm4x1FevStM76PwTrzJmc7vAAAAAElFTkSuQmCC\n'
# playlist_icon 32x35 pixels black
playlist_icon = b'iVBORw0KGgoAAAANSUhEUgAAACAAAAAjCAYAAAD17ghaAAAACXBIWXMAAAbqAAAG6gGednyuAAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAA3pJREFUWIWt11uIlVUUB/CfOtaI\nJd2v0JWieggM6skoK4kiuthLCNnTSA8SVFRQdEEzsttDvSRRZNBDD1FhPRVdMaKkC5VO9SAZKgo2\nzVijR8c5Pey9Pfvs+b4z58yZP3zs831r7bX+Z+2911qbzngKzT6frTi7zsHANAQa0chH2IZJjMfv\ncAAHO8y/C0vxOW7An9P4m4Il2NDrpAwva0ViBy4uFeYU78dXKfWIA0LY4XGsyWR7cBn+rpu8VP9r\nPon7o735WIYhfBbl5+UOyz0wFsfX8SkO4Z9MPlLHPOI6PIeXsBBP4+MoOwfX4nAnA4NYW0GsWyzX\nHo1nM9kT8dvpM7TdFRYLeyAncXuUPRrfT84nDOiwIXrA27gPP2ABFsXnLxwXdVLoJ0oCJ8aJ78zQ\n+XKsFvbLg/HbmLAZcxwuxqMEYBjrZ0jgQlyFB4RorqvRmyhGMDeO82boHEaz32tweY3eIWEPzDqB\ntcJZX4Y/cHONXkrhbUuTlqAfAmP4JPudCk9ylNb8P+GEtGFuMc4G5scxFaxUrMbjU0mgnwiUSLaO\nCOueiHSMQB2BIZzZg/NJ7eW5kb3PiMBD+BUruyQwHh0ljGgluj34t1cCC4VEtREf6tDZZARyJyvx\nS/y9WyhwU9AUOpYq7NWe1/fhzg4E3sCN05CsJPBljWxUdc1/TYhOidTgnIvHsAlbov0NuE1FtJvY\nXENgvIZAE19X6B+D54Wz38CLWIU7IqHUpF5ZEvimhsD+GucT0XCOAXyQ6bxXyG/JZOO4JifwXQ2B\nfRXOd+H6Ct1HCr1XC/lNhXy3ULI18X0NgV3FpE04rUJvgXDc8r5wdaFzqVZBSs/DicCPNQS2a+3+\nTrng1szosKLryTAoFKukuzkR+Llmwhd4F2d0cA5Pav9nz9TonSoktqS3NxHYWjPhomkcJ6wvCEzi\n7kLnJKHzyvUaicBvXTqqw7rCcFM4iiui/CwhI5Y6O2ejHyD8sxIDeEtISkM4v0LnpzmRyXZc0AeB\nRcJ6HivkiB1CczKh/WJzinAzOiG+3ysS6PnWWoEXoq0lXZA9En0OztYSEFLtFXgTXwnld1TYC+nK\nNw+XCD3CChxMSzAqJoUu0NBqrZpCiHfi9+jgFVPTdI5h3INvCdWr2aVjwpruL76NCCHdqHUnuFrI\nhIsjqVHhqL8v1Iujl5P/AWN0QXHDhK63AAAAAElFTkSuQmCC\n'
# refresh_icon 32x33 pixels black
refresh_icon = b'iVBORw0KGgoAAAANSUhEUgAAACAAAAAhCAYAAAC4JqlRAAAACXBIWXMAAAHyAAAB8gEzAEwKAAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAAl9JREFUWIW9102ITlEYB/Cfj0Tj\ne0hILDTDRiGRsrEQUoYSGxZSSlYkaSwIKQsWNmooNvIdihTJBimfRSiJQc2kfOermWFx3mvO3Lnv\neGfmvu+/Tvd0n+c+//859zznOYfuYxXuYFkPvs0F+/Cn0G5hdiVIJ2IHnkbkSWvFEVSXg7gGpwsk\naeJ0e4vJeRH3Qz1+l0CctIcYmxFrYHfJB+NKKngbrmMNjqdsv7CrCNExYfbOYWgp5FW4nSJ4hvmR\nz5bIdhFTi8Tqgw+R7x0M6Yq8L86nyE/oPLIFOIl5JQxoIVqieFcLPJnYmCI/UBhFb7E5FXdrltM4\nfIucznaltAdoiGL/FNK6Aw5GDq+EtZAnBglrKeFoiI3D8D0yrsyZPMFiHTNnZGJYFxmey3fq07gf\nca1XIFsaORwWcr5cOBr1Fyed5kjVlDKSQ23E1ZS8/FF48UXYgsuNr5GIkX2FaXmBtcK2WW68ifpj\nKsDXCY+0z8DMcq74YhgR9b9VirRayLgq7bWhDcP7V0jAg8LzpvaF/hqfEodJ2IPRZRIQl+W40v5D\nsht+xnb514KmDAGXROeD+pTxHZbnKKAxQ8Af3Esc9hZxOIVROQh4nBG7GXVJGg4o8uEcPThUZuBD\n1G8Ryn8tzidZUOzks0g4bvcWHwvPC9gm3C86YL/sX/ASs3IQsBDTu3LYLdTq1TovyFYcElK1Ylgh\nVMdYSAsuYxPmYoJwmhqPGdgg3KIahWtcrzEF12T/mv+1d3kISLAEN4S9u1QBO0sN3p1zfw3qhAvJ\nNKGqDRT28/d4grs4IyzekvAXYJ7txNRGRn0AAAAASUVORK5CYII=\n'
# subtitle_icon 32x32 pixels black
subtitle_icon = b'iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAACXBIWXMAAAPoAAAD6AG1e1JrAAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAAdpJREFUWIXt102IjWEUB/DfnTsa\n3Uw+SpSSz2SHlYWwsbFQVmLhIxtSPmLH2leymRorxUJWFkrKSlGzs5kwNMZqkpKNQmPmuhbn3Hrn\n3juIeq9y//X2dp5znvf8n3Pe55wOVHAGE2iU9LzBaVT6cRxHsS9JlIG1uIUpGMO2khwXsR0vYQaD\nXSAwiJk+VFHvAoE6qn1dcDwLPQI9Aj0CcxE4hreYxgc8warUnRcl+3DBfiTXlqb8POUJjOIetnZy\n1N9hbQuG8QoXMQ87sCj1S7AGCwt7VmKFKGqwGjVcwHocEuV+uWhGPyWwQXTIkSQw1Yn5b+IaBnAA\nC9LfdNGgUwqe4pPokB/xWKSk2sH2VxjDOxHFq63O5yIwiU24ItKwEzdE/6YlhInKHLqbGBIHOinS\n1IaGyFcTg2af9mDa3E35bMqXUh7AV5Gq5r7PaTOQ8p2U9xe+W0Oj0z+wC9fxAO+xO9dH8/0Il0Va\n6iJa83Ffe1c9h8XYg++y/7eiNQIbxbWZxDeRw+F00sRePMOX1N/GsoK+eA1f4GHuKaKWvtsIlIUa\nGv9sJewR6BH4vwjU/Vmj+VtUUe/DuCinZWMzXlfEcHoCR8TUWgbWieF0iGilp0QkyhrPx9Nn5Qef\nmpdsaNgIXwAAAABJRU5ErkJggg==\n'
# wmap 220x115 pixels blue
wmap = b'iVBORw0KGgoAAAANSUhEUgAAANwAAABzCAYAAADzEGy6AAAACXBIWXMAAAOhAAADoQG8l/2DAAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAIABJREFUeJztnXl4VOX1xz/nZmFV\nEQIoLrWK1krFrXVXaMW9VqpStypYNUDmTggWqdWKqa2tIsoydwKi1qLWDflRtaJ1q7jjijuKgrsC\nQUFAyDJzfn+8dyaZzL4nMZ/n4SEz973vfSeZc+/7nvec7xEKha0/BN5A+CM+cSKOjddtKaecmbKK\nWi1lHTvSh8/4hkHM5FMqKaWcR4CvEWbjk8cA8OowgvwNWIlwBLAB4Tf45J2I/idqXxp5DqHMfacc\n6BVnpGVA7xx84tdxZN8c9NO+sHUQyo/xy+M57tcHVAGrEPbBJ2uo1J7Mle9yep0iIwW9WrXuzixZ\njlf7o7yGMJUgS7E4AWVfKjgBgHqeAw5yR3gWPrmTWrVYw1gEP0H2p06WRvQ9XodSynk0MJm50hR1\nbY9eAOwNfIFQDuwFnAz0yMtnVTbgl63z0nchGa/bUsoJKDsBewIjgQtxZH5Or2NrJXCD+2o1sBXQ\nA+U6/DIpp9cqIoU1uBBePRPlDvdVEFCgBFhOgOMoYzuCPOsefw1hNv24mY8opzffYsb9P+BF4HWU\n1+nPB9RKMK1xeHR/hCVAaU4+VyRrcBgIonnou3DY+gDwSwCU/wHj8Mv7WfQ3DrgcaACWEmQCdfIp\ntv4CiPfUPBlH7s/4miG8ujUBdqU3HzJVNmTdXwZYxbgo0A/4AOU9dwwlCG8A21PKBmbJcwijEX6L\ncAnKadQzjX/KFuBtjIEcDVwG3IPwHvV8jkcPSmsUfnkVeDinnyyEMK5DG5tX+2PrnRhjawT+TH9G\nZGls5wN1wCDgh8CvsXgOr56G8qe45wlfZXzNlmsPR1mJxWtsZlHW/WVIPu7syTFrOAevjkWpBTYC\npxBkK+pkjdvm1nB7W79GmQmAMgOLaSgV7tEmYC3wPsq3aY9FmI+6d/Bc0szS5I3aMcq1wBnAIyiV\n+OXjrPrz6giUOUTPqnZEmZ9grvUhPnkxq2uP0e7ADKAbcBWwJHysWg9A+IyZsiqra6RIYaaUtVrO\nGn6C8i0D+IhaaQbMXdRHfdpPgjHand6cjPIjLP6BTz6jVq20p5Rg1pVBMr9rx0O5BL9ck/N+C0Gt\nWtSzGniQCs4P/70yxaNHI9wM7JTmmcuxOJNZ8kpW1wcYpSX0YzvmyOdZ95UFhTE4j85BGOu+2gRc\njiPTUz7ffAEuQqhHeQRHvohqM1H7ogSZIevSGlulllHOd+TuaR8EPqWCXTO6AbQHqvVQgkxlFcOY\nL4Gs+vLoQQjPkP7v926CXECdbMzq+lGoUEWv3PebGoVZwwk/bvWqF3A9th6f8Jzxui0enYZH92At\nRwDXotwCvMdE7YGtC/DqhQBM0l40sQPNvItHPWmNzXg030zrnNisJMjOVFCGI7t0WGMDUDbRk+Oz\nNjZUsJhG+sb2KWWch9APjz6LVx/Do4dkNxYXL3/E4rWc9JUB+Tc4r26NsFebdxdTyvNU60/w6n5x\nRjYa4feAh1LexHi1vkM4i2YOA04BdsPW37KFW3HkTZQLEX6UwSivz+CctvwQi4s7tKEBoIJPXs/a\ni+fR0Xh5FOXwtM8VvqGJCxBeQTgU5SiExcbDqbFnZeN1KBO1b9K+G7iW7hRtfzT/U0qPXoRwXat3\n7ke5GuEeYEfMFOx6SrmeGfIlAJN1KzZxH8LPgc3AYJSjsHgPn7yIR8cj1AHNmLunAhOxeIYAW7D4\nFJ+k7kDx6l4ob+fg0z6NI0fmoJ/iYetkYC8cGZOw3UTtwWb6htdEXj0Yn7wQPtbIKoSt8jDCTzH7\ndOuBd4B3UX6GcC7wFRbDmSXL4549Rru73u6iUIgp5eutfv6URs5FmIAxttAYJtHMk3jVvDdVNtCf\nEe7c/ysaWYNfbmvlrQptKIemKgLMIMjLCG+hrMfWjXj1mJRGGGQVEMC4v7PhCGy9i0kaL4qlfVOr\nFsrvkCTfi0qtoJlHKOUzPPobAJRF2HosANNlM8L5eRrlTsABwC8AG/AjjMF8jwYRCPsKorF1OL35\nGq9WU6km6iiVp2IOyb/B+eVxhF8Bv6aMfZkr64HBMVrugbIEWwcBuFOzMZRyGGUcQpW2TE2ET1O4\nci/MXk8qY1xLKTsBx2O2GDKhCeEa4FS2MC3DPorLWk5xp+SJf2/d+Ht4qmgR8iBuBTyMrYuw9Yc4\nMh/lX3kdbyyEvRMcPR8TvTKTclZj6wc0sRZbn2KcDijE8KIXs9X6cwJcjrAJ4QPgZeCBtKZoYNyw\n29KbubIenzzQ5uhGjKOiCdgHE2UCMAhlDl4dhU8a8MmHAHh1NspxjNOdmSOrceQOPLodwtRW50YT\nZARefQGfvB63TYgZ8iVVugHhn+7aMR3eRLgGn/wLr75DkAPSPL+98CGwCmV/KrUsZogcgPKD8M9N\nhNZ6a4GBmJvWXOBohLfyO9yY7IpHL0CwELZgvmfv8BXNELHf2sf9B3AEJVwGTMjJCLx6Ncp7OHJL\n20ORBufR8QTxI+7armV3rAGPPofF0ygvIHxFgHrqJP6TZiDDgCuAYVHHHBnOZN2K7zgNcICe4WPC\nSSgLwY2rBAgwDYtjKOVEwHwIv1yPrUuAcxH2QhlKy1Qz1NdpKL/Cq2fik/+LO1aAWi2lnmWQUVTD\nh0CJ63C4Fbg12Qk5pVK3oRsXoJyKmXKtwJHo33syfPIatboja/kF5m+yPk7LPcM/lbAnZk31Jsbg\nAHZ2/x+U9hiyZzDCjUDr72+QgaylxcCiEUbj1bdQloe/N3AJjtwR95x4fMVlDIn9IDCGValldOMP\nKH8mvWnmiwiX45NHAOOeb+BKlBMwQcE7IxxDkEaaeDki8tvWFwgFKEfTiEP3iA1xW0cCV+Cwf9yN\ncq9einIp0ZkA3yLslPQpbUKM7iTTPTnlAYR5OLIgo/PTYZSWsB1TUPbFrGdCGQ6KMhq/3Ja36w5k\nC6HfkTILv0xw9+5ux0xHv8N4keuAXfMyjkKgvIfFEfjc6KccIEzQgQR4FBLOfRMPC67G3NX/AcTb\nL1mLcBNB7sEvrzJJe7GZsQgjMX+8NRjXfy+Ud/HLJCp1G8r5H8JfaeBxylmHMJl+XBd2v3v19zSz\nkNmyAjDu4RJuAA6OuHqAfZgtbyT9NLb6MWkimbAF+BuO/CXD81PHqxejTE3Q4l3gdBzJfo/RZAyc\njfIJQh+Uea2OLsWRlq2dGu1DM6/T8pTr6GwCFqMsweJf4WVOCHPjG4/F/FTCw4QJujMBsouTS58l\nVHBoxJ6VR/shjEJ5Gr9Eu+ht/S0Qumv/EUeuxqs7hqcAPnkQMKFayiEos4Bt3PZB4BwquCvhPpmJ\naHkJ2D/Nz7MCuIZS7kk70iUTbN0TeJXkqUUm
Il+5j/5cR63E9sJ69AdYfBd1J6/W3WlmNyx8xHZ0\nAXyMI7sAJj/RJ4ux9V/AWal+nA5EEOVBhCtx5GUgNKu6CtiMcC9wMz5ZHK8DM6W09TDgHgo55xbu\nIUAtdfKue1d8AgjdKb8E3kO5AosfuVPU4zHBp7ghXhcCU9xzppinigo2n2P27u4FJre56os0ciJz\npT7mmLx6bpu7d3KUf2FRlbZTKVNqtZS1PIfyszTPXEoJZzBT3gPMnXkQOzBTPgGgUntSxiSEC4GH\nML/rc0i+V/syjvyMSt2GMpbilx+6Qelz0hxfR0KBu1EWuY677docv5kyvEyXzW1PbPll2noYwtUZ\nRQZkThB4EjP9iHcHTaWX46kTk2Zj6yAaaaSUHlh8EqP12wQZR508E3XEq0+n8fmXo5zhpvgUDlv/\niklLSkYz5u/bevG+hlL2YIasw6NTEHbBkd+5s5zHgN3THk8oQdSrx6DcjyPd8erWKF8QP6v++8AK\nlEcp42pmyEehNy1sHUStWjjyLFCdxwGsA6YjXIbZajDXNwv+TI3tY+AOoMV4HPmCuVJPWSvPZyRD\nsHg6ZmyephwWtpESTiq4sVXpj4E/ptRWmYrQF+GmVu/2p9n1/gorgJHYOpEAT5C+sa0EHmqVNnUa\n0I1xugPKsSTarvl+sCvCWJqJWM+XAm9SzyZsvRvl3LxcWrmXAB7myGr3jb9jsxJa7eekivAwyksI\nT9GPJ6LWZCba4XICSeLlhIOB59u89wrKcSmMwROemsWiSgfTky9pYAjKsSib2ERddiFFKpRQi6bo\nRRbOJMAcBjCWNaxAGIrwPAFM5nQjd1PORWQaRyqcF16reHUYyu8AKOWzjPrrrCgnM0F3JsjpKG+U\nYu6YoU2/ssRnZ8QKhMsJ8g2AO91YQCbGBmbTNchM6uQxvHoSXu2GjwXhrQJlZoIA5o0Id2G2CR6N\nOhpkHpLQ4L5FqMIniSMoLBazhQGEXOcC9GJv4LxkHy8uXkaj/CbGkS0Yr+pYIrdZfojwKLX8GOTv\n4XdtHYetE4DdyObvrZxGjb5PgFEof6HridaWxcDBCFu1ckqubVnD1WgfAvwNZXweLv45yj4IpyMM\ni/PFSYUvgEcJcjkmlOp9N0D2KSo4ilppxqu7uVOa4cCoiLOFN/DJPnF7n6g9aKIe4kxHLX7BLPlf\n0lHauoLI8KjNBDklvM5MF5Oz9zpEpDmFmIYjF2PrM8BhUUeD7EWdvOve6K4FKjMaQxfpcAeOnO1u\nuY3HPMz6AFdFeqCMqE722bVtMV7FBmCHLHpxqGBCeAo5TneglFeAfii3s4nxkVM2FWzuxWzAhsbQ\nlwAVzJZv4l7F1vuAX8U48l8cST7dBKjWnxDkOOAkYBBBzqJOXkrp3NhjuoZojytAM8qRBFlFCe8T\n+ylzIw5j8bIkA89mF+nzIhUcEbEFM1H70syp9ONmY3Bm8+4AlCtoHVLVfviYCgZHpfobo/sJjvw3\n5llV2huLF4AhwC0oD2CxCJ80xL2SicO7sc27q1AOylrXI13MH+oalAtiHA2gnI9f5mHrDBLHAX5G\nS3ZGF/lEuA+fjIx3uJQqPRyLBSgFiZbOkJtj6mqYXKxIjQpb96aRT5gr66mTjVTqcMrYDYtdCfJ8\nQmMDCHA/pdQRub6ZV3Bj8+geNPEULfGJrVGUY/HL424M6JgkvXUZW6HQuN5xAEqx2AsSGlsQEyY0\nJIfDugPlc4RJpJIEK228iYm5lXJ+gq3PAwuowEet1NNaqSkRc2Q1Xl2EcnL4PeWFNK6fG4TTiW1s\nAO+GlY/XszUtETVdFBNlA0JChbFSyriNZk6I+IK1RpiMEkQYiDIEM+XMLo9OWI0jk/HoPISXSBai\n1JCi5JxJKtwL4x08AjiCer4llGGQOjZKE8I+wBL8/DvN83NBEg+iCojSTHaKWl3kgo3AdMq4Pllo\nX8sabiDvAHu0Of45q/gBA3mQnoxiqmygRnehmbHAiZgvdybu4Kms4lIGMhu4MEnbV3Dkpyn1amIM\n3414T3gYnyQWLGqPePTXCPFTioRfhfMMbb0H4VOCrEA4ERMG10X2fAq8BAzFTMu7x2izFuGgqKDm\nOJgn1XwJoDwXdVRY6So3zQyLysyQj3DkjzgyFGUgynVAfK9fdJ+PA7cwkIdIbmygLGOMdneLgSRr\nGz01VoZTrYcCYOt5hU6pz5hNPAQJhG2Vo8I/O/IbfPJ7/OLHkROAp/M/wE6O8j9KGYojp+LI7m7M\nZIhGN7v/HIIcnKqxQWQs5WSgtXDpN64XbGHSXmrVYi1VKL4Urvk15k6RcHHZigCwAQjiSL+ELSfo\njwiwLMaRJpTb3CnioziSWnhUsbG1CvADcxF6oJwTPqYcg19abd6r4OUMlErMHmQXifkWExr4Y0AR\nlqDsjInr/ZCe7Bd+yJiUrycxD5ZGLKpS2o+NQYvBmQzsj4FtgScQRuOT9MJ0bF1EPqczFnvQzJa4\nmebGY7eFxNPcjTSzW0uYWRGZoDvTzBiU6+MKk3p1P3wsZTK9+Y4vMQHB86JUtWwdhcn46CIRZj92\nIsJ8fNLARO2B0o0Zss61gT8S5DbqxCxNjMr3OwgTYkiFpE2L88NY8z8RJlDB0Wkbm1f7AxeRvfJV\nfIIsw+J9N50oGrN1kCw7tzdl1OZ6aBkR4DyEP1PKLnHb+OQ1EGWqbEB5CFiDxtBckRhSFl20ZRNB\njsWR28PbQ9Nlc9jRMVU24MilYWMD6E01Jmoo8XZSikR6Gx25CJ/MykjMtB/rcWQZRkAmX1iY6ehN\nTNDoeEnjpUw87QRQLsSr8UO8CsfTmG2XbnFbVOlxVKqZfgepJMAI/NKiLGbrM9i6GSU9xenvI8r4\ntDI8zKzhb+652URJhcmdTF4olKWZqzFqXJkhzAZqkrTakwDL8OoaPHo7NWrEYXqyA6kF5JYCj2Hr\n6RmPMxc48gTwJEG8cdtYnECZK1sxW76JkInw6BBM/GQs71kXkTyUls6L8XjfRmh5ornZfsmtLqWJ\nIVyNZryWaCbIPCrwkcojXKlAOJtmtypLUxoJj6bc1V14ta0Me/6p1J549c8AlHAe8BuqNDrOsUp3\nAk7H4khqdHv3vK3x6hi8ejVWp86qzjWJ9F+iEXrQeuYRJLnUYgrk1uBmyVvMlZAgaiZz3lKEf7hT\n2ljZ2rEJhdNYDM3gmvlISUpMOWNRtgVaVJ8tro1oU6PbY/E4MABlCs18QTmbUNaj3ILyhwJn53ds\nAmkaTLDVd0m5LiUBqhTIj/KyKaxxLplNLfsDIKReIE94GI+OdsVc0uEDGngnzXNyQW8auRIADUvc\nDcOrRmmsRrenOUPJgy5iU8LDbsa8CWpPTuiB4clljfH8SZ375R6UkZgEyXToj62Taalwmgp9EP5J\nYonuZSiXuDlhJs5SOTGuunA+ceQvrYSMhmB0HBcQZDkeHUIzz0NUxaEusuNALPeGLK32M+Nh8RHC\nbBy
py+Ug8l89x6t/cjOCs2EL5uZQnuH5S+jOUUyTTW3GthvKPAKcG9a1LDQe3QOLj/FJg1tYfgGJ\nFIK7yIavcaQfXr2OANdSJ9nXDk+T/BfzUL7MsoelCAOpoAfKLih/zWAMq2mIEX2heIHDKGUhtVqY\n4pRtMUXq93Fz2h6my9jyScjjWIbEVf3OK4X4kh2VvElC9kC5iXouwi8f42cK4IM03LSmXsF/qNLB\nbd6fAvyFIK9SqGqwbTGFH5aQP02ZLloIxTwKwtnFGEAhvmTZ1s7uidEmGWdeiuJINUGGuoHQqSM8\nGlEbzCff4sgU/HJe1oXjM2ctSnbVRrNnI/CC+y923YbOQShAfwtanL3LQhhc6t7GxBh5N1vPokZ3\nccNvnkzx3HWYzXQfygzKW1V/KTY+uRajIJ1LvgNq3AyQt1HGYDTyY7U7A6ECRw7BkUOQCBGpTQij\nya9eaaFQxJXKVzZg8c9iDKIQa7jcaO0ri6nVUuBqmvkdXrVTcsYIbyD8FEdmIgRQzstJgYtcYqVU\nRy2A4CeVqbQwBUdmotQADn6ZhxLL2zYdR+6OkJ1oonV2SC0+uZUKZqcwvvbO7HAFXWERkkZKWQ7J\ndrqXCrlxuwsnU88OmPpnl6c48XmBEo4PB6c6MjMnY8mWMdqdXuyD8CmOfIHG1OhcifIAig+LvwCj\naOJKyngC5R/Ek1UQ3qCbG4FilMKMWlgTl1HOfsCIVq0j8+28Wh0xvRU+AmBtTuU1isFSGrk4/CpU\niKMI5H9bIDrPrlA8QU9GhnOa2hOmDt1899UmzHZHyGGyglKOaq1H72bk3wpU0J1TaGZrmrmJ2Apr\nJ+PI/TGva8p/vUhLZv/XNPKj8J6gV6e7T8UQ7yAc0sH1LN8myIjwFoCtP0XYhQ38Jzsl7MwoxBru\niwJcoy3TaOS4dmlsAMEI138vWnsnFX+EsYHJyF/FuShr2MxbBNifRs7CODta8yWNLDbJqHowtk7C\no6Op1gOo1XK3vnrrYpF9KW/lrSthZhsHzl6Yohwd0dheRbgM4dCI/TZlXxQPvYsTxZP/J5xHj0SI\nWy8rL5SybUHqtGVK7GTRLSgX0cRNEdEvXt06XAqrVsup5y2giVUMZTt8rlL2lwh/Bh5EGYXZYmg7\nTW3EZM73JfLvvo4gQ8NJvalX52nPmPqB7ZD8P+F68ArpBCJnzxZmxK1N3T7QmJ7bq/HL7KhQsya2\nCv9cK40oF6NcynwJYHE1JuF2IT65AWVvTHGOWGvCckyuYNubbB+sVusbi0WZfKQi8GabykAh3qaC\naQUfTYrk3+CmySaCnFbAvaZFcWuAtxf68xzwoPtqPcI9CLHvyEbs1lClg/FzP365D4CZ8gkBDka5\nGYAgT5NZlkZLMK9VlCVAeggvEeRQulEDnIFyKC039evi7qmO0e6M16LWHM//lDKERw9B+A9mSpMv\nXqOMEUyXr/N4jdwxXoeymffDi3dTRXQ4QgVCH4KUIqhbyvijlPo0DpntUbbDYjCm/NbWCc9R3iPI\nCcyWFW0cOu2VqWzkiginhykweQXKgIiM+BDjdAAlLKSEamZJ7utnpEjhDA5C6kdLyE+G8luUMazD\nGFulVlCOD2EjQVYj7AaMJLbcwh9wJL0EyhBe3Q8lVVmBb0lmnO2HBpRL8MuM8Du2/gflKvzSotQ9\nSkvYnoMIcifQhMPuxZwBFWIfroXZ8gZe/UsGeWvJUSo7jLEBlDEBMx1KftvLZl0VRNK4rXYUYwP4\nkLKoaJH3UVZGvLMdr4aTSYUZxV5uFNbgAPoxlXrOwRS+3w7YPoNeNgELUbZB+CXQhL8I+v+ZMk4H\nIAl0TNoSZC62fonwCUGeZxP380/ZgkeHIOxMI48lyOs7IzeDbnfMifJEVzApSgArwGFYXIEilDCr\nkAOMRWGnlCE8WoPQF5iDqfedyOg+Q7kb4QMgiBKklAeYKcbTZ+sVwBQc6RgVOGvVYg3/Rjgp4z5M\n0YglwOGY6flnwM0I/wNeC28jmPy6RSRSBYO3MTewfck837AYVONIKsLD7YriGFyN9qGZ56hgX9Zw\nCsKdcVq+SU8OS7iB7dGjERZRQbeM5P0KiamwehexCz7mks+B9RgV4W4kSvsRzscn/8CjeyC8RuqK\n2MXGhyMdLqi6ODlgM2Qdyk2swcYvd6HEU7SdmjRaRCgFnmj3xgbQSDX5NzYwlWb3wtT9Po5EKTeh\nYo9+eR9JSaq+vXAOtg4q9iDSpTgGB9CfWQgjGKc7uHWQozerrRTKVDnyEBUdoFqMV4chWUtNpMuF\nOPIEyh0J2hwcLpTSjynA3zGaL4ECjC8b+mDldYspLxRnShnCq1tj0YeZ8gleHYu20VkMcmDS2tiV\nWkYpgymhMZ0qJgVlgu5MgFdJRRU611j8gkbep5TlxK/D9yTKbPzSEm7m0X4IpyOciTKU9ubBFO7B\nJ8UV8s2A4j3hwGRczxQTIdCPG6HNflFJEt2JKt2XMqZgcSfKcrz6b6r1gHwNN2MCXEkxjM1c+0A3\nWiWRaOxwhOnUaqkrNHsx/fkGR+rwyRE4sg0l/AD4NcoNpCNvkR9W0NAxpd2L+4RrS5X+DIvFtNyJ\n36aEo8IeSTAOlyaOibgbm/SVEcAlwHB3Q7QYKUGxsXUt+Y2wiU+oIOU4HUApKyChOvXLCOXuE20u\nQnXMmui2Dgf+D1wx28KyAovjmCXLi3DtrGlfBgchTff7aRFBXYNwDUHeQDgCsBE+wCcHRp07UfvS\nQC/q5FOqdXeC/AnhenySE5nqjDDljjYX7fqmWMhgHFmJrX/H3JRS5e84cmnMI149GOUZMquAmwkr\ngVsJMJPZYrK1bR2OI08W6Po5of0ZHIS+pBeiXITEKOWknIhfEkdfTNJefIeE665V6b7USWq1wnNJ\nle6EVdBsiWiEGfhkolv/7HUSC+aGWAkcjiPxg5m9OoYgs5BWGQ1tMfohZxPkOSwGohwEXEfip+O3\nwDTgM4ReCK8yi+eLHSWSC9qnwYUwlVVHAMe705xDMNPNF4F5KC+kVX4oXcwUd0tWGige3R+haMGy\nLgGCDKdOnnHjWZ8n8X7bUwQ5PSWhVFsvwXg2Y/EBykj88nabc34LJKpk8wiOHEul9qSUPYpyo8wT\nxXWaJKNWgvjkEXwyEUeOopHtgenAgYAf4RW8Gl2cMFcoawkmLfCYGHFrJRSXEiyuA3CLUvw5Trul\nKGfiMDxlVeIKphJbEexZlIOjjA3AkduBhzAlf2NhHEzf0EAz76c0jg5C+37CxcPWK4HL3VdLcOTg\nqDa1Wh6uWVdMPHoUwmPFHgZmLXcqDvdRwzYEWI5iAc8h/IcgD+OXeAaQGFvvBU4Nv1Y2EOQH4bVW\nXFSweR6ivNEBghySdEvIqyPwSXv43aZM+37CxcORKcA5GN3E6DvgKC1hrStcNF63pVIriiZl3p+n\nIYUN/PxjAQvxMIMZsg6f9MeRfjhyEj65IWNjA9wN/dbrqwXJ
jQ1AFOG/MQ6UYHELtZo4tlO5igk6\nMJ2hFpuOaXAQmpaMRtkPj47G1lOxdW+qdDADuRMYhFcvpYTPKGcN9QSwdR22vo1XLyzYOGulkUaG\nQ8wvVuExeXe5xXiBW3RrgkxP4+xlwCsI44icmg5hLUcnPLORwyO2jDoAHdfgABxZgMUYhLHAscCD\nWCwHRqH8xs27a+0c2AajRDUXrxZOW36urCfIaZio/mKzH7ZWhqUGarQPHs0+hael6u3CtIoXKssJ\nUuNqsown8kmZeP1bjFJjWdKxDQ5glryCI4fiSCWkUVlHuRGPXoRHCxMBUicbEc6n+FEag4AbKOEp\nbP0vzXwJ1Gbda4AFwAJ6Mjqt8xpZjsXPAfDLbQhjMApj72I21zsVHdNpEo8a3YXmNhm/yWlAuZMm\nLm5VJDF/2Hoqyi3u3tU6oB5oXdVnDeZJuF/ex9LCBThyc056qtE+BNgTn6SeEGzrElZxKPMl4L4e\nhLA2ZpRLB6fjP+Fa05xRUmc3hDGUsxiv5j9A15EFBPkBQfajgv44sjuN9AdORhnJRnbGkf1RqqAA\nESrKvTj8I2f9baEXJaRX6FBZzEDOD7925IuUjc2r3fDo7VRrhyjP3HmecGaT+nmyCzWahiMXJ29W\nILy6G0HGIlQSr5ZANijvYXFgOEO8WJjIogcqX9liAAAIL0lEQVRRxrsFKuMzWbeKypGs0n0p4XN8\nkt2eaQHoPAZn67VAtsXPm4DT4mrzFwtbP8EUMcklayjhCGbKeznuNzOMjN31+OW3MY+P0hJXafp3\nBNnPLVfW4egcU8ox2p3ciOWUAffh0dup1Ioc9JcrBuS0N7MxfWLRja1Gd8HWWVTr7syR1QiT8Wq0\n/kqN7sJA7ne9mN0o4cjCDzY3dA6D68WvgB1z1p9wNuW8hUevwta7sHUxtl7DJE2U2pInVMjtWq4R\ni1OSRnEUghnyEcq9BFmCrX+lkXVRazdbD6OZd4msFNRSa6E1xQpuSIN2P8CUsNguD70ORLgUOB04\nEphMA/fm4TpJECV3xTWCwLntKhzKL0+5VVYvo5xn3dlKa35JpHDwZiyeoFLLqGcBXh0WPrKaHcI/\nj9HujNJ2p+TWOQwOngbyX+tLOQ6vnpv367SlkRvJzab5VThydw76yS0+eQC4HdiXrVxRoxAB7kS4\nB7MvtxThEmbKKsr5FTCYhnDdbsIVgAB6M4aBLGWC7sw4HdBeQsA6k9NkJHAv+U+I/Jae7Fjw2nNG\nf7M2ix4aKGNbpksxk2HjM1F70MhpwGH4ZVzS9rZ+AOyGcFC4lHBrjArAIuAITErXFuAGlBtjZjAU\niM5jcAC2+gA7z1dZRwX9Ci7LZzb1PyTzWckXOLJD8mYdBFvPAk5CGJNwz87WnwJ3Ehlc8BmKHa5C\nVEA6m8HtCbxO/hSENwHXu9kKhcfWxZCxh05R+sesLNMeMcK0w4GNKM/GzGYYpSXh6JREeHVrlNnA\nWa3e3QKMoYL5hbx5dpY1nMGRZW4l0HywkSCHF83YAIRspkKCcUB0DPrzEfAVMALhKar08Kg2qRgb\nGHU4R85GGQbMA95FaQLuYi2LqdT4ytQ5pnM94QA3qfFfwJk57ngBjpyW4z7Tw9bHgV9kfL5wHz4Z\nmbsBtRPG67aUU552qo4p4VwNBHAknZSijOmEBkdowTwLqMpZn8op+GVhzvpLl1q1qOcbshNk3Uwj\n2zNXWlSux+kOlFGLch2OLMt6nF0kpHNNKUPMlwCOeNCclSdSylslWBaD1exK9urHPShvE+Bdyp9Q\nLkDpPA6VEF4dgUentqfA5sLXhyskwkIgFxVWPmwHxR5zIbo6D3HLCRtHwi+Bk4F1lLfaz+roGDnA\neSi/RgAlSGs9zmo9gL68WQzNm85tcORM8an4Na+VzL2Lwhsol+DIQ636e5mQ2K7gb7f7c6ni1bNR\nBgDL+I5rgSHhY8E23/Mg11HPUtCJhda67NwGt5Gv6Z1VD2+jLGRTHkokp8tsVuLlJZSfpXnmJ/Rj\nvyjXt7BtWMxAEtYdaN+Y1J4/oVxKPJ+EcD5e/ZIG5jFX6hE2o0zAZltW6e9S9nbmgM7pNGmNrZtI\nv8jgauBhGrmgXelmePXMJKWnolHewy97RrxXpdth8TlmDf8ajuyfu0EWEhVs/k3qNfeagOeAQwkV\nqUylQlMO6dxPOMM60jO4bwiyW1givT3RxOOUoqRzoxSiN3Ut+rTq4+GcjK1QVGpPyhmGEQM+Gjgs\njbPLgGER75RwNFAwg+ucXspI0tvUVBa1S2MDmCOrSU/j8h0CMfIETYDAs8C7BJmdq+HllTHaHY/e\nTjn1mLrltaRnbLFRavHor7PuJ0W+DwaXXg6bReriN8VhWsotLey4knUNHIsje0VE2LdntuYQhLOJ\nX1QyU8oQFuDV6YVI5+ncBjdZtyLdP5DwfH4GkyMcuQN4JIWWH9A3wd7hXPkuZ2MqBAFW5LF3Qalh\nYP5LQndug9vMkaS33nmDWVLsSjfJEW5N2ka5suAZDfklWnoh91yMR/PqQOrcBqcclUbr1Vgck7ex\n5JIG/gsxnCEtPIqf2ws1nILQxEqMLHo+KSWdKXsGdGKDU0FJthhe42YTA1zbYXTq50o9GnetuYxS\nRneG4oURmO2ZmXm/jvDjfHbfeQ3Oy5Exq6dGsgwfZ6Ccy0acQgwrh8T68i2ihOHMkC8LPppCICwg\n/1Lxed0E77z7cBqRbBiP70AUf8JqnO2T1SxgADdg8RbKJqwOsv7MBp+swauPoRyXpOVmTH2CTMRz\n8xpT2jkNbpL2Ygunp9CyfQm+poMJR0qu/dHZUF6BmAb3GsrtlHIvM8XUVK/WA1AeRqkAlhOKHU1M\nXqetndPgtnA2ie9utwILqejABvd9RbkTcR0oQbeenMVqHPkiqu0seQWvVgONVLCQeupJnHWxGEee\nzcewQ3ROg4MDYr4r1BPkFvz8odM5Fb4vGMWt1KUmfHJn+Gdbn6B1aeRo8r606JxOE+WBOO9fiF8m\ndxnb9xRHTkOYEOfoUiT/Wymd0+D6swgTGd6W7JJ1uuj4+GQWEhU/+hnKyKzr0YWqyiagcxrcWnoT\nHbS8CatdFLfvotg0MAHhSCwOQ6iklANjyvClSynXU6UJPaidMx/Oo2cgtMzdlQcQxsVcWHfRRa6w\ndSmwJ8rQeHXuOqvT5JA2rx/qMrYu8o5FFUozTvyikp3V4CLz2Sz2LtI4umjPVOm+WNwBPIAjf8i6\nv1mSdNO8c67hLNZFvNa8SZ930ZEp42vMsipzAaUJOhCPTks1l66zPuHWtXm9oCij6KJ9YyJSMg9W\n9ugeBLgNYT+GcCnzk8dhds4nnNK6SPyHVPDfoo2li85JjfZBeARYQgmDU9W47JwGZ/E2YIRblbpO\nlojZRXugmSHATjTz13Ds5veaaj0AW9dTo32KPZQuOikebesNT0rn3IcLMUEH0kgjwuB2UUS+i+89\nnXNKGWKmrMJiEhZ
1xR5KF10A/D/QV5ZG5gtzAAAAAABJRU5ErkJggg==\n'
# 34x34 pixels black
about_icon = b'iVBORw0KGgoAAAANSUhEUgAAACIAAAAiCAYAAAA6RwvCAAAACXBIWXMAAA6uAAAOrgGjVqEKAAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAAphJREFUWIXF2LlrVUEUBvBfHnGp\nIiJuiPpiKm3cCzEhoiAq2AlCsAgEcQFLSxfQKn+B67/gRjAWEhXRQhAXUBARo5WaQqOgRsFYnPvw\nJSa+Ozc3+MEt3txzvvnezJmZ706TNCzCLmzBKixHS/buM97gGW7jOt7nJW7KGdeJo9iBVxjAYwzi\nUxYzF1WswVa04gZ6cTevoMnQhn58x9msk7xYh3MYQR9WFBWxD19wCcuKkojpuyKmris1+SS+onsK\nAsajJ+M8kSLiI9pLFFFDh6ip440C9wnV0yGiXsxX/5imNlET3QmkFezGHsxIyOvJ+pqwgPtFYabg\nAkaz52pi7lWxmsagUyzRlNXRJIZ4tO6Zl5BfFUu7o76xD2cSSGp4WifiA5oT88/jWu3HYvzE2gJC\nVuKy2EE3Fchfn/W9gCicFwVIysJLdDeL+hhITJ6FJePafomzJxUDmQYPcSAxeQluGluoXwqIgMN4\nAEPYXpDkVQlCduBDRfiJzwVJ3hXMq8cwWiolEJWCihiNlkaB04g5GK4Ie1fYtJSANgxWhMdMcV5l\nYzWeV3BHeMz/hW3CbFsottl1BUjumdry3YAfmF8Rlr8fBwsQTRWHsr6Hag0d4kiuJhI98WdEfmJ2\nQm5r1ufm8S/6hNvOgxacFudL/TZ/Sv6t4Jo6M1XvH46If9iDiw1IZuKtiadzZg4R+8VBt3qygC7h\nujomCygBnfiGvY0CTwjLPx1iOjPuY3kTjouR6SlRxH4xErlF1NAlzqEr4rOxKFpFYQ7LMR2TYYVY\nTSPC6K5PyN0oPjdGxOqo/is477VEu7iW2CXs4C08wmtjryVahQnfiqXijqQX9xt1kFdIDQux05+L\nmqo4xkfFNA4ae1Ez9DfFxPgN6lCbI19RTj4AAAAASUVORK5CYII=\n'
# play_icon 10x11 pixels black
play_icon = b'iVBORw0KGgoAAAANSUhEUgAAAAoAAAALCAYAAABGbhwYAAAACXBIWXMAAAYoAAAGKAE7mnbdAAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAAIRJREFUGJV9zzEKwlAQhOHPQJrU\nloIQ9CSpPYWFTQpv5QUsBFuPI9gEixRq84IbfcnAFjv7szvLVzv0eKfqkweKAG5Rhr5M3h84qwIV\nGtSZeZ1mFazRhWy/9YxL2hnwGE8scMlAt9wfKzwC1GEz9dw+gIcpaNAZ1xRnlO2EZfCGTK/g3T9o\npCU8zhdVhQAAAABJRU5ErkJggg==\n'
# pause_icon 10x11 pixels black
pause_icon = b'iVBORw0KGgoAAAANSUhEUgAAAAoAAAALCAYAAABGbhwYAAAACXBIWXMAAAU+AAAFPgHBI4ZIAAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAAEVJREFUGJXtyrEJgEAQRNGHqC2Y\nXB3Xfy9rYgleosmCm2kBDnyY+QwEruRET0bxMaN5smLLvhTfJh/zH1+Pe9kDRzKKjxuo0w4ffxYr\nKQAAAABJRU5ErkJggg==\n'
# 10x10 pixels black
delete_icon = b'iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAACXBIWXMAAASSAAAEkgEK+sW7AAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAAHhJREFUGJXFzbENgWEUheHn00iU\nFjABY5jBHlZQmsIUv8ISlCI0EhXRCBGJ5nzJLxGtU70377n3Fp8ZYRxeYlNFwQSDzH08wl2cw4eC\nFYbo+Z4b1nVoMM2FWfiIeZxOa/OUt5fwHdcq28Wf+WOxZoEtXtiFn9jHeQOOAhmpEdVYMAAAAABJ\nRU5ErkJggg==\n'
# done_icon 16x14 pixels black
done_icon = b'iVBORw0KGgoAAAANSUhEUgAAABAAAAAOCAYAAAAmL5yKAAAACXBIWXMAAADiAAAA4gHdoT1DAAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAAjdJREFUKJF1kk1IVGEUhp9z78yo\ntXERRQVuIqGIQImISGcc74wW6SIQKahQyhLaBZlOiwF/GiRxYVDUIijTMIQ2IenYaFhBEUkGQQRB\n6KI2QaE217nfaSEzTqFnd97zPS+H93xCXl2Ziezwe9qqSi2wSyCt8BV47KatOzdqxhezb2PT0Uqj\nGvLlhKnISc3oTVXuqq1nCzzv85J/pcjnbjqIxblAgTkDlMVToS2u+PqMMXPXQ8lOycHQi8rRnqrx\nj6xT8ZfRrSuuKVXhlkGbE6HJtwBWbDK8U9EBlLqNYADX9cIqDKqRS1kYwFLLOg+M9FRNzG4Ex6ai\ndYjctoU6S2jqeBEuzxmIUK9GHgJ0JKu3taecRx2p6L4cnIocUMwwQktnMDkHOqvGPp6d+xR2e4Gl\nD6sdLQKNYPbHX9Uc+eP5VL30KCqDPaGJEQBVPlmijbkNAM8sbjYAgWBFN8IosMd1vQnbS/cpFBpf\noD0LiG0tK1q8lgEy7ys0pQBxiZtl/+JpkDdAuUKTqMQSFU9/5gIxuhfl21oGqk9sOJEV+g+/XiZj\n6oEvCj9+L/se5AeqoqcQfZYzsIy5p8qFthmnJCv2OJPfjU2NQNfAsbH02jUiDQrFgWBlzmD1I007\nLapczmimurdqan69U15LOY4Rho0QTQST7/8xAOiYdppRuoB+8cxQd/XzBYCr006ZrbQqOMZYDYnw\n+Lt8Y8lv2macEjsjFwWtBbYruMCCKEP+Iv/9+KGxX/9v9hfGX/rP0chocgAAAABJRU5ErkJggg==\n'
# hourglass_icon 11x16 pixels black
hourglass_icon = b'iVBORw0KGgoAAAANSUhEUgAAAAsAAAAQCAYAAADAvYV+AAAACXBIWXMAAAD/AAAA/wGdhUAaAAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAARdJREFUKJGN0r8rhXEUBvCP94qb\nQZLucrEhFsVgsRosIjuSQXaLP8FqkIxSNgzKwGIgynCz2PwovxYM4kq6hvfcehlw6nTqfJ/vc57z\ndGowjTF0ohENUct4wz3OsJsgjyHU4hUveMJj1HqMBE4H9vwe++hLgq3wB7iA5wQ3+MAgVjAcuqvR\njzpcVBujOMI6KpEPmMIxxiEX4PPQ3oLDYNtAL06xBElm3ILUqlb0oCh1ZqEKyILzaA9tTbhCm9S6\nb5HDFpZRg8moi9j8QWoO29Esxg7F+LCD2ayU65Agxs5nxneFpEQscxIPhXDhDqtojn4J3TAQds1I\nva1k8hYTQTZQ/VX5R5Zy0tv4jMzhXXqeZanPlzjA2hdPMUrLQtYMpQAAAABJRU5ErkJggg==\n'
# dropdown_icon 20x12 pixels black
dropdown_icon = b'iVBORw0KGgoAAAANSUhEUgAAABQAAAAMCAYAAABiDJ37AAAACXBIWXMAAADAAAAAwAEwd99eAAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAAHFJREFUKJGt07ENwlAMRdFjRJMA\nC4FQqJg0S1AhpfqsAHOkBYogGujsK7lxcSXbz4ERJ6zkeOIKD7yK6h444PhpZAhMScd/6w5n9EnX\njAvc1O2wZS/7Q2BrGXmTdH1HLiWwx2AJZtY1URzsNRo6Na/X3hasSJejQLGSAAAAAElFTkSuQmCC\n'
# undo_icon 13x13 pixels black
undo_icon = b'iVBORw0KGgoAAAANSUhEUgAAAA0AAAANCAYAAABy6+R8AAAACXBIWXMAAADMAAAAzAHrAFApAAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAALpJREFUKJGV0DFKA1EQh/HfRgOi\nndqmSWWuYBfs0nkBL+ABtLPyAh5CsLFNlTZNIKQJ2Nka2CIBF7Vai30LK85C9oOB9/jPN+8xcOov\nAzxhhR9s8IxRs2mJM2S4wzfKoD4xrqUyiY9BY5GqvufpgXDqG67QwwFu8JWyh0h6x7n/3KZ8EUkz\nnARSH1vs2r43x1EgTlEc4j4I4UK19iY5PrIWoY0XlL2O0hCvXYRjrFUL2ZsJrrsIcFkffgFtbj2m\n5MkfIgAAAABJRU5ErkJggg==\n'
# batch icon 32x32 pixels black
bat_icon = b'iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAACXBIWXMAAAHOAAABzgEzS/IjAAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAActJREFUWIXt179rFEEUB/BPNERE\nIUoQbKwkhcbuBO1ikcJYiE0EISjxB3b+B4KIhZqojZVY2AgBm7SxuGC003RpTjAgQghJTlATEDGe\nxbyTTbglEdxL4X5h2Zk3j/l+Z97b2XkkdOIq3qCORkFPPTiuYCd0YBcm0R9ilvFVMehGT7SncBoe\nhbpXOFIQcRZHMR2cD2AR30Jdu7AvOBeEklobyZuoobFjG4jXoRRQCigFdObYj+FZpr+Et7iPlYz9\nGq5H+yzmpWO9R2sMxlx/kLcDe1DBAVRxEDdxe4PfjfCrYDhsHzGHX2Hvjv4cfrYia3USngh7Nfrn\no/8841MJ2wRWMbthjpEYf5yzyBoaeSFo4iQ+4JC09aOZsUvxfoofGMJxvNtkznXYLAnn8QQvsBe3\nwt6FC/iMlxgP+8W/IW9iKyEg7cCalLjntL5wLIU4/lEIDuMueqXEnJUSqbnSO/gU7SEM4IyUF1tC\nnoBVzER7AN+lUNzDbumrmJJCshZ+77FfyoMJ6WY1kxGYi/I+UAooBfzfAti+wmRFFCYPpcNoGn1t\nIO/D6+Ac65B+HpM4FQ51fCmIPFucVqUbElKpfDmUFVmeL0s7PRKcfgNRzaAcU0lblgAAAABJRU5E\nrkJggg==\n'
# audio_icon 20x20 pixels black
audio_icon = b'iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAYAAACNiR0NAAAACXBIWXMAAACxAAAAsQHGLUmNAAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAAYFJREFUOI2l1D1rFUEYxfHfrjcJ\nvpQqpLIRLUynrSBYWCkaAmqjgt9A8BNIICm1ERVBC1HBxj75CCLamGBjfEdBIRiRa66xmGdwWe+s\nxHtgYfeZc/7z7OzOMFwTuIBHeIM+BniNxziHrYXsXzqL99j4x/U2vEXVuNEIfMZVnMQhHMSpqH1q\n+K6hGga8HoZ1zGF7x+Q7MI9fkZlrG87EwE/MdL1GS6ejgQ0cz8UJrETx0iZgWZcju4SxZndPpXXc\nrLbgeTCm4WE8nP8PWNbFYNyGZWlxd40AnAzgcoXvWBsRCF/Rr6Wv9GNEGKmpbT18xB7pp90bExyV\ntteXQngn7mAB43iJ3XgB9/z56xdwK+73d3RzIDw3sdjI363xqmXO22i8A9hrebNWapwohLqAY4X6\nsRr7WsU8a09ZpQ6naunkyBo0TIMOYB6rpLXL+lDjQQHY7wDmsUraFFn3a1zBsyisDwl1ATU6fIL5\nGqs4LJ1v72LWtaiXtBqeSjq5Z3EE334DgHNrz6KVYRQAAAAASUVORK5CYII=\n'
# filter_icon 15x15 pixels
filter_icon = b'iVBORw0KGgoAAAANSUhEUgAAAA8AAAAPCAYAAAA71pVKAAAACXBIWXMAAADNAAAAzQE5R7LNAAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAAKFJREFUKJG10DsOAVEUgOFvkNDR\nqcWU1mEnswGlXZCwBFNZA7EDCjsQhV4yBaOZaSbzUsyfnNxH7n8eN0CMEIFmEryz/SvAEvuWcs4X\nUX5YI/0jVsVsm5birqyVPo4NYoxe1SwjXCrEE4a1P4ExrgXxhklZq0USnDHFPYsIz6aqOYtC5XnZ\no8rh29CJnHYuDyruP9maYItHm2Q5IQ6Y1T36ATzsPfzX0hcXAAAAAElFTkSuQmCC\n'
# select_icon 20x17 pixels
select_icon = b'iVBORw0KGgoAAAANSUhEUgAAABQAAAARCAYAAADdRIy+AAAACXBIWXMAAADVAAAA1QFU2apiAAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAAWBJREFUOI2d089LVFEYxvEPOojj\nMC7DiH5I/4jQIgL3Rm3CfqzaBUGgbh1oEYIgBe3Ef8C10GJgohZqFJSVVlTLlmmOzrS459LpzL3T\n5AMH3nPu8355zuG9ZLqOVzhEd8C1hUron8SIcHANqwG4gn3lqkeQDRzhJh6jiWnYxg6qfUBluoXj\nKPU6/MTTE8BuJ7AuNqGNJ/8Ju4NOAnuDU2TvsBKZL2IOtT7JUthbTOSGDpZDfQ57wdTEeAKb1XvN\n9zgTmzpYCvWLxNyKoEWwXZxNr9DFo1Df0ztvLdwtgH3ChaI3iRPCfAE0XXsBNoaFFHjszxvmWuwD\n28X54Gvge9w4FEzDCfABHhbc5iOm8Dns64HxTyDcDwlyfQiwL9HZSNpUkQ12ETBP+guXcBXfku+1\nEOgvbeMdRkugZarjK16mCRtYw3M8w8EAsCou4zRuFBlm8Bo/9M5b0WrL/t0rKeg3TXSI1Z2tAfUA\nAAAASUVORK5CYII=\n'
# view_icon 20x14 pixels
view_icon = b'iVBORw0KGgoAAAANSUhEUgAAABQAAAAOCAYAAAAvxDzwAAAACXBIWXMAAACdAAAAnQGPcuduAAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAAQtJREFUOI2V071KA1EQBeAvySqK\nRFQEEREt7MRC9Dl8Ix/FUmxsBHvFBzCksbCJAQsDYizUiBFjcW9gXXc3mwMDw525h/k7cIwXvOED\nzzgRsIYuRhXsG+ciQV7CBvbxU5FwhE6Cefmoo4VDrGRiM1jAa+b9PsFsAeEYnZyPRXhKJiRs4Tan\nwiLc1HGZE2ijh7loVdGsCbPaFmYybv8Og1SVqyUkX3iPfi+JhEuopZLS/gD96CdolpDX4ML/9bdj\ntTvCbVY9mxbhIPOCm7Hd/hSE10mmvSy6WPf3VhtYLMh/TAQl1EtId7FcEk/jm6DhIuntmU56Dw0M\ncSAsYSio4gynkexI2OwkEXzi6hcpGHSqmLcPlgAAAABJRU5ErkJggg==\n'
# clear_icon 28x32 pixels
clear_icon = b'iVBORw0KGgoAAAANSUhEUgAAABwAAAAgCAYAAAABtRhCAAAACXBIWXMAAAUiAAAFIgFTQBn1AAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAArBJREFUSIm9lk1oE0EUx99mP7If\nk83usjsOTSAKIRBLEmLaQA+FhWJFIZDLXiUq5CK51ut6LX5A20sOBS961qMKHryooAiCCIJHUcGK\n9KAIseuhnXaybppsm+0fBpY3895v3rw3yQCcgDDGfQAIMMb3uZMAAkBAP1JJkwzDeEq/s9nsk0Rh\nvu+nYDe7AACCdrttJApECL1ngYnCPM9TWJhpmmuJAhVF+cYC9443GS0tLZ1iYTzP7yQGAwCQJOkX\nCySEXE8M5vt+iuf5ASTVLJ7nIcuybuu6ftfzPIfaXdcliqJ8kSTpx9Rgtm0/hOFMAp7nB8Vi8cLU\nIFSVSmUhDAsPQsiNqUJ935cymcyLceBms3nmWKBarTYbtpmmuTEGfCwFABCUSqWL4YlCoXAtCjgz\nM3P1SCRBEP7QIK7rnh61bm5uri6K4v59NAxjPRao1+ulU6nUDg3gOM7mJH6e5yGE0IdyuXx+Yphl\nWZvAHE8mk3kWa7eTaH5+fhYh9A5Ctcjn8xO3uqqqb5vNZmnI2Gq1bNM0VwzDWNM07SXHcZFdpijK\n1263K8bZMPVdXFw8y84deod0XX/e7XbVSUFUCKFPTJwDybL8nQWoqrplmuYq+7t4RNHufPTfDMdx\nf+kCjPGDKO84G7Bt+yaN1+/3o8vA1k6W5c/Uvry8jGVZ3h4VfMQ/eQAAwd49HC32UrOj0+nIUetd\n182zmwMAIIRcoX6NRqNyKBAAQNO01ywsnU5/7PV66fA6x3HWYfetIkRlF+tZUSgULkEoS03T3mCM\nFyzLWoWDet9h/TDGt+hcvV6vTQykymazj8NgJoPBUFpBsP/oFUXxZ2wYE4jbe6JHgjmOC1RVfSUI\nwja1jap5bOVyucuiKEY2Fh2EkI2pwMKqVqvndF2/J0nSbzg4yq1xfv8AVBAGM8fw8t0AAAAASUVO\nRK5CYII=\n'
# paste_icon 25x32 pixels
paste_icon = b'iVBORw0KGgoAAAANSUhEUgAAABkAAAAgCAYAAADnnNMGAAAACXBIWXMAAAUiAAAFIgFTQBn1AAAA\nGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAAh5JREFUSInd1jtoFUEUBuAvT41J\nNBEUC5+NpSAhlklnrzYW2omtloKlIJZWiqBoYWFhERstbGxioygWWiiKoI0vMMn1bXYtZoZMNvcF\nNzeFPwzszpzZ//xz5pyztI8duIcFPMaBdjf2tVjfi0kcwk28wGGsxw2MYijafmn0kR5MZO+92ISB\nSHAxzn/FfRzHrzh3Bsei3QBO4RX+YA5FTlLiaTY5Gj3dEr2cwJMmaqfxAD/wCT+FI01O7xdJhuts\n3h3Xhuqs5dgc7XbVWRtGmZSM4LRw9qKSrVHNHrxtQtIrHNHnTAncwmXU+jPjnfiLK6hFDy8J5/xG\nCOy3BkSLOIfXkfCocBtBTlLiJa5nc3ejoo2WbuI6S8FfxDw+4H22bxLbGpEsVjx8F0dH6K2QFI0M\nO0GupIhECWdxRMibhAHhktSEs0+Yw22cb0VSVTKDj0IuiWS58iJ+PO2dbUdJNSbP4+gYaxKTqvyu\nB760PPBj2IdBjMe5cY3xSKiBLUnymEzjYLQZEW5WFSkZC6GktCSpHtWdODpByfKYJM9WEyXNk3E1\nUFRJqoEn9IPB+NwnFMqEvPv9Vr9CryCpKjmBk0KmJ7t+oXMuCG2BUF7mhRZxrRUJy2NyNY5OsCLw\n3UjGokrSrbJSrgVJ8f8oqV7hKauTkFPCT8kKklls0LzStotneJhI0s/ddnxvsinP/IQeoR0kpP5P\nSNixOC4kkq7iH+xVlw4uTdjBAAAAAElFTkSuQmCC\n'
# downloadbtn_icon 95x36 pixels
downloadbtn_icon = b'iVBORw0KGgoAAAANSUhEUgAAAF8AAAAkCAYAAADvqeb3AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAACjwAAAo8BLYPxagAAABl0RVh0U29mdHdhcmUAd3d3Lmlua3NjYXBlLm9yZ5vuPBoAAAS0SURB\nVGiB7dtriFVVFAfw35ipPeydZT6yJ/QgIoskTAssP0QZPS2CqMCkwIqICiqpIJCiqL5khOSj6FN+\nSIqiIimrIXrYw5qiUqMyK0ltKnX09mGt0z1zHcO5jl5hzh82Z6+119l73bX3Wvt1LnUMwwNYik7U\nqtRnqROf4H4cogHnYPVuoGR/SL9gQmH4Y/FnFizC2Rjc2DsVdgiDhcFfFnZej6NhfjKeRVuLlOsv\naMM8Ye95hBvUMKKFSvUnjBL2XgVb0NVSdfofurBlgHCFWouV6W+ooW1Aq7Xoz6iM30JUxm8hKuO3\nEAN7KX8Y9sl8F9Zm2p1wPc5EO+bs5LYuwWR8g0eaqaCGTdsp+6Ket8vP4IhmGt8JKDaN83dBW7Oy\nrcW9fG8Tas2GnS6xRSYO5G4QI21kk/X1S/Q27BRox3gcjJvEad1IPCFcscBZuAyHYyUW4PMsm4Sx\nWIaXMAi3YSMeS5mrMBpvZZu3YEjKH48pWIFHse5/9B2Ai3Ee9kWH8NZVWT4YU3EKDhWhdJnwnj9L\n9YzE7RiKuf/T3najmbDzTgN/QfK7cEDybhO753KI2ohLs3xa8j5M+syS3Kjk/Zz0BUn/nvR7DfW+\nVtKlMey04Xlbh8tfcVLKjOyhvIaPsGfKHIrlpbKu1L3psEPfGP+aklLjhPGKBuYILylO9H4To/dE\n9Q7ZGzNKdVyOozK/GQdmO4Xx14pQN7v0TjHnNBr/opLMHTgXXyX9RsocLEb0CdnWJPyVMuenzINJ\nr8OVYmLfYDcw/hT1HzhBGK+od2jKnFSSGStG5G9Jj8dz+FoY9hERcmr4tNROYfyHkh6he6eztfEf\nTnppqZ7rkvcP9ijVNSPlZ2NNytyc5UuSfrpUz0I7YPxmY34jTi7lf8IZmV+jPjEvL8mMFi67RIzM\ncZneFiFgnLq7N3Y0MXLpHo+3dQcxOp8rS7wVpXcOzzYXJ71ZjO79UqbwuuEN7zbme42+2GSNERMh\nfIdvRQcQ8X9I5oeX3inKl+TzQnG50I73cZr6bc8SW6M4hd2yHfoVbR1W4hW6bBI3eNOF4RcLTz1I\n98Ei5YjOaqynKTRr/DFijfsCPhOTEdwt3PADYZhBmIkjcW/KrMcXmS9GdWHo9kx74dTk9WT83qA9\nn6eLFc1xuDV5H+oeGjeIOehiHNNQz7v5nCpu+yarLwSaxo5usmrC/ac3yM7qQW4LbizJDMbfWdYp\nlr7DSvI/NNRZxPyrkx5akp2YvMaYPxCv96BLp1gKw7Ul/gb8ob7SuidlRpXaL1KHXRjzF4pJscA6\nsbV+MxUr407hAVcIN14lVj5vlmQ24C7hvj+JcLIa94ljjI6GOh8XK6NlSW8UnUw9pi/Cj/g46S4x\nQqeJlcte+RuexJcpM08MhCn5m2aJzhyuPuJ/EB46M3WbLxYHE/G9JtGbkV+hb7BDxwsV+gCV8VuI\nyvgtRGX8FqIyfgtRGb+FGCC/IWm1Iv0M/30rVXwuWN1C7Rp0+1yw2I7PU3nAzkab+sXTXOIAaX0y\nXhHb5SHbertCUxgi/gPxqvqFzFFF4QT18FOlnZtWiVPR/25xVojv8//C/uIiYZAKfYVO8eHAU+IE\ntQP+Bf/wfFOVoabvAAAAAElFTkSuQmCC\n'
# later_icon 95x36 pixels
later_icon = b'iVBORw0KGgoAAAANSUhEUgAAAF8AAAAkCAYAAADvqeb3AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAACjwAAAo8BLYPxagAAABl0RVh0U29mdHdhcmUAd3d3Lmlua3NjYXBlLm9yZ5vuPBoAAAMnSURB\nVGiB7dtLiBxFHMfxz26CSZBETUwiJEJcF8xFiKj4wqggCEpQUTz5SDZHD0E8BBEEDQTxJBJ1BQ9x\n9STe1Eh8HATRgyARIipqTMJiHgTBxFFh120P/+r0OAyuOL2pwe4vFFVdXd3971/XY+pfNVSswTP4\nEh0UbagtdHAAT+NiPdyKk0NgZBPCCWwuhR/Hr+nEO7gZS3q/TstALBGC7xM6n8EYvJ4y9mIkk3FN\nYQRTQu8pohkUWJfRqCZxqdD7OMxhNqs5zWMWc6OiKRSZjWkaBUZGc1vRZFrxM9KKn5FW/Iy04mdk\ncabnjuEi4fP4JpMNQ0GBmXP8zLfScz8d4B4T+AH7a7Ho3DKDIlfNr4MLRQv6I7ch/5VhFX8cW3EF\nLsAxfIZXxezwbtyVyq7Bsyn9An7CKjyK68QM/iNMqj7UFtyEb3EQj+NHPLFwr9SfYex2tqpcsLNd\n6b3p/B793bWbcIkQshDCz6X0+1iUrn8+5X2t8uh+UM+r/Stm0jOHUvyrcRuWCffHNtWHOB8b8XLK\nO4zbU1iOF1P+AdEqNqjWKh5I9y/FL/AuHsKDtb3d/Ax1n/8F7sRurBWCEzV3A77Cdymvgw+7rr0h\nxYdwT0pPYzVuxJtdZU/hPpnGjWEVfxeeTOlp/Nl1bsU8165P8b0pdHNZz/FBGQfsYRV/IsU78Rwu\nx/c9ZeZSvKgn/6io5XtUY0TJLz3Hvw9k5YDknuGO4ZWesFy1jLlSLDg/1efaUykex2O4Px1/kuIt\n4sMcFpVsAlfVan0N5Bxw+4XVeKkn7/OudNmnr8XprvxyQWilGBP63fvhVKYccPct0PvNR9YB9w0h\naD862CEG1GvEz8FJbE/nj6T4BK7EHcJVUS4I/YxrhdDXi9Y9LX7rf5zKvC3mDodqeZsByFHzm84M\nitx9fqNpxc9IK35GWvEz0oqfkVFpD0luQxrG2b1S5XbB9f9YvKUu/rZdsNwoO6VtAQvNiJhgFniN\ncFqdSRnv4RYszWXd/5Sl4j8Q+4XOp3V5WDerup82LGw4Lv4DcdYde0S4X38Ta6YrcJ6WuuiItYNJ\nPCLWjv0FcMTWTXkVvw4AAAAASUVORK5CYII=\n'
if __name__ == '__main__':
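    # Quick sanity check: collect every *_icon name defined above, render each
    # blob, and print its true pixel size so the size comments can be verified.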
icons = [x for x in globals().keys() if x.lower().endswith('icon')]
import awesometkinter as atk
import tkinter as tk
root = tk.Tk()
for name in icons:
img = atk.create_image(b64=globals()[name])
print(f'# {name} {img.width()}x{img.height()} pixels') | PypiClean |
/Mathics_Django-6.0.0-py3-none-any.whl/mathics_django/web/media/js/mathjax/jax/output/HTML-CSS/fonts/Asana-Math/Misc/Regular/Main.js | MathJax.OutputJax["HTML-CSS"].FONTDATA.FONTS.AsanaMathJax_Misc={directory:"Misc/Regular",family:"AsanaMathJax_Misc",testString:"\u2070\u2071\u2074\u2075\u2076\u2077\u2078\u2079\u207A\u207B\u207C\u207D\u207E\u207F\u2080",32:[0,0,249,0,0],8304:[696,-271,300,14,286],8305:[685,-271,209,27,183],8308:[690,-271,299,2,296],8309:[696,-271,304,14,292],8310:[696,-271,299,14,286],8311:[687,-271,299,9,290],8312:[696,-271,299,15,285],8313:[696,-271,299,13,286],8314:[593,-271,406,35,372],8315:[449,-415,385,35,351],8316:[513,-349,406,35,372],8317:[727,-162,204,35,186],8318:[727,-162,204,19,170],8319:[555,-271,412,30,383],8320:[154,271,300,14,286],8321:[147,271,299,32,254],8322:[144,271,299,6,284],8323:[154,271,299,5,281],8324:[148,271,299,2,296],8325:[154,271,304,14,292],8326:[154,271,299,14,286],8327:[145,271,299,9,290],8328:[154,271,299,15,285],8329:[154,271,299,13,286],8330:[51,271,406,35,372],8331:[-93,127,385,35,351],8332:[-29,193,406,35,372],8333:[197,368,204,35,186],8334:[197,368,204,19,170],8336:[12,277,334,31,304],8337:[22,271,328,30,294],8338:[22,271,361,31,331],8339:[11,273,359,31,329],8340:[22,271,323,30,294],8364:[683,0,721,0,689],8531:[692,3,750,15,735],8532:[689,3,781,15,766],8533:[692,7,766,15,751],8534:[689,7,781,15,766],8535:[691,7,766,15,751],8536:[690,7,766,15,751],8537:[692,7,750,15,735],8538:[692,7,750,15,735],8539:[693,1,750,14,736],8540:[691,1,750,15,736],8541:[690,1,750,15,736],8542:[691,2,677,15,662],8543:[692,0,392,15,625],8544:[692,3,336,22,315],8545:[692,3,646,30,618],8546:[692,3,966,43,924],8547:[692,9,1015,12,1004],8548:[692,9,721,8,706],8549:[692,9,1015,12,1004],8550:[692,9,1315,15,1301],8551:[692,9,1609,16,1594],8552:[700,3,979,26,954],8553:[700,3,666,14,648],8554:[700,3,954,14,940],8555:[700,3,1254,14,1236],8556:[692,3,610,22,586],8557:[709,20,708,22,670],8558:[692,3,773,22,751],8559:[692,13,945,16,926],8560:[687,3,290,21,271],8561:[687,3,544,21,523],8562:[687,3,794,21,773],8563:[687,7,826,21,802],8564:[459,7,564,6,539],8565:[687,7,834,6,813],8566:[687,7,1094,6,1065],8567:[687,7,1339,6,1313],8568:[687,3,768,21,749],8569:[469,3,515,20,496],8570:[687,3,764,20,746],8571:[687,3,1019,20,997],8572:[726,3,290,21,271],8573:[469,20,443,26,413],8574:[726,12,610,35,579],8575:[469,3,882,16,869],10033:[669,-148,601,55,546],10038:[572,0,592,45,547]};MathJax.Callback.Queue(["initFont",MathJax.OutputJax["HTML-CSS"],"AsanaMathJax_Misc"],["loadComplete",MathJax.Ajax,MathJax.OutputJax["HTML-CSS"].fontDir+"/Misc/Regular/Main.js"]); | PypiClean |
/Flask-Sijax-0.4.1.tar.gz/Flask-Sijax-0.4.1/examples/chat.py | import os
import hmac
from hashlib import sha1
from flask import Flask, g, render_template, session, abort, request
from werkzeug.security import safe_str_cmp
import flask_sijax
app = Flask(__name__)
app.secret_key = os.urandom(128)
@app.template_global('csrf_token')
def csrf_token():
"""
    Generate a hex token derived from a random per-session byte string, so the
    token is specific to each user's session.
"""
if "_csrf_token" not in session:
session["_csrf_token"] = os.urandom(128)
return hmac.new(app.secret_key, session["_csrf_token"],
digestmod=sha1).hexdigest()
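# The template global above can be embedded in forms, e.g.
# <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">,
# which is the field that check_csrf_token() below reads from request.form.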
@app.before_request
def check_csrf_token():
"""Checks that token is correct, aborting if not"""
if request.method in ("GET",): # not exhaustive list
return
token = request.form.get("csrf_token")
if token is None:
app.logger.warning("Expected CSRF Token: not present")
abort(400)
if not safe_str_cmp(token, csrf_token()):
app.logger.warning("CSRF Token incorrect")
abort(400)
# The path where you want the extension to create the needed javascript files
# DON'T put any of your files in this directory, because they'll be deleted!
app.config["SIJAX_STATIC_PATH"] = os.path.join('.', os.path.dirname(__file__), 'static/js/sijax/')
# You need to point Sijax to the json2.js library if you want to support
# browsers that don't support JSON natively (like IE <= 7)
app.config["SIJAX_JSON_URI"] = '/static/js/sijax/json2.js'
flask_sijax.Sijax(app)
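# Initialize the Sijax extension on the app; views that handle Sijax (ajax)
# requests are declared with the @flask_sijax.route decorator further below.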
class SijaxHandler(object):
"""A container class for all Sijax handlers.
Grouping all Sijax handler functions in a class
(or a Python module) allows them all to be registered with
a single line of code.
"""
@staticmethod
def save_message(obj_response, message):
message = message.strip()
if message == '':
return obj_response.alert("Empty messages are not allowed!")
# Save message to database or whatever..
import time, hashlib
time_txt = time.strftime("%H:%M:%S", time.gmtime(time.time()))
message_id = 'message_%s' % hashlib.sha256(time_txt.encode("utf-8")).hexdigest()
message = """
<div id="%s" style="opacity: 0;">
[<strong>%s</strong>] %s
</div>
""" % (message_id, time_txt, message)
# Add message to the end of the container
obj_response.html_append('#messages', message)
# Clear the textbox and give it focus in case it has lost it
obj_response.attr('#message', 'value', '')
obj_response.script("$('#message').focus();")
# Scroll down the messages area
obj_response.script("$('#messages').attr('scrollTop', $('#messages').attr('scrollHeight'));")
# Make the new message appear in 400ms
obj_response.script("$('#%s').animate({opacity: 1}, 400);" % message_id)
@staticmethod
def clear_messages(obj_response):
# Delete all messages from the database
# Clear the messages container
obj_response.html('#messages', '')
# Clear the textbox
obj_response.attr('#message', 'value', '')
# Ensure the texbox has focus
obj_response.script("$('#message').focus();")
@flask_sijax.route(app, "/")
def index():
if g.sijax.is_sijax_request:
# The request looks like a valid Sijax request
# Let's register the handlers and tell Sijax to process it
g.sijax.register_object(SijaxHandler)
return g.sijax.process_request()
return render_template('chat.html')
if __name__ == '__main__':
app.run(debug=True, port=8080) | PypiClean |
/GeoNode-3.2.0-py3-none-any.whl/geonode/static/geonode/js/ol-2.13/lib/OpenLayers/WPSClient.js | /**
 * @requires OpenLayers/SingleFile.js
*/
/**
* @requires OpenLayers/Events.js
* @requires OpenLayers/WPSProcess.js
* @requires OpenLayers/Format/WPSDescribeProcess.js
* @requires OpenLayers/Request.js
*/
/**
* Class: OpenLayers.WPSClient
* High level API for interaction with Web Processing Services (WPS).
* An <OpenLayers.WPSClient> instance is used to create <OpenLayers.WPSProcess>
* instances for servers known to the WPSClient. The WPSClient also caches
* DescribeProcess responses to reduce the number of requests sent to servers
* when processes are created.
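 *
 * A minimal usage sketch (the server id 'local' and the 'gs:Buffer' process
 * are illustrative assumptions, not defined by this file):
 * (code)
 * var client = new OpenLayers.WPSClient({
 *     servers: {local: '/geoserver/wps'}
 * });
 * client.execute({
 *     server: 'local',
 *     process: 'gs:Buffer',
 *     inputs: {geom: polygon, distance: 1},
 *     success: function(outputs) {
 *         // outputs.result is a feature or an array of features
 *     }
 * });
 * (end)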
*/
OpenLayers.WPSClient = OpenLayers.Class({
/**
* Property: servers
* {Object} Service metadata, keyed by a local identifier.
*
* Properties:
* url - {String} the url of the server
* version - {String} WPS version of the server
* processDescription - {Object} Cache of raw DescribeProcess
* responses, keyed by process identifier.
*/
servers: null,
/**
* Property: version
* {String} The default WPS version to use if none is configured. Default
* is '1.0.0'.
*/
version: '1.0.0',
/**
* Property: lazy
* {Boolean} Should the DescribeProcess be deferred until a process is
* fully configured? Default is false.
*/
lazy: false,
/**
* Property: events
* {<OpenLayers.Events>}
*
* Supported event types:
* describeprocess - Fires when the process description is available.
* Listeners receive an object with a 'raw' property holding the raw
* DescribeProcess response, and an 'identifier' property holding the
* process identifier of the described process.
*/
events: null,
/**
* Constructor: OpenLayers.WPSClient
*
* Parameters:
* options - {Object} Object whose properties will be set on the instance.
*
* Avaliable options:
* servers - {Object} Mandatory. Service metadata, keyed by a local
* identifier. Can either be a string with the service url or an
* object literal with additional metadata:
*
* (code)
* servers: {
* local: '/geoserver/wps'
* }, {
* opengeo: {
* url: 'http://demo.opengeo.org/geoserver/wps',
* version: '1.0.0'
* }
* }
* (end)
*
* lazy - {Boolean} Optional. Set to true if DescribeProcess should not be
* requested until a process is fully configured. Default is false.
*/
initialize: function(options) {
OpenLayers.Util.extend(this, options);
this.events = new OpenLayers.Events(this);
this.servers = {};
for (var s in options.servers) {
this.servers[s] = typeof options.servers[s] == 'string' ? {
url: options.servers[s],
version: this.version,
processDescription: {}
} : options.servers[s];
}
},
/**
* APIMethod: execute
* Shortcut to execute a process with a single function call. This is
* equivalent to using <getProcess> and then calling execute on the
* process.
*
* Parameters:
* options - {Object} Options for the execute operation.
*
* Available options:
* server - {String} Mandatory. One of the local identifiers of the
* configured servers.
* process - {String} Mandatory. A process identifier known to the
* server.
* inputs - {Object} The inputs for the process, keyed by input identifier.
* For spatial data inputs, the value of an input is usually an
* <OpenLayers.Geometry>, an <OpenLayers.Feature.Vector> or an array of
* geometries or features.
* output - {String} The identifier of an output to parse. Optional. If not
* provided, the first output will be parsed.
* success - {Function} Callback to call when the process is complete.
* This function is called with an outputs object as argument, which
* will have a property with the identifier of the requested output
* (e.g. 'result'). For processes that generate spatial output, the
* value will either be a single <OpenLayers.Feature.Vector> or an
* array of features.
* scope - {Object} Optional scope for the success callback.
*/
execute: function(options) {
var process = this.getProcess(options.server, options.process);
process.execute({
inputs: options.inputs,
success: options.success,
scope: options.scope
});
},
/**
* APIMethod: getProcess
* Creates an <OpenLayers.WPSProcess>.
*
* Parameters:
* serverID - {String} Local identifier from the servers that this instance
* was constructed with.
* processID - {String} Process identifier known to the server.
*
* Returns:
* {<OpenLayers.WPSProcess>}
*/
getProcess: function(serverID, processID) {
var process = new OpenLayers.WPSProcess({
client: this,
server: serverID,
identifier: processID
});
if (!this.lazy) {
process.describe();
}
return process;
},
/**
* Method: describeProcess
*
* Parameters:
* serverID - {String} Identifier of the server
* processID - {String} Identifier of the requested process
* callback - {Function} Callback to call when the description is available
* scope - {Object} Optional execution scope for the callback function
*/
describeProcess: function(serverID, processID, callback, scope) {
var server = this.servers[serverID];
if (!server.processDescription[processID]) {
if (!(processID in server.processDescription)) {
                // set to null so we know a DescribeProcess request is pending
server.processDescription[processID] = null;
OpenLayers.Request.GET({
url: server.url,
params: {
SERVICE: 'WPS',
VERSION: server.version,
REQUEST: 'DescribeProcess',
IDENTIFIER: processID
},
success: function(response) {
server.processDescription[processID] = response.responseText;
this.events.triggerEvent('describeprocess', {
identifier: processID,
raw: response.responseText
});
},
scope: this
});
} else {
// pending request
this.events.register('describeprocess', this, function describe(evt) {
if (evt.identifier === processID) {
this.events.unregister('describeprocess', this, describe);
callback.call(scope, evt.raw);
}
});
}
} else {
window.setTimeout(function() {
callback.call(scope, server.processDescription[processID]);
}, 0);
}
},
/**
* Method: destroy
*/
destroy: function() {
this.events.destroy();
this.events = null;
this.servers = null;
},
CLASS_NAME: 'OpenLayers.WPSClient'
}); | PypiClean |
/ASLPAw-2.2.0.tar.gz/ASLPAw-2.2.0/src/ASLPAw_package/ASLPAw_module.py | import random
from networkx.classes.graph import Graph
from networkx.classes.digraph import DiGraph
from networkx.classes.multigraph import MultiGraph
from networkx.classes.multidigraph import MultiDiGraph
from multivalued_dict_package import *
from shuffle_graph_package import shuffle_graph
__all__ = ['ASLPAw']
def _ASLPAw_networkx(data_graph: 'graph', Repeat_T: int, seed: int) -> 'DirectedGraph':
from count_dict_package import count_dict
from similarity_index_of_label_graph_package import similarity_index_of_label_graph_class
def MVDict_to_WDiGraph(mvd: multivalued_dict) -> 'DirectedGraph':
from collections import Counter
WDG = DiGraph()
for _out_node, _value_list in mvd.items():
WDG.add_weighted_edges_from((_out_node, _in_node, _weight) for _in_node, _weight in Counter(_value_list).items())
return WDG
def remove_low_frequency_label(community_label_list_for_nodes: multivalued_dict) -> DiGraph:
from sklearn.ensemble import IsolationForest
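        # Sketch of the intent, as read from the code below: each node's labels
        # are ranked by frequency and mapped to 1-D points, and IsolationForest
        # flags low-frequency outlier labels; only inlier (dominant) labels are
        # kept as weighted node -> label edges in the returned digraph.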
digraph_of_node_labels_and_frequencies = DiGraph()
for graph_of_node, label_list_of_nodes in community_label_list_for_nodes.items():
digraph_of_node_labels_and_frequencies.add_node(graph_of_node)
label_set = set(label_list_of_nodes)
dict_of_frequency_of_label = dict(sorted([(label_list_of_nodes.count(label_item), label_item) for label_item in label_set], key = lambda frequency_and_label: frequency_and_label[0], reverse = True))
dict_of_sn_and_frequency = dict([(sequence_number, frequency_of_label) for sequence_number, frequency_of_label in enumerate(dict_of_frequency_of_label.keys(), 1)])
list_of_mapping_points = []
for sequence_number, frequency_of_label in dict_of_sn_and_frequency.items():
list_of_mapping_points.extend([[sequence_number]] * frequency_of_label)
clf = IsolationForest(n_estimators = 120, contamination = 'auto')
clf.fit(list_of_mapping_points)
for sequence_number, frequency_of_label in dict_of_sn_and_frequency.items():
if clf.predict([[sequence_number]])[0] == 1:
label_item = dict_of_frequency_of_label.__getitem__(frequency_of_label)
digraph_of_node_labels_and_frequencies.add_edge(graph_of_node, label_item, weight = frequency_of_label)
return digraph_of_node_labels_and_frequencies
def weight_normalization(WDG: DiGraph, normalization_parameter: float) -> DiGraph:
for _edge in WDG.edges:
WDG[_edge[0]][_edge[1]]['weight'] /= normalization_parameter
return WDG
community_label_list_for_nodes = multivalued_dict([[node_of_graph, node_of_graph] for node_of_graph in data_graph.nodes])
random.seed(seed)
similarity_index_of_label_graph = similarity_index_of_label_graph_class()
for _t in range(Repeat_T):
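        # One label-propagation pass: every node "listens" to its neighbours,
        # each neighbour "speaks" a random label from its own label memory, and
        # the listener records the label with the largest edge-weighted support.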
data_graph = shuffle_graph(data_graph)
for data_graph_node, dict_of_adjvex in data_graph.adjacency():
weight_of_community_label_for_adjvex = count_dict()
for adjvex in dict_of_adjvex.keys():
if data_graph.is_multigraph():
weight_of_edge = sum(value_of_edge.get('weight', 1) for value_of_edge in dict_of_adjvex.__getitem__(adjvex).values())
else:
weight_of_edge = dict_of_adjvex.__getitem__(adjvex).get('weight', 1)
community_label_for_adjvex = random.choice(community_label_list_for_nodes.__getitem__(adjvex))
weight_of_community_label_for_adjvex[community_label_for_adjvex] += weight_of_edge
community_label_for_node = max(weight_of_community_label_for_adjvex, key = weight_of_community_label_for_adjvex.__getitem__, default = data_graph_node)
community_label_list_for_nodes.update({data_graph_node: community_label_for_node})
digraph_of_node_labels_and_frequencies = weight_normalization(remove_low_frequency_label(community_label_list_for_nodes), _t + 1)
return digraph_of_node_labels_and_frequencies
def ASLPAw(data_graph: 'graph', Repeat_T: int = 30, seed: int = None, graph_package = 'NetworkX') -> 'DirectedGraph':
'''
    Returns a graph in which each node has a weighted edge pointing to its own community tag node.
ASLPAw can be used for disjoint and overlapping community detection and works on weighted/unweighted and directed/undirected networks. ASLPAw is adaptive with virtually no configuration parameters.
Parameters
----------
data_graph : graphs
        A graph object whose type should match the package selected by the parameter graph_package. A "NetworkX" graph object is accepted regardless of which package you choose.
Repeat_T : integer
ASLPAw is an iterative process, this parameter sets the number of iterations.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
Returns
-------
communities : DirectedGraph
        A community-detection result graph: each node carries a weighted edge pointing to its community tag node.
Examples
--------
>>> from networkx.generators.community import relaxed_caveman_graph
>>> data_graph = relaxed_caveman_graph(3, 6, 0.22, seed = 65535)
>>> ASLPAw(data_graph, seed=65535).adj
AdjacencyView({0: {2: {'weight': 0.9}}, 2: {2: {'weight': 0.9333333333333333}}, 1: {6: {'weight': 0.6}}, 6: {6: {'weight': 1.0}}, 3: {2: {'weight': 0.6}}, 4: {2: {'weight': 0.8666666666666667}}, 5: {2: {'weight': 0.9333333333333333}}, 7: {6: {'weight': 1.0}}, 8: {6: {'weight': 0.9666666666666667}}, 9: {6: {'weight': 0.9333333333333333}}, 10: {6: {'weight': 0.8666666666666667}}, 11: {6: {'weight': 0.9666666666666667}}, 12: {12: {'weight': 1.0333333333333334}}, 13: {12: {'weight': 0.9666666666666667}}, 14: {12: {'weight': 1.0}}, 15: {12: {'weight': 1.0}}, 16: {12: {'weight': 1.0}}, 17: {12: {'weight': 1.0}}})
>>> data_graph = relaxed_caveman_graph(3, 6, 0.39, seed = 65535)
>>> ASLPAw(data_graph, seed=65535).adj
AdjacencyView({0: {1: {'weight': 0.9333333333333333}}, 1: {1: {'weight': 1.0}}, 2: {1: {'weight': 1.0}}, 3: {1: {'weight': 0.9666666666666667}}, 4: {1: {'weight': 1.0}}, 5: {1: {'weight': 0.9666666666666667}}, 6: {}, 7: {7: {'weight': 0.7666666666666667}}, 8: {}, 9: {13: {'weight': 0.4}, 6: {'weight': 0.26666666666666666}}, 13: {13: {'weight': 0.6333333333333333}}, 10: {1: {'weight': 0.5666666666666667}}, 11: {7: {'weight': 0.6333333333333333}}, 12: {12: {'weight': 0.4666666666666667}, 13: {'weight': 0.4}}, 14: {13: {'weight': 0.5666666666666667}}, 15: {13: {'weight': 0.5333333333333333}, 12: {'weight': 0.3333333333333333}}, 16: {13: {'weight': 0.43333333333333335}}, 17: {13: {'weight': 0.43333333333333335}, 12: {'weight': 0.4}}})
'''
if graph_package == 'NetworkX':
return _ASLPAw_networkx(data_graph, Repeat_T, seed)
    elif graph_package in ('SNAP', 'graph-tool', 'igraph'):
        # These backends are placeholders; raise instead of silently returning None.
        raise NotImplementedError(f'Support for the "{graph_package}" graph package is not implemented yet.')
else:
        raise ValueError(f'The value "{graph_package}" of the parameter "graph_package" is not one of "NetworkX", "SNAP", "graph-tool" or "igraph"!')
/FQTool-1.4.tar.gz/FQTool-1.4/src/ArgParser.py |
import argparse
## @brief This function creates a parser for command line arguments
# Sets up a parser that accepts the following flags: -i, -l, -q, -f, -a, -v, -h
# @return An already configured instance of the ArgumentParser class
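# Example (hypothetical invocation; the file name is a placeholder):
#   parser = create_parser()
#   args = parser.parse_args(['-i', 'reads.fastq', '-l', '50', '-q', '0.99'])
#   args.filenames, args.length, args.quality  # -> ['reads.fastq'], 50, 0.99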
def create_parser():
parser = argparse.ArgumentParser(prog = 'fqtool',
description = 'FASTQ parser. Quickly get the reads you need.',
epilog = 'That\'s all! Reach us at github.com/mistrello96/FQTool',
add_help = False)
parser.add_argument('-i', '--input-filenames', type = str, metavar = 'filename', dest = 'filenames',
                        nargs = '+', help = 'Input file name(s). Usually in the form *.fastq, *.fq', required = True)
parser.add_argument('-l', '--length', type = int, metavar = 'length', dest = 'length',
help = 'Minimum length of the reads to be extracted.', required = True)
parser.add_argument('-q', '--probability-of-correctness', type = float, metavar = 'quality',
dest = 'quality', required = True,
                        help = 'Minimum probability of correctness of the reads to be extracted. Ranges between 0 and 1. You can also write the Phred Quality Value directly (e.g. 35)')
parser.add_argument('-f', '--ascii-conversion-function', type = str, metavar = 'function',
                        dest = 'function', help = 'Function to be used to switch between ASCII and Phred Value. ' +
'Choose between: S = Sanger, X = Solexa, I = Illumina 1.3+, J = Illumina 1.5+, L = Illumina 1.8+. Default = L',
choices = ['S', 'X', 'I', 'J', 'L'], default = 'L')
parser.add_argument('-a', '--accuracy', type = float, metavar = 'accuracy', dest = 'accuracy',
help = 'This value is the %% of bases that must have at least quality q. If this condition is not satisfied, the read will be ignored',
default = 0)
parser.add_argument('-v', '--version', action = 'version', help = 'Shows the program version and exits', version = '%(prog)s 1.4')
parser.add_argument('-h', '--help', action = 'help', help = 'List of the flags you can use with FQTool')
return parser | PypiClean |
/DJModels-0.0.6-py3-none-any.whl/djmodels/db/models/query_utils.py | import copy
import functools
import inspect
from collections import namedtuple
from djmodels.db.models.constants import LOOKUP_SEP
from djmodels.utils import tree
# PathInfo is used when converting lookups (fk__somecol). The contents
# describe the relation in Model terms (model Options and Fields for both
# sides of the relation). The join_field is the field backing the relation.
PathInfo = namedtuple('PathInfo', 'from_opts to_opts target_fields join_field m2m direct filtered_relation')
class InvalidQuery(Exception):
"""The query passed to raw() isn't a safe query to use with raw()."""
pass
def subclasses(cls):
yield cls
for subclass in cls.__subclasses__():
yield from subclasses(subclass)
class QueryWrapper:
"""
    A type that indicates the contents are an SQL fragment and the associated
parameters. Can be used to pass opaque data to a where-clause, for example.
"""
contains_aggregate = False
def __init__(self, sql, params):
self.data = sql, list(params)
def as_sql(self, compiler=None, connection=None):
return self.data
class Q(tree.Node):
"""
Encapsulate filters as objects that can then be combined logically (using
`&` and `|`).
"""
# Connection types
AND = 'AND'
OR = 'OR'
default = AND
conditional = True
def __init__(self, *args, **kwargs):
connector = kwargs.pop('_connector', None)
negated = kwargs.pop('_negated', False)
super().__init__(children=list(args) + sorted(kwargs.items()), connector=connector, negated=negated)
def _combine(self, other, conn):
if not isinstance(other, Q):
raise TypeError(other)
# If the other Q() is empty, ignore it and just use `self`.
if not other:
return copy.deepcopy(self)
# Or if this Q is empty, ignore it and just use `other`.
elif not self:
return copy.deepcopy(other)
obj = type(self)()
obj.connector = conn
obj.add(self, conn)
obj.add(other, conn)
return obj
def __or__(self, other):
return self._combine(other, self.OR)
def __and__(self, other):
return self._combine(other, self.AND)
def __invert__(self):
obj = type(self)()
obj.add(self, self.AND)
obj.negate()
return obj
def resolve_expression(self, query=None, allow_joins=True, reuse=None, summarize=False, for_save=False):
# We must promote any new joins to left outer joins so that when Q is
# used as an expression, rows aren't filtered due to joins.
clause, joins = query._add_q(self, reuse, allow_joins=allow_joins, split_subq=False)
query.promote_joins(joins)
return clause
def deconstruct(self):
path = '%s.%s' % (self.__class__.__module__, self.__class__.__name__)
if path.startswith('djmodels.db.models.query_utils'):
path = path.replace('djmodels.db.models.query_utils', 'djmodels.db.models')
args, kwargs = (), {}
if len(self.children) == 1 and not isinstance(self.children[0], Q):
child = self.children[0]
kwargs = {child[0]: child[1]}
else:
args = tuple(self.children)
if self.connector != self.default:
kwargs = {'_connector': self.connector}
if self.negated:
kwargs['_negated'] = True
return path, args, kwargs
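# A minimal usage sketch (illustrative, not part of the original module):
# Q objects combine with `&` and `|` and negate with `~`; deconstruct()
# returns what is needed to recreate the object. The field names used
# below are hypothetical.
def _example_q_usage():
    q = Q(name='alice') | ~Q(age__lt=18)
    path, args, kwargs = (Q(name='alice') & Q(active=True)).deconstruct()
    return q, path, args, kwargs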
class DeferredAttribute:
"""
A wrapper for a deferred-loading field. When the value is read from this
object the first time, the query is executed.
"""
def __init__(self, field_name):
self.field_name = field_name
def __get__(self, instance, cls=None):
"""
        Retrieve and cache the value from the datastore on the first lookup.
Return the cached value.
"""
if instance is None:
return self
data = instance.__dict__
if data.get(self.field_name, self) is self:
# Let's see if the field is part of the parent chain. If so we
# might be able to reuse the already loaded value. Refs #18343.
val = self._check_parent_chain(instance, self.field_name)
if val is None:
instance.refresh_from_db(fields=[self.field_name])
val = getattr(instance, self.field_name)
data[self.field_name] = val
return data[self.field_name]
def _check_parent_chain(self, instance, name):
"""
Check if the field value can be fetched from a parent field already
        loaded in the instance. This can be done if the to-be-fetched
field is a primary key field.
"""
opts = instance._meta
f = opts.get_field(name)
link_field = opts.get_ancestor_link(f.model)
if f.primary_key and f != link_field:
return getattr(instance, link_field.attname)
return None
class RegisterLookupMixin:
@classmethod
def _get_lookup(cls, lookup_name):
return cls.get_lookups().get(lookup_name, None)
@classmethod
@functools.lru_cache(maxsize=None)
def get_lookups(cls):
class_lookups = [parent.__dict__.get('class_lookups', {}) for parent in inspect.getmro(cls)]
return cls.merge_dicts(class_lookups)
def get_lookup(self, lookup_name):
from djmodels.db.models.lookups import Lookup
found = self._get_lookup(lookup_name)
if found is None and hasattr(self, 'output_field'):
return self.output_field.get_lookup(lookup_name)
if found is not None and not issubclass(found, Lookup):
return None
return found
def get_transform(self, lookup_name):
from djmodels.db.models.lookups import Transform
found = self._get_lookup(lookup_name)
if found is None and hasattr(self, 'output_field'):
return self.output_field.get_transform(lookup_name)
if found is not None and not issubclass(found, Transform):
return None
return found
@staticmethod
def merge_dicts(dicts):
"""
        Merge dicts in reverse so that the order of the original list takes
        precedence, e.g. merge_dicts([a, b]) will prefer the keys in 'a' over
        those in 'b'.
"""
merged = {}
for d in reversed(dicts):
merged.update(d)
return merged
@classmethod
def _clear_cached_lookups(cls):
for subclass in subclasses(cls):
subclass.get_lookups.cache_clear()
@classmethod
def register_lookup(cls, lookup, lookup_name=None):
if lookup_name is None:
lookup_name = lookup.lookup_name
if 'class_lookups' not in cls.__dict__:
cls.class_lookups = {}
cls.class_lookups[lookup_name] = lookup
cls._clear_cached_lookups()
return lookup
@classmethod
def _unregister_lookup(cls, lookup, lookup_name=None):
"""
Remove given lookup from cls lookups. For use in tests only as it's
not thread-safe.
"""
if lookup_name is None:
lookup_name = lookup.lookup_name
del cls.class_lookups[lookup_name]
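# Illustrative sketch (not part of the original module): registering a custom
# lookup on a field class via RegisterLookupMixin. The imports are deferred to
# avoid circular imports at module load time; `NotEqual` is a hypothetical
# lookup and the import paths are assumed to mirror Django's.
def _example_register_lookup():
    from djmodels.db.models import IntegerField
    from djmodels.db.models.lookups import Exact
    class NotEqual(Exact):
        lookup_name = 'ne'
    IntegerField.register_lookup(NotEqual)
    return IntegerField().get_lookup('ne') is NotEqual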
def select_related_descend(field, restricted, requested, load_fields, reverse=False):
"""
Return True if this field should be used to descend deeper for
select_related() purposes. Used by both the query construction code
(sql.query.fill_related_selections()) and the model instance creation code
(query.get_klass_info()).
Arguments:
* field - the field to be checked
    * restricted - a boolean indicating whether the field list has been
      manually restricted using a requested clause
* requested - The select_related() dictionary.
* load_fields - the set of fields to be loaded on this model
* reverse - boolean, True if we are checking a reverse select related
"""
if not field.remote_field:
return False
if field.remote_field.parent_link and not reverse:
return False
if restricted:
if reverse and field.related_query_name() not in requested:
return False
if not reverse and field.name not in requested:
return False
if not restricted and field.null:
return False
if load_fields:
if field.attname not in load_fields:
if restricted and field.name in requested:
raise InvalidQuery("Field %s.%s cannot be both deferred"
" and traversed using select_related"
" at the same time." %
(field.model._meta.object_name, field.name))
return True
def refs_expression(lookup_parts, annotations):
"""
Check if the lookup_parts contains references to the given annotations set.
Because the LOOKUP_SEP is contained in the default annotation names, check
each prefix of the lookup_parts for a match.
"""
for n in range(1, len(lookup_parts) + 1):
level_n_lookup = LOOKUP_SEP.join(lookup_parts[0:n])
if level_n_lookup in annotations and annotations[level_n_lookup]:
return annotations[level_n_lookup], lookup_parts[n:]
return False, ()
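# Worked example (illustrative, not part of the original module): with an
# annotation named 'total', the first matching prefix of the lookup parts
# is resolved and the remainder is returned as the lookup chain.
def _example_refs_expression():
    annotations = {'total': True}
    return refs_expression(['total', 'gt'], annotations)  # (True, ['gt'])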
def check_rel_lookup_compatibility(model, target_opts, field):
"""
Check that self.model is compatible with target_opts. Compatibility
is OK if:
1) model and opts match (where proxy inheritance is removed)
2) model is parent of opts' model or the other way around
"""
def check(opts):
return (
model._meta.concrete_model == opts.concrete_model or
opts.concrete_model in model._meta.get_parent_list() or
model in opts.get_parent_list()
)
# If the field is a primary key, then doing a query against the field's
# model is ok, too. Consider the case:
# class Restaurant(models.Model):
    #     place = OneToOneField(Place, primary_key=True)
# Restaurant.objects.filter(pk__in=Restaurant.objects.all()).
# If we didn't have the primary key check, then pk__in (== place__in) would
# give Place's opts as the target opts, but Restaurant isn't compatible
# with that. This logic applies only to primary keys, as when doing __in=qs,
# we are going to turn this into __in=qs.values('pk') later on.
return (
check(target_opts) or
(getattr(field, 'primary_key', False) and check(field.model._meta))
)
class FilteredRelation:
"""Specify custom filtering in the ON clause of SQL joins."""
def __init__(self, relation_name, *, condition=Q()):
if not relation_name:
raise ValueError('relation_name cannot be empty.')
self.relation_name = relation_name
self.alias = None
if not isinstance(condition, Q):
raise ValueError('condition argument must be a Q() instance.')
self.condition = condition
self.path = []
def __eq__(self, other):
return (
isinstance(other, self.__class__) and
self.relation_name == other.relation_name and
self.alias == other.alias and
self.condition == other.condition
)
def clone(self):
clone = FilteredRelation(self.relation_name, condition=self.condition)
clone.alias = self.alias
clone.path = self.path[:]
return clone
def resolve_expression(self, *args, **kwargs):
"""
QuerySet.annotate() only accepts expression-like arguments
(with a resolve_expression() method).
"""
raise NotImplementedError('FilteredRelation.resolve_expression() is unused.')
def as_sql(self, compiler, connection):
# Resolve the condition in Join.filtered_relation.
query = compiler.query
where = query.build_filtered_relation_q(self.condition, reuse=set(self.path))
return compiler.compile(where) | PypiClean |
/ANNOgesic-1.1.14.linux-x86_64.tar.gz/usr/local/lib/python3.10/dist-packages/annogesiclib/ppi.py | import os
import sys
import csv
import time
from subprocess import call
from annogesiclib.helper import Helper
from annogesiclib.plot_PPI import plot_ppi
from annogesiclib.converter import Converter
from annogesiclib.gff3 import Gff3Parser
class PPINetwork(object):
    '''Detection of protein-protein interactions (PPI).'''
def __init__(self, out_folder):
self.helper = Helper()
self.converter = Converter()
self.gffparser = Gff3Parser()
self.tmp_id = os.path.join(out_folder, "tmp_id_list")
self.all_result = os.path.join(out_folder, "all_results")
self.best_result = os.path.join(out_folder, "best_results")
self.fig = os.path.join(out_folder, "figures")
self.ref_tags = {}
self.with_strain = "with_strain"
self.without_strain = "without_strain"
self.tmp_files = {"log": "tmp_log", "action": "tmp_action.log",
"pubmed": "tmp_pubmed.log",
"specific": os.path.join(
out_folder, "tmp_specific"),
"nospecific": os.path.join(
out_folder, "tmp_nospecific"),
"wget_action": os.path.join(
out_folder, "tmp_action")}
def _make_folder_no_exist(self, path, folder):
if folder not in os.listdir(path):
os.mkdir(os.path.join(path, folder))
def _make_subfolder(self, path, strain, ptt):
os.mkdir(os.path.join(path, strain))
os.mkdir(os.path.join(path, strain, ptt))
def _run_wget(self, source, folder, err, log):
log.write(" ".join(["wget", source, "-O", folder]) + "\n")
call(["wget", source, "-O", folder], stderr=err)
time.sleep(2)
def _wget_id(self, strain, locus, strain_id, files, log):
detect_id = False
if strain == strain_id["ptt"]:
print("Retrieving STRING ID for {0} of {1} -- {2}".format(
locus, strain_id["string"], strain_id["file"]))
id_source = ("http://string-db.org/api/tsv/get_string_ids?"
"identifier={0}&species={1}").format(
locus, strain_id["string"])
self._run_wget(id_source, os.path.join(files["id_list"], locus),
files["id_log"], log)
detect_id = True
return detect_id
def _retrieve_id(self, strain_id, genes, files, log):
log.write("Retrieving STRING ID for {0}.\n".format(strain_id["ptt"]))
for gene in genes:
if gene["gene"] != "-":
detect_id = self._wget_id(gene["strain"], gene["gene"],
strain_id, files, log)
self.ref_tags[gene["gene"]] = gene["locus_tag"]
else:
detect_id = self._wget_id(gene["strain"], gene["locus_tag"],
strain_id, files, log)
self.ref_tags[gene["locus_tag"]] = gene["locus_tag"]
if not detect_id:
log.write("{0} is not found in {1}.\n".format(
gene, strain_id["file"]))
print("Error: There is no {0} in {1}".format(
gene, strain_id["file"]))
log.write("The temporary files are generated and stored in the "
"following folders:\n")
log.write("\t" + os.path.join(
files["id_list"], gene["locus_tag"]) + "\n")
def _get_prefer_name(self, row_a, strain_id, files, querys, log):
prefername = ""
filename = row_a.split(".")
if ((filename[1] not in os.listdir(files["id_list"])) and (
"all" not in querys)) or ("all" in querys):
self._wget_id(strain_id["ptt"], filename[1], strain_id, files, log)
if (filename[1] in os.listdir(files["id_list"])) or (
"all" in querys):
if (filename[1] in os.listdir(files["id_list"])):
id_h = open(os.path.join(files["id_list"], filename[1]), "r")
for row_i in csv.reader(id_h, delimiter="\t"):
if row_a == row_i[1]:
prefername = row_i[4]
id_h.close()
return prefername
def _print_title(self, out, id_file, id_folder):
id_h = open(os.path.join(id_folder, id_file), "r")
prefername = id_file
for row_i in csv.reader(id_h, delimiter="\t"):
prefername = row_i[3]
id_h.close()
if prefername not in self.ref_tags.keys():
locus = id_file
else:
locus = self.ref_tags[prefername]
out.write("Interaction of {0} | {1}\n".format(
locus, prefername))
out.write("Genome\tstringId_A\tstringId_B\tpreferredName_A\t"
"preferredName_B\tncbiTaxonId\t"
"STRING_score\tPubmed_id\tPubmed_score\n")
def _get_pubmed(self, row, strain_id, score, id_file, first_output,
ptt, files, paths, args_ppi, log):
prefer1 = self._get_prefer_name(row[0], strain_id,
files, args_ppi.querys, log)
prefer2 = self._get_prefer_name(row[1], strain_id,
files, args_ppi.querys, log)
if (len(prefer1) > 0) and (len(prefer2) > 0):
if args_ppi.no_specific:
pubmed_source = (
"http://www.ncbi.nlm.nih.gov/CBBresearch/"
"Wilbur/IRET/PIE/getppi.cgi?term={0}+AND+{1}").format(
prefer1, prefer2)
self._run_wget(pubmed_source, self.tmp_files["nospecific"],
files["pubmed_log"], log)
strain_id["pie"] = "+".join(strain_id["pie"].split(" "))
pubmed_source = (
"http://www.ncbi.nlm.nih.gov/CBBresearch/Wilbur"
"/IRET/PIE/getppi.cgi?term={0}+AND+{1}+AND+{2}").format(
prefer1, prefer2, strain_id["pie"])
self._run_wget(pubmed_source, self.tmp_files["specific"],
files["pubmed_log"], log)
row[0] = row[0].split(".")[-1]
row[1] = row[1].split(".")[-1]
self._merge_information(
first_output, self.tmp_files["specific"],
files["all_specific"], files["best_specific"], row,
args_ppi.score, id_file, files["id_list"], "specific",
os.path.join(paths["all"], self.with_strain),
os.path.join(paths["best"], self.with_strain), ptt)
if args_ppi.no_specific:
self._merge_information(
first_output, self.tmp_files["nospecific"],
files["all_nospecific"], files["best_nospecific"], row,
args_ppi.score, id_file, files["id_list"], "nospecific",
os.path.join(paths["all"], self.without_strain),
os.path.join(paths["best"], self.without_strain), ptt)
def _print_single_file(self, out_single, row_a, ptt, row):
if row == "NA":
out_single.write("\t".join(
[ptt, "\t".join(row_a[:6]), "NA", "NA"]) + "\n")
else:
out_single.write("\t".join(
[ptt, "\t".join(row_a[:6]), "\t".join(row)]) + "\n")
def _merge_information(self, first_output, filename, out_all, out_best,
row_a, score, id_file, id_folder, file_type,
all_folder, best_folder, ptt):
if os.path.getsize(filename) != 0:
f_h = open(filename, "r")
out_all_single = open(os.path.join(
all_folder, ptt, "_".join([row_a[0], row_a[1] + ".csv"])), "w")
out_best_single = open(os.path.join(
best_folder, ptt,
"_".join([row_a[0], row_a[1] + ".csv"])), "w")
self._print_title(out_all_single, id_file, id_folder)
self._print_title(out_best_single, id_file, id_folder)
detect = False
for row in csv.reader(f_h, delimiter="\t"):
self._print_single_file(out_all_single, row_a, ptt, row)
if first_output["_".join([file_type, "all"])]:
first_output["_".join([file_type, "all"])] = False
self._print_title(out_all, id_file, id_folder)
out_all.write("\t".join([ptt, "\t".join(row_a[:6]),
"\t".join(row)]) + "\n")
if (float(row[1]) >= score):
detect = True
self._print_single_file(out_best_single, row_a, ptt, row)
if first_output["_".join([file_type, "best"])]:
first_output["_".join([file_type, "best"])] = False
self._print_title(out_best, id_file, id_folder)
out_best.write("\t".join([ptt, "\t".join(row_a[:6]),
"\t".join(row)]) + "\n")
f_h.close()
if not detect:
os.remove(os.path.join(best_folder, ptt,
"_".join([row_a[0], row_a[1] + ".csv"])))
out_all_single.close()
out_best_single.close()
else:
out_all_single = open(os.path.join(
all_folder, ptt, "_".join([row_a[0], row_a[1] + ".csv"])), "w")
self._print_title(out_all_single, id_file, id_folder)
self._print_single_file(out_all_single, row_a, ptt, "NA")
if first_output["_".join([file_type, "all"])]:
first_output["_".join([file_type, "all"])] = False
self._print_title(out_all, id_file, id_folder)
out_all.write("\t".join([ptt, "\t".join(row_a),
"NA", "NA"]) + "\n")
out_all_single.close()
def _detect_protein(self, strain_id, args_ppi):
fh = open(os.path.join(args_ppi.ptts, strain_id["file"]), "r")
genes = []
for row in csv.reader(fh, delimiter="\t"):
if (len(row) == 1) and ("-" in row[0]) and (".." in row[0]):
name = (row[0].split("-"))[0].strip().split(",")[0].strip()
if ("all" in args_ppi.querys):
if (len(row) > 1) and (row[0] != "Location"):
genes.append({"strain": name, "locus_tag": row[4],
"gene": row[5]})
else:
for query in args_ppi.querys:
datas = query.split(":")
strain = datas[0]
start = datas[1]
end = datas[2]
strand = datas[3]
if (len(row) > 1) and (row[0] != "Location") and (
name == strain) and (
start == row[0].split("..")[0]) and (
end == row[0].split("..")[1]) and (
strand == row[1]):
genes.append({"strain": name, "locus_tag": row[4],
"gene": row[5]})
fh.close()
return genes
def _setup_nospecific(self, paths, strain_id, files):
self._make_subfolder(
paths["all"], self.without_strain, strain_id["ptt"])
self._make_subfolder(
paths["best"], self.without_strain, strain_id["ptt"])
self._make_subfolder(
paths["fig"], self.without_strain, strain_id["ptt"])
filename_nostrain = "_".join([strain_id["file"].replace(".ptt", ""),
self.without_strain + ".csv"])
files["all_nospecific"] = open(os.path.join(paths["all"],
filename_nostrain), "w")
files["best_nospecific"] = open(os.path.join(paths["best"],
filename_nostrain), "w")
def _setup_folder_and_read_file(self, strain_id, pre_file,
files, paths, args_ppi):
if strain_id["file"].endswith(".ptt"):
if strain_id["file"] != pre_file:
self.helper.check_make_folder(
"_".join([self.tmp_id, strain_id["file"]]))
paths["all"] = os.path.join(
self.all_result, strain_id["file"][:-4])
paths["best"] = os.path.join(
self.best_result, strain_id["file"][:-4])
paths["fig"] = os.path.join(
self.fig, strain_id["file"][:-4])
self.helper.check_make_folder(
os.path.join(self.all_result, strain_id["file"][:-4]))
self.helper.check_make_folder(
os.path.join(self.best_result, strain_id["file"][:-4]))
self.helper.check_make_folder(
os.path.join(self.fig, strain_id["file"][:-4]))
self._make_subfolder(
paths["all"], self.with_strain, strain_id["ptt"])
self._make_subfolder(
paths["best"], self.with_strain, strain_id["ptt"])
self._make_subfolder(
paths["fig"], self.with_strain, strain_id["ptt"])
filename_strain = "_".join(
[strain_id["file"].replace(".ptt", ""),
self.with_strain + ".csv"])
files["all_specific"] = open(os.path.join(
paths["all"], filename_strain), "w")
files["best_specific"] = open(os.path.join(
paths["best"], filename_strain), "w")
if args_ppi.no_specific:
self._setup_nospecific(paths, strain_id, files)
files["id_list"] = "_".join([self.tmp_id, strain_id["file"]])
files["id_log"] = open(os.path.join(files["id_list"],
self.tmp_files["log"]), "w")
files["action_log"] = open(os.path.join(args_ppi.out_folder,
self.tmp_files["action"]), "w")
files["pubmed_log"] = open(os.path.join(args_ppi.out_folder,
self.tmp_files["pubmed"]), "w")
pre_file = strain_id["file"]
if strain_id["file"] in os.listdir(args_ppi.ptts):
genes = self._detect_protein(strain_id, args_ppi)
else:
self._make_folder_no_exist(os.path.join(paths["all"],
self.with_strain), strain_id["ptt"])
self._make_folder_no_exist(os.path.join(paths["best"],
self.with_strain), strain_id["ptt"])
if args_ppi.no_specific:
self._make_folder_no_exist(
os.path.join(paths["all"], self.without_strain),
strain_id["ptt"])
self._make_folder_no_exist(
os.path.join(paths["best"], self.without_strain),
strain_id["ptt"])
else:
print("Error: Wrong .ptt file!")
sys.exit()
return genes
def _wget_actions(self, files, id_file, strain_id, out_folder, log):
detect = False
t_h = open(os.path.join(files["id_list"], id_file), "r")
print("Retrieving STRING actions for {0} of {1} -- {2}".format(
id_file, strain_id["string"], strain_id["file"]))
for row in csv.reader(t_h, delimiter="\t"):
if row[0].startswith("queryIndex"):
continue
else:
detect = True
if row[2] == strain_id["string"]:
action_source = ("http://string-db.org/api/tsv/interaction_partners?"
"identifier={0}&species={1}").format(
row[1], row[2])
self._run_wget(
action_source, self.tmp_files["wget_action"],
files["action_log"], log)
t_h.close()
if not detect:
log.write(id_file + " can not be found in STRING.\n")
print("Warning: " + id_file + " can not be found in STRING!")
return detect
def _retrieve_actions(self, files, strain_id, paths, args_ppi, log):
        '''Retrieve the interactions between proteins'''
log.write("Using STRING and PIE to retrieve the interaction "
"information for {0}.\n".format(strain_id["ptt"]))
for id_file in os.listdir(files["id_list"]):
if id_file != self.tmp_files["log"]:
detect_id = self._wget_actions(files, id_file, strain_id,
args_ppi.out_folder, log)
if detect_id:
a_h = open(self.tmp_files["wget_action"], "r")
pre_row = []
first = True
detect = False
first_output = {"specific_all": True,
"specific_best": True,
"nospecific_all": True,
"nospecific_best": True}
print("Retrieving Pubmed IDs for {0} of {1} -- {2}".format(
id_file, strain_id["string"], strain_id["file"]))
for row_a in csv.reader(a_h, delimiter="\t"):
if row_a == []:
print("No interaction can be detected")
break
if row_a[0].startswith("stringId_A"):
continue
else:
detect = True
if first:
first = False
score = row_a[5]
else:
if (row_a[0] != pre_row[0]) or (
row_a[1] != pre_row[1]):
self._get_pubmed(
pre_row, strain_id, score,
id_file, first_output,
strain_id["ptt"], files, paths,
args_ppi, log)
score = row_a[5]
else:
score = score + ";" + row_a[5]
pre_row = row_a
if detect:
detect = False
self._get_pubmed(
row_a, strain_id, score, id_file,
first_output, strain_id["ptt"], files,
paths, args_ppi, log)
self._list_files(args_ppi, paths, files, log)
if detect_id:
a_h.close()
def _list_files(self, args_ppi, paths, files, log):
log.write("The temporary files are generated and stored in the "
"following folders:\n")
if args_ppi.no_specific:
folders = [files["id_list"],
self.tmp_files["wget_action"],
self.tmp_files["specific"],
self.tmp_files["nospecific"]]
else:
folders = [files["id_list"],
self.tmp_files["wget_action"],
self.tmp_files["specific"]]
for folder in folders:
log.write("\t" + os.path.join(folder) + "\n")
log.write("The files for storing the interaction information are "
"generated and stored in the following folders:\n")
for data in (paths["all"], paths["best"]):
for files in os.listdir(data):
if os.path.isdir(os.path.join(data, files)):
for file_ in os.listdir(os.path.join(data, files)):
log.write("\t" + os.path.join(data, files, file_) + "\n")
log.write("The merged tables are generated:\n")
for data in (paths["all"], paths["best"]):
for files in os.listdir(data):
if os.path.isfile(os.path.join(data, files)):
log.write("\t" + os.path.join(data, files) + "\n")
def _plot(self, args_ppi, files, log):
log.write("Running plot_PPI.py to generate plots of PPI.\n")
log.write("The figures of PPI networks are generated and stored in the "
"following folders:\n")
if args_ppi.no_specific:
files["all_nospecific"].close()
files["best_nospecific"].close()
files["all_specific"].close()
files["best_specific"].close()
for folder in os.listdir(self.all_result):
if folder in os.listdir(self.fig):
print("Plotting {0}".format(folder))
out_folder_spe = os.path.join(self.fig, folder,
self.with_strain)
plot_ppi(os.path.join(self.all_result, folder,
"_".join([folder, self.with_strain + ".csv"])),
args_ppi.score, out_folder_spe, args_ppi.size)
for file_ in os.listdir(out_folder_spe):
log.write("\t" + os.path.join(
out_folder_spe, file_) + "\n")
if args_ppi.no_specific:
out_folder_nospe = os.path.join(self.fig, folder,
self.without_strain)
plot_ppi(os.path.join(self.all_result, folder,
"_".join([folder, self.without_strain + ".csv"])),
args_ppi.score, out_folder_nospe, args_ppi.size)
for file_ in os.listdir(out_folder_nospe):
log.write("\t" + os.path.join(
out_folder_nospe, file_) + "\n")
def _remove_tmps(self, args_ppi):
self.helper.remove_all_content(os.path.join(args_ppi.out_folder),
"tmp", "file")
self.helper.remove_all_content(os.path.join(args_ppi.out_folder),
"tmp", "dir")
for file_ in os.listdir(args_ppi.ptts):
if file_.startswith("PPI_"):
os.remove(os.path.join(args_ppi.ptts, file_))
self.helper.remove_all_content(os.path.join(args_ppi.out_folder),
"temp", "dir")
def check_query(self, args_ppi, log):
if "all" not in args_ppi.querys:
for query in args_ppi.querys:
detect = False
datas = query.split(":")
for gff in os.listdir(args_ppi.ptts):
gff_f = open(os.path.join(args_ppi.ptts, gff), "r")
for entry in Gff3Parser().entries(gff_f):
if (entry.seq_id == datas[0]) and (
entry.start == int(datas[1])) and (
entry.end == int(datas[2])) and (
entry.strand == datas[3]):
detect = True
break
if not detect:
log.write(query + " is not found in gff file.\n")
print("Error: {0} is not found in gff file!".format(query))
sys.exit()
def retrieve_ppi_network(self, args_ppi, log):
'''retrieve PPI from STRING with PIE and draw network'''
strain_ids = []
paths = {}
files = {}
self.check_query(args_ppi, log)
log.write("Running converter.py to generate ptt and rnt files.\n")
log.write("The following files are generated:\n")
for strain in args_ppi.strains:
datas = strain.split(":")
ptt_file = "PPI_" + datas[0].replace(".gff", ".ptt")
rnt_file = "PPI_" + datas[0].replace(".gff", ".rnt")
self.converter.convert_gff2rntptt(
os.path.join(args_ppi.ptts, datas[0]), datas[1],
"0", os.path.join(args_ppi.ptts, ptt_file),
os.path.join(args_ppi.ptts, rnt_file), None, None)
strain_ids.append({"file": ptt_file,
"ptt": datas[1],
"string": datas[2],
"pie": datas[3]})
log.write("\t" + os.path.join(args_ppi.ptts, ptt_file) + "\n")
log.write("\t" + os.path.join(args_ppi.ptts, rnt_file) + "\n")
strain_ids.sort(key=lambda x: x["file"])
pre_file = ""
for strain_id in strain_ids:
genes = self._setup_folder_and_read_file(strain_id, pre_file,
files, paths, args_ppi)
s_h = open(args_ppi.species, "r")
for row in csv.reader(s_h, delimiter="\t"):
if row[0] != "##":
if row[0] == strain_id["string"]:
break
elif row[2] == strain_id["string"]:
strain_id["string"] = row[0]
break
elif row[3] == strain_id["string"]:
strain_id["string"] = row[0]
break
self._retrieve_id(strain_id, genes, files, log)
self._retrieve_actions(files, strain_id, paths, args_ppi, log)
self._plot(args_ppi, files, log)
self._remove_tmps(args_ppi) | PypiClean |
/NeodroidVision-0.3.0-py36-none-any.whl/neodroidvision/utilities/visualisation/grad_cam.py |
__author__ = "Christian Heider Nielsen"
__doc__ = r"""
Gradient-weighted Class Activation Mapping
Created on 14-02-2021
"""
from typing import Sequence
import cv2
import numpy
import torch
__all__ = ["GradientClassActivationMapping"]
class GradientClassActivationMapping:
"""description"""
class ModelOutputs:
"""Class for making a forward pass, and getting:
1. The network output.
        2. Activations from intermediate targeted layers.
        3. Gradients from intermediate targeted layers."""
class FeatureExtractor:
"""Class for extracting activations and
            registering gradients from targeted intermediate layers
"""
def __init__(self, model, target_layers):
self.model = model
self.target_layers = target_layers
self.gradients = []
def save_gradient(self, grad):
"""
Args:
grad:
"""
self.gradients.append(grad)
def __call__(self, x):
outputs = []
self.gradients = []
for name, module in self.model._modules.items():
x = module(x)
# print(name)
if name in self.target_layers:
# print(f'registered {name}')
x.register_hook(self.save_gradient)
outputs += [x]
return outputs, x
def __init__(self, model, feature_module, target_layers):
self.model = model
self.feature_module = feature_module
self.feature_extractor = (
GradientClassActivationMapping.ModelOutputs.FeatureExtractor(
self.feature_module, target_layers
)
)
def get_gradients(self):
"""
Returns:
"""
return self.feature_extractor.gradients
def __call__(self, x):
target_activations = []
for name, module in self.model._modules.items():
if module == self.feature_module:
target_activations, x = self.feature_extractor(x)
elif "avgpool" in name.lower():
x = module(x)
x = x.view(x.size(0), -1)
else:
x = module(x)
return target_activations, x
def __init__(
self,
model: torch.nn.Module,
feature_module: torch.nn.Module,
target_layer_names: Sequence,
use_cuda: bool,
):
self.model = model
self.feature_module = feature_module
self.model.eval()
self.use_cuda = use_cuda
if self.use_cuda:
self.model = model.cuda()
self.extractor = GradientClassActivationMapping.ModelOutputs(
self.model, self.feature_module, target_layer_names
)
def forward(self, input_img):
"""
Args:
input_img:
Returns:
"""
return self.model(input_img)
def __call__(self, input_img, target_category=None):
if self.use_cuda:
input_img = input_img.cuda()
features, output = self.extractor(input_img)
if target_category is None:
target_category = numpy.argmax(output.cpu().data.numpy())
one_hot = numpy.zeros((1, output.size()[-1]), dtype=numpy.float32)
one_hot[0][target_category] = 1
one_hot = torch.from_numpy(one_hot).requires_grad_(True)
if self.use_cuda:
one_hot = one_hot.cuda()
one_hot = torch.sum(one_hot * output)
self.feature_module.zero_grad()
self.model.zero_grad()
one_hot.backward(retain_graph=True)
grads_val = self.extractor.get_gradients()[-1].cpu().data.numpy()
target = features[-1]
target = target.cpu().data.numpy()[0, :]
weights = numpy.mean(grads_val, axis=(2, 3))[0, :]
cam = numpy.zeros(target.shape[1:], dtype=numpy.float32)
for i, w in enumerate(weights):
cam += w * target[i, :, :]
cam = cv2.resize(numpy.maximum(cam, 0), input_img.shape[2:])
cam -= numpy.min(cam)
cam /= numpy.max(cam)
return cam | PypiClean |
/Editra-0.7.20.tar.gz/Editra-0.7.20/src/ed_book.py | __author__ = "Cody Precord <[email protected]>"
__svnid__ = "$Id: ed_book.py 69245 2011-09-30 17:52:23Z CJP $"
__revision__ = "$Revision: 69245 $"
#-----------------------------------------------------------------------------#
# Imports
import wx
# Editra Imports
import extern.aui as aui
import ed_msg
from profiler import Profile_Get
#-----------------------------------------------------------------------------#
class EdBaseBook(aui.AuiNotebook):
"""Base notebook control"""
def __init__(self, parent, style=0):
style |= self.GetBaseStyles()
super(EdBaseBook, self).__init__(parent, agwStyle=style)
if wx.Platform == '__WXGTK__':
self.SetArtProvider(GtkTabArt())
# Setup
self.UpdateFontSetting()
font = wx.SystemSettings_GetFont(wx.SYS_DEFAULT_GUI_FONT)
font.PointSize += 2
self.NavigatorProps.Font = font
self.NavigatorProps.MinSize = wx.Size(300, 250)
self.SetSashDClickUnsplit(True)
# Message Handlers
ed_msg.Subscribe(self.OnUpdateFont, ed_msg.EDMSG_DSP_FONT)
# Event Handlers
self.Bind(wx.EVT_WINDOW_DESTROY, self._OnDestroy, self)
def _OnDestroy(self, evt):
"""Unsubscribe message handlers on delete"""
if self and evt.GetEventObject() is self:
ed_msg.Unsubscribe(self.OnUpdateFont)
evt.Skip()
@staticmethod
def GetBaseStyles():
"""Get the common base style flags
@return: bitmask
"""
style = aui.AUI_NB_NO_TAB_FOCUS
if wx.Platform == '__WXMAC__':
style |= aui.AUI_NB_CLOSE_ON_TAB_LEFT
return style
def OnUpdateFont(self, msg):
"""Update the font settings for the control in response to
        a user settings change.
"""
if self:
self.UpdateFontSetting()
def SetPageBitmap(self, pg, bmp):
"""Set a tabs bitmap
@param pg: page index
@param bmp: Bitmap
@note: no action if user prefs have turned off bmp
"""
if not self.UseIcons():
bmp = wx.NullBitmap
super(EdBaseBook, self).SetPageBitmap(pg, bmp)
def UpdateFontSetting(self):
"""Update font setting using latest profile data"""
font = Profile_Get('FONT3', 'font', None)
if font:
self.SetFont(font)
def UseIcons(self):
"""Is the book using tab icons?"""
bUseIcons = Profile_Get('TABICONS', default=True)
return bUseIcons
#-----------------------------------------------------------------------------#
class GtkTabArt(aui.VC71TabArt):
"""Simple tab art with no gradients"""
def __init__(self):
super(GtkTabArt, self).__init__()
def DrawBackground(self, dc, wnd, rect):
"""
Draws the tab area background.
:param `dc`: a `wx.DC` device context;
:param `wnd`: a `wx.Window` instance object;
:param `rect`: the tab control rectangle.
"""
self._buttonRect = wx.Rect()
# draw background
r = wx.Rect(rect.x, rect.y, rect.width+2, rect.height)
# draw base lines
dc.SetPen(self._border_pen)
dc.SetBrush(self._base_colour_brush)
dc.DrawRectangleRect(r) | PypiClean |
/Fo4doG_mess_server-0.0.2-py3-none-any.whl/server/common/metaclasses.py | import dis
class ServerMaker(type):
'''
    Metaclass that checks that the resulting class contains no client-side
    calls such as connect. It also verifies that the server socket is a TCP
    socket operating over the IPv4 protocol.
'''
def __init__(cls, clsname, bases, clsdict):
        # Methods used in the functions of the class:
methods = []
        # Attributes accessed by the functions of the class
attrs = []
for func in clsdict:
            # Try to disassemble the object
try:
ret = dis.get_instructions(clsdict[func])
            # If it is not a function, catch the exception
except TypeError:
pass
else:
                # It is a function, so disassemble its code to collect the
                # methods and attributes it uses.
for i in ret:
if i.opname == 'LOAD_GLOBAL':
if i.argval not in methods:
methods.append(i.argval)
elif i.opname == 'LOAD_ATTR':
if i.argval not in attrs:
attrs.append(i.argval)
        # If the forbidden method connect is used,
        # raise an exception:
if 'connect' in methods:
raise TypeError(
                'Using the connect method is not allowed in a server class')
        # If the socket was not initialised with the SOCK_STREAM (TCP) and
        # AF_INET (IPv4) constants, raise an exception as well.
if not ('SOCK_STREAM' in attrs and 'AF_INET' in attrs):
            raise TypeError('Incorrect socket initialisation.')
super().__init__(clsname, bases, clsdict)
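# Illustrative sketch (not part of the original module): a class declared with
# `metaclass=ServerMaker` is checked at class-creation time. Module-style
# attribute access (socket.AF_INET, socket.SOCK_STREAM) is what the LOAD_ATTR
# scan above expects; the class below is hypothetical.
def _example_server_class():
    import socket
    class ExampleServer(metaclass=ServerMaker):
        def init_socket(self):
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            self.sock.listen(5)
    return ExampleServer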
class ClientMaker(type):
'''
    Metaclass that checks that the resulting class contains no server-side
    calls such as accept or listen. It also verifies that no socket is
    created inside the class constructor.
'''
def __init__(cls, clsname, bases, clsdict):
        # Methods used in the functions of the class:
methods = []
for func in clsdict:
            # Try to disassemble the object
try:
ret = dis.get_instructions(clsdict[func])
            # If it is not a function, catch the exception
except TypeError:
pass
else:
                # It is a function, so disassemble its code to collect the methods it uses.
for i in ret:
if i.opname == 'LOAD_GLOBAL':
if i.argval not in methods:
methods.append(i.argval)
        # If a forbidden method (accept, listen or socket) is used,
        # raise an exception:
for command in ('accept', 'listen', 'socket'):
if command in methods:
raise TypeError(
                    'Use of a forbidden method was detected in the class')
        # Calls to get_message or send_message from utils are considered
        # correct socket usage
if 'get_message' in methods or 'send_message' in methods:
pass
else:
raise TypeError(
                'There are no calls to functions that work with sockets.')
super().__init__(clsname, bases, clsdict) | PypiClean |
/JPype1_py3-0.5.5.4-cp37-cp37m-win_amd64.whl/jpype/_jvmfinder.py |
import os
class JVMFinder(object):
"""
JVM library finder base class
"""
def __init__(self):
"""
Sets up members
"""
# Library file name
self._libfile = "libjvm.so"
# Predefined locations
self._locations = ("/usr/lib/jvm", "/usr/java")
# Search methods
self._methods = (self._get_from_java_home,
self._get_from_known_locations)
def find_libjvm(self, java_home):
"""
        Recursively looks for the JVM library file (self._libfile) under the
        given Java home folder
        :param java_home: A Java home folder
:return: The first found file path, or None
"""
# Possible parents (in preference order)
possible_parents = ('server', 'client', 'cacao', 'jamvm')
# Look for the file
for root, _, _ in os.walk(java_home):
for parent in possible_parents:
filename = os.path.join(root, parent, self._libfile)
if os.path.exists(filename):
return filename
else:
# File not found
return None
def find_possible_homes(self, parents):
"""
        Generator that looks for the first-level child folders that could be
Java installations, according to their name
:param parents: A list of parent directories
:return: The possible JVM installation folders
"""
homes = []
java_names = ('jre', 'jdk', 'java')
for parent in parents:
try:
children = sorted(os.listdir(parent))
except OSError:
# Folder doesn't exist
pass
else:
for childname in children:
# Compute the real path
path = os.path.realpath(os.path.join(parent, childname))
if path in homes or not os.path.isdir(path):
# Already known path, or not a directory -> ignore
continue
# Check if the path seems OK
real_name = os.path.basename(path).lower()
for java_name in java_names:
if java_name in real_name:
# Correct JVM folder name
homes.append(path)
yield path
break
def get_jvm_path(self):
"""
Retrieves the path to the default or first found JVM library
:return: The path to the JVM shared library file
:raise ValueError: No JVM library found
"""
for method in self._methods:
try:
jvm = method()
except NotImplementedError:
# Ignore missing implementations
pass
else:
if jvm is not None:
return jvm
else:
raise ValueError("No JVM shared library file ({0}) found. "
"Try setting up the JAVA_HOME environment "
"variable properly.".format(self._libfile))
def _get_from_java_home(self):
"""
Retrieves the Java library path according to the JAVA_HOME environment
variable
:return: The path to the JVM library, or None
"""
# Get the environment variable
java_home = os.getenv("JAVA_HOME")
if java_home and os.path.exists(java_home):
# Get the real installation path
java_home = os.path.realpath(java_home)
# Look for the library file
return self.find_libjvm(java_home)
def _get_from_known_locations(self):
"""
Retrieves the first existing Java library path in the predefined known
locations
:return: The path to the JVM library, or None
"""
for home in self.find_possible_homes(self._locations):
jvm = self.find_libjvm(home)
if jvm is not None:
return jvm
def normalize_arguments(self, jvm_lib_path, args):
"""
        Prepares the OS-specific arguments required to start the JVM.
:param jvm_lib_path: Path to the JVM library
:param args: The list of arguments given to the JVM
:return: The list of arguments to add for the JVM to start
:raise OSError: Can't find required files
"""
return args | PypiClean |
/HATasmota-0.7.0.tar.gz/HATasmota-0.7.0/hatasmota/mqtt.py | from __future__ import annotations
import asyncio
from collections.abc import Callable, Coroutine
from dataclasses import dataclass
import logging
from typing import Any
from .const import COMMAND_BACKLOG
DEBOUNCE_TIMEOUT = 1
_LOGGER = logging.getLogger(__name__)
class Timer:
"""Simple timer."""
def __init__(
self, timeout: float, callback: Callable[[], Coroutine[Any, Any, None]]
):
self._timeout = timeout
self._callback = callback
self._task = asyncio.ensure_future(self._job())
async def _job(self) -> None:
await asyncio.sleep(self._timeout)
await self._callback()
def cancel(self) -> None:
"""Cancel the timer."""
self._task.cancel()
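# Illustrative sketch (not part of the original module): Timer schedules a
# coroutine once after a delay and can be cancelled before it fires. It must
# be created while an asyncio event loop is running, because the constructor
# calls asyncio.ensure_future().
async def _example_timer_usage() -> None:
    async def _on_timeout() -> None:
        _LOGGER.debug("timer fired")
    timer = Timer(0.1, _on_timeout)
    timer.cancel()  # cancelled before the 0.1 s timeout elapses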
PublishPayloadType = str | bytes | int | float | None
ReceivePayloadType = str | bytes
@dataclass(frozen=True)
class PublishMessage:
"""MQTT Message."""
topic: str
payload: PublishPayloadType
qos: int | None
retain: bool | None
@dataclass(frozen=True)
class ReceiveMessage:
"""MQTT Message."""
topic: str
payload: ReceivePayloadType
qos: int
retain: bool
class TasmotaMQTTClient:
"""Helper class to sue an external MQTT client."""
def __init__(
self,
publish: Callable[
[str, PublishPayloadType, int | None, bool | None],
Coroutine[Any, Any, None],
],
subscribe: Callable[[dict | None, dict], Coroutine[Any, Any, dict]],
unsubscribe: Callable[[dict | None], Coroutine[Any, Any, dict]],
):
"""Initialize."""
self._pending_messages: dict[PublishMessage, Timer] = {}
self._publish = publish
self._subscribe = subscribe
self._unsubscribe = unsubscribe
async def publish(
self,
topic: str,
payload: PublishPayloadType,
qos: int | None = 0,
retain: bool | None = False,
) -> None:
"""Publish a message."""
return await self._publish(topic, payload, qos, retain)
async def publish_debounced(
self,
topic: str,
payload: PublishPayloadType,
qos: int | None = 0,
retain: bool | None = False,
) -> None:
"""Publish a message, with debounce."""
msg = PublishMessage(topic, payload, qos, retain)
async def publish_callback() -> None:
_LOGGER.debug("publish_debounced: publishing %s", msg)
self._pending_messages.pop(msg)
await self.publish(msg.topic, msg.payload, qos=msg.qos, retain=msg.retain)
if msg in self._pending_messages:
timer = self._pending_messages.pop(msg)
timer.cancel()
timer = Timer(DEBOUNCE_TIMEOUT, publish_callback)
self._pending_messages[msg] = timer
async def subscribe(self, sub_state: dict | None, topics: dict) -> dict:
"""Subscribe to topics."""
return await self._subscribe(sub_state, topics)
async def unsubscribe(self, sub_state: dict | None) -> dict:
"""Unsubscribe from topics."""
return await self._unsubscribe(sub_state)
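# Illustrative sketch (not part of the original module): wiring the client to
# user-supplied MQTT callbacks. The callback bodies below are hypothetical
# stubs; a real integration would forward them to an actual MQTT connection.
async def _example_client_usage() -> None:
    async def _publish(topic, payload, qos, retain):
        pass
    async def _subscribe(sub_state, topics):
        return {}
    async def _unsubscribe(sub_state):
        return {}
    client = TasmotaMQTTClient(_publish, _subscribe, _unsubscribe)
    # Rapid repeated publishes of the same message collapse into a single
    # publish after DEBOUNCE_TIMEOUT seconds.
    await client.publish_debounced("cmnd/tasmota/POWER", "ON")
    await client.publish_debounced("cmnd/tasmota/POWER", "ON")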
async def send_commands(
mqtt_client: TasmotaMQTTClient,
command_topic: str,
commands: list[tuple[str, str | float]],
) -> None:
"""Send a sequence of commands."""
backlog_topic = command_topic + COMMAND_BACKLOG
backlog = ";".join([f"NoDelay;{command[0]} {command[1]}" for command in commands])
await mqtt_client.publish(backlog_topic, backlog) | PypiClean |
/BigchainDB-2.2.2.tar.gz/BigchainDB-2.2.2/bigchaindb/models.py |
from bigchaindb.backend.schema import validate_language_key
from bigchaindb.common.exceptions import (InvalidSignature,
DuplicateTransaction)
from bigchaindb.common.schema import validate_transaction_schema
from bigchaindb.common.transaction import Transaction
from bigchaindb.common.utils import (validate_txn_obj, validate_key)
class Transaction(Transaction):
ASSET = 'asset'
METADATA = 'metadata'
DATA = 'data'
def validate(self, bigchain, current_transactions=[]):
"""Validate transaction spend
Args:
bigchain (BigchainDB): an instantiated bigchaindb.BigchainDB object.
Returns:
The transaction (Transaction) if the transaction is valid else it
raises an exception describing the reason why the transaction is
invalid.
Raises:
ValidationError: If the transaction is invalid
"""
input_conditions = []
if self.operation == Transaction.CREATE:
duplicates = any(txn for txn in current_transactions if txn.id == self.id)
if bigchain.is_committed(self.id) or duplicates:
raise DuplicateTransaction('transaction `{}` already exists'
.format(self.id))
if not self.inputs_valid(input_conditions):
raise InvalidSignature('Transaction signature is invalid.')
elif self.operation == Transaction.TRANSFER:
self.validate_transfer_inputs(bigchain, current_transactions)
return self
@classmethod
def from_dict(cls, tx_body):
return super().from_dict(tx_body, False)
@classmethod
def validate_schema(cls, tx_body):
validate_transaction_schema(tx_body)
validate_txn_obj(cls.ASSET, tx_body[cls.ASSET], cls.DATA, validate_key)
validate_txn_obj(cls.METADATA, tx_body, cls.METADATA, validate_key)
validate_language_key(tx_body[cls.ASSET], cls.DATA)
validate_language_key(tx_body, cls.METADATA)
class FastTransaction:
"""A minimal wrapper around a transaction dictionary. This is useful for
when validation is not required but a routine expects something that looks
like a transaction, for example during block creation.
Note: immutability could also be provided
"""
def __init__(self, tx_dict):
self.data = tx_dict
@property
def id(self):
return self.data['id']
def to_dict(self):
return self.data | PypiClean |
/GenIce-1.0.11.tar.gz/GenIce-1.0.11/genice/load.py | from logging import getLogger
import re
import os
from pathlib import Path
from genice.importer import safe_import
from genice.cell import rel_wrap
import numpy as np
from types import SimpleNamespace
import pairlist as pl
def str2rangevalues(s):
values = s.split(":")
assert len(values) > 0
if len(values) == 1:
return [0, int(values[0]), 1]
elif len(values) == 2:
return [int(values[0]), int(values[1]), 1]
else:
return [int(v) for v in values]
def str2range(s):
return range(*str2rangevalues(s))
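# Worked examples (illustrative, not part of the original module): the
# "start:stop:step" mini-format mirrors Python slicing; a single value
# means "the first N items".
def _example_ranges():
    assert str2rangevalues("10") == [0, 10, 1]
    assert str2rangevalues("2:8") == [2, 8, 1]
    assert str2rangevalues("0:20:5") == [0, 20, 5]
    assert list(str2range("0:6:2")) == [0, 2, 4]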
def iterate(filename, oname, hname, filerange, framerange, suffix=None):
logger = getLogger()
rfile = str2range(filerange)
rframe = str2rangevalues(framerange)
logger.info(" file number range: {0}:{1}:{2}".format(*str2rangevalues(filerange)))
logger.info(" frame number range: {0}:{1}:{2}".format(*rframe))
    # test whether filename contains a printf-style number pattern for enumeration
logger.info(filename)
m = re.search("%[0-9]*d", filename)
# prepare file list
if m is None:
filelist = [filename, ]
else:
filelist = []
for num in rfile:
fname = filename % num
if os.path.exists(fname):
filelist.append(fname)
logger.debug("File list: {0}".format(filelist))
frame = 0
for fname in filelist:
logger.info(" File name: {0}".format(fname))
# single file may contain multiple frames
if suffix is None:
suffix = Path(fname).suffix[1:]
loader = safe_import("loader", suffix)
file = open(fname)
for oatoms, hatoms, cellmat in loader.load_iter(file, oname=oname, hname=hname):
if frame == rframe[0]:
logger.info("Frame: {0}".format(frame))
yield oatoms, hatoms, cellmat
rframe[0] += rframe[2]
if rframe[1] <= rframe[0]:
return
else:
logger.info("Skip frame: {0}".format(frame))
frame += 1
def average(load_iter, span=0):
logger = getLogger()
if span <= 1:
# pass through.
yield from load_iter()
return
ohist = [] # center-of-mass position (relative)
for oatoms, hatoms, cellmat in load_iter():
# center of mass
if len(ohist) == 0:
# first ohist; just store
ohist.append(oatoms)
elif span > 1:
# displacements
d = oatoms - ohist[-1]
d -= np.floor(d+0.5)
# new positions
ohist.append(ohist[-1]+d)
# if too many storage
if len(ohist) > span:
# drop the oldest one.
ohist.pop(0)
# overwrite the water positions with averaged ones.
oatoms = np.average(np.array(ohist), axis=0)
else:
ohist[0] = oatoms
yield oatoms, hatoms, cellmat
# def history(load_iter, span=0):
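# Illustrative sketch (not part of the original module): pulling frames from a
# numbered series of files and smoothing positions with a 10-frame running
# average. The file name pattern, atom names, and ranges are hypothetical.
def _example_iterate_and_average():
    def _load():
        return iterate("conf%03d.gro", "O", "H", "0:10", "0:100:10")
    for oatoms, hatoms, cellmat in average(_load, span=10):
        print(oatoms.shape, cellmat.shape)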
def make_lattice_info(oatoms, hatoms, cellmat):
logger = getLogger()
assert oatoms.shape[0] > 0
assert hatoms is None or oatoms.shape[0] * 2 == hatoms.shape[0]
coord = 'relative'
density = oatoms.shape[0] / (np.linalg.det(cellmat) * 1e-21) * 18 / 6.022e23
if hatoms is None:
return SimpleNamespace(waters=oatoms, coord=coord, density=density, bondlen=0.3, cell=cellmat)
rotmat = np.zeros((oatoms.shape[0],3,3))
for i in range(oatoms.shape[0]):
ro = oatoms[i]
rdh0 = rel_wrap(hatoms[i * 2] - ro)
rdh1 = rel_wrap(hatoms[i * 2 + 1] - ro)
dh0 = np.dot(rdh0, cellmat)
dh1 = np.dot(rdh1, cellmat)
y = dh0 - dh1
y /= np.linalg.norm(y)
z = dh0 + dh1
z /= np.linalg.norm(z)
x = np.cross(y, z)
rotmat[i] = np.vstack([x, y, z])
        # Correct the center-of-mass position.
oatoms[i] += (rdh0 + rdh1) * 1. / 18.
grid = pl.determine_grid(cellmat, 0.245)
# remove intramolecular OHs
    # Hydrogen bonds are defined using the averaged atom positions.
pairs = []
logger.debug(" Make pair list.")
for o, h in pl.pairs_fine_hetero(oatoms, hatoms, 0.245, cellmat, grid, distance=False):
if not (h == o * 2 or h == o * 2 + 1):
            # h and o are on different molecules and close to each other.
# register a new intermolecular pair
pairs.append((h // 2, o))
logger.debug(" # of pairs: {0} {1}".format(len(pairs), oatoms.shape[0]))
return SimpleNamespace(waters=oatoms, coord=coord, density=density, pairs=pairs, rotmat=rotmat, cell=cellmat, __doc__=None) | PypiClean |
/Misago-0.36.1.tar.gz/Misago-0.36.1/misago/static/misago/admin/momentjs/eo.js |
;(function (global, factory) {
typeof exports === 'object' && typeof module !== 'undefined'
&& typeof require === 'function' ? factory(require('../moment')) :
typeof define === 'function' && define.amd ? define(['../moment'], factory) :
factory(global.moment)
}(this, (function (moment) { 'use strict';
var eo = moment.defineLocale('eo', {
months : 'januaro_februaro_marto_aprilo_majo_junio_julio_aŭgusto_septembro_oktobro_novembro_decembro'.split('_'),
monthsShort : 'jan_feb_mar_apr_maj_jun_jul_aŭg_sep_okt_nov_dec'.split('_'),
weekdays : 'dimanĉo_lundo_mardo_merkredo_ĵaŭdo_vendredo_sabato'.split('_'),
weekdaysShort : 'dim_lun_mard_merk_ĵaŭ_ven_sab'.split('_'),
weekdaysMin : 'di_lu_ma_me_ĵa_ve_sa'.split('_'),
longDateFormat : {
LT : 'HH:mm',
LTS : 'HH:mm:ss',
L : 'YYYY-MM-DD',
LL : 'D[-a de] MMMM, YYYY',
LLL : 'D[-a de] MMMM, YYYY HH:mm',
LLLL : 'dddd, [la] D[-a de] MMMM, YYYY HH:mm'
},
meridiemParse: /[ap]\.t\.m/i,
isPM: function (input) {
return input.charAt(0).toLowerCase() === 'p';
},
meridiem : function (hours, minutes, isLower) {
if (hours > 11) {
return isLower ? 'p.t.m.' : 'P.T.M.';
} else {
return isLower ? 'a.t.m.' : 'A.T.M.';
}
},
calendar : {
sameDay : '[Hodiaŭ je] LT',
nextDay : '[Morgaŭ je] LT',
nextWeek : 'dddd [je] LT',
lastDay : '[Hieraŭ je] LT',
lastWeek : '[pasinta] dddd [je] LT',
sameElse : 'L'
},
relativeTime : {
future : 'post %s',
past : 'antaŭ %s',
s : 'sekundoj',
ss : '%d sekundoj',
m : 'minuto',
mm : '%d minutoj',
h : 'horo',
hh : '%d horoj',
            d : 'tago', // not 'diurno', since it is used for approximations
dd : '%d tagoj',
M : 'monato',
MM : '%d monatoj',
y : 'jaro',
yy : '%d jaroj'
},
dayOfMonthOrdinalParse: /\d{1,2}a/,
ordinal : '%da',
week : {
dow : 1, // Monday is the first day of the week.
doy : 7 // The week that contains Jan 7th is the first week of the year.
}
});
return eo;
}))); | PypiClean |
/CNVpytor-1.3.1.tar.gz/CNVpytor-1.3.1/cnvpytor/annotator.py | from __future__ import absolute_import, print_function, division
import requests
from .genome import *
from .utils import decode_region
_logger = logging.getLogger("cnvpytor.annotator")
class Annotator:
def __init__(self, reference_genome, biotypes=["protein_coding"]):
"""
        Use the Ensembl REST API to get annotation information for regions.
Parameters
----------
        reference_genome : dict
            Reference genome configuration; expected to contain the field "ensembl_api_region"
biotypes : list of str
List of biotypes to annotate
"""
if 'ensembl_api_region' in reference_genome:
self.url_template = reference_genome["ensembl_api_region"]
else:
_logger.error("ensembl_api_region tag is missing for %s reference genome" % (self.reference_genome["name"]))
self.url_template = None
self.biotypes = biotypes
def get_info(self, region):
"""
Returns annotation
Parameters
----------
region : str
Region in genome
Returns
-------
ret : string
Annotation string
"""
regs = decode_region(region)
if len(regs) == 0:
_logger.debug("get_info called without region")
return None
(c, (start, end)) = regs[0]
response = requests.get(self.url_template.format(region=region))
ret = []
if "error" in response.json():
return "No information"
for i in response.json():
if ("biotype" in i) and i["biotype"] in self.biotypes:
if i["start"] > start and i["end"] < end:
position = "inside"
elif i["start"] < start and i["end"] > end:
position = "cover"
elif i["start"] < start:
position = "intersect left"
else:
position = "intersect right"
if "external_name" in i:
ret.append("%s (%s %s)" % (i["external_name"], i["id"], position))
else:
ret.append("%s (%s %s)" % (i["id"], i["id"], position))
return ", ".join(ret) | PypiClean |
/Melopy3-0.1.3.4.tar.gz/Melopy3-0.1.3.4/README.txt | ======
Melopy
======
A python library for playing with sound.
by Jordan Scales (http://jordanscales.com) and friends
on Github: http://jdan.github.io/Melopy
Install it
==========
You may need to use `sudo` for this to work.
::
$ pip install melopy
Load it
=======
::
$ python
Python 2.7.2 (default, Jun 20 2012, 16:23:33)
[GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import melopy
>>> melopy.major_scale('C5')
['C5', 'D5', 'E5', 'F5', 'G5', 'A5', 'B5']
>>>
Develop
=======
To install locally:
::
$ git clone git://github.com/jdan/Melopy
$ cd Melopy
$ python setup.py install
For examples, check out the `examples` directory:
::
$ python examples/canon.py
$ python examples/parser.py entertainer < examples/scores/entertainer.mlp
To run the tests:
::
$ python tests/melopy_tests.py
Organization
============
Melopy is broken down into 3 submodules - `melopy`, `scales`, and `utility`.
* `melopy.py` contains the Melopy class
* this is used for creating a Melopy and adding notes to it, rendering, etc
* `scales.py` contains methods for generating scales
* for instance, if you want to store the C major scale in an array
* `utility.py` contains methods for finding frequencies of notes, etc
melopy.py
=========
>>> from melopy import Melopy
>>> m = Melopy('mysong')
>>> m.add_quarter_note('A4')
>>> m.add_quarter_note('C#5')
>>> m.add_quarter_note('E5')
>>> m.render()
[==================================================] 100%
Done
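Assuming default settings, `render` writes a WAV file named after your
Melopy instance (here, `mysong.wav`) to the current working directory.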
scales.py
=========
* chromatic_scale
* harmonic_minor_scale
* major_pentatonic_scale
* major_scale
* minor_scale
* major_triad
* minor_triad
* melodic_minor_scale
* minor_pentatonic_scale
>>> from melopy.scales import *
>>> major_scale('C4')
['C4', 'D4', 'E4', 'F4', 'G4', 'A4', 'B4']
>>> major_scale('C4','dict')
{0: 'C4', 1: 'D4', 2: 'E4', 3: 'F4', 4: 'G4', 5: 'A4', 6: 'B4'}
>>> major_scale('C4','tuple')
('C4', 'D4', 'E4', 'F4', 'G4', 'A4', 'B4')
>>> minor_scale('D#5') # has some bugs
['D#5', 'F5', 'F#5', 'G#5', 'A#5', 'B5', 'C#6']
>>> major_triad('A4')
['A4', 'C#5', 'E5']
>>> major_triad('A4', 'tuple')
('A4', 'C#5', 'E5')
utility.py
==========
* key_to_frequency
* key_to_note
* note_to_frequency
* note_to_key
* frequency_to_key
* frequency_to_note
>>> from melopy.utility import *
>>> key_to_frequency(49)
440.0
>>> note_to_frequency('A4')
440.0
>>> note_to_frequency('C5')
523.2511306011972
>>> note_to_key('Bb5')
62
>>> key_to_note(65)
'C#6'
>>> key_to_note(304) # even something stupid
'C26'
>>> frequency_to_key(660)
56
>>> frequency_to_note(660)
'E5'
| PypiClean |
/CleanAdminDjango-1.5.3.1.tar.gz/CleanAdminDjango-1.5.3.1/django/contrib/admindocs/views.py | import inspect
import os
import re
from django import template
from django.template import RequestContext
from django.conf import settings
from django.contrib.admin.views.decorators import staff_member_required
from django.db import models
from django.shortcuts import render_to_response
from django.core.exceptions import ImproperlyConfigured, ViewDoesNotExist
from django.http import Http404
from django.core import urlresolvers
from django.contrib.admindocs import utils
from django.contrib.sites.models import Site
from django.utils.importlib import import_module
from django.utils._os import upath
from django.utils import six
from django.utils.translation import ugettext as _
from django.utils.safestring import mark_safe
# Exclude methods starting with these strings from documentation
MODEL_METHODS_EXCLUDE = ('_', 'add_', 'delete', 'save', 'set_')
class GenericSite(object):
domain = 'example.com'
name = 'my site'
@staff_member_required
def doc_index(request):
if not utils.docutils_is_available:
return missing_docutils_page(request)
return render_to_response('admin_doc/index.html', {
'root_path': urlresolvers.reverse('admin:index'),
}, context_instance=RequestContext(request))
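# Illustrative sketch (not part of the original module): these views are
# normally exposed by including the admindocs URLconf ahead of the admin's
# own URLs in the project urls.py, in Django 1.5 style:
#
#     from django.conf.urls import include, patterns, url
#
#     urlpatterns = patterns('',
#         url(r'^admin/doc/', include('django.contrib.admindocs.urls')),
#         url(r'^admin/', include(admin.site.urls)),
#     )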
@staff_member_required
def bookmarklets(request):
admin_root = urlresolvers.reverse('admin:index')
return render_to_response('admin_doc/bookmarklets.html', {
'root_path': admin_root,
'admin_url': "%s://%s%s" % (request.is_secure() and 'https' or 'http', request.get_host(), admin_root),
}, context_instance=RequestContext(request))
@staff_member_required
def template_tag_index(request):
if not utils.docutils_is_available:
return missing_docutils_page(request)
load_all_installed_template_libraries()
tags = []
app_libs = list(six.iteritems(template.libraries))
builtin_libs = [(None, lib) for lib in template.builtins]
for module_name, library in builtin_libs + app_libs:
for tag_name, tag_func in library.tags.items():
title, body, metadata = utils.parse_docstring(tag_func.__doc__)
if title:
title = utils.parse_rst(title, 'tag', _('tag:') + tag_name)
if body:
body = utils.parse_rst(body, 'tag', _('tag:') + tag_name)
for key in metadata:
metadata[key] = utils.parse_rst(metadata[key], 'tag', _('tag:') + tag_name)
if library in template.builtins:
tag_library = ''
else:
tag_library = module_name.split('.')[-1]
tags.append({
'name': tag_name,
'title': title,
'body': body,
'meta': metadata,
'library': tag_library,
})
return render_to_response('admin_doc/template_tag_index.html', {
'root_path': urlresolvers.reverse('admin:index'),
'tags': tags
}, context_instance=RequestContext(request))
@staff_member_required
def template_filter_index(request):
if not utils.docutils_is_available:
return missing_docutils_page(request)
load_all_installed_template_libraries()
filters = []
app_libs = list(six.iteritems(template.libraries))
builtin_libs = [(None, lib) for lib in template.builtins]
for module_name, library in builtin_libs + app_libs:
for filter_name, filter_func in library.filters.items():
title, body, metadata = utils.parse_docstring(filter_func.__doc__)
if title:
title = utils.parse_rst(title, 'filter', _('filter:') + filter_name)
if body:
body = utils.parse_rst(body, 'filter', _('filter:') + filter_name)
for key in metadata:
metadata[key] = utils.parse_rst(metadata[key], 'filter', _('filter:') + filter_name)
if library in template.builtins:
tag_library = ''
else:
tag_library = module_name.split('.')[-1]
filters.append({
'name': filter_name,
'title': title,
'body': body,
'meta': metadata,
'library': tag_library,
})
return render_to_response('admin_doc/template_filter_index.html', {
'root_path': urlresolvers.reverse('admin:index'),
'filters': filters
}, context_instance=RequestContext(request))
@staff_member_required
def view_index(request):
if not utils.docutils_is_available:
return missing_docutils_page(request)
if settings.ADMIN_FOR:
settings_modules = [import_module(m) for m in settings.ADMIN_FOR]
else:
settings_modules = [settings]
views = []
for settings_mod in settings_modules:
urlconf = import_module(settings_mod.ROOT_URLCONF)
view_functions = extract_views_from_urlpatterns(urlconf.urlpatterns)
if Site._meta.installed:
site_obj = Site.objects.get(pk=settings_mod.SITE_ID)
else:
site_obj = GenericSite()
for (func, regex) in view_functions:
views.append({
'full_name': '%s.%s' % (func.__module__, getattr(func, '__name__', func.__class__.__name__)),
'site_id': settings_mod.SITE_ID,
'site': site_obj,
'url': simplify_regex(regex),
})
return render_to_response('admin_doc/view_index.html', {
'root_path': urlresolvers.reverse('admin:index'),
'views': views
}, context_instance=RequestContext(request))
@staff_member_required
def view_detail(request, view):
if not utils.docutils_is_available:
return missing_docutils_page(request)
mod, func = urlresolvers.get_mod_func(view)
try:
view_func = getattr(import_module(mod), func)
except (ImportError, AttributeError):
raise Http404
title, body, metadata = utils.parse_docstring(view_func.__doc__)
if title:
title = utils.parse_rst(title, 'view', _('view:') + view)
if body:
body = utils.parse_rst(body, 'view', _('view:') + view)
for key in metadata:
metadata[key] = utils.parse_rst(metadata[key], 'model', _('view:') + view)
return render_to_response('admin_doc/view_detail.html', {
'root_path': urlresolvers.reverse('admin:index'),
'name': view,
'summary': title,
'body': body,
'meta': metadata,
}, context_instance=RequestContext(request))
@staff_member_required
def model_index(request):
if not utils.docutils_is_available:
return missing_docutils_page(request)
m_list = [m._meta for m in models.get_models()]
return render_to_response('admin_doc/model_index.html', {
'root_path': urlresolvers.reverse('admin:index'),
'models': m_list
}, context_instance=RequestContext(request))
@staff_member_required
def model_detail(request, app_label, model_name):
if not utils.docutils_is_available:
return missing_docutils_page(request)
# Get the model class.
try:
app_mod = models.get_app(app_label)
except ImproperlyConfigured:
raise Http404(_("App %r not found") % app_label)
model = None
for m in models.get_models(app_mod):
if m._meta.object_name.lower() == model_name:
model = m
break
if model is None:
raise Http404(_("Model %(model_name)r not found in app %(app_label)r") % {'model_name': model_name, 'app_label': app_label})
opts = model._meta
# Gather fields/field descriptions.
fields = []
for field in opts.fields:
# ForeignKey is a special case since the field will actually be a
# descriptor that returns the other object
if isinstance(field, models.ForeignKey):
data_type = field.rel.to.__name__
app_label = field.rel.to._meta.app_label
verbose = utils.parse_rst((_("the related `%(app_label)s.%(data_type)s` object") % {'app_label': app_label, 'data_type': data_type}), 'model', _('model:') + data_type)
else:
data_type = get_readable_field_data_type(field)
verbose = field.verbose_name
fields.append({
'name': field.name,
'data_type': data_type,
'verbose': verbose,
'help_text': field.help_text,
})
# Gather many-to-many fields.
for field in opts.many_to_many:
data_type = field.rel.to.__name__
app_label = field.rel.to._meta.app_label
verbose = _("related `%(app_label)s.%(object_name)s` objects") % {'app_label': app_label, 'object_name': data_type}
fields.append({
'name': "%s.all" % field.name,
"data_type": 'List',
'verbose': utils.parse_rst(_("all %s") % verbose , 'model', _('model:') + opts.module_name),
})
fields.append({
'name' : "%s.count" % field.name,
'data_type' : 'Integer',
'verbose' : utils.parse_rst(_("number of %s") % verbose , 'model', _('model:') + opts.module_name),
})
# Gather model methods.
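    # Only zero-argument instance methods (i.e. taking just `self`) are
    # documented; the try/except StopIteration below acts as a labelled
    # continue for method names matching MODEL_METHODS_EXCLUDE.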
for func_name, func in model.__dict__.items():
if (inspect.isfunction(func) and len(inspect.getargspec(func)[0]) == 1):
try:
for exclude in MODEL_METHODS_EXCLUDE:
if func_name.startswith(exclude):
raise StopIteration
except StopIteration:
continue
verbose = func.__doc__
if verbose:
verbose = utils.parse_rst(utils.trim_docstring(verbose), 'model', _('model:') + opts.module_name)
fields.append({
'name': func_name,
'data_type': get_return_data_type(func_name),
'verbose': verbose,
})
# Gather related objects
for rel in opts.get_all_related_objects() + opts.get_all_related_many_to_many_objects():
verbose = _("related `%(app_label)s.%(object_name)s` objects") % {'app_label': rel.opts.app_label, 'object_name': rel.opts.object_name}
accessor = rel.get_accessor_name()
fields.append({
'name' : "%s.all" % accessor,
'data_type' : 'List',
'verbose' : utils.parse_rst(_("all %s") % verbose , 'model', _('model:') + opts.module_name),
})
fields.append({
'name' : "%s.count" % accessor,
'data_type' : 'Integer',
'verbose' : utils.parse_rst(_("number of %s") % verbose , 'model', _('model:') + opts.module_name),
})
return render_to_response('admin_doc/model_detail.html', {
'root_path': urlresolvers.reverse('admin:index'),
'name': '%s.%s' % (opts.app_label, opts.object_name),
'summary': _("Fields on %s objects") % opts.object_name,
'description': model.__doc__,
'fields': fields,
}, context_instance=RequestContext(request))
@staff_member_required
def template_detail(request, template):
templates = []
for site_settings_module in settings.ADMIN_FOR:
settings_mod = import_module(site_settings_module)
if Site._meta.installed:
site_obj = Site.objects.get(pk=settings_mod.SITE_ID)
else:
site_obj = GenericSite()
for dir in settings_mod.TEMPLATE_DIRS:
template_file = os.path.join(dir, template)
templates.append({
'file': template_file,
'exists': os.path.exists(template_file),
                # Bind template_file as a default argument so each lambda reads
                # its own template rather than the loop's final value.
                'contents': lambda template_file=template_file: os.path.exists(template_file) and open(template_file).read() or '',
'site_id': settings_mod.SITE_ID,
'site': site_obj,
'order': list(settings_mod.TEMPLATE_DIRS).index(dir),
})
return render_to_response('admin_doc/template_detail.html', {
'root_path': urlresolvers.reverse('admin:index'),
'name': template,
'templates': templates,
}, context_instance=RequestContext(request))
####################
# Helper functions #
####################
def missing_docutils_page(request):
"""Display an error message for people without docutils"""
return render_to_response('admin_doc/missing_docutils.html')
def load_all_installed_template_libraries():
# Load/register all template tag libraries from installed apps.
for module_name in template.get_templatetags_modules():
mod = import_module(module_name)
try:
libraries = [
os.path.splitext(p)[0]
for p in os.listdir(os.path.dirname(upath(mod.__file__)))
if p.endswith('.py') and p[0].isalpha()
]
except OSError:
libraries = []
for library_name in libraries:
try:
lib = template.get_library(library_name)
except template.InvalidTemplateLibrary:
pass
def get_return_data_type(func_name):
"""Return a somewhat-helpful data type given a function name"""
if func_name.startswith('get_'):
if func_name.endswith('_list'):
return 'List'
elif func_name.endswith('_count'):
return 'Integer'
return ''
def get_readable_field_data_type(field):
"""Returns the description for a given field type, if it exists,
Fields' descriptions can contain format strings, which will be interpolated
against the values of field.__dict__ before being output."""
return field.description % field.__dict__
def extract_views_from_urlpatterns(urlpatterns, base=''):
"""
Return a list of views from a list of urlpatterns.
Each object in the returned list is a two-tuple: (view_func, regex)
"""
views = []
for p in urlpatterns:
if hasattr(p, 'url_patterns'):
try:
patterns = p.url_patterns
except ImportError:
continue
views.extend(extract_views_from_urlpatterns(patterns, base + p.regex.pattern))
elif hasattr(p, 'callback'):
try:
views.append((p.callback, base + p.regex.pattern))
except ViewDoesNotExist:
continue
else:
raise TypeError(_("%s does not appear to be a urlpattern object") % p)
return views
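# Regexes used by simplify_regex below: the first captures the <name> part of a
# named group, the second matches any remaining (unnamed) group.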
named_group_matcher = re.compile(r'\(\?P(<\w+>).+?\)')
non_named_group_matcher = re.compile(r'\(.*?\)')
def simplify_regex(pattern):
"""
Clean up urlpattern regexes into something somewhat readable by Mere Humans:
turns something like "^(?P<sport_slug>\w+)/athletes/(?P<athlete_slug>\w+)/$"
into "<sport_slug>/athletes/<athlete_slug>/"
"""
# handle named groups first
pattern = named_group_matcher.sub(lambda m: m.group(1), pattern)
# handle non-named groups
pattern = non_named_group_matcher.sub("<var>", pattern)
# clean up any outstanding regex-y characters.
pattern = pattern.replace('^', '').replace('$', '').replace('?', '').replace('//', '/').replace('\\', '')
if not pattern.startswith('/'):
pattern = '/' + pattern
return pattern | PypiClean |
/GTW-1.2.6.tar.gz/GTW-1.2.6/_RST/Request.py |
from __future__ import absolute_import, division, print_function, unicode_literals
from _GTW import GTW
from _TFL import TFL
import _GTW._RST
import _GTW._RST.Signed_Token
from _TFL import I18N
from _TFL.Decorator import getattr_safe
from _TFL._Meta.Once_Property import Once_Property
import _TFL._Meta.Object
import time
### XXX replace home-grown code by werkzeug supplied functions
### XXX werkzeug.utils, werkzeug.HTTP, ...
class _RST_Request_ (TFL.Meta.Object) :
"""Wrap and extend wsgi-specific Request class."""
_real_name = "Request"
_resource = None
_user = None
allow_login = False
lang = None
original_resource = None
def __init__ (self, root, environ) :
self.root = root
self._request = root.HTTP.Request (environ)
self.cookies_to_delete = set ()
# end def __init__
def __getattr__ (self, name) :
if name.startswith ("__") and name.endswith ("__") :
### Placate inspect.unwrap of Python 3.5,
### which accesses `__wrapped__` and eventually throws `ValueError`
return getattr (self.__super, name)
if name == "request" : ### XXX remove after porting of GTW.Werkzeug.Error
return self._request
elif name != "_request" :
result = getattr (self._request, name)
setattr (self, name, result)
return result
raise AttributeError (name)
# end def __getattr__
@Once_Property
@getattr_safe
def apache_authorized_user (self) :
return self.environ.get ("REMOTE_USER")
# end def apache_authorized_user
@Once_Property
def brief (self) :
return self.req_data.has_option ("brief")
# end def brief
@Once_Property
def ckd (self) :
req_data = self.req_data
if "ckd" in req_data :
return req_data.has_option ("ckd")
elif self.method == "GET" :
### for `GET`, `ckd` is default
return not req_data.has_option ("raw")
# end def ckd
@Once_Property
def cookie_encoding (self) :
return self.settings.get ("cookie_encoding", "utf-8")
# end def cookie_encoding
@Once_Property
def cookie_salt (self) :
return self.settings.get ("cookie_salt")
# end def cookie_salt
@property
def current_time (self) :
return time.time ()
# end def current_time
@Once_Property
@getattr_safe
def http_server_authorized_user (self) :
result = self.ssl_authorized_user
if result is None :
result = self.apache_authorized_user
return result
# end def http_server_authorized_user
@Once_Property
def is_secure (self) :
return self.scheme == "https"
# end def is_secure
@property
def language (self) :
result = self.lang
if result is None :
result = self.locale_codes
return result [0] if result else None
# end def language
@Once_Property
@getattr_safe
def locale_codes (self) :
"""The locale-code for the current session."""
return self.get_browser_locale_codes ()
# end def locale_codes
@Once_Property
def origin (self) :
result = self.environ.get ("HTTP_ORIGIN")
if result is None :
referrer = self.referrer
if referrer :
url = TFL.Url (referrer)
parts = []
if url.scheme :
parts.extend ((url.scheme, "://"))
parts.append (url.authority)
result = "".join (parts)
return result
# end def origin
@Once_Property
def origin_host (self) :
origin = self.origin
if origin :
return origin.split ("//", 1) [-1]
# end def origin_host
@Once_Property
@getattr_safe
def rat_authorized_user (self) :
rat = getattr (self.root.SC, "RAT", None)
if rat is not None :
cookie = self.cookies.get ("RAT") or self.req_data.get ("RAT", "")
if cookie :
token = GTW.RST.Signed_Token.REST_Auth.recover \
(self, cookie, ttl_s = rat.session_ttl_s)
if token :
return token.account.name
else :
self.cookies_to_delete.add ("RAT")
# end def rat_authorized_user
@Once_Property
def raw (self) :
req_data = self.req_data
if "raw" in req_data :
return req_data.has_option ("raw")
elif self.method != "GET" :
### for all methods but `GET`, `raw` is default
return not req_data.has_option ("ckd")
# end def raw
@property
def resource (self) :
result = self._resource
if result is None :
result = self.root
return result
# end def resource
@resource.setter
def resource (self, value) :
self._resource = value
# end def resource
@Once_Property
def same_origin (self) :
return self.server_name == self.origin_host
# end def same_origin
@Once_Property
def server_name (self) :
env = self.environ
return env.get ("HTTP_HOST") or env.get ("SERVER_NAME")
# end def server_name
@Once_Property
def server_port (self) :
return self.environ.get ("SERVER_PORT")
# end def server_port
@property
def settings (self) :
return dict (self.root._kw, hash_fct = self.root.hash_fct)
# end def settings
@Once_Property
@getattr_safe
def ssl_authorized_user (self) :
return self.environ.get ("SSL_CLIENT_S_DN_Email")
# end def ssl_authorized_user
@Once_Property
@getattr_safe
def ssl_client_verified (self) :
return self.environ.get ("SSL_CLIENT_VERIFY") == "SUCCESS"
# end def ssl_client_verified
@property
@getattr_safe
def user (self) :
result = self._user
if result is None and self.username :
self._user = self.root._get_user (self.username)
return self._user
# end def user
@user.setter
def user (self, value) :
self._user = value
# end def user
@property
@getattr_safe
def username (self) :
### `username` is `property` not `Once_Property` to allow
        ### descendents to redefine `username.setter` (to support `login`)
### `_auth_user_name` is `Once_Property` to improve performance
return self._auth_user_name
# end def username
@Once_Property
def verbose (self) :
return self.req_data.has_option ("verbose")
# end def verbose
@Once_Property
def _auth_user_name (self) :
result = self.http_server_authorized_user
if result is None :
result = self.rat_authorized_user
if result is None :
auth = self.authorization
result = auth and auth.username
return result
# end def _auth_user_name
def cookie (self, name) :
return self.cookies.get (name)
# end def cookie
def get_browser_locale_codes (self) :
"""Determines the user's locale from Accept-Language header."""
languages = self.accept_languages
supported = getattr (self.root, "languages", set ())
if supported :
locales = list (l for l, p in languages if l in supported)
if locales :
return locales
return getattr (self.root, "default_locale_code", "en")
# end def get_browser_locale_codes
def new_secure_cookie (self, data, ** kw) :
token = GTW.RST.Signed_Token.Cookie (self, data, ** kw)
return token.value
# end def new_secure_cookie
def secure_cookie (self, name, ttl_s = None) :
cookie = self.cookies.get (name)
if cookie :
token = GTW.RST.Signed_Token.Cookie.recover \
(self, cookie, ttl_s = ttl_s)
if token :
return token.data
else :
self.cookies_to_delete.add (name)
# end def secure_cookie
def use_language (self, langs) :
self.lang = langs
I18N.use (* langs)
# end def use_language
Request = _RST_Request_ # end class
if __name__ != "__main__" :
GTW.RST._Export ("Request")
### __END__ GTW.RST.Request | PypiClean |
/Electrum-Zcash-Random-Fork-3.1.3b5.tar.gz/Electrum-Zcash-Random-Fork-3.1.3b5/gui/qt/transaction_dialog.py | import copy
import datetime
import json
import sys
import traceback
from PyQt5.QtCore import *
from PyQt5.QtGui import *
from PyQt5.QtWidgets import *
from electrum_zcash.bitcoin import base_encode
from electrum_zcash.i18n import _
from electrum_zcash.plugins import run_hook
from electrum_zcash import simple_config
from electrum_zcash.util import bfh
from electrum_zcash.wallet import AddTransactionException
from electrum_zcash.transaction import SerializationError
from .util import *
dialogs = [] # Otherwise python randomly garbage collects the dialogs...
def show_transaction(tx, parent, desc=None, prompt_if_unsaved=False):
try:
d = TxDialog(tx, parent, desc, prompt_if_unsaved)
except SerializationError as e:
traceback.print_exc(file=sys.stderr)
parent.show_critical(_("Electrum-Zcash was unable to deserialize the transaction:") + "\n" + str(e))
else:
dialogs.append(d)
d.show()
class TxDialog(QDialog, MessageBoxMixin):
def __init__(self, tx, parent, desc, prompt_if_unsaved):
'''Transactions in the wallet will show their description.
Pass desc to give a description for txs not yet in the wallet.
'''
# We want to be a top-level window
QDialog.__init__(self, parent=None)
# Take a copy; it might get updated in the main window by
# e.g. the FX plugin. If this happens during or after a long
# sign operation the signatures are lost.
self.tx = copy.deepcopy(tx)
try:
self.tx.deserialize()
except BaseException as e:
raise SerializationError(e)
self.main_window = parent
self.wallet = parent.wallet
self.prompt_if_unsaved = prompt_if_unsaved
self.saved = False
self.desc = desc
self.setMinimumWidth(750)
self.setWindowTitle(_("Transaction"))
vbox = QVBoxLayout()
self.setLayout(vbox)
vbox.addWidget(QLabel(_("Transaction ID:")))
self.tx_hash_e = ButtonsLineEdit()
qr_show = lambda: parent.show_qrcode(str(self.tx_hash_e.text()), 'Transaction ID', parent=self)
self.tx_hash_e.addButton(":icons/qrcode.png", qr_show, _("Show as QR code"))
self.tx_hash_e.setReadOnly(True)
vbox.addWidget(self.tx_hash_e)
self.tx_desc = QLabel()
vbox.addWidget(self.tx_desc)
self.status_label = QLabel()
vbox.addWidget(self.status_label)
self.date_label = QLabel()
vbox.addWidget(self.date_label)
self.amount_label = QLabel()
vbox.addWidget(self.amount_label)
self.size_label = QLabel()
vbox.addWidget(self.size_label)
self.fee_label = QLabel()
vbox.addWidget(self.fee_label)
self.add_io(vbox)
vbox.addStretch(1)
self.sign_button = b = QPushButton(_("Sign"))
b.clicked.connect(self.sign)
self.broadcast_button = b = QPushButton(_("Broadcast"))
b.clicked.connect(self.do_broadcast)
self.save_button = QPushButton(_("Save"))
save_button_disabled = not tx.is_complete()
self.save_button.setDisabled(save_button_disabled)
if save_button_disabled:
self.save_button.setToolTip(_("Please sign this transaction in order to save it"))
else:
self.save_button.setToolTip("")
self.save_button.clicked.connect(self.save)
self.export_button = b = QPushButton(_("Export"))
b.clicked.connect(self.export)
self.cancel_button = b = QPushButton(_("Close"))
b.clicked.connect(self.close)
b.setDefault(True)
self.qr_button = b = QPushButton()
b.setIcon(QIcon(":icons/qrcode.png"))
b.clicked.connect(self.show_qr)
self.copy_button = CopyButton(lambda: str(self.tx), parent.app)
# Action buttons
self.buttons = [self.sign_button, self.broadcast_button, self.save_button, self.cancel_button]
# Transaction sharing buttons
self.sharing_buttons = [self.copy_button, self.qr_button, self.export_button]
run_hook('transaction_dialog', self)
hbox = QHBoxLayout()
hbox.addLayout(Buttons(*self.sharing_buttons))
hbox.addStretch(1)
hbox.addLayout(Buttons(*self.buttons))
vbox.addLayout(hbox)
self.update()
def do_broadcast(self):
self.main_window.push_top_level_window(self)
try:
self.main_window.broadcast_transaction(self.tx, self.desc)
finally:
self.main_window.pop_top_level_window(self)
self.saved = True
self.update()
def closeEvent(self, event):
if (self.prompt_if_unsaved and not self.saved
and not self.question(_('This transaction is not saved. Close anyway?'), title=_("Warning"))):
event.ignore()
else:
event.accept()
try:
dialogs.remove(self)
except ValueError:
pass # was not in list already
def show_qr(self):
text = bfh(str(self.tx))
text = base_encode(text, base=43)
try:
self.main_window.show_qrcode(text, 'Transaction', parent=self)
except Exception as e:
self.show_message(str(e))
def sign(self):
def sign_done(success):
# note: with segwit we could save partially signed tx, because they have a txid
if self.tx.is_complete():
self.prompt_if_unsaved = True
self.saved = False
self.save_button.setDisabled(False)
self.save_button.setToolTip("")
self.update()
self.main_window.pop_top_level_window(self)
self.sign_button.setDisabled(True)
self.main_window.push_top_level_window(self)
self.main_window.sign_tx(self.tx, sign_done)
def save(self):
if self.main_window.save_transaction_into_wallet(self.tx):
self.save_button.setDisabled(True)
self.saved = True
def export(self):
name = 'signed_%s.txn' % (self.tx.txid()[0:8]) if self.tx.is_complete() else 'unsigned.txn'
fileName = self.main_window.getSaveFileName(_("Select where to save your signed transaction"), name, "*.txn")
if fileName:
with open(fileName, "w+") as f:
f.write(json.dumps(self.tx.as_dict(), indent=4) + '\n')
self.show_message(_("Transaction exported successfully"))
self.saved = True
def update(self):
desc = self.desc
base_unit = self.main_window.base_unit()
format_amount = self.main_window.format_amount
tx_hash, status, label, can_broadcast, amount, fee, height, conf, timestamp, exp_n = self.wallet.get_tx_info(self.tx)
size = self.tx.estimated_size()
self.broadcast_button.setEnabled(can_broadcast)
can_sign = not self.tx.is_complete() and \
(self.wallet.can_sign(self.tx) or bool(self.main_window.tx_external_keypairs))
self.sign_button.setEnabled(can_sign)
self.tx_hash_e.setText(tx_hash or _('Unknown'))
if desc is None:
self.tx_desc.hide()
else:
self.tx_desc.setText(_("Description") + ': ' + desc)
self.tx_desc.show()
self.status_label.setText(_('Status:') + ' ' + status)
if timestamp:
time_str = datetime.datetime.fromtimestamp(timestamp).isoformat(' ')[:-3]
self.date_label.setText(_("Date: {}").format(time_str))
self.date_label.show()
elif exp_n:
text = '%.2f MB'%(exp_n/1000000)
self.date_label.setText(_('Position in mempool') + ': ' + text + ' ' + _('from tip'))
self.date_label.show()
else:
self.date_label.hide()
if amount is None:
amount_str = _("Transaction unrelated to your wallet")
elif amount > 0:
amount_str = _("Amount received:") + ' %s'% format_amount(amount) + ' ' + base_unit
else:
amount_str = _("Amount sent:") + ' %s'% format_amount(-amount) + ' ' + base_unit
size_str = _("Size:") + ' %d bytes'% size
fee_str = _("Fee") + ': %s' % (format_amount(fee) + ' ' + base_unit if fee is not None else _('unknown'))
if fee is not None:
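            # fee_rate is the fee per 1000 bytes of transaction size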
fee_rate = fee/size*1000
fee_str += ' ( %s ) ' % self.main_window.format_fee_rate(fee_rate)
confirm_rate = simple_config.FEERATE_WARNING_HIGH_FEE
if fee_rate > confirm_rate:
fee_str += ' - ' + _('Warning') + ': ' + _("high fee") + '!'
self.amount_label.setText(amount_str)
self.fee_label.setText(fee_str)
self.size_label.setText(size_str)
run_hook('transaction_dialog_update', self)
def add_io(self, vbox):
if self.tx.locktime > 0:
vbox.addWidget(QLabel("LockTime: %d\n" % self.tx.locktime))
vbox.addWidget(QLabel(_("Inputs") + ' (%d)'%len(self.tx.inputs())))
ext = QTextCharFormat()
rec = QTextCharFormat()
rec.setBackground(QBrush(ColorScheme.GREEN.as_color(background=True)))
rec.setToolTip(_("Wallet receive address"))
chg = QTextCharFormat()
chg.setBackground(QBrush(ColorScheme.YELLOW.as_color(background=True)))
chg.setToolTip(_("Wallet change address"))
def text_format(addr):
if self.wallet.is_mine(addr):
return chg if self.wallet.is_change(addr) else rec
return ext
def format_amount(amt):
return self.main_window.format_amount(amt, whitespaces = True)
i_text = QTextEdit()
i_text.setFont(QFont(MONOSPACE_FONT))
i_text.setReadOnly(True)
i_text.setMaximumHeight(100)
cursor = i_text.textCursor()
for x in self.tx.inputs():
if x['type'] == 'coinbase':
cursor.insertText('coinbase')
else:
prevout_hash = x.get('prevout_hash')
prevout_n = x.get('prevout_n')
cursor.insertText(prevout_hash[0:8] + '...', ext)
cursor.insertText(prevout_hash[-8:] + ":%-4d " % prevout_n, ext)
addr = x.get('address')
if addr == "(pubkey)":
_addr = self.wallet.get_txin_address(x)
if _addr:
addr = _addr
if addr is None:
addr = _('unknown')
cursor.insertText(addr, text_format(addr))
if x.get('value'):
cursor.insertText(format_amount(x['value']), ext)
cursor.insertBlock()
vbox.addWidget(i_text)
vbox.addWidget(QLabel(_("Outputs") + ' (%d)'%len(self.tx.outputs())))
o_text = QTextEdit()
o_text.setFont(QFont(MONOSPACE_FONT))
o_text.setReadOnly(True)
o_text.setMaximumHeight(100)
cursor = o_text.textCursor()
for addr, v in self.tx.get_outputs():
cursor.insertText(addr, text_format(addr))
if v is not None:
cursor.insertText('\t', ext)
cursor.insertText(format_amount(v), ext)
cursor.insertBlock()
vbox.addWidget(o_text) | PypiClean |
/Flask_AdminLTE3-1.0.9-py3-none-any.whl/flask_adminlte3/static/plugins/codemirror/addon/hint/javascript-hint.js |
(function(mod) {
if (typeof exports == "object" && typeof module == "object") // CommonJS
mod(require("../../lib/codemirror"));
else if (typeof define == "function" && define.amd) // AMD
define(["../../lib/codemirror"], mod);
else // Plain browser env
mod(CodeMirror);
})(function(CodeMirror) {
var Pos = CodeMirror.Pos;
function forEach(arr, f) {
for (var i = 0, e = arr.length; i < e; ++i) f(arr[i]);
}
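  // indexOf with a manual fallback for old browsers that lack
  // Array.prototype.indexOf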
function arrayContains(arr, item) {
if (!Array.prototype.indexOf) {
var i = arr.length;
while (i--) {
if (arr[i] === item) {
return true;
}
}
return false;
}
return arr.indexOf(item) != -1;
}
function scriptHint(editor, keywords, getToken, options) {
// Find the token at the cursor
var cur = editor.getCursor(), token = getToken(editor, cur);
if (/\b(?:string|comment)\b/.test(token.type)) return;
var innerMode = CodeMirror.innerMode(editor.getMode(), token.state);
if (innerMode.mode.helperType === "json") return;
token.state = innerMode.state;
// If it's not a 'word-style' token, ignore the token.
if (!/^[\w$_]*$/.test(token.string)) {
token = {start: cur.ch, end: cur.ch, string: "", state: token.state,
type: token.string == "." ? "property" : null};
} else if (token.end > cur.ch) {
token.end = cur.ch;
token.string = token.string.slice(0, cur.ch - token.start);
}
var tprop = token;
// If it is a property, find out what it is a property of.
while (tprop.type == "property") {
tprop = getToken(editor, Pos(cur.line, tprop.start));
if (tprop.string != ".") return;
tprop = getToken(editor, Pos(cur.line, tprop.start));
if (!context) var context = [];
context.push(tprop);
}
return {list: getCompletions(token, context, keywords, options),
from: Pos(cur.line, token.start),
to: Pos(cur.line, token.end)};
}
function javascriptHint(editor, options) {
return scriptHint(editor, javascriptKeywords,
function (e, cur) {return e.getTokenAt(cur);},
options);
};
CodeMirror.registerHelper("hint", "javascript", javascriptHint);
function getCoffeeScriptToken(editor, cur) {
    // This getToken is for CoffeeScript; it imitates the behavior of the
    // getTokenAt method in javascript.js, that is, it returns "property"
    // type tokens and treats "." as an independent token.
var token = editor.getTokenAt(cur);
if (cur.ch == token.start + 1 && token.string.charAt(0) == '.') {
token.end = token.start;
token.string = '.';
token.type = "property";
}
else if (/^\.[\w$_]*$/.test(token.string)) {
token.type = "property";
token.start++;
token.string = token.string.replace(/\./, '');
}
return token;
}
function coffeescriptHint(editor, options) {
return scriptHint(editor, coffeescriptKeywords, getCoffeeScriptToken, options);
}
CodeMirror.registerHelper("hint", "coffeescript", coffeescriptHint);
var stringProps = ("charAt charCodeAt indexOf lastIndexOf substring substr slice trim trimLeft trimRight " +
"toUpperCase toLowerCase split concat match replace search").split(" ");
var arrayProps = ("length concat join splice push pop shift unshift slice reverse sort indexOf " +
"lastIndexOf every some filter forEach map reduce reduceRight ").split(" ");
var funcProps = "prototype apply call bind".split(" ");
var javascriptKeywords = ("break case catch class const continue debugger default delete do else export extends false finally for function " +
"if in import instanceof new null return super switch this throw true try typeof var void while with yield").split(" ");
var coffeescriptKeywords = ("and break catch class continue delete do else extends false finally for " +
"if in instanceof isnt new no not null of off on or return switch then throw true try typeof until void while with yes").split(" ");
function forAllProps(obj, callback) {
if (!Object.getOwnPropertyNames || !Object.getPrototypeOf) {
for (var name in obj) callback(name)
} else {
for (var o = obj; o; o = Object.getPrototypeOf(o))
Object.getOwnPropertyNames(o).forEach(callback)
}
}
function getCompletions(token, context, keywords, options) {
var found = [], start = token.string, global = options && options.globalScope || window;
function maybeAdd(str) {
if (str.lastIndexOf(start, 0) == 0 && !arrayContains(found, str)) found.push(str);
}
function gatherCompletions(obj) {
if (typeof obj == "string") forEach(stringProps, maybeAdd);
else if (obj instanceof Array) forEach(arrayProps, maybeAdd);
else if (obj instanceof Function) forEach(funcProps, maybeAdd);
forAllProps(obj, maybeAdd)
}
if (context && context.length) {
// If this is a property, see if it belongs to some object we can
// find in the current environment.
var obj = context.pop(), base;
if (obj.type && obj.type.indexOf("variable") === 0) {
if (options && options.additionalContext)
base = options.additionalContext[obj.string];
if (!options || options.useGlobalScope !== false)
base = base || global[obj.string];
} else if (obj.type == "string") {
base = "";
} else if (obj.type == "atom") {
base = 1;
} else if (obj.type == "function") {
if (global.jQuery != null && (obj.string == '$' || obj.string == 'jQuery') &&
(typeof global.jQuery == 'function'))
base = global.jQuery();
else if (global._ != null && (obj.string == '_') && (typeof global._ == 'function'))
base = global._();
}
while (base != null && context.length)
base = base[context.pop().string];
if (base != null) gatherCompletions(base);
} else {
// If not, just look in the global object, any local scope, and optional additional-context
// (reading into JS mode internals to get at the local and global variables)
for (var v = token.state.localVars; v; v = v.next) maybeAdd(v.name);
for (var c = token.state.context; c; c = c.prev)
for (var v = c.vars; v; v = v.next) maybeAdd(v.name)
for (var v = token.state.globalVars; v; v = v.next) maybeAdd(v.name);
if (options && options.additionalContext != null)
for (var key in options.additionalContext)
maybeAdd(key);
if (!options || options.useGlobalScope !== false)
gatherCompletions(global);
forEach(keywords, maybeAdd);
}
return found;
}
}); | PypiClean |
/Flask_AppBuilder-4.3.6-py3-none-any.whl/flask_appbuilder/security/sqla/models.py | import datetime
from flask import g
from sqlalchemy import (
Boolean,
Column,
DateTime,
ForeignKey,
Integer,
Sequence,
String,
Table,
UniqueConstraint,
)
from sqlalchemy.ext.declarative import declared_attr
from sqlalchemy.orm import backref, relationship
from ... import Model
from ..._compat import as_unicode
_dont_audit = False
class Permission(Model):
__tablename__ = "ab_permission"
id = Column(Integer, Sequence("ab_permission_id_seq"), primary_key=True)
name = Column(String(100), unique=True, nullable=False)
def __repr__(self):
return self.name
class ViewMenu(Model):
__tablename__ = "ab_view_menu"
id = Column(Integer, Sequence("ab_view_menu_id_seq"), primary_key=True)
name = Column(String(250), unique=True, nullable=False)
def __eq__(self, other):
return (isinstance(other, self.__class__)) and (self.name == other.name)
    def __ne__(self, other):
return self.name != other.name
def __repr__(self):
return self.name
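# Association table for the many-to-many relation between permission/view pairs
# (PermissionView) and roles.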
assoc_permissionview_role = Table(
"ab_permission_view_role",
Model.metadata,
Column("id", Integer, Sequence("ab_permission_view_role_id_seq"), primary_key=True),
Column("permission_view_id", Integer, ForeignKey("ab_permission_view.id")),
Column("role_id", Integer, ForeignKey("ab_role.id")),
UniqueConstraint("permission_view_id", "role_id"),
)
class Role(Model):
__tablename__ = "ab_role"
id = Column(Integer, Sequence("ab_role_id_seq"), primary_key=True)
name = Column(String(64), unique=True, nullable=False)
permissions = relationship(
"PermissionView", secondary=assoc_permissionview_role, backref="role"
)
def __repr__(self):
return self.name
class PermissionView(Model):
__tablename__ = "ab_permission_view"
__table_args__ = (UniqueConstraint("permission_id", "view_menu_id"),)
id = Column(Integer, Sequence("ab_permission_view_id_seq"), primary_key=True)
permission_id = Column(Integer, ForeignKey("ab_permission.id"))
permission = relationship("Permission")
view_menu_id = Column(Integer, ForeignKey("ab_view_menu.id"))
view_menu = relationship("ViewMenu")
def __repr__(self):
return str(self.permission).replace("_", " ") + " on " + str(self.view_menu)
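# Association table for the many-to-many relation between users and roles.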
assoc_user_role = Table(
"ab_user_role",
Model.metadata,
Column("id", Integer, Sequence("ab_user_role_id_seq"), primary_key=True),
Column("user_id", Integer, ForeignKey("ab_user.id")),
Column("role_id", Integer, ForeignKey("ab_role.id")),
UniqueConstraint("user_id", "role_id"),
)
class User(Model):
__tablename__ = "ab_user"
id = Column(Integer, Sequence("ab_user_id_seq"), primary_key=True)
first_name = Column(String(64), nullable=False)
last_name = Column(String(64), nullable=False)
username = Column(String(64), unique=True, nullable=False)
password = Column(String(256))
active = Column(Boolean)
email = Column(String(320), unique=True, nullable=False)
last_login = Column(DateTime)
login_count = Column(Integer)
fail_login_count = Column(Integer)
roles = relationship("Role", secondary=assoc_user_role, backref="user")
created_on = Column(
DateTime, default=lambda: datetime.datetime.now(), nullable=True
)
changed_on = Column(
DateTime, default=lambda: datetime.datetime.now(), nullable=True
)
@declared_attr
def created_by_fk(self):
return Column(
Integer, ForeignKey("ab_user.id"), default=self.get_user_id, nullable=True
)
@declared_attr
def changed_by_fk(self):
return Column(
Integer, ForeignKey("ab_user.id"), default=self.get_user_id, nullable=True
)
created_by = relationship(
"User",
backref=backref("created", uselist=True),
remote_side=[id],
primaryjoin="User.created_by_fk == User.id",
uselist=False,
)
changed_by = relationship(
"User",
backref=backref("changed", uselist=True),
remote_side=[id],
primaryjoin="User.changed_by_fk == User.id",
uselist=False,
)
@classmethod
def get_user_id(cls):
try:
return g.user.id
except Exception:
return None
@property
def is_authenticated(self):
return True
@property
def is_active(self):
return self.active
@property
def is_anonymous(self):
return False
def get_id(self):
return as_unicode(self.id)
def get_full_name(self):
return "{0} {1}".format(self.first_name, self.last_name)
def __repr__(self):
return self.get_full_name()
class RegisterUser(Model):
__tablename__ = "ab_register_user"
id = Column(Integer, Sequence("ab_register_user_id_seq"), primary_key=True)
first_name = Column(String(64), nullable=False)
last_name = Column(String(64), nullable=False)
username = Column(String(64), unique=True, nullable=False)
password = Column(String(256))
email = Column(String(64), nullable=False)
registration_date = Column(DateTime, default=datetime.datetime.now, nullable=True)
registration_hash = Column(String(256)) | PypiClean |
/OctoBot-Trading-2.4.23.tar.gz/OctoBot-Trading-2.4.23/octobot_trading/personal_data/state.py | import asyncio
import contextlib
import octobot_commons.logging as logging
import octobot_trading.enums as enums
import octobot_trading.util as util
class State(util.Initializable):
PENDING_REFRESH_INTERVAL = 2
def __init__(self, is_from_exchange_data):
super().__init__()
# default state
self.state = enums.States.UNKNOWN
# if this state has been created from exchange data or OctoBot internal mechanism
self.is_from_exchange_data = is_from_exchange_data
# state lock
self.lock = asyncio.Lock()
# set after self.terminate has been executed (with or without raised exception)
self.terminated = asyncio.Event()
# set at True after synchronize has been called
self.has_already_been_synchronized_once = False
def is_pending(self) -> bool:
"""
:return: True if the state is pending for update
"""
return self.state is enums.States.UNKNOWN
def is_refreshing(self) -> bool:
"""
:return: True if the state is updating
"""
return self.state is enums.States.REFRESHING
def is_open(self) -> bool:
"""
:return: True if the instance is considered as open
"""
return not self.is_closed()
def is_closed(self) -> bool:
"""
:return: True if the instance is considered as closed
"""
return False
def get_logger(self):
"""
:return: the instance logger
"""
return logging.get_logger(self.__class__.__name__)
def log_event_message(self, state_message, error=None):
"""
Log a state event
"""
self.get_logger().info(state_message.value)
async def initialize_impl(self) -> None:
"""
Default async State initialization process
"""
await self.update()
def sync_initialize(self, forced=False):
"""
Default sync initialization process
"""
if not self.is_initialized or forced:
self.sync_update()
self.is_initialized = True
return True
return False
def sync_update(self):
if not self.is_refreshing():
if self.is_pending():
raise NotImplementedError("can't use sync_update on a pending state")
else:
self.trigger_sync_terminate()
else:
self.log_event_message(enums.StatesMessages.ALREADY_SYNCHRONIZING)
async def should_be_updated(self) -> bool:
"""
Defines if the instance should be updated
:return: True if the instance should be updated when necessary
"""
return True
async def update(self) -> None:
"""
Update the instance state if necessary.
Necessary when the state is not already synchronizing and when the instance should be updated.
Try to fix the pending state or terminate.
"""
if not self.is_refreshing():
if self.is_pending() and not await self.should_be_updated():
self.log_event_message(enums.StatesMessages.SYNCHRONIZING)
await self.synchronize()
else:
await self.trigger_terminate()
else:
self.log_event_message(enums.StatesMessages.ALREADY_SYNCHRONIZING)
async def trigger_terminate(self):
try:
async with self.lock:
await self.terminate()
finally:
self.on_terminate()
def trigger_sync_terminate(self):
try:
self.sync_terminate()
finally:
self.on_terminate()
async def synchronize(self, force_synchronization=False, catch_exception=False) -> None:
"""
Implement the exchange synchronization process
Should begin by setting the state to REFRESHING
Should end by :
- calling terminate if the state is terminated
- restoring the initial state if nothing has been changed with synchronization or if sync failed
        :param force_synchronization: When True, force the update of the order from the exchange
        :param catch_exception: When False, raises the Exception during synchronize_order instead of catching it silently
"""
try:
await self.synchronize_with_exchange(force_synchronization=force_synchronization)
except Exception as e:
if catch_exception:
self.log_event_message(enums.StatesMessages.SYNCHRONIZING_ERROR, error=e)
else:
raise
finally:
self.has_already_been_synchronized_once = True
async def synchronize_with_exchange(self, force_synchronization: bool = False) -> None:
"""
Ask the exchange to update the order only if the state is not already refreshing
When the refreshing process starts set the state to enums.States.REFRESHING
Restore the previous state if the refresh process fails
        :param force_synchronization: When True, force the update of the order from the exchange
"""
if self.is_refreshing():
self.log_event_message(enums.StatesMessages.ALREADY_SYNCHRONIZING)
else:
async with self.refresh_operation():
await self._synchronize_with_exchange(force_synchronization=force_synchronization)
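    # Async context manager that flips the state to REFRESHING for the duration
    # of a refresh and restores the previous state on exit, unless the refresh
    # itself moved the state elsewhere.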
@contextlib.asynccontextmanager
async def refresh_operation(self):
self.get_logger().debug("Starting refresh_operation")
previous_state = self.state
async with self.lock:
self.state = enums.States.REFRESHING
try:
yield
finally:
async with self.lock:
if self.state is enums.States.REFRESHING:
self.state = previous_state
self.get_logger().debug("Completed refresh_operation")
async def _synchronize_with_exchange(self, force_synchronization: bool = False) -> None:
"""
Called when state should be refreshed
:param force_synchronization: When True, for the update of the order from the exchange
"""
raise NotImplementedError("_synchronize_with_exchange not implemented")
async def terminate(self) -> None:
"""
Implement the state ending process
Can be portfolio updates, fees request, orders group updates, Trade creation etc...
"""
raise NotImplementedError("terminate not implemented")
def sync_terminate(self) -> None:
"""
Implement the state ending process
Can be portfolio updates, fees request, orders group updates, Trade creation etc...
"""
raise NotImplementedError("sync_terminate not implemented")
def on_terminate(self) -> None:
"""
Called after terminate is complete
"""
self.get_logger().debug(f"{self.__class__.__name__} terminated")
if not self.terminated.is_set():
self.terminated.set()
def __del__(self):
if not self.terminated.is_set() and self.terminated._waiters:
self.get_logger().error(f"{self.__class__.__name__} deleted before the terminated "
f"event has been set while tasks are waiting for it. "
f"Force setting event.")
self.terminated.set()
async def wait_for_terminate(self, timeout) -> None:
if self.terminated.is_set():
return
await asyncio.wait_for(self.terminated.wait(), timeout=timeout)
async def wait_for_next_state(self, timeout) -> None:
raise NotImplementedError("wait_for_next_state is not implemented")
async def on_refresh_successful(self):
"""
Called when synchronize succeed to update the instance
"""
raise NotImplementedError("on_refresh_successful not implemented")
def clear(self):
"""
Clear references
"""
raise NotImplementedError("clear not implemented") | PypiClean |
/FastDataTime-0.0.8.tar.gz/FastDataTime-0.0.8/README.md | # FastDataTime time processing package
Usage instructions are kept in the author's third-party repository and can be consulted there:
[Tutorial link](https://codechina.csdn.net/qq_53280175/fastdatatime)
FastDataTime is just a simple time processing library, which can output the time quickly.
Contact information: QQ: 2097632843, e-mail: [email protected]
Usage
Import the library: from FastDataTime import OStime
or: from FastDataTime import *
Explaining the OStime function: OStime(time_a), where time_a accepts many values.
Example:
from FastDataTime import OStime as OS
OS.OStime('help')
Result:
The display time corresponding to each command is as follows:
==============================================================
1. nyr: get the current year, month and day. Usage: OStime('nyr')
2. 12xs: get the current time in 12-hour form. Usage: OStime('12xs')
3. 24xs: get the current time in 24-hour form. Usage: OStime('24xs')
4. jh: get the current year, month, day, hour, minute and second. Usage: OStime('jh')
5. m: get the second of the current time, as numeric output. Usage: OStime('m')
6. f: get the minute of the current time, as numeric output. Usage: OStime('f')
7. s: get the hour of the current time, as numeric output. Usage: OStime('s')
8. r: get the date (day) of the current time, as numeric output. Usage: OStime('r')
9. y: get the month of the current time, as numeric output. Usage: OStime('y')
10. n: get the year of the current time, as numeric output. Usage: OStime('n')
11. GMT-8-Time_MS: get the GMT-8 time in milliseconds, as numeric output. Usage: OStime('GMT-8-Time_MS'); interval 1000 ms
12. Running_time: get the program running time (with some error), as numeric output. Usage: OStime('Running_time')
13. stamp: get the timestamp
14. ctime: get the time in ctime form ('Wed Sep ...')
Explaining the Dtime function
Import: from FastDataTime import Dtime
Example:
from FastDataTime import Dtime
Dtime.run_time() # continuously output the system time
number accepts an integer or a float and sets the output interval:
Dtime.run_time('number')
For example: Dtime.run_time('0.1') # output the system time every 0.1 ms
Explaining the programtime function
Import: from FastDataTime import programtime
Example:
from FastDataTime import programtime as pr
pr.get_program('it')
Parameters: get_program(com, get_stmt, get_number, re, get_print)
com: the command; there are two commands, 'it' and 'at'.
The 'it' command gets the program running time; a run count can be added.
Parameters under the 'it' command: get_stmt: the program variable name to be measured. get_number: the number of runs.
For example:
get_program('it', xxx, '5') # xxx represents the variable name and 5 represents the number of runs
Parameters under the 'at' command: get_stmt: the program variable name to be measured. re: the number of repetitions. get_number: the number of times to calculate.
For example:
get_program('at', xxx, '2', '5') # xxx represents the variable name, 2 represents the number of repetitions, and 5 represents the number of runs
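A short combined sketch of the calls documented above (a hedged illustration: xxx stands for the variable holding the code to be timed, as in the examples above, and whether these calls print or return their values may vary between versions):
from FastDataTime import OStime as OS, Dtime, programtime as pr
OS.OStime('jh') # current date and time, down to the second
OS.OStime('stamp') # current timestamp
pr.get_program('it', xxx, '5') # time the code referenced by xxx over 5 runs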
Remaining to be improved
######################################################################
PYmili
Contact me
QQ: 2097632843
E-mail: [email protected]
Version: 0.0.5 | PypiClean |
/CiteSoft-0.3.8-py3-none-any.whl/CiteSoft-0.3.8.data/data/README.md | CiteSoft_py is a Python implementation of CiteSoft. CiteSoft is a plain-text standard consisting of a format and a protocol that exports citations for end-users for whichever software packages they have used. CiteSoft has been designed so that software dev-users can rely upon it regardless of coding language or platform, and even for cases where multiple codes are working in a coupled manner.
CiteSoft can be installed by 'pip install citesoft', which will also install semantic-version (https://pypi.org/project/semantic-version/) and PyYAML (https://pypi.org/project/PyYAML/).
For the simplest way to learn how to use CiteSoft, open runExample.py then run it. Then open the two CiteSoft txt files generated (CiteSoftwareCheckpointsLog.txt and CiteSoftwareConsolidatedLog.txt), and also MathExample.py to see what happened.
Basically, when runExample.py is run, citations are generated in a "Checkpoint" file (based on the module that was called and the functions that were called inside MathExample.py). Finally, the citations are consolidated with duplicate entries removed.
There are two types of users of CiteSoft: dev-users and end-users.
FOR DEV-USERS:
There are two syntaxes for dev-users to include citations to their work. The only truly required fields are the unique_id (typically a URL or a DOI) and the software_name. The other valid_optional_fields are encouraged: ["version", "cite", "author", "doi", "url", "encoding", "misc"]. These optional fields are put into kwargs (see MathExample.py for syntax). In this module, all optional fields can be provided as lists of strings or individual strings (such as a list of authors).
1) An "import_cite" which causes a citation to be made when the the module is first imported.
CiteSoft.import_cite(unique_id=MathExample_unique_id, software_name="MathLib Example", write_immediately=True, **kwargs)
2) A "module_call_cite" which causes a citation to be made when a function in the module is actually called.
@CiteSoft.module_call_cite(unique_id=MathExample_unique_id, software_name="MathLib Example", **kwargs)
Subsequently, one would use compile_checkpoints_log & compile_consolidated_log (direct CiteSoft module functions), or @CiteSoft.after_call_compile_checkpoints_log & @CiteSoft.after_call_compile_consolidated_log (CiteSoft decorators) to create CiteSoftwareCheckpointsLog.txt and CiteSoftwareConsolidatedLog.txt.
For class-based codes, a logical choice is to use a pair of calls like this before a class's init function:
@CiteSoft.after_call_compile_consolidated_log()
@CiteSoft.module_call_cite(unique_id=MathExample_unique_id, software_name="MathLib Example", **kwargs)
def __init__(...)
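For orientation, here is a minimal sketch of a dev-user module putting these pieces together (hedged: it assumes the package is imported as "import CiteSoft", as the calls in this README suggest, and the unique_id value, optional fields, and the cube function are purely illustrative):
import CiteSoft
example_unique_id = "https://example.org/my-math-module" # hypothetical URL-style unique_id
example_kwargs = {"version": "1.0.0", "author": "Jane Doe"} # illustrative optional fields
@CiteSoft.after_call_compile_consolidated_log()
@CiteSoft.module_call_cite(unique_id=example_unique_id, software_name="My Math Module", **example_kwargs)
def cube(x):
    return x ** 3
Calling cube(3) would then record a citation checkpoint and consolidate the entries, with duplicates removed, into CiteSoftwareConsolidatedLog.txt.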
CiteSoftLocal is NOT a full version of CiteSoft: it is file that only exports Checkpoints and which dev-users can include for distribution with their packages as a 'backup' in case an end-user tries to run the dev-user's package under conditions where CiteSoft or its dependencies are not installed.
FOR END-USERS:
The end-user may find the CiteSoftwareConsolidatedLog.txt to be convenient, but the authoritative list is the one inside CiteSoftwareCheckpoints.txt (though the checkpoint file may include duplicates). The end-user is responsible for citing ALL software used. To facilitate ease of doing so, the dev-user should call the consolidate command when appropriate (such as at the end of a simulation).
A typical CiteSoft entry looks like below (between the 'pre' tags):
<pre>
-
timestamp: >-
2020-08-25T11:43:30
unique_id: >-
https://docs.python.org/3/library/math.html
software_name: >-
The Python Library Reference: Mathematical functions
version:
- >-
3.6.3
author:
- >-
Van Rossum, Guido
cite:
- >-
Van Rossum, G. (2020). The Python Library Reference, release 3.8.2. Python Software Foundation.
</pre>
| PypiClean |
/LinOTP-2.11.1.tar.gz/LinOTP-2.11.1/linotp/tokens/__init__.py |
from linotp.lib.registry import ClassRegistry
from linotp.lib.error import TokenTypeNotSupportedError
from linotp.config.environment import get_activated_token_modules
from os import path, listdir, walk
import logging
log = logging.getLogger(__name__)
# ------------------------------------------------------------------------------
tokenclass_registry = ClassRegistry()
# ------------------------------------------------------------------------------
def reload_classes():
"""
iterates through the modules in this package
    and imports every single one of them
"""
# ---------------------------------------------------------------------- --
# if there is a list of predefined tokens in the linotp.ini
activated_modules = get_activated_token_modules()
if activated_modules:
for activated_module in activated_modules:
load_module(activated_module)
return
# ---------------------------------------------------------------------- --
# if no activated tokens specified, we import the local tokens
import_base = "linotp.tokens."
abs_file = path.abspath(__file__)
base_dir = path.dirname(abs_file)
    # walk all modules below the token package directory
for root, _subdirs, sfiles in walk(base_dir):
# remove the filesystem base
rel = root.replace(base_dir, '').replace(path.sep, '.').strip('.')
if rel:
rel = rel + '.'
for sfile in sfiles:
if sfile.endswith('.py') and not sfile.startswith('__'):
token_module = import_base + rel + sfile[:-3]
load_module(token_module)
return
def load_module(mod_rel):
"""
load a token module from a relative token module name
    :param mod_rel: relative module name of the token module to load
    :raises: TokenTypeNotSupportedError or generic Exception
"""
try:
__import__(mod_rel, globals=globals())
return True
except TokenTypeNotSupportedError:
log.warning('Token type not supported on this setup: %s', mod_rel)
except Exception as exx:
log.warning('unable to load token module : %r (%r)', mod_rel, exx)
return False
reload_classes() | PypiClean |
/HavNegpy-1.2.tar.gz/HavNegpy-1.2/docs/_build/html/_build/doctrees/nbsphinx/_build/html/_build/doctrees/nbsphinx/_build/doctrees/nbsphinx/_build/html/hn_module_tutorial.ipynb | # Tutorial for the HN module of HavNegpy package
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
import HavNegpy as dd
%matplotlib qt
os.chdir(r'M:\Marshall_Data\mohamed_data\mohamed_data\n44')
def create_dataframe(f):
col_names = ['Freq', 'T', 'Eps1', 'Eps2']
#f = input(str("Enter the filename:"))
df = pd.read_csv(f, sep=r"\s+",index_col=False,usecols = [0,1,2,3],names=col_names,header=None,skiprows=4,encoding='unicode_escape',engine='python')
col1 = ['log f']
for start in range(0, len(df), 63):
name = df['T'][start]
#print(name)
col1.append(name)
df2 = pd.DataFrame()
f1 = df['Freq'][0:63].values
x1 = np.log10((f1))
e = pd.DataFrame(x1)
df2['log f'] = pd.concat([e],axis=1,ignore_index=True)
global Cooling,Heating
for start in range(0, len(df), 63):
f = df['Eps2'][start:start+63].values
ep = np.log10(f)
d = pd.DataFrame(ep)
df2[start] = pd.concat([d],axis=1,ignore_index=True)
df2.columns = col1
'''
a = int(len(col1)/3)
b = 2*a
c = int(len(col1)) - b
Heating1 = df2.iloc[8:,0:a+1]
Cooling = df2.iloc[8:,a+1:b+1]
Heating2 = df2.iloc[8:,b+1:]
heat1_col = col1[0:a+1]
cool_col = col1[a+1:b+1]
heat2_col = col1[b+1:]
Cooling.columns = cool_col
Heating1.columns = heat1_col
Heating2.columns = heat2_col
f2 = df['Freq'][8:59].values
x2 = np.log10((f2))
Cooling['Freq'] = x2
Heating1['Freq'] = x2
Heating2['Freq'] = x2
'''
Cooling = df2.iloc[:,0:25]
Heating = df2.iloc[:,25:]
return df,df2,Cooling,Heating #Heating2
df,df2,cool,heat = create_dataframe('EPS.TXT')
x,y = df2['log f'][9:], heat[40][9:]
plt.figure()
plt.scatter(x,y,label='data for fitting')
plt.xlabel('log f [Hz]')
plt.ylabel('log $\epsilon$"')
plt.legend()
plt.title('Example for HN fitting')
```
image of the plot we are using in this tutorial

```
''' instantiate the HN module from HavgNegpy'''
hn = dd.HN()
''' select range to perform hn fitting'''
''' the select_range function pops up a separate window and allows you two clicks to select the region of interest (ROI)'''
''' In this tutorial, I'll plot the ROI and append as an image in the next cell'''
x1,y1 = hn.select_range(x,y)
''' view the data from select range'''
plt.scatter(x1,y1,label = 'Data for fitting')
plt.xlabel('log f [Hz]')
plt.ylabel('log $\epsilon$"')
plt.legend()
plt.title('ROI selected from HN module')
```
image of the ROI from HN module
```
''' dump the initial guess parameters using the dump_parameters method (varies for each fit function), which dumps the parameters into a json file'''
''' this is required before performing the first fitting as it takes the initial guess from the json file created'''
hn.dump_parameters_hn()
''' view the initial guess for the ROI using initial_view method'''
''' I'll append the image in the next cell'''
hn.initial_view_hn(x1,y1)
```
image of the initial guess
```
''' perform least squares fitting'''
''' The image of the curve fit is added in the next cell '''
hn.fit(x1,y1)
```
Example of the fit performed using single HN function
the procedure is similar for double HN and HN with conductivity

```
'''create a file to save fit results using create_analysis file method'''
''' before saving fit results an analysis file has to be created '''
hn.create_analysis_file()
''' save the fit results using save_fit method of the corresponding fit function'''
''' takes one argument, read more on the documentation'''
hn.save_fit_hn(1)
```
| PypiClean |
/AuthorizeSauce-0.5.0.tar.gz/AuthorizeSauce-0.5.0/authorize/data.py | import calendar
from datetime import datetime
import re
from authorize.exceptions import AuthorizeInvalidError
CARD_TYPES = {
'visa': r'4\d{12}(\d{3})?$',
'amex': r'37\d{13}$',
'mc': r'5[1-5]\d{14}$',
'discover': r'6011\d{12}$',
'diners': r'(30[0-5]\d{11}|(36|38)\d{12})$'
}
class CreditCard(object):
"""
Represents a credit card that can be charged.
Pass in the credit card number, expiration date, CVV code, and optionally
a first name and last name. The card will be validated upon instantiation
and will raise an
:class:`AuthorizeInvalidError <authorize.exceptions.AuthorizeInvalidError>`
for invalid credit card numbers, past expiration dates, etc.
"""
def __init__(self, card_number=None, exp_year=None, exp_month=None,
cvv=None, first_name=None, last_name=None):
self.card_number = re.sub(r'\D', '', str(card_number))
self.exp_year = str(exp_year)
self.exp_month = str(exp_month)
self.cvv = str(cvv)
self.first_name = first_name
self.last_name = last_name
self.validate()
def __repr__(self):
return '<CreditCard {0.card_type} {0.safe_number}>'.format(self)
def validate(self):
"""
Validates the credit card data and raises an
:class:`AuthorizeInvalidError <authorize.exceptions.AuthorizeInvalidError>`
if anything doesn't check out. You shouldn't have to call this
yourself.
"""
try:
num = [int(n) for n in self.card_number]
except ValueError:
raise AuthorizeInvalidError('Credit card number is not valid.')
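# Luhn checksum: from the right, double every second digit, sum the digits
# of each doubled value, then add the remaining digits; a valid card number
# makes the total divisible by 10.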
if sum(num[::-2] + [sum(divmod(d * 2, 10)) for d in num[-2::-2]]) % 10:
raise AuthorizeInvalidError('Credit card number is not valid.')
if datetime.now() > self.expiration:
raise AuthorizeInvalidError('Credit card is expired.')
if not re.match(r'^\d{3,4}$', self.cvv):
raise AuthorizeInvalidError('Credit card CVV is invalid format.')
if not self.card_type:
raise AuthorizeInvalidError('Credit card number is not valid.')
@staticmethod
def exp_time(exp_month, exp_year):
exp_year, exp_month = int(exp_year), int(exp_month)
return datetime(exp_year, exp_month,
calendar.monthrange(exp_year, exp_month)[1],
23, 59, 59)
@property
def expiration(self):
"""
The credit card expiration date as a ``datetime`` object.
"""
return self.exp_time(self.exp_month, self.exp_year)
@property
def safe_number(self):
"""
The credit card number with all but the last four digits masked. This
is useful for storing a representation of the card without keeping
sensitive data.
"""
mask = '*' * (len(self.card_number) - 4)
return '{0}{1}'.format(mask, self.card_number[-4:])
@property
def card_type(self):
"""
The credit card issuer, such as Visa or American Express, which is
determined from the credit card number. Recognizes Visa, American
Express, MasterCard, Discover, and Diners Club.
"""
for card_type, card_type_re in CARD_TYPES.items():
if re.match(card_type_re, self.card_number):
return card_type
class Address(object):
"""
Represents a billing address for a charge. Pass in the street, city, state
and zip code, and optionally country for the address.
"""
def __init__(self, street=None, city=None, state=None, zip_code=None,
country='US'):
self.street = street
self.city = city
self.state = state
self.zip_code = zip_code
self.country = country
def __repr__(self):
return '<Address {0.street}, {0.city}, {0.state} {0.zip_code}>' \
.format(self) | PypiClean |
/Office365_REST_with_timeout-0.1.1-py3-none-any.whl/office365/sharepoint/sharing/role_type.py | class RoleType:
"""Specifies the types of role definitions that are available for users and groups."""
None_ = 0
"""The role definition has no rights on the site (2)."""
Guest = 1
"""The role definition has limited right to view pages and specific page elements.
This role is used to give users access to a particular page, list, or item in a list, without granting rights
to view the entire site (2). Users cannot be added explicitly to the guest role; users who are given access to
lists or document libraries by using permissions for a specific list are added automatically to the guest role.
The guest role cannot be customized or deleted."""
Reader = 2
"""
The role definition has a right to view items, personalize Web Parts, use alerts, and create a top-level site.
A reader can only read a site (2); the reader cannot add content. When a reader creates a site (2), the reader
becomes the site (2) owner and a member of the administrator role for the new site (2).
This does not affect the user's role membership for any other site (2).
Rights included are CreateSSCSite, ViewListItems, and ViewPages.
"""
Contributor = 3
"""
The role definition has reader rights, and a right to add items, edit items, delete items, manage list permissions,
manage personal views, personalize Web Part Pages, and browse directories. Includes all rights in the reader role,
and AddDelPrivateWebParts, AddListItems, BrowseDirectories, CreatePersonalGroups, DeleteListItems, EditListItems,
ManagePersonalViews, and UpdatePersonalWebParts roles. Contributors cannot create new lists or document libraries,
but they can add content to existing lists and document libraries.
"""
WebDesigner = 4
"""Has Contributor rights, plus rights to cancel check-out, delete items, manage lists, add and customize pages,
define and apply themes and borders, and link style sheets. Includes all rights in the Contributor role, plus the
following: AddAndCustomizePages, ApplyStyleSheets, ApplyThemeAndBorder, CancelCheckout, ManageLists. WebDesigners
can modify the structure of the site and create new lists or document libraries. The value = 4."""
Administrator = 5
"""Has all rights from other roles, plus rights to manage roles and view usage analysis data.
Includes all rights in the WebDesigner role, plus the following: ManageListPermissions, ManageRoles, ManageSubwebs,
ViewUsageData. The Administrator role cannot be customized or deleted, and must always contain at least one member.
Members of the Administrator role always have access to, or can grant themselves access to, any item in the
Web site. The value = 5."""
Editor = 6
"""
The role definition has reader rights, plus the right to review items. Includes all rights in the Reader role, plus
the following: ReviewListItems. Reviewers cannot create new lists and document libraries, but they can add
comments to existing list items or documents.
"""
Reviewer = 7
"""The role definition is a special reader role definition with restricted rights, it has a right to view
the content of the file but cannot download the file directly. Includes all rights in the Guest role, plus
the following: ViewPages, ViewListItems and ViewItemsRequiresOpen."""
RestrictedReader = 8
System = 255
"""For SharePoint internal use only. System roles can not be deleted, nor modified.""" | PypiClean |
/Flask_RESTful_DRY-0.3.1-py3-none-any.whl/flask_dry/api/links.py |
r'''Global link registry.
Links are registered at program startup. First, they need a flask app:
>>> from .app import DRY_Flask
>>> current_app = DRY_Flask(__name__) # overrides import for testing
And some authorization requirements:
>>> from .authorization import Requirement, Base_auth_context
>>> class Test_requirement(Requirement):
... def validate(self, context, debug):
... return self.role == context.role
>>> admin_role = Test_requirement(role='admin')
>>> user_role = Test_requirement(role='user')
And authorization contexts:
>>> class Auth_context(Base_auth_context):
... def __init__(self, role):
... Base_auth_context.__init__(self)
... self.role = role
>>> admin_context = Auth_context('admin')
>>> user_context = Auth_context('user')
>>> other_context = Auth_context('other')
Now we can register 5 links:
>>> register_link(current_app, (), 'rc1', 'r1.1', 'url1.1', ())
>>> register_link(current_app, (), 'rc1', 'r1.2', 'url1.2', (admin_role,))
>>> register_link(current_app, (), 'rc2', 'r2.1', 'url2.1', (user_role,))
>>> register_link(current_app, ('a',), 'rc3', 'r3.1', 'url3.1/{a}',
... ())
>>> register_link(current_app, ('a', 'b'), 'rc4', 'r4.1', 'url4.1/{a}/{b}',
... (admin_role, user_role))
Which can be looked up en-mass as needed:
>>> from pprint import pprint
>>> with current_app.app_context():
... pprint(get_relation_category_links('rc1', other_context))
({'rc1': {'r1.1': 'url1.1'}}, False)
>>> with current_app.app_context():
... pprint(get_relation_category_links('rc1', admin_context))
({'rc1': {'r1.1': 'url1.1', 'r1.2': 'url1.2'}}, False)
>>> with current_app.app_context():
... pprint(get_keyed_links(dict(a='a_value'), other_context))
({'rc3': {'r3.1': 'url3.1/a_value'}}, True)
>>> with current_app.app_context():
... pprint(get_keyed_links(dict(a='a_value', b='b_value', c='c_value'),
... other_context))
({'rc3': {'r3.1': 'url3.1/a_value'}}, False)
>>> with current_app.app_context():
... pprint(get_keyed_links(dict(a='a_value', b='b_value', c='c_value'),
... admin_context))
({'rc3': {'r3.1': 'url3.1/a_value'},
'rc4': {'r4.1': 'url4.1/a_value/b_value'}},
False)
'''
from itertools import chain, combinations, groupby
from operator import itemgetter
from flask import current_app
from .authorization import Anybody
def register_link(app, keys, relation_category, relation, url_format,
authorization_requirements):
# Condense requirements
requirements = []
for r in authorization_requirements:
if isinstance(r, Anybody):
requirements = []
break
if not any(r.equivalent(r2) for r2 in requirements):
requirements.append(r)
if not keys:
app.dry_relation_category_links[relation_category].append(
(relation, url_format, tuple(requirements)))
else:
app.dry_keyed_links[tuple(sorted(keys))].append(
(relation_category, relation, url_format, tuple(requirements)))
def get_relation_category_links(relation_category, auth_context):
r'''Returns {relation_category: {relation: url}}, cache_public.
'''
links, cache_public = _filter(
current_app.dry_relation_category_links[relation_category],
auth_context)
if links:
return {relation_category: dict(links)}, cache_public
return {}, cache_public
def get_all_relation_category_links(auth_context):
r'''Returns {relation_category: {relation: url}}, cache_public.
'''
cache_public = True
all_rc_links = {}
for rc in current_app.dry_relation_category_links.keys():
rc_links, public = get_relation_category_links(rc, auth_context)
all_rc_links.update(rc_links)
if not public:
cache_public = False
return all_rc_links, cache_public
def get_keyed_links(keys, auth_context):
r'''Returns {relation_category: {relation: url}}, cache_public.
Grabs all subsets of keys. Substitutes the key values into the urls.
`keys` is {key_name: key_value}.
'''
auth_links, cache_public = _filter(
chain.from_iterable(current_app.dry_keyed_links[tuple(sorted(subset))]
for subset in powerset(keys.keys())),
auth_context)
return ({relation_category: {relation: url_format.format(**keys)
for _, relation, url_format in links}
for relation_category, links
in groupby(sorted(auth_links, key=itemgetter(0)),
key=itemgetter(0))},
cache_public)
def _filter(links, auth_context):
r'''Returns authorized links, cache_public.
'''
auth_links = []
cache_public = True
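# any link guarded by authorization requirements is user-specific, so the
# combined result must not be cached publicly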
for *link, requirements in links:
if requirements:
cache_public = False
if any(auth_context.meets(r, debug=False) for r in requirements):
auth_links.append(link)
else:
auth_links.append(link)
return auth_links, cache_public
# From itertools recipes, modified to not generate the empty sequence.
def powerset(iterable):
r'''
>>> tuple(powerset([1,2,3]))
((1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3))
'''
s = tuple(iterable)
return chain.from_iterable(combinations(s, r) for r in range(1, len(s)+1)) | PypiClean |
/Firenado-0.9.0a2.tar.gz/Firenado-0.9.0a2/firenado/service.py |
from .sqlalchemy import with_session
import functools
import importlib
import logging
logger = logging.getLogger(__name__)
class FirenadoService:
""" Base class to handle services. A Firenado service is usually connected
to a handler or a service.
The developer can add extra configuration using the configuration_service
method and can get a data source from the data connected instance using
either get_data_sources or get_data_source methods.
"""
def __init__(self, consumer, data_source=None):
self.consumer = consumer
self.data_source = data_source
self.configure_service()
def configure_service(self):
""" Use this method to add configurations to your service class.
"""
pass
def get_data_source(self, name):
""" Returns a data source by its given name.
"""
return self.get_data_sources()[name]
def get_data_sources(self):
""" If a data connected is returned then returns all data sources.
If not returns None.
:return: The data connected data sources
"""
data_connected = self.data_connected
if data_connected is not None:
return data_connected.data_sources
return None
@property
def data_connected(self):
""" Will recurse over services until the data connected instance.
If service has no consumer returns None.
:return: The data connected object in the top of the hierarchy.
"""
if self.consumer is None:
return None
from firenado.data import DataConnectedMixin
if isinstance(self.consumer, DataConnectedMixin):
return self.consumer
invert_op = getattr(self.consumer, "get_data_connected", None)
if callable(invert_op):
return self.consumer.get_data_connected()
return self.consumer.data_connected
def served_by(service, attribute_name=None):
import warnings
warnings.warn(
"The \"firenado.service.served_by\" function is depreciated. "
"Please use firenado.service.with_service instead.",
DeprecationWarning, 2)
return with_service(service, attribute_name)
def with_service(service, attribute_name=None):
""" Decorator that connects a service to a service consumer.
"""
def f_wrapper(method):
@functools.wraps(method)
def wrapper(self, *args, **kwargs):
if isinstance(service, str):
service_x = service.split('.')
service_module = importlib.import_module(
'.'.join(service_x[:-1]))
service_class = getattr(service_module, service_x[-1])
service_name = service_x[-1]
else:
service_class = service
service_name = service.__name__
service_attribute = ''
if attribute_name is None:
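# derive a snake_case attribute name from the service class name,
# e.g. AccountService -> account_service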
first = True
for s in service_name:
if s.isupper():
if first:
service_attribute = ''.join([
service_attribute, s.lower()])
else:
service_attribute = ''.join([
service_attribute, '_', s.lower()])
else:
service_attribute = ''.join([service_attribute, s])
first = False
else:
service_attribute = attribute_name
must_set_service = False
if not hasattr(self, service_attribute):
must_set_service = True
else:
if getattr(self, service_attribute) is None:
must_set_service = True
if must_set_service:
setattr(self, service_attribute, service_class(self))
return method(self, *args, **kwargs)
return wrapper
return f_wrapper
def sessionned(*args, **kwargs):
import warnings
warnings.warn(
"The \"firenado.service.sessioned\" module is depreciated. "
"Please use firenado.sqlalchemy.with_session instead.",
DeprecationWarning, 2)
return with_session(*args, **kwargs) | PypiClean |
/GeoNode-3.2.0-py3-none-any.whl/geonode/services/serviceprocessors/base.py | import logging
from urllib.parse import quote
from django.conf import settings
from django.urls import reverse
from urllib.parse import urlencode, urlparse, urljoin, parse_qs, urlunparse
from geonode import geoserver
from geonode.utils import check_ogc_backend
from .. import enumerations
from .. import models
if check_ogc_backend(geoserver.BACKEND_PACKAGE):
from geonode.geoserver.helpers import gs_catalog as catalog
else:
catalog = None
logger = logging.getLogger(__name__)
def get_proxified_ows_url(url, version='1.3.0', proxy_base=None):
"""
clean an OWS URL of basic service elements
source: https://stackoverflow.com/a/11640565
"""
if url is None or not url.startswith('http'):
return url
filtered_kvp = {}
basic_service_elements = ('service', 'version', 'request')
parsed = urlparse(url)
qd = parse_qs(parsed.query, keep_blank_values=True)
version = qd['version'][0] if 'version' in qd else version
for key, value in qd.items():
if key.lower() not in basic_service_elements:
filtered_kvp[key] = value
base_ows_url = urlunparse([
parsed.scheme,
parsed.netloc,
parsed.path,
parsed.params,
quote(urlencode(filtered_kvp, doseq=True), safe=''),
parsed.fragment
])
ows_request = quote(
urlencode(
qd,
doseq=True),
safe='') if qd else f"version%3D{version}%26request%3DGetCapabilities%26service%3Dwms"
proxy_base = proxy_base if proxy_base else urljoin(
settings.SITEURL, reverse('proxy'))
ows_url = quote(base_ows_url, safe='')
proxified_url = f"{proxy_base}?url={ows_url}%3F{ows_request}"
return (version, proxified_url, base_ows_url)
def get_geoserver_cascading_workspace(create=True):
"""Return the geoserver workspace used for cascaded services
The workspace can be created if needed.
"""
name = getattr(settings, "CASCADE_WORKSPACE", "cascaded-services")
workspace = catalog.get_workspace(name)
if workspace is None and create:
uri = f"http://www.geonode.org/{name}"
workspace = catalog.create_workspace(name, uri)
return workspace
class ServiceHandlerBase(object):
"""Base class for remote service handlers
This class is not to be instantiated directly, but rather subclassed by
concrete implementations. The method stubs defined here must be implemented
in derived classes.
"""
url = None
service_type = None
name = ""
indexing_method = None
def __init__(self, url):
self.url = url
@property
def is_cascaded(self):
return True if self.indexing_method == enumerations.CASCADED else False
def create_geonode_service(self, owner, parent=None):
"""Create a new geonode.service.models.Service instance
Saving the service instance in the database is not a concern of this
method, it only deals with creating the instance.
:arg owner: The user who will own the service instance
:type owner: geonode.people.models.Profile
"""
raise NotImplementedError
def get_keywords(self):
raise NotImplementedError
def get_resource(self, resource_id):
"""Return a single resource's representation."""
raise NotImplementedError
def get_resources(self):
"""Return an iterable with the service's resources."""
raise NotImplementedError
def harvest_resource(self, resource_id, geonode_service):
"""Harvest a single resource from the service
This method creates new ``geonode.layers.models.Layer``
instances (and their related objects too) and save them in the
database.
:arg resource_id: The resource's identifier
:type resource_id: str
:arg geonode_service: The already saved service instance
:type geonode_service: geonode.services.models.Service
"""
raise NotImplementedError
def has_resources(self):
raise NotImplementedError
def has_unharvested_resources(self, geonode_service):
already_done = list(models.HarvestJob.objects.values_list(
"resource_id", flat=True).filter(service=geonode_service))
for resource in self.get_resources():
if resource.id not in already_done:
result = True
break
else:
result = False
return result
class CascadableServiceHandlerMixin(object):
def create_cascaded_store(self):
raise NotImplementedError | PypiClean |
/Adeepspeed-0.9.2.tar.gz/Adeepspeed-0.9.2/deepspeed/comm/utils.py |
# DeepSpeed Team
import os
import inspect
from deepspeed.utils import get_caller_func
def get_local_rank_from_launcher():
# DeepSpeed launcher will set it so get from there
rank = os.environ.get('LOCAL_RANK')
if rank is None:
rank = os.environ.get('OMPI_COMM_WORLD_LOCAL_RANK')
# Make it a single process job and set rank to 0
if rank is None:
rank = 0
return int(rank)
def get_world_rank_from_launcher():
# DeepSpeed launcher will set it so get from there
rank = os.environ.get('RANK')
if rank is None:
rank = os.environ.get('OMPI_COMM_WORLD_RANK')
# Make it a single process job and set rank to 0
if rank is None:
rank = 0
return int(rank)
def get_world_size_from_launcher():
# DeepSpeed launcher will set it so get from there
size = os.environ.get('WORLD_SIZE')
rank = os.environ.get('RANK')
if size is None:
size = os.environ.get('OMPI_COMM_WORLD_SIZE')
# Make it a single process job and set size to 1
if size is None:
size = 1
# RANK arrives from the environment as a string, so normalize it before comparing
if rank is None or int(rank) == 0:
print(f"set world size to {size}")
return int(size)
def get_default_args(func):
signature = inspect.signature(func)
return {k: v.default for k, v in signature.parameters.items() if v.default is not inspect.Parameter.empty}
# We need this hacky function since torch doesn't consistently name or place the input tensor args
def get_tensor_position(func):
sig_params = inspect.signature(func).parameters
arg = None
# most colls
if 'tensor' in sig_params:
arg = 'tensor'
# reduce scatter coll
elif 'input_list' in sig_params:
arg = 'input_list'
# all_to_all and torch multiGPU colls
elif 'input_tensor_list' in sig_params:
arg = 'input_tensor_list'
if arg is None:
return -1
else:
return list(sig_params).index(arg)
def get_tensor_kwarg(func, kwargs):
func_args = get_default_args(func)
func_args.update(kwargs)
arg = None
if 'tensor' in func_args:
arg = func_args['tensor']
elif 'input_list' in func_args:
arg = func_args['input_list']
elif 'input_tensor_list' in func_args:
arg = func_args['input_tensor_list']
return arg
def get_msg_size_from_args(func, *args, **kwargs):
# 3 cases:
# - tensor arg is in args
# - tensor arg is in kwargs
# - tensor arg is not present (e.g. barrier)
tensor_arg_position = -1
tensor_arg = None
# check if tensor arg is in args
if len(args) > 0:
tensor_arg_position = get_tensor_position(func)
if tensor_arg_position > -1:
tensor_arg = args[get_tensor_position(func)]
# check if tensor arg is in kwargs
if tensor_arg is None and len(kwargs) > 0:
tensor_arg = get_tensor_kwarg(func, kwargs)
# if tensor arg is not present, no data is being transmitted
if tensor_arg is None:
return 0
else:
# Sum of tensor sizes for list colls such as torch's all_to_all
# NOTE: msg_size for list colls will not be the actual size transmitted by a given MPI/NCCL call within the coll op. Instead, it's the total amount of data transmitted.
if type(tensor_arg) is list:
return sum(x.element_size() * x.nelement() for x in tensor_arg)
else:
return tensor_arg.element_size() * tensor_arg.nelement()
def get_debug_log_name(func_args, debug):
if debug:
return func_args['log_name'] + ' | [Caller Func: ' + get_caller_func() + ']'
else:
return func_args['log_name'] | PypiClean |
/Euphorie-15.0.2.tar.gz/Euphorie-15.0.2/src/euphorie/client/resources/oira/script/chunks/27856.874d647f4a8608c8f3b2.min.js | (self.webpackChunk_patternslib_patternslib=self.webpackChunk_patternslib_patternslib||[]).push([[27856],{27856:function(e){
/*! @license DOMPurify 3.0.3 | (c) Cure53 and other contributors | Released under the Apache license 2.0 and Mozilla Public License 2.0 | github.com/cure53/DOMPurify/blob/3.0.3/LICENSE */
e.exports=function(){"use strict";const{entries:e,setPrototypeOf:t,isFrozen:n,getPrototypeOf:o,getOwnPropertyDescriptor:r}=Object;let{freeze:i,seal:a,create:l}=Object,{apply:c,construct:s}="undefined"!=typeof Reflect&&Reflect;c||(c=function(e,t,n){return e.apply(t,n)}),i||(i=function(e){return e}),a||(a=function(e){return e}),s||(s=function(e,t){return new e(...t)});const u=_(Array.prototype.forEach),m=_(Array.prototype.pop),p=_(Array.prototype.push),f=_(String.prototype.toLowerCase),d=_(String.prototype.toString),h=_(String.prototype.match),g=_(String.prototype.replace),T=_(String.prototype.indexOf),y=_(String.prototype.trim),E=_(RegExp.prototype.test),A=b(TypeError);function _(e){return function(t){for(var n=arguments.length,o=new Array(n>1?n-1:0),r=1;r<n;r++)o[r-1]=arguments[r];return c(e,t,o)}}function b(e){return function(){for(var t=arguments.length,n=new Array(t),o=0;o<t;o++)n[o]=arguments[o];return s(e,n)}}function N(e,o,r){var i;r=null!==(i=r)&&void 0!==i?i:f,t&&t(e,null);let a=o.length;for(;a--;){let t=o[a];if("string"==typeof t){const e=r(t);e!==t&&(n(o)||(o[a]=e),t=e)}e[t]=!0}return e}function S(t){const n=l(null);for(const[o,r]of e(t))n[o]=r;return n}function R(e,t){for(;null!==e;){const n=r(e,t);if(n){if(n.get)return _(n.get);if("function"==typeof n.value)return _(n.value)}e=o(e)}function n(e){return console.warn("fallback value for",e),null}return n}const w=i(["a","abbr","acronym","address","area","article","aside","audio","b","bdi","bdo","big","blink","blockquote","body","br","button","canvas","caption","center","cite","code","col","colgroup","content","data","datalist","dd","decorator","del","details","dfn","dialog","dir","div","dl","dt","element","em","fieldset","figcaption","figure","font","footer","form","h1","h2","h3","h4","h5","h6","head","header","hgroup","hr","html","i","img","input","ins","kbd","label","legend","li","main","map","mark","marquee","menu","menuitem","meter","nav","nobr","ol","optgroup","option","output","p","picture","pre","progress","q","rp","rt","ruby","s","samp","section","select","shadow","small","source","spacer","span","strike","strong","style","sub","summary","sup","table","tbody","td","template","textarea","tfoot","th","thead","time","tr","track","tt","u","ul","var","video","wbr"]),D=i(["svg","a","altglyph","altglyphdef","altglyphitem","animatecolor","animatemotion","animatetransform","circle","clippath","defs","desc","ellipse","filter","font","g","glyph","glyphref","hkern","image","line","lineargradient","marker","mask","metadata","mpath","path","pattern","polygon","polyline","radialgradient","rect","stop","style","switch","symbol","text","textpath","title","tref","tspan","view","vkern"]),L=i(["feBlend","feColorMatrix","feComponentTransfer","feComposite","feConvolveMatrix","feDiffuseLighting","feDisplacementMap","feDistantLight","feDropShadow","feFlood","feFuncA","feFuncB","feFuncG","feFuncR","feGaussianBlur","feImage","feMerge","feMergeNode","feMorphology","feOffset","fePointLight","feSpecularLighting","feSpotLight","feTile","feTurbulence"]),k=i(["animate","color-profile","cursor","discard","font-face","font-face-format","font-face-name","font-face-src","font-face-uri","foreignobject","hatch","hatchpath","mesh","meshgradient","meshpatch","meshrow","missing-glyph","script","set","solidcolor","unknown","use"]),C=i(["math","menclose","merror","mfenced","mfrac","mglyph","mi","mlabeledtr","mmultiscripts","mn","mo","mover","mpadded","mphantom","mroot","mrow","ms","mspace","msqrt","mstyle","msub","msup","msubsup","mtable","mtd","mtext","mtr","munder
","munderover","mprescripts"]),x=i(["maction","maligngroup","malignmark","mlongdiv","mscarries","mscarry","msgroup","mstack","msline","msrow","semantics","annotation","annotation-xml","mprescripts","none"]),v=i(["#text"]),O=i(["accept","action","align","alt","autocapitalize","autocomplete","autopictureinpicture","autoplay","background","bgcolor","border","capture","cellpadding","cellspacing","checked","cite","class","clear","color","cols","colspan","controls","controlslist","coords","crossorigin","datetime","decoding","default","dir","disabled","disablepictureinpicture","disableremoteplayback","download","draggable","enctype","enterkeyhint","face","for","headers","height","hidden","high","href","hreflang","id","inputmode","integrity","ismap","kind","label","lang","list","loading","loop","low","max","maxlength","media","method","min","minlength","multiple","muted","name","nonce","noshade","novalidate","nowrap","open","optimum","pattern","placeholder","playsinline","poster","preload","pubdate","radiogroup","readonly","rel","required","rev","reversed","role","rows","rowspan","spellcheck","scope","selected","shape","size","sizes","span","srclang","start","src","srcset","step","style","summary","tabindex","title","translate","type","usemap","valign","value","width","xmlns","slot"]),I=i(["accent-height","accumulate","additive","alignment-baseline","ascent","attributename","attributetype","azimuth","basefrequency","baseline-shift","begin","bias","by","class","clip","clippathunits","clip-path","clip-rule","color","color-interpolation","color-interpolation-filters","color-profile","color-rendering","cx","cy","d","dx","dy","diffuseconstant","direction","display","divisor","dur","edgemode","elevation","end","fill","fill-opacity","fill-rule","filter","filterunits","flood-color","flood-opacity","font-family","font-size","font-size-adjust","font-stretch","font-style","font-variant","font-weight","fx","fy","g1","g2","glyph-name","glyphref","gradientunits","gradienttransform","height","href","id","image-rendering","in","in2","k","k1","k2","k3","k4","kerning","keypoints","keysplines","keytimes","lang","lengthadjust","letter-spacing","kernelmatrix","kernelunitlength","lighting-color","local","marker-end","marker-mid","marker-start","markerheight","markerunits","markerwidth","maskcontentunits","maskunits","max","mask","media","method","mode","min","name","numoctaves","offset","operator","opacity","order","orient","orientation","origin","overflow","paint-order","path","pathlength","patterncontentunits","patterntransform","patternunits","points","preservealpha","preserveaspectratio","primitiveunits","r","rx","ry","radius","refx","refy","repeatcount","repeatdur","restart","result","rotate","scale","seed","shape-rendering","specularconstant","specularexponent","spreadmethod","startoffset","stddeviation","stitchtiles","stop-color","stop-opacity","stroke-dasharray","stroke-dashoffset","stroke-linecap","stroke-linejoin","stroke-miterlimit","stroke-opacity","stroke","stroke-width","style","surfacescale","systemlanguage","tabindex","targetx","targety","transform","transform-origin","text-anchor","text-decoration","text-rendering","textlength","type","u1","u2","unicode","values","viewbox","visibility","version","vert-adv-y","vert-origin-x","vert-origin-y","width","word-spacing","wrap","writing-mode","xchannelselector","ychannelselector","x","x1","x2","xmlns","y","y1","y2","z","zoomandpan"]),M=i(["accent","accentunder","align","bevelled","close","columnsalign","columnlines","columnspan","denomalign","depth","dir","displ
ay","displaystyle","encoding","fence","frame","height","href","id","largeop","length","linethickness","lspace","lquote","mathbackground","mathcolor","mathsize","mathvariant","maxsize","minsize","movablelimits","notation","numalign","open","rowalign","rowlines","rowspacing","rowspan","rspace","rquote","scriptlevel","scriptminsize","scriptsizemultiplier","selection","separator","separators","stretchy","subscriptshift","supscriptshift","symmetric","voffset","width","xmlns"]),U=i(["xlink:href","xml:id","xlink:title","xml:space","xmlns:xlink"]),P=a(/\{\{[\w\W]*|[\w\W]*\}\}/gm),F=a(/<%[\w\W]*|[\w\W]*%>/gm),H=a(/\${[\w\W]*}/gm),z=a(/^data-[\-\w.\u00B7-\uFFFF]/),B=a(/^aria-[\-\w]+$/),W=a(/^(?:(?:(?:f|ht)tps?|mailto|tel|callto|sms|cid|xmpp):|[^a-z]|[a-z+.\-]+(?:[^a-z+.\-:]|$))/i),G=a(/^(?:\w+script|data):/i),Y=a(/[\u0000-\u0020\u00A0\u1680\u180E\u2000-\u2029\u205F\u3000]/g),j=a(/^html$/i);var q=Object.freeze({__proto__:null,MUSTACHE_EXPR:P,ERB_EXPR:F,TMPLIT_EXPR:H,DATA_ATTR:z,ARIA_ATTR:B,IS_ALLOWED_URI:W,IS_SCRIPT_OR_DATA:G,ATTR_WHITESPACE:Y,DOCTYPE_NAME:j});const X=()=>"undefined"==typeof window?null:window,K=function(e,t){if("object"!=typeof e||"function"!=typeof e.createPolicy)return null;let n=null;const o="data-tt-policy-suffix";t&&t.hasAttribute(o)&&(n=t.getAttribute(o));const r="dompurify"+(n?"#"+n:"");try{return e.createPolicy(r,{createHTML(e){return e},createScriptURL(e){return e}})}catch(e){return console.warn("TrustedTypes policy "+r+" could not be created."),null}};function V(){let t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:X();const n=e=>V(e);if(n.version="3.0.3",n.removed=[],!t||!t.document||9!==t.document.nodeType)return n.isSupported=!1,n;const o=t.document,r=o.currentScript;let{document:a}=t;const{DocumentFragment:l,HTMLTemplateElement:c,Node:s,Element:_,NodeFilter:b,NamedNodeMap:P=t.NamedNodeMap||t.MozNamedAttrMap,HTMLFormElement:F,DOMParser:H,trustedTypes:z}=t,B=_.prototype,G=R(B,"cloneNode"),Y=R(B,"nextSibling"),$=R(B,"childNodes"),Z=R(B,"parentNode");if("function"==typeof c){const e=a.createElement("template");e.content&&e.content.ownerDocument&&(a=e.content.ownerDocument)}let J,Q="";const{implementation:ee,createNodeIterator:te,createDocumentFragment:ne,getElementsByTagName:oe}=a,{importNode:re}=o;let ie={};n.isSupported="function"==typeof e&&"function"==typeof Z&&ee&&void 0!==ee.createHTMLDocument;const{MUSTACHE_EXPR:ae,ERB_EXPR:le,TMPLIT_EXPR:ce,DATA_ATTR:se,ARIA_ATTR:ue,IS_SCRIPT_OR_DATA:me,ATTR_WHITESPACE:pe}=q;let{IS_ALLOWED_URI:fe}=q,de=null;const he=N({},[...w,...D,...L,...C,...v]);let ge=null;const Te=N({},[...O,...I,...M,...U]);let ye=Object.seal(Object.create(null,{tagNameCheck:{writable:!0,configurable:!1,enumerable:!0,value:null},attributeNameCheck:{writable:!0,configurable:!1,enumerable:!0,value:null},allowCustomizedBuiltInElements:{writable:!0,configurable:!1,enumerable:!0,value:!1}})),Ee=null,Ae=null,_e=!0,be=!0,Ne=!1,Se=!0,Re=!1,we=!1,De=!1,Le=!1,ke=!1,Ce=!1,xe=!1,ve=!0,Oe=!1;const Ie="user-content-";let Me=!0,Ue=!1,Pe={},Fe=null;const He=N({},["annotation-xml","audio","colgroup","desc","foreignobject","head","iframe","math","mi","mn","mo","ms","mtext","noembed","noframes","noscript","plaintext","script","style","svg","template","thead","title","video","xmp"]);let ze=null;const Be=N({},["audio","video","img","source","image","track"]);let We=null;const 
Ge=N({},["alt","class","for","id","label","name","pattern","placeholder","role","summary","title","value","style","xmlns"]),Ye="http://www.w3.org/1998/Math/MathML",je="http://www.w3.org/2000/svg",qe="http://www.w3.org/1999/xhtml";let Xe=qe,Ke=!1,Ve=null;const $e=N({},[Ye,je,qe],d);let Ze;const Je=["application/xhtml+xml","text/html"],Qe="text/html";let et,tt=null;const nt=a.createElement("form"),ot=function(e){return e instanceof RegExp||e instanceof Function},rt=function(e){if(!tt||tt!==e){if(e&&"object"==typeof e||(e={}),e=S(e),Ze=Ze=-1===Je.indexOf(e.PARSER_MEDIA_TYPE)?Qe:e.PARSER_MEDIA_TYPE,et="application/xhtml+xml"===Ze?d:f,de="ALLOWED_TAGS"in e?N({},e.ALLOWED_TAGS,et):he,ge="ALLOWED_ATTR"in e?N({},e.ALLOWED_ATTR,et):Te,Ve="ALLOWED_NAMESPACES"in e?N({},e.ALLOWED_NAMESPACES,d):$e,We="ADD_URI_SAFE_ATTR"in e?N(S(Ge),e.ADD_URI_SAFE_ATTR,et):Ge,ze="ADD_DATA_URI_TAGS"in e?N(S(Be),e.ADD_DATA_URI_TAGS,et):Be,Fe="FORBID_CONTENTS"in e?N({},e.FORBID_CONTENTS,et):He,Ee="FORBID_TAGS"in e?N({},e.FORBID_TAGS,et):{},Ae="FORBID_ATTR"in e?N({},e.FORBID_ATTR,et):{},Pe="USE_PROFILES"in e&&e.USE_PROFILES,_e=!1!==e.ALLOW_ARIA_ATTR,be=!1!==e.ALLOW_DATA_ATTR,Ne=e.ALLOW_UNKNOWN_PROTOCOLS||!1,Se=!1!==e.ALLOW_SELF_CLOSE_IN_ATTR,Re=e.SAFE_FOR_TEMPLATES||!1,we=e.WHOLE_DOCUMENT||!1,ke=e.RETURN_DOM||!1,Ce=e.RETURN_DOM_FRAGMENT||!1,xe=e.RETURN_TRUSTED_TYPE||!1,Le=e.FORCE_BODY||!1,ve=!1!==e.SANITIZE_DOM,Oe=e.SANITIZE_NAMED_PROPS||!1,Me=!1!==e.KEEP_CONTENT,Ue=e.IN_PLACE||!1,fe=e.ALLOWED_URI_REGEXP||W,Xe=e.NAMESPACE||qe,ye=e.CUSTOM_ELEMENT_HANDLING||{},e.CUSTOM_ELEMENT_HANDLING&&ot(e.CUSTOM_ELEMENT_HANDLING.tagNameCheck)&&(ye.tagNameCheck=e.CUSTOM_ELEMENT_HANDLING.tagNameCheck),e.CUSTOM_ELEMENT_HANDLING&&ot(e.CUSTOM_ELEMENT_HANDLING.attributeNameCheck)&&(ye.attributeNameCheck=e.CUSTOM_ELEMENT_HANDLING.attributeNameCheck),e.CUSTOM_ELEMENT_HANDLING&&"boolean"==typeof e.CUSTOM_ELEMENT_HANDLING.allowCustomizedBuiltInElements&&(ye.allowCustomizedBuiltInElements=e.CUSTOM_ELEMENT_HANDLING.allowCustomizedBuiltInElements),Re&&(be=!1),Ce&&(ke=!0),Pe&&(de=N({},[...v]),ge=[],!0===Pe.html&&(N(de,w),N(ge,O)),!0===Pe.svg&&(N(de,D),N(ge,I),N(ge,U)),!0===Pe.svgFilters&&(N(de,L),N(ge,I),N(ge,U)),!0===Pe.mathMl&&(N(de,C),N(ge,M),N(ge,U))),e.ADD_TAGS&&(de===he&&(de=S(de)),N(de,e.ADD_TAGS,et)),e.ADD_ATTR&&(ge===Te&&(ge=S(ge)),N(ge,e.ADD_ATTR,et)),e.ADD_URI_SAFE_ATTR&&N(We,e.ADD_URI_SAFE_ATTR,et),e.FORBID_CONTENTS&&(Fe===He&&(Fe=S(Fe)),N(Fe,e.FORBID_CONTENTS,et)),Me&&(de["#text"]=!0),we&&N(de,["html","head","body"]),de.table&&(N(de,["tbody"]),delete Ee.tbody),e.TRUSTED_TYPES_POLICY){if("function"!=typeof e.TRUSTED_TYPES_POLICY.createHTML)throw A('TRUSTED_TYPES_POLICY configuration option must provide a "createHTML" hook.');if("function"!=typeof e.TRUSTED_TYPES_POLICY.createScriptURL)throw A('TRUSTED_TYPES_POLICY configuration option must provide a "createScriptURL" hook.');J=e.TRUSTED_TYPES_POLICY,Q=J.createHTML("")}else void 0===J&&(J=K(z,r)),null!==J&&"string"==typeof Q&&(Q=J.createHTML(""));i&&i(e),tt=e}},it=N({},["mi","mo","mn","ms","mtext"]),at=N({},["foreignobject","desc","title","annotation-xml"]),lt=N({},["title","style","font","a","script"]),ct=N({},D);N(ct,L),N(ct,k);const st=N({},C);N(st,x);const ut=function(e){let t=Z(e);t&&t.tagName||(t={namespaceURI:Xe,tagName:"template"});const 
n=f(e.tagName),o=f(t.tagName);return!!Ve[e.namespaceURI]&&(e.namespaceURI===je?t.namespaceURI===qe?"svg"===n:t.namespaceURI===Ye?"svg"===n&&("annotation-xml"===o||it[o]):Boolean(ct[n]):e.namespaceURI===Ye?t.namespaceURI===qe?"math"===n:t.namespaceURI===je?"math"===n&&at[o]:Boolean(st[n]):e.namespaceURI===qe?!(t.namespaceURI===je&&!at[o])&&!(t.namespaceURI===Ye&&!it[o])&&!st[n]&&(lt[n]||!ct[n]):!("application/xhtml+xml"!==Ze||!Ve[e.namespaceURI]))},mt=function(e){p(n.removed,{element:e});try{e.parentNode.removeChild(e)}catch(t){e.remove()}},pt=function(e,t){try{p(n.removed,{attribute:t.getAttributeNode(e),from:t})}catch(e){p(n.removed,{attribute:null,from:t})}if(t.removeAttribute(e),"is"===e&&!ge[e])if(ke||Ce)try{mt(t)}catch(e){}else try{t.setAttribute(e,"")}catch(e){}},ft=function(e){let t,n;if(Le)e="<remove></remove>"+e;else{const t=h(e,/^[\r\n\t ]+/);n=t&&t[0]}"application/xhtml+xml"===Ze&&Xe===qe&&(e='<html xmlns="http://www.w3.org/1999/xhtml"><head></head><body>'+e+"</body></html>");const o=J?J.createHTML(e):e;if(Xe===qe)try{t=(new H).parseFromString(o,Ze)}catch(e){}if(!t||!t.documentElement){t=ee.createDocument(Xe,"template",null);try{t.documentElement.innerHTML=Ke?Q:o}catch(e){}}const r=t.body||t.documentElement;return e&&n&&r.insertBefore(a.createTextNode(n),r.childNodes[0]||null),Xe===qe?oe.call(t,we?"html":"body")[0]:we?t.documentElement:r},dt=function(e){return te.call(e.ownerDocument||e,e,b.SHOW_ELEMENT|b.SHOW_COMMENT|b.SHOW_TEXT,null,!1)},ht=function(e){return e instanceof F&&("string"!=typeof e.nodeName||"string"!=typeof e.textContent||"function"!=typeof e.removeChild||!(e.attributes instanceof P)||"function"!=typeof e.removeAttribute||"function"!=typeof e.setAttribute||"string"!=typeof e.namespaceURI||"function"!=typeof e.insertBefore||"function"!=typeof e.hasChildNodes)},gt=function(e){return"object"==typeof s?e instanceof s:e&&"object"==typeof e&&"number"==typeof e.nodeType&&"string"==typeof e.nodeName},Tt=function(e,t,o){ie[e]&&u(ie[e],(e=>{e.call(n,t,o,tt)}))},yt=function(e){let t;if(Tt("beforeSanitizeElements",e,null),ht(e))return mt(e),!0;const o=et(e.nodeName);if(Tt("uponSanitizeElement",e,{tagName:o,allowedTags:de}),e.hasChildNodes()&&!gt(e.firstElementChild)&&(!gt(e.content)||!gt(e.content.firstElementChild))&&E(/<[/\w]/g,e.innerHTML)&&E(/<[/\w]/g,e.textContent))return mt(e),!0;if(!de[o]||Ee[o]){if(!Ee[o]&&At(o)){if(ye.tagNameCheck instanceof RegExp&&E(ye.tagNameCheck,o))return!1;if(ye.tagNameCheck instanceof Function&&ye.tagNameCheck(o))return!1}if(Me&&!Fe[o]){const t=Z(e)||e.parentNode,n=$(e)||e.childNodes;if(n&&t)for(let o=n.length-1;o>=0;--o)t.insertBefore(G(n[o],!0),Y(e))}return mt(e),!0}return e instanceof _&&!ut(e)?(mt(e),!0):"noscript"!==o&&"noembed"!==o||!E(/<\/no(script|embed)/i,e.innerHTML)?(Re&&3===e.nodeType&&(t=e.textContent,t=g(t,ae," "),t=g(t,le," "),t=g(t,ce," "),e.textContent!==t&&(p(n.removed,{element:e.cloneNode()}),e.textContent=t)),Tt("afterSanitizeElements",e,null),!1):(mt(e),!0)},Et=function(e,t,n){if(ve&&("id"===t||"name"===t)&&(n in a||n in nt))return!1;if(be&&!Ae[t]&&E(se,t));else if(_e&&E(ue,t));else if(!ge[t]||Ae[t]){if(!(At(e)&&(ye.tagNameCheck instanceof RegExp&&E(ye.tagNameCheck,e)||ye.tagNameCheck instanceof Function&&ye.tagNameCheck(e))&&(ye.attributeNameCheck instanceof RegExp&&E(ye.attributeNameCheck,t)||ye.attributeNameCheck instanceof Function&&ye.attributeNameCheck(t))||"is"===t&&ye.allowCustomizedBuiltInElements&&(ye.tagNameCheck instanceof RegExp&&E(ye.tagNameCheck,n)||ye.tagNameCheck instanceof 
Function&&ye.tagNameCheck(n))))return!1}else if(We[t]);else if(E(fe,g(n,pe,"")));else if("src"!==t&&"xlink:href"!==t&&"href"!==t||"script"===e||0!==T(n,"data:")||!ze[e])if(Ne&&!E(me,g(n,pe,"")));else if(n)return!1;return!0},At=function(e){return e.indexOf("-")>0},_t=function(e){let t,o,r,i;Tt("beforeSanitizeAttributes",e,null);const{attributes:a}=e;if(!a)return;const l={attrName:"",attrValue:"",keepAttr:!0,allowedAttributes:ge};for(i=a.length;i--;){t=a[i];const{name:c,namespaceURI:s}=t;if(o="value"===c?t.value:y(t.value),r=et(c),l.attrName=r,l.attrValue=o,l.keepAttr=!0,l.forceKeepAttr=void 0,Tt("uponSanitizeAttribute",e,l),o=l.attrValue,l.forceKeepAttr)continue;if(pt(c,e),!l.keepAttr)continue;if(!Se&&E(/\/>/i,o)){pt(c,e);continue}Re&&(o=g(o,ae," "),o=g(o,le," "),o=g(o,ce," "));const u=et(e.nodeName);if(Et(u,r,o)){if(!Oe||"id"!==r&&"name"!==r||(pt(c,e),o=Ie+o),J&&"object"==typeof z&&"function"==typeof z.getAttributeType)if(s);else switch(z.getAttributeType(u,r)){case"TrustedHTML":o=J.createHTML(o);break;case"TrustedScriptURL":o=J.createScriptURL(o)}try{s?e.setAttributeNS(s,c,o):e.setAttribute(c,o),m(n.removed)}catch(e){}}}Tt("afterSanitizeAttributes",e,null)},bt=function e(t){let n;const o=dt(t);for(Tt("beforeSanitizeShadowDOM",t,null);n=o.nextNode();)Tt("uponSanitizeShadowNode",n,null),yt(n)||(n.content instanceof l&&e(n.content),_t(n));Tt("afterSanitizeShadowDOM",t,null)};return n.sanitize=function(e){let t,r,i,a,c=arguments.length>1&&void 0!==arguments[1]?arguments[1]:{};if(Ke=!e,Ke&&(e="\x3c!--\x3e"),"string"!=typeof e&&!gt(e)){if("function"!=typeof e.toString)throw A("toString is not a function");if("string"!=typeof(e=e.toString()))throw A("dirty is not a string, aborting")}if(!n.isSupported)return e;if(De||rt(c),n.removed=[],"string"==typeof e&&(Ue=!1),Ue){if(e.nodeName){const t=et(e.nodeName);if(!de[t]||Ee[t])throw A("root node is forbidden and cannot be sanitized in-place")}}else if(e instanceof s)t=ft("\x3c!----\x3e"),r=t.ownerDocument.importNode(e,!0),1===r.nodeType&&"BODY"===r.nodeName||"HTML"===r.nodeName?t=r:t.appendChild(r);else{if(!ke&&!Re&&!we&&-1===e.indexOf("<"))return J&&xe?J.createHTML(e):e;if(t=ft(e),!t)return ke?null:xe?Q:""}t&&Le&&mt(t.firstChild);const u=dt(Ue?e:t);for(;i=u.nextNode();)yt(i)||(i.content instanceof l&&bt(i.content),_t(i));if(Ue)return e;if(ke){if(Ce)for(a=ne.call(t.ownerDocument);t.firstChild;)a.appendChild(t.firstChild);else a=t;return(ge.shadowroot||ge.shadowrootmod)&&(a=re.call(o,a,!0)),a}let m=we?t.outerHTML:t.innerHTML;return we&&de["!doctype"]&&t.ownerDocument&&t.ownerDocument.doctype&&t.ownerDocument.doctype.name&&E(j,t.ownerDocument.doctype.name)&&(m="<!DOCTYPE "+t.ownerDocument.doctype.name+">\n"+m),Re&&(m=g(m,ae," "),m=g(m,le," "),m=g(m,ce," ")),J&&xe?J.createHTML(m):m},n.setConfig=function(e){rt(e),De=!0},n.clearConfig=function(){tt=null,De=!1},n.isValidAttribute=function(e,t,n){tt||rt({});const o=et(e),r=et(t);return Et(o,r,n)},n.addHook=function(e,t){"function"==typeof t&&(ie[e]=ie[e]||[],p(ie[e],t))},n.removeHook=function(e){if(ie[e])return m(ie[e])},n.removeHooks=function(e){ie[e]&&(ie[e]=[])},n.removeAllHooks=function(){ie={}},n}return V()}()}}]);
//# sourceMappingURL=27856.874d647f4a8608c8f3b2.min.js.map | PypiClean |
/BlueWhale3_Text-1.6.0-py3-none-any.whl/orangecontrib/text/datasets/README.txt | The following files originate from the Ana Cardoso Cachopo's Homepage:
[http://ana.cachopo.org/datasets-for-single-label-text-categorization]
and primarily focus on single-topic text categorization.
* 20newsgroups-train.tab [20Newsgroups dataset]
* 20newsgroups-test.tab [20Newsgroups dataset]
* reuters-r8-train.tab [Reuters-21578 dataset]
* reuters-r8-test.tab [Reuters-21578 dataset]
* reuters-r52-train.tab [Reuters-21578 dataset]
* reuters-r52-test.tab [Reuters-21578 dataset]
About the data sets:
* 20Newsgroups dataset: This dataset is a collection of approximately 20,000 newsgroup documents, partitioned
(nearly) evenly across 20 different newsgroups.
* Reuters-21578 dataset: These documents appeared on the Reuters newswire in 1987 and were manually classified
by personnel from Reuters Ltd.
* r8: only 8 most frequent topics
* r52: all 52 topics found in documents with only one label
About the preprocessing:
 * all-terms: Obtained from the original datasets by applying the following transformations (sketched in Python below):
* Substitute TAB, NEWLINE and RETURN characters by SPACE.
* Keep only letters (that is, turn punctuation, numbers, etc. into SPACES).
* Turn all letters to lowercase.
* Substitute multiple SPACES by a single SPACE.
* The title/subject of each document is simply added in the beginning of the document's text.
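A minimal Python sketch of the transformations listed above (illustrative
only; the cleaning script actually used by the authors is not part of this
package):

    import re

    def clean_text(text):
        # substitute TAB, NEWLINE and RETURN characters by SPACE
        text = re.sub(r"[\t\n\r]", " ", text)
        # keep only letters (turn punctuation, numbers, etc. into SPACES)
        text = re.sub(r"[^a-zA-Z]", " ", text)
        # turn all letters to lowercase
        text = text.lower()
        # substitute multiple SPACES by a single SPACE
        return re.sub(r" +", " ", text).strip()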
-----------------------------------------------------------------------------------------------------------------------
Dataset: Book excerpts
In this example we are trying to separate text written for children from that written
for adults using multidimensional scaling. There are 140 texts divided into two categories:
children and adult. The texts are all in English.
-----------------------------------------------------------------------------------------------------------------------
Dataset: Deerwester
Small data set of 9 documents, 5 about human-computer interaction and 4 about graphs.
This data set originates from the paper about Latent Semantic Analysis [1].
[1] Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., & Harshman, R. (1990). Indexing by Latent Semantic
Analysis. Journal of the American Society for Information Science, 41(6), 391–407.
-----------------------------------------------------------------------------------------------------------------------
Dataset: Friends Transcripts
Transcripts from the Friends series. Originating from http://home.versatel.nl/friendspic0102/
-----------------------------------------------------------------------------------------------------------------------
Dataset: Election Tweets 2016
Tweets from the major party candidates for the 2016 US Presidential Election Hillary Clinton and Donald Trump.
This data set is a simplified version of the data set from: https://www.kaggle.com/benhamner/clinton-trump-tweets
-----------------------------------------------------------------------------------------------------------------------
Dataset: Andersen.tab
Three Andersen's stories. Originating from: http://hca.gilead.org.il
-----------------------------------------------------------------------------------------------------------------------
Dataset: Grimm-tales.tab
Selected Grimm tales. Originating from: http://www.pitt.edu/~dash/grimmtales.html
-----------------------------------------------------------------------------------------------------------------------
Dataset: Grimm-tales-short.tab
Animal tales and tales of magic. Originating from: http://www.pitt.edu/~dash/grimmtales.html
| PypiClean |
/Flask-Express-0.1.4.tar.gz/Flask-Express-0.1.4/README.md | # flask-express
<img src="https://raw.githubusercontent.com/marktennyson/flask-express/main/logos/flask-express-logo.png">
# Downloads
[](https://pepy.tech/project/flask-express) [](https://pepy.tech/project/flask-express/month) [](https://pepy.tech/project/flask-express/week)
<br>
#### contributor needed.
provide the interactive service like expressJs for the flask app.
#### Important Links
[PYPI link](https://pypi.org/project/flask-express)
[Github link](https://github.com/marktennyson/flask-express)
[Documentation link](https://marktennyson.github.io/flask-express)
### Basic installation
Use the package manager [pip](https://pypi.org/project/flask-express/) to install flask-express.
```bash
python3 -m pip install flask-express
```
Install from source code
```bash
git clone https://github.com/marktennyson/flask-express.git && cd flask-express/
python3 setup.py install
```
### Introduction to Flask-Express
Flask-Express is here to give you the feel of ExpressJS while using a Flask app.
Basically, you can use the default Request and Response objects as the two parameters of your view functions.
Flask-Express comes with all the features of Flask with some extra features.
We are using the `munch` module to provide attribute-style access, very similar to Javascript.
I think this is enough for the introduction; let's play with the examples mentioned below.
### Examples and Usages
##### Basic example:
inbuilt flask_express.FlaskExpress class
```python
from flask_express import FlaskExpress
app = FlaskExpress(__name__)
@app.get("/")
def index(req, res):
return res.json(req.header)
```
##### Flask 2.0 supports asynchronous view functions. You can implement this with flask-express too.
```python
from flask_express import FlaskExpress
app = FlaskExpress(__name__)
@app.get("/")
async def index(req, res):
return res.json(req.header)
```
##### You can use Python type hints for better readability of the code and auto-completion.
```python
from flask_express import FlaskExpress
from flask_express.typing import Request, Response
app = FlaskExpress(__name__)
@app.get("/")
def index(req:Request, res:Response):
return res.json(req.header)
```
### Basic Documentation
The official and full documentation for this project is available at: https://marktennyson.github.io/flask-express.
I have tried to describe some of the basic features of this project here.
#### Request class:
N.B: all of the properties of the Request class will return an instance of Munch.
This will provide you the feel of the Javascript object.
##### property - json
So if your app is receiving data as json format, you can use `json` property of the request class to access the data.
It's internally using the `get_json` method to provide the data.
For example:
```python
@app.post("/send-json")
def send_json(req, res):
name = req.json.name
email = req.json.email
return res.json(name=name, email=email)
```
##### property - query
This object provides you the url based parameter.
It's internally using the `args` property to provide the data.
For example:
```python
@app.get("/get-query")
def get_query(req, res):
name=req.query.name
email = req.query.email
return res.send(dict(name=name, email=email))
```
##### property - body
This property provides you all the parameters from the form.
It's internally using the `form` property to provide the data.
For example:
```python
@app.get("/get-form-data")
def get_form_data(req, res):
name=req.body.name
email = req.body.email
return res.send(dict(name=name, email=email))
```
##### property - header
This property provides you all the parameters of the request headers.
It's internally using the `headers` property to provide the data.
For example:
```python
@app.get("/get-form-data")
def get_form_data(req, res):
return res.send(req.header)
```
#### Response class
##### function - send_status
This is used to set the response status code.
for example:
```python
@app.route("/set-status")
def set_statuser(req, res):
return res.send_status(404).send("your requested page is not found.")
```
##### function - flash
To flash a message to the UI.
for example:
```python
@app.route('/flash')
def flasher(req, res):
return res.flash("this is the flash message").end()
```
##### function - send
It sends the HTTP response.
for example:
```python
@app.route("/send")
def sender(req, res):
return res.send("hello world")
#or
return res.send("<h1>hello world</h1>")
#or
return res.send_status(404).send("not found")
```
##### function - json
To return the JSON serialized response.
for example:
```python
@app.route("/json")
def jsoner(req, res):
return res.json(name="aniket sarkar")
#or
return res.json({'name': 'aniket sarkar'})
#or
return res.json([1,2,3,4])
```
##### function - end
To end the current response process.
for example:
```python
@app.route("/end")
def ender(req, res):
return res.end()
#or
return res.end(404) # to raise a 404 error.
```
##### function - render
Renders an HTML template and sends the rendered HTML string to the client.
for example:
```python
@app.route('/render')
def renderer(req, res):
context=dict(name="Aniket Sarkar", planet="Pluto")
return res.render("index.html", context)
#or
return res.render("index.html", name="Aniket Sarkar", planet="Pluto")
```
##### function - redirect
Redirects to the specified route.
for example:
```python
@app.post("/login")
def login(req, res):
#if login success
return res.redirect("/dashboard")
```
##### function - get
Gets the header information for the given key.
for example:
```python
@app.route("/get")
def getter(req, res):
print (res.get("Content-Type"))
return res.end()
```
##### function - set
Set the header information.
for example:
```python
@app.route("/header-seter")
def header_setter(req, res):
res.set('Content-Type', 'application/json')
#or
res.set({'Content-Type':'application/json'})
return res.end()
```
##### function - type
Sets the Content-Type HTTP header to the MIME type as determined by the specified type.
for example:
```python
@app.route("/set-mime")
def mimer(req, res):
res.type('application/json')
#or
res.type(".html")
#or
res.type("json")
```
##### function - attachment
Send attachments by using this method.
The default attachment folder name is `attachments`.
You can always change it by changing the config parameter.
the config parameter is `ATTACHMENTS_FOLDER`.
for example:
```python
@app.route('/attachments')
def attach(req, res):
filename = req.query.filename
return res.attachment(filename)
```
##### function - send_file
Sends the contents of a file to the client. It's internally using the `send_file` method from werkzeug.
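for example (a minimal sketch; `res.send_file` is assumed to accept a file path like werkzeug's `send_file`, and the file name here is hypothetical):
```python
@app.route("/download")
def downloader(req, res):
    return res.send_file("attachments/report.pdf")
```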
##### function - clear_cookie
Clear a cookie. Fails silently if key doesn't exist.
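for example (a minimal sketch; the cookie key is hypothetical):
```python
@app.route("/logout")
def logout(req, res):
    res.clear_cookie("session_id")
    return res.redirect("/")
```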
##### function - set_cookie
Sets a cookie.
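for example (a minimal sketch, assuming werkzeug-style `(key, value)` arguments; the values are hypothetical):
```python
@app.route("/remember")
def remember(req, res):
    res.set_cookie("username", "aniket")
    return res.end()
```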
##### function - make_response
Makes an HTTP response. It's the same as `flask.wrappers.Response`.
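for example (a minimal sketch, assuming the same signature as Flask's `make_response`):
```python
@app.route("/raw")
def raw(req, res):
    return res.make_response("OK", 200)
```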
### Development
#### Beautiful Contributors
<a href="https://github.com/flaskAio/navycut/graphs/contributors">
<img src="https://contrib.rocks/image?repo=marktennyson/flask-express" />
</a>
#### Contribution procedure.
1. Create a new issue on github.
2. Fork and clone this repository.
3. Make some changes as required.
4. Write unit test to showcase its functionality.
5. Submit a pull request under `main` branch.
#### Run this project on your local machine.
1. Create a virtual environment in the project root directory.
2. Install all the required dependencies from the requirements.txt file.
3. Make any changes to your local code.
4. Then install the module on your virtual environment using the `python setup.py install` command.
5. The above command will install the `flask-express` module on your virtual environment.
6. Now create a separate project inside the example folder and start testing your code changes.
7. If you face any difficulties performing the above steps, please contact me at: `[email protected]`.
### Future Roadmap
1. Middleware support.
2. Implementation of all the apis of ExpressJs.
3. Auto Swagger documentation using `flask-restplus` and `flask-pydantic` module.
### License
MIT License
Copyright (c) 2021 Aniket Sarkar([email protected]) | PypiClean |
/MarkdownSuperscript-2.1.1.tar.gz/MarkdownSuperscript-2.1.1/docs/installation.rst | .. highlight:: console
============
Installation
============
Stable release
--------------
The easiest way to install Markdown Superscript is to use `pip`_. ::
$ python -m pip install MarkdownSuperscript
This will install the latest stable version. If you need an older
version, you may pin or limit the requirements. ::
$ python -m pip install 'MarkdownSuperscript==2.1.0'
$ python -m pip install 'MarkdownSuperscript>=2.0.0,<3'
If you don't have `pip`_ installed, this `Python installation guide`_ can guide
you through the process.
.. _pip: https://pip.pypa.io/en/stable/
.. _Python installation guide: https://docs.python-guide.org/starting/installation/
From source
------------
The source files for Markdown Superscript can be downloaded from the
`Github repo`_.
You may use pip to install the latest version: ::
$ python -m pip install git+git://github.com/jambonrose/markdown_superscript_extension.git
Alternatively, you can clone the public repository: ::
$ git clone git://github.com/jambonrose/markdown_superscript_extension
Or download the `tarball`_: ::
$ curl -OL https://github.com/jambonrose/markdown_superscript_extension/tarball/development
Once you have a copy of the source, you can install it with: ::
$ python setup.py install
.. _Github repo: https://github.com/jambonrose/markdown_superscript_extension
.. _tarball: https://github.com/jambonrose/markdown_superscript_extension/tarball/development
| PypiClean |
/Gbtestapi0.1-0.1a10-py3-none-any.whl/gailbot/core/pipeline/pipeline.py | from typing import List, Dict, Any
from dataclasses import dataclass
from .component import Component, ComponentState, ComponentResult
from gailbot.core.utils.threads import ThreadPool
from gailbot.core.utils.logger import makelogger
import networkx as nx
Failure = ComponentResult(ComponentState.FAILED, None, 0)
logger = makelogger("pipeline")
@dataclass
class DataStream:
data: Any = None
class Pipeline:
"""
Defines a class for the pipeline that runs the dependency map.
"""
def __init__(
self,
dependency_map: Dict[str, List[str]],
components: Dict[str, Component],
num_threads: int,
):
"""
Dependency map describes the execution order.
"""
self.dependency_map = dependency_map
self.components = components
self.threadpool = ThreadPool(num_threads)
self._generate_dependency_graph(dependency_map, components)
def __repr__(self) -> str:
"""
Accesses the pipeline's dependency graph.
Args:
self
Returns:
            String representation of the pipeline's dependency graph.
"""
return str(self.get_dependency_graph())
def __call__(
self,
base_input: Any,
additional_component_kwargs: Dict = dict()
# NOTE: base_input is passed only to the first component.
) -> Dict[str, ComponentState]:
"""
Execute the pipeline by running all components in order of the dependency
        graph. This wraps data as DataStream before passing it between components.
Additionally, each component receives the output of its dependencies.
Args:
base_input:
a list of input arguments that will be passed to the first
component of the graph
            additional_component_kwargs:
passed as a dereferenced dictionary to each component.
Returns:
Dictionary containing keys mapping to the component states
corresponding to the result of each task.
Note:
each component is contained in a Component class
"""
successors = self.get_dependency_graph()
logger.info(successors)
logger.info(self.dependency_graph)
logger.info(self.components)
name_to_results: Dict[
str, ComponentResult
] = dict() # map component name to result
while True:
            # executables is the list of Components whose dependencies have all been resolved (in-degree 0)
executables: List[Component] = [
c for c, d in self.dependency_graph.in_degree if d == 0
]
# NOTE: bug fix from July 3rd 2023
threadkey_to_exe: Dict[
str, Component
] = dict() # map thread key to executable
# exit the loop if no nodes left.
if len(executables) == 0:
break
for executable in executables:
exe_name: str = self.component_to_name[executable]
dependency_resolved = True
# check the result output of exe_name's dependency component
if len(successors[exe_name]) > 0:
dep_outputs: Dict[str, ComponentResult] = {
k: name_to_results[k] for k in successors[exe_name]
}
else:
dep_outputs = {
"base": ComponentResult(
state=ComponentState.SUCCESS, result=base_input, runtime=0
)
}
for res in dep_outputs.values():
if res.state == ComponentState.FAILED:
name_to_results[exe_name] = Failure
if self.dependency_graph.has_node(executable):
self.dependency_graph.remove_node(executable)
dependency_resolved = False
args = [dep_outputs]
if dependency_resolved:
key = self.threadpool.add_task(
executable, args=args, kwargs=additional_component_kwargs
)
logger.info(f" the component {executable} get the thread key {key}")
threadkey_to_exe[key] = executable
            # wait until all tasks finish before the next iteration
self.threadpool.wait_for_all_completion(error_fun=lambda: None)
for key, exe in threadkey_to_exe.items():
# get the task result from the thread pool
exe_res = self.threadpool.get_task_result(key)
self.dependency_graph.remove_node(exe)
name = self.component_to_name[exe]
if exe_res and exe_res.state == ComponentState.SUCCESS:
# add to result if success
name_to_results[name] = exe_res
else:
# add the failed result on failure
name_to_results[name] = Failure
# Regenerate graph
self._generate_dependency_graph(self.dependency_map, self.components)
return {k: v.state for k, v in name_to_results.items()}
def component_names(self) -> List[str]:
"""
Gets names of all components in the dependency map.
Args:
self
Returns:
            List of component names.
"""
return list(self.name_to_component.keys())
def is_component(self, name: str) -> bool:
return name in self.component_names()
def component_parents(self, name: str) -> List[str]:
"""
Get the component(s) that the given component is dependent on.
Args:
name: string containing the name of the child component.
Returns:
List of strings of the names of the given component's parent
components.
Raises exception if the given name doesn't correspond to an
existing component.
"""
if not self.is_component(name):
raise Exception(f"No component named {name}")
edges = list(self.dependency_graph.in_edges(self.name_to_component[name]))
return [self.component_to_name[e[0]] for e in edges]
def component_children(self, name: str) -> List[str]:
"""
Gets component(s) that are dependent on the given component.
Args:
name: string containing the name of the child component.
Returns:
List of strings of the names of components that are dependent on the
given component.
Raises exception if the given name doesn't correspond to an
existing component.
"""
if not self.is_component(name):
raise Exception(f"No component named {name}")
edges = list(self.dependency_graph.out_edges(self.name_to_component[name]))
return [self.component_to_name[e[1]] for e in edges]
def get_dependency_graph(self) -> Dict:
"""
Returns a map from each component to the components it is dependent on.
Args:
self
Returns:
Dictionary mapping the given component to the components it is dependent upon.
"""
view = dict()
for name in self.name_to_component:
view[name] = self.component_parents(name)
return view
#####
# PRIVATE METHODS
####
def _does_cycle_exist(self, graph: nx.Graph) -> bool:
"""
Determines if there are existing cycles in the given graph.
Args:
graph: graph in which to determine if there are cycles.
Returns:
True if there are any cycles in the given graph, false if not.
"""
        try:
            nx.find_cycle(graph, orientation="original")
            return True
        except nx.NetworkXNoCycle:
            return False
def _generate_dependency_graph(
self, dependency_map: Dict[str, List[str]], components: Dict[str, Component]
) -> None:
"""
Generates a dependency graph containing components from a given dictionary.
Assumes that the dependency_map keys are in order i.e, a component will
be seen as a key before it is seen as a dependency.
Args:
dependency_map: dictionary containing lists of strings to map between.
components: dictionary containing components to insert into the newly
created dependency graph.
        Returns:
            None. Populates self.dependency_graph, mapping each component to
            its dependencies.
        Raises an exception if an element of the dependency map does not
        correspond to a valid component.
"""
self.dependency_graph = nx.DiGraph()
self.name_to_component = dict()
# Verify that the same keys exist in both dicts
assert (
dependency_map.keys() == components.keys()
), f"Component and dependency maps should have similar keys"
# # Mapping from component name to dependencies
for name, dependencies in dependency_map.items():
# This node must exist as a Component
if not name in components:
raise Exception(f"Node does not exist in the component map: {name}")
# Component cannot be added twice
if self.is_component(name):
raise Exception(f"Repeated component {name}")
# Create a component and add to main graph
self.dependency_graph.add_node(components[name])
# We want to add directed edges from all the dependencies to the
# current node. This implies that the dependencies should already
# exist as nodes.
for dep in dependencies:
if not dep in components:
raise Exception(f"Unseen component added as dependency")
self.dependency_graph.add_edge(components[dep], components[name])
# NOTE: Cycles are not supported
if self._does_cycle_exist(self.dependency_graph):
raise Exception(f"Cycle found in execution logic")
self.name_to_component[name] = components[name]
        self.component_to_name: Dict[Component, str] = {
v: k for k, v in self.name_to_component.items()
        }
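# --- Usage sketch (illustrative only, not part of the original module) ---
# Component subclasses are assumed to be callable with a dict of dependency
# results; the component names and thread count below are placeholders.
#
# components = {"read": ReadComponent(), "parse": ParseComponent()}
# dependency_map = {"read": [], "parse": ["read"]}  # parse depends on read
# pipeline = Pipeline(dependency_map, components, num_threads=2)
# states = pipeline(base_input="raw data")
# print(states)  # e.g. {"read": ComponentState.SUCCESS, "parse": ComponentState.SUCCESS}
| PypiClean |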
/BNIAJFI_VitalSigns-0.0.13.tar.gz/BNIAJFI_VitalSigns-0.0.13/README.md | # Vital Signs
> Scripts to create our annual, publicly available, community-focused datasets for Baltimore City.
<img src="https://raw.githubusercontent.com/bniajfi/bniajfi/main/bnia_logo_new.png" height="160px" width="auto" style="max-width: autopx">
<h2>Hi! We are <a href="https://bniajfi.org/">BNIA-JFI</a>.</h2>
This package was made to create Vital Signs data.
Check our [Github](https://github.com/bniajfi) page for more information and tools.
__About__
- Functions built and used by BNIA for annual Vital Signs data release.
- Made to be shared via IPYNB/ Google Colab notebooks.
- Data may be private and is sometimes public.
- [PyPi](https://pypi.org/project/BNIAJFI-VitalSigns/) libraries created from the notebooks.
__Included__ (but not limited to)
- CloseCrawl - Pull MD Courts data.
- TidyAddr - Expertly clean addresses in Baltimore (and beyond). Works seamlessly with Closecrawl.
- Download ACS - ACS tutorial. Provides a function and also teaches you how to pull any data for any geography using this API (can aggregate tracts along a crosswalk).
- Create ACS Statistics - Create pre-made statistics from ACS data. Builds off the ACS downloader.
- VS Indicators - Create other (non-ACS) Vital Signs statistics using these pre-made functions.
- convertvssheetforwpupload - For internal developer use when publishing at BNIA.
VitalSigns uses functions found in our Dataplay Module and vice-versa.
[](https://colab.research.google.com/github/bnia/VitalSigns/blob/main/notebooks/index.ipynb)
[](https://github.com/bnia/VitalSigns/tree/main/notebooks/index.ipynb)
[](https://github.com/ellerbrock/open-source-badges/)
[](https://github.com/bnia/VitalSigns/blob/main/LICENSE)
[](https://bnia.github.io)
[](https://pypi.python.org/pypi/VitalSigns/)
[]()
[](https://github.com/bnia/VitalSigns)
[](https://github.com/bnia/VitalSigns)
[](https://github.com/bnia/VitalSigns)
[](https://github.com/bnia/VitalSigns)
[](https://twitter.com/intent/tweet?text=Check%20out%20this%20%E2%9C%A8%20colab%20by%20@bniajfi%20https://github.com/bnia/VitalSigns%20%F0%9F%A4%97)
[](https://twitter.com/bniajfi)
## Usage Instructions
### Install the Package
The code is on <a href="https://pypi.org/project/VitalSigns2022TEST/">PyPI</a>, so you can install the scripts as a Python library using the command:
```
!pip install BNIAJFI-VitalSigns dataplay geopandas
```
### Import Modules
1) Import the installed module into your code:
```
from VitalSigns.acsDownload import retrieve_acs_data
```
2) use it
```
retrieve_acs_data(state, county, tract, tableId, year)
```
# Getting Help
You can get information on the package by using the help command.
Here we look at the package's modules:
```
import VitalSigns
help(VitalSigns)
```
Let's take a look at what functions the acsDownload module provides:
```
import VitalSigns.acsDownload
help(VitalSigns.acsDownload)
```
And here we can look at an individual function and what it expects:
```
import VitalSigns.acsDownload
help(VitalSigns.acsDownload.retrieve_acs_data)
```
## Example #1
Follow this process for all VitalSigns scripts. The 'racdiv' script requires one more step, which is shown in example #2.
### ACS Download
Install the package.
```
!pip install BNIAJFI-VitalSigns dataplay geopandas
```
Import your modules.
```
from VitalSigns.acsDownload import retrieve_acs_data
```
Read in some data.
```
#Define our download parameters (tract, county, state, tableId, state, and year)
#Our download function will use Baltimore City's tract, county and state as internal parameters
#Changing these values using different geographic reference codes will change those parameters
tract = '*'
county = '510'
state = '24'
tableId = 'B01001'
year = '19'
```
And download the Baltimore City ACS data using the imported VitalSigns library.
```
df = retrieve_acs_data(state, county, tract, tableId, year)
df.head(2)
```
Save the ACS data (Use this method ONLY if you are working in Google Colab. Otherwise, you can save the data however you prefer)
```
from google.colab import files
df.to_csv('YourFileName.csv')
files.download('YourFileName.csv')
```
### ACS Calculations and Indicators
Now that we have the ACS data, we can use any of the scripts in the VitalSigns library to create the Baltimore City indicators.
These scripts will download and clean ACS data for Baltimore and then construct indicators from the data.
A list of all the tables used and their respective indicator scripts can be found <a href="https://github.com/BNIA/VitalSigns/blob/main/ACS_Tables_and_Scripts.csv/">Here</a>
First, import the script(s) you would like to use for the ACS data chosen.
```
#Script to create the Percent of Population Under 5 Years old indicator.
from VitalSigns.create import createAcsIndicator, age5
```
Once the script has been imported, we can now create the Baltimore City indicators.
```
mergeUrl = 'https://raw.githubusercontent.com/BNIA/VitalSigns/main/CSA_2010_and_2020.csv'
merge_left_col = 'tract'
merge_right_col= 'TRACTCE'
merge_how = 'outer'
groupBy = 'CSA2010' #For the 2020 CSAs use 'CSA2020', for 2010 CSAs use 'CSA2010'
method = age5
aggMethod = 'sum'
columnsToInclude = []
MyIndicator = createAcsIndicator(state, county, tract, year, tableId,
mergeUrl, merge_left_col, merge_right_col, merge_how, groupBy,
aggMethod, method, columnsToInclude, finalFileName=False)
MyIndicator.head(2)
```
Now we can save the Baltimore City indicators (Use this method ONLY if you are working in Google Colab. Otherwise, you can save the data however you prefer)
```
from google.colab import files
MyIndicator.to_csv('YourIndicatorFileName.csv')
files.download('YourIndicatorFileName.csv')
```
## Example #2 (racdiv indicator)
The Racial Diversity Index (racdiv) indicator is the only script in our library that relies on two ACS tables.
Due to this difference, this is the only script that asks the user for input while it is running (the user needs to re-enter the year).
Lets follow the same process we did during example #1
### ACS Download
Install the package.
```
!pip install BNIAJFI-VitalSigns dataplay geopandas
```
Import your modules.
```
from VitalSigns.acsDownload import retrieve_acs_data
```
Read in some data.
```
tract = '*'
county = '510'
state = '24'
tableId = 'B02001'
year = '19' #This is the number that the user NEEDS to re-enter once the script asks for an input
```
And download the Baltimore City ACS data using the imported VitalSigns library.
```
df = retrieve_acs_data(state, county, tract, tableId, year)
df.head(2)
```
Save the ACS data (Use this method ONLY if you are working in Google Colab. Otherwise, you can save the data however you prefer)
```
from google.colab import files
df.to_csv('YourFileName.csv')
files.download('YourFileName.csv')
```
### ACS Calculations and Indicators
To see the table IDs and their respective indicators again, click <a href="https://github.com/BNIA/VitalSigns/blob/main/ACS_Tables_and_Scripts.csv/">Here</a>
Import the racdiv script
```
#Script to create the Racial Diversity Index indicator.
from VitalSigns.create import createAcsIndicator, racdiv
```
Once the script has been imported, we can now create the Baltimore City indicators.
```
mergeUrl = 'https://raw.githubusercontent.com/BNIA/VitalSigns/main/CSA_2010_and_2020.csv'
merge_left_col = 'tract'
merge_right_col= 'TRACTCE'
merge_how = 'outer'
groupBy = 'CSA2010' #For the 2020 CSAs use 'CSA2020', for 2010 CSAs use 'CSA2010'
method = racdiv
aggMethod = 'sum'
columnsToInclude = []
MyIndicator = createAcsIndicator(state, county, tract, year, tableId,
mergeUrl, merge_left_col, merge_right_col, merge_how, groupBy,
aggMethod, method, columnsToInclude, finalFileName=False)
MyIndicator.head(2)
```
The cell below shows the output while the racdiv script is being run. As you can see on the last line, the script asks the user to re-enter their chosen year. After re-entering the year, the script will finish running, and the racdiv indicator table will be completed.
```
Table: B02001, Year: 19 imported.
Index(['TRACTCE', 'GEOID10', 'CSA2010', 'GEOID', 'CSA2020'], dtype='object')
Merge file imported
Both are now merged.
Aggregating...
Aggregated
Creating Indicator
Please enter your chosen year again (i.e., '17', '20'):
```
Now we can save the Baltimore City indicators (Use this method ONLY if you are working in Google Colab. Otherwise, you can save the data however you prefer)
```
from google.colab import files
MyIndicator.to_csv('YourIndicatorFileName.csv')
files.download('YourIndicatorFileName.csv')
``` | PypiClean |
/Flask-AWSCognito-1.3.tar.gz/Flask-AWSCognito-1.3/docs/build/searchindex.js | Search.setIndex({docnames:["auth_code","configure_aws","configure_flask","index","installation"],envversion:{"sphinx.domains.c":1,"sphinx.domains.changeset":1,"sphinx.domains.citation":1,"sphinx.domains.cpp":1,"sphinx.domains.javascript":1,"sphinx.domains.math":2,"sphinx.domains.python":1,"sphinx.domains.rst":1,"sphinx.domains.std":1,sphinx:56},filenames:["auth_code.rst","configure_aws.rst","configure_flask.rst","index.rst","installation.rst"],objects:{},objnames:{},objtypes:{},terms:{"1_xxx":2,"1_xxxxx":[],"31v4xxxxxx":[],"return":0,"true":1,AWS:[0,1],One:1,The:[0,1,4],__name__:2,access:0,access_token:0,address:0,after:[0,1],allowedoauthflow:1,allowedoauthflowsuserpoolcli:1,allowedoauthscop:1,also:0,amazon:1,ani:0,api:1,app:[0,1,3],applic:0,arg:0,attribut:0,attributedatatyp:1,authent:[0,1],authentication_requir:0,author:3,automat:[],autoverifiedattribut:1,avail:0,aws_auth:[0,2],aws_cognito_domain:2,aws_cognito_redirect:[0,2],aws_cognito_redirect_url:2,aws_cognito_user_pool_client_id:2,aws_cognito_user_pool_client_secret:2,aws_cognito_user_pool_id:2,aws_default_region:2,awscognito:4,awscognitoauthent:2,awstemplateformatvers:1,basic:0,becaus:0,becom:0,both:1,boto3:[],browser:0,call:1,callbackurl:1,can:0,claim:0,client:[1,3],clientnam:1,cloudform:3,code:[1,3],cognito:[0,2,3],cognito_claim:0,com:2,compromis:0,config:[1,2],configur:[1,3],confirm:0,consol:1,contain:2,content:3,could:0,creat:1,credenti:[],custom:0,data:0,decemb:1,def:0,descript:1,desir:0,diagram:3,directli:0,directori:1,doc:0,doesn:1,domain:[2,3],don:1,each:0,easiest:4,email:[0,1],end:0,endpoint:[0,1],entiti:1,exampl:0,exchang:0,expos:0,fals:1,flask:[0,4],flow:1,follow:1,forget:1,from:1,full:1,get:0,get_access_token:0,get_sign_in_url:0,got:0,grant:3,has:0,header:0,hold:1,http:[0,2],index:[0,3],instal:3,instead:0,javascript:0,join:1,jsonifi:0,less:0,like:0,localhost:2,manual:1,method:0,modul:3,mutabl:1,name:1,need:1,never:0,now:0,number:2,object:0,octob:[],open:0,openid:1,option:1,own:1,page:3,paramat:2,paramet:1,pass:3,password:0,pip:4,plugin:0,pool:[0,1],poolclientus:1,prefer:0,prepar:[0,3],present:0,presum:[],proper:[],properti:1,provid:0,redirect:3,ref:1,registr:0,rememb:0,request:0,requir:[0,1],reset:0,resourc:1,respons:0,rout:0,run:0,schema:1,screenshot:1,search:3,secret:1,see:[0,1],sent:0,should:[0,1,2],shown:1,sign:[1,3],sign_in:0,signincallback:1,stack:1,string:1,success:[0,1],support:1,supportedidentityprovid:1,templat:1,thei:0,thi:[0,4],through:[0,1,4],token:0,token_her:0,type:1,uniqu:1,upon:0,url:[0,3],usag:1,used:1,user:[0,1],userpool:1,userpoolcli:1,userpoolid:1,userpoolnam:1,using:0,wai:4,west:2,your:1,yyi:2,zzzz:2},titles:["Authorization code grant","Prepare Cognito","Configure Flask app","Welcome to Flask-AWSCognito\u2019s documentation!","Installation"],titleterms:{app:2,author:0,awscognito:3,client:0,cloudform:1,code:0,cognito:1,configur:2,diagram:0,document:3,domain:1,flask:[1,2,3],grant:0,indic:3,instal:4,pass:1,prepar:1,redirect:[0,1],sign:0,tabl:3,url:1,welcom:3}}) | PypiClean |
/NumParse-0.1.1.tar.gz/NumParse-0.1.1/README.md | # c3-NumParse
This package provides a set of tools for parsing a number from a string. It currently suppports:
- Parsing numeric values (e.g. "2410")
- Parsing number words (e.g. "one hundred forty five thousand two hundred three")
- Parsing negative numbers (e.g "negative fifty five")
- Parsing mixed numeric values and number words (e.g. "4 million")
- Parsing numeric ranges (e.g. "five to 10")
- Parsing units (e.g. "five miles", "8 to 10 hours")
These strings get parsed into a new `RangeValue` class which allows for ranges to be represented,
and any values in this form to be compared against each other.
## Installation
```commandline
pip install NumParse
```
## Usage
```python
from num_parse.NumParser import NumParser
num_parser = NumParser()
num_parser.parse_num("4 million") # returns 4000000
num_parser.parse_num("-135 thousand") # returns -135000
num_parser.parse_num("2 m") # returns 2 meter
num_parser.parse_num("five to six hours") # returns 5 to 6 hour
num_parser.parse_num("2 m") < num_parser.parse_num("2 in") # returns False
```
## Unit Tests
In order to run the unit tests, navigate to the `num_parse/tests` directory and run the following command:
```commandline
pytest -q
```
| PypiClean |
/Grammaticomastix-0.0.1rc2-py3-none-any.whl/grammaticomastix/dchars/languages/hbo/transliterations/basic/ucombinations.py | from dchars.dchars import new_dstring
from dchars.languages.hbo.dcharacter import DCharacterHBO
from dchars.languages.hbo.transliterations.basic.basic import dchar__get_translit_str
import itertools
#///////////////////////////////////////////////////////////////////////////
def get_usefull_combinations():
"""
get_usefull_combinations()
Return a (str)string with all the usefull combinations of characters,
i.e. only the 'interesting' characters (not punctuation if it's too simple
by example).
NB : this function has nothing to do with linguistic or a strict
approach of the language. This function allows only to get the
most common and/or usefull characters of the writing system.
NB : function required by the dchars-fe project.
"""
res = []
HBO = new_dstring( 'hbo' )
dstring = HBO()
# base_char : we don't use the list stored in symbols.py
# since we would lost the character's order.
base_characters = ( 'א',
'ב',
'ג',
'ד',
'ה',
'ו',
'ז',
'ח',
'ט',
'י',
'כ',
'ל',
'מ',
'נ',
'ס',
'ע',
'פ',
'צ',
'ק',
'ר',
'ש',
'ת' )
#-----------------------------------------------------------------------
# (1/2) simple characters
#-----------------------------------------------------------------------
for base_char in base_characters:
for shin_sin_dot in (None,
"HEBREW POINT SHIN DOT",
"HEBREW POINT SIN DOT"):
            # only the letter shin (ש) can carry a shin/sin dot; skip the
            # redundant duplicate combinations for every other base character
            if base_char != 'ש' and shin_sin_dot is not None:
                continue
dchar = DCharacterHBO( dstring_object = dstring,
base_char = base_char,
contextual_form = None,
                                   shin_sin_dot = shin_sin_dot,
daghesh_mapiq = False,
methegh = False,
specialpoint = None,
vowel = None,
raphe = False,
cantillation_mark = None )
txt = dchar__get_translit_str(dstring_object = dstring,
dchar = dchar)
res.append( str(dchar) + "{" + txt + "} " )
#-----------------------------------------------------------------------
# (2/2) complex characters
#-----------------------------------------------------------------------
#.......................................................................
combinations = (itertools.product(
# base_char :
( 'ב', ),
# vowel :
(None,
"HEBREW POINT SHEVA",
"HEBREW POINT HATAF SEGOL",
"HEBREW POINT HATAF PATAH",
"HEBREW POINT HATAF QAMATS",
"HEBREW POINT HIRIQ",
"HEBREW POINT TSERE",
"HEBREW POINT SEGOL",
"HEBREW POINT PATAH",
"HEBREW POINT QAMATS",
"HEBREW POINT HOLAM",
"HEBREW POINT HOLAM HASER FOR VAV",
"HEBREW POINT QUBUTS",
"HEBREW POINT QAMATS QATAN"),
))
for base_char, \
vowel in combinations:
dchar = DCharacterHBO( dstring_object = dstring,
base_char = base_char,
contextual_form = "initial+medium+final",
shin_sin_dot = None,
daghesh_mapiq = False,
methegh = False,
specialpoint = None,
vowel = vowel,
raphe = None,
cantillation_mark = None, )
txt = dchar__get_translit_str(dstring_object = dstring,
dchar = dchar)
res.append( str(dchar) + "{" + txt + "} " )
#.......................................................................
combinations = (itertools.product(
# base_char :
( 'ש', ),
# shin_sin_dot :
(None, "HEBREW POINT SHIN DOT", "HEBREW POINT SIN DOT"),
))
    for base_char, shin_sin_dot \
        in combinations:
dchar = DCharacterHBO( dstring_object = dstring,
base_char = base_char,
contextual_form = "initial+medium+final",
shin_sin_dot = shin_sin_dot,
daghesh_mapiq = False,
methegh = False,
specialpoint = None,
vowel = None,
raphe = None,
cantillation_mark = None, )
txt = dchar__get_translit_str(dstring_object = dstring,
dchar = dchar)
res.append( str(dchar) + "{" + txt + "} " )
return "".join(res) | PypiClean |
/KD_Lib-0.0.32.tar.gz/KD_Lib-0.0.32/README.md | <h1 align="center">KD-Lib</h1>
<h3 align="center">A PyTorch model compression library containing easy-to-use methods for knowledge distillation, pruning, and quantization</h3>
<div align='center'>
[](https://pepy.tech/project/kd-lib)
**[Documentation](https://kd-lib.readthedocs.io/en/latest/)** | **[Tutorials](https://kd-lib.readthedocs.io/en/latest/usage/tutorials/index.html)**
</div>
## Installation
### From source (recommended)
```shell
git clone https://github.com/SforAiDl/KD_Lib.git
cd KD_Lib
python setup.py install
```
### From PyPI
```shell
pip install KD-Lib
```
## Example usage
To implement the most basic version of knowledge distillation from [Distilling the Knowledge in a Neural Network](https://arxiv.org/abs/1503.02531) and plot loss curves:
```python
import torch
import torch.optim as optim
from torchvision import datasets, transforms
from KD_Lib.KD import VanillaKD
# This part is where you define your datasets, dataloaders, models and optimizers
train_loader = torch.utils.data.DataLoader(
datasets.MNIST(
"mnist_data",
train=True,
download=True,
transform=transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
),
),
batch_size=32,
shuffle=True,
)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST(
"mnist_data",
train=False,
transform=transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
),
),
batch_size=32,
shuffle=True,
)
teacher_model = <your model>
student_model = <your model>
teacher_optimizer = optim.SGD(teacher_model.parameters(), 0.01)
student_optimizer = optim.SGD(student_model.parameters(), 0.01)
# Now, this is where KD_Lib comes into the picture
distiller = VanillaKD(teacher_model, student_model, train_loader, test_loader,
teacher_optimizer, student_optimizer)
distiller.train_teacher(epochs=5, plot_losses=True, save_model=True) # Train the teacher network
distiller.train_student(epochs=5, plot_losses=True, save_model=True) # Train the student network
distiller.evaluate(teacher=False) # Evaluate the student network
distiller.get_parameters() # A utility function to get the number of
# parameters in the teacher and the student network
```
To train a cohort of two models in an online fashion using the framework in [Deep Mutual Learning](https://arxiv.org/abs/1706.00384)
and log training details to Tensorboard:
```python
import torch
import torch.optim as optim
from torchvision import datasets, transforms
from KD_Lib.KD import DML
from KD_Lib.models import ResNet18, ResNet50 # To use models packaged in KD_Lib
# Define your datasets, dataloaders, models and optimizers
train_loader = torch.utils.data.DataLoader(
datasets.MNIST(
"mnist_data",
train=True,
download=True,
transform=transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
),
),
batch_size=32,
shuffle=True,
)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST(
"mnist_data",
train=False,
transform=transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
),
),
batch_size=32,
shuffle=True,
)
student_params = [4, 4, 4, 4, 4]
student_model_1 = ResNet50(student_params, 1, 10)
student_model_2 = ResNet18(student_params, 1, 10)
student_cohort = [student_model_1, student_model_2]
student_optimizer_1 = optim.SGD(student_model_1.parameters(), 0.01)
student_optimizer_2 = optim.SGD(student_model_2.parameters(), 0.01)
student_optimizers = [student_optimizer_1, student_optimizer_2]
# Now, this is where KD_Lib comes into the picture
distiller = DML(student_cohort, train_loader, test_loader, student_optimizers, log=True, logdir="./logs")
distiller.train_students(epochs=5)
distiller.evaluate()
distiller.get_parameters()
```
## Methods Implemented
Some benchmark results can be found in the [logs](./logs.rst) file.
| Paper / Method | Link | Repository (KD_Lib/) |
| ----------------------------------------------------------|----------------------------------|----------------------|
| Distilling the Knowledge in a Neural Network | https://arxiv.org/abs/1503.02531 | KD/vision/vanilla |
| Improved Knowledge Distillation via Teacher Assistant | https://arxiv.org/abs/1902.03393 | KD/vision/TAKD |
| Relational Knowledge Distillation | https://arxiv.org/abs/1904.05068 | KD/vision/RKD |
| Distilling Knowledge from Noisy Teachers | https://arxiv.org/abs/1610.09650 | KD/vision/noisy |
| Paying More Attention To The Attention | https://arxiv.org/abs/1612.03928 | KD/vision/attention |
| Revisit Knowledge Distillation: a Teacher-free <br> Framework | https://arxiv.org/abs/1909.11723 |KD/vision/teacher_free|
| Mean Teachers are Better Role Models | https://arxiv.org/abs/1703.01780 |KD/vision/mean_teacher|
| Knowledge Distillation via Route Constrained <br> Optimization | https://arxiv.org/abs/1904.09149 | KD/vision/RCO |
| Born Again Neural Networks | https://arxiv.org/abs/1805.04770 | KD/vision/BANN |
| Preparing Lessons: Improve Knowledge Distillation <br> with Better Supervision | https://arxiv.org/abs/1911.07471 | KD/vision/KA |
| Improving Generalization Robustness with Noisy <br> Collaboration in Knowledge Distillation | https://arxiv.org/abs/1910.05057 | KD/vision/noisy|
| Distilling Task-Specific Knowledge from BERT into <br> Simple Neural Networks | https://arxiv.org/abs/1903.12136 | KD/text/BERT2LSTM |
| Deep Mutual Learning | https://arxiv.org/abs/1706.00384 | KD/vision/DML |
| The Lottery Ticket Hypothesis: Finding Sparse, <br> Trainable Neural Networks | https://arxiv.org/abs/1803.03635 | Pruning/lottery_tickets|
| Regularizing Class-wise Predictions via <br> Self-knowledge Distillation | https://arxiv.org/abs/2003.13964 | KD/vision/CSDK |
<br>
Please cite our pre-print if you find `KD-Lib` useful in any way :)
```bibtex
@misc{shah2020kdlib,
title={KD-Lib: A PyTorch library for Knowledge Distillation, Pruning and Quantization},
author={Het Shah and Avishree Khare and Neelay Shah and Khizir Siddiqui},
year={2020},
eprint={2011.14691},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
| PypiClean |
/CocoPy-1.1.0rc.zip/CocoPy-1.1.0rc/testSuite/TestTokens_Scanner.py | import sys
class Token( object ):
def __init__( self ):
self.kind = 0 # token kind
self.pos = 0 # token position in the source text (starting at 0)
self.col = 0 # token column (starting at 0)
self.line = 0 # token line (starting at 1)
self.val = u'' # token value
self.next = None # AW 2003-03-07 Tokens are kept in linked list
class Position( object ): # position of source code stretch (e.g. semantic action, resolver expressions)
def __init__( self, buf, beg, len, col ):
assert isinstance( buf, Buffer )
assert isinstance( beg, int )
assert isinstance( len, int )
assert isinstance( col, int )
self.buf = buf
self.beg = beg # start relative to the beginning of the file
self.len = len # length of stretch
self.col = col # column number of start position
def getSubstring( self ):
return self.buf.readPosition( self )
class Buffer( object ):
EOF = u'\u0100' # 256
def __init__( self, s ):
self.buf = s
self.bufLen = len(s)
self.pos = 0
self.lines = s.splitlines( True )
def Read( self ):
if self.pos < self.bufLen:
result = unichr(ord(self.buf[self.pos]) & 0xff) # mask out sign bits
self.pos += 1
return result
else:
return Buffer.EOF
def ReadChars( self, numBytes=1 ):
result = self.buf[ self.pos : self.pos + numBytes ]
self.pos += numBytes
return result
def Peek( self ):
if self.pos < self.bufLen:
return unichr(ord(self.buf[self.pos]) & 0xff) # mask out sign bits
else:
            return Buffer.EOF
def getString( self, beg, end ):
s = ''
oldPos = self.getPos( )
self.setPos( beg )
while beg < end:
s += self.Read( )
beg += 1
self.setPos( oldPos )
return s
def getPos( self ):
return self.pos
def setPos( self, value ):
if value < 0:
self.pos = 0
elif value >= self.bufLen:
self.pos = self.bufLen
else:
self.pos = value
def readPosition( self, pos ):
assert isinstance( pos, Position )
self.setPos( pos.beg )
return self.ReadChars( pos.len )
def __iter__( self ):
return iter(self.lines)
class Scanner(object):
EOL = u'\n'
eofSym = 0
charSetSize = 256
maxT = 12
noSym = 12
start = [
9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 0, 0, 0, 0, 0, 0,
0, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 0, 0, 0, 0, 0,
0, 18, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
-1]
def __init__( self, s ):
self.buffer = Buffer( unicode(s) ) # the buffer instance
self.ch = u'\0' # current input character
self.pos = -1 # column number of current character
self.line = 1 # line number of current character
self.lineStart = 0 # start position of current line
self.oldEols = 0 # EOLs that appeared in a comment;
self.NextCh( )
self.ignore = set( ) # set of characters to be ignored by the scanner
self.ignore.add( ord(' ') ) # blanks are always white space
# fill token list
self.tokens = Token( ) # the complete input token stream
node = self.tokens
node.next = self.NextToken( )
node = node.next
while node.kind != Scanner.eofSym:
node.next = self.NextToken( )
node = node.next
node.next = node
node.val = u'EOF'
self.t = self.tokens # current token
self.pt = self.tokens # current peek token
def NextCh( self ):
if self.oldEols > 0:
self.ch = Scanner.EOL
self.oldEols -= 1
else:
self.ch = self.buffer.Read( )
self.pos += 1
# replace isolated '\r' by '\n' in order to make
# eol handling uniform across Windows, Unix and Mac
if (self.ch == u'\r') and (self.buffer.Peek() != u'\n'):
self.ch = Scanner.EOL
if self.ch == Scanner.EOL:
self.line += 1
self.lineStart = self.pos + 1
def CheckLiteral( self ):
lit = self.t.val
if lit == "abc":
self.t.kind = 7
elif lit == "a":
self.t.kind = 9
def NextToken( self ):
while ord(self.ch) in self.ignore:
self.NextCh( )
apx = 0
self.t = Token( )
self.t.pos = self.pos
self.t.col = self.pos - self.lineStart + 1
self.t.line = self.line
state = self.start[ord(self.ch)]
buf = u''
buf += unicode(self.ch)
self.NextCh()
done = False
while not done:
if state == -1:
self.t.kind = Scanner.eofSym # NextCh already done
done = True
elif state == 0:
self.t.kind = Scanner.noSym # NextCh already done
done = True
elif state == 1:
if (self.ch >= '0' and self.ch <= '9'
or self.ch >= 'A' and self.ch <= 'Z'
or self.ch >= 'a' and self.ch <= 'z'):
buf += unicode(self.ch)
self.NextCh()
state = 1
else:
self.t.kind = 1
self.t.val = buf
self.CheckLiteral()
return self.t
elif state == 2:
self.t.kind = 2
done = True
elif state == 3:
self.pos = self.pos - apx - 1
self.line = self.t.line
self.buffer.setPos(self.pos+1)
self.NextCh()
self.t.kind = 3
done = True
elif state == 4:
if (self.ch >= '0' and self.ch <= '9'):
buf += unicode(self.ch)
self.NextCh()
state = 4
elif self.ch == 'E':
buf += unicode(self.ch)
self.NextCh()
state = 5
else:
self.t.kind = 4
done = True
elif state == 5:
if (self.ch >= '0' and self.ch <= '9'):
buf += unicode(self.ch)
self.NextCh()
state = 7
elif (self.ch == '+'
or self.ch == '-'):
buf += unicode(self.ch)
self.NextCh()
state = 6
else:
self.t.kind = Scanner.noSym
done = True
elif state == 6:
if (self.ch >= '0' and self.ch <= '9'):
buf += unicode(self.ch)
self.NextCh()
state = 7
else:
self.t.kind = Scanner.noSym
done = True
elif state == 7:
if (self.ch >= '0' and self.ch <= '9'):
buf += unicode(self.ch)
self.NextCh()
state = 7
else:
self.t.kind = 4
done = True
elif state == 8:
self.pos = self.pos - apx - 1
self.line = self.t.line
self.buffer.setPos(self.pos+1)
self.NextCh()
self.t.kind = 4
done = True
elif state == 9:
self.t.kind = 5
done = True
elif state == 10:
if self.ch == 'c':
buf += unicode(self.ch)
self.NextCh()
state = 11
else:
self.t.kind = Scanner.noSym
done = True
elif state == 11:
self.t.kind = 6
done = True
elif state == 12:
if (self.ch >= '0' and self.ch <= '9'
or self.ch >= 'A' and self.ch <= 'Z'
or self.ch >= 'a' and self.ch <= 'z'):
apx = 0
buf += unicode(self.ch)
self.NextCh()
state = 1
elif self.ch == '*':
apx = 0
buf += unicode(self.ch)
self.NextCh()
state = 2
elif self.ch == '_':
apx += 1
buf += unicode(self.ch)
self.NextCh()
state = 14
elif self.ch == '+':
apx += 1
buf += unicode(self.ch)
self.NextCh()
state = 3
else:
self.t.kind = 1
self.t.val = buf
self.CheckLiteral()
return self.t
elif state == 13:
if (self.ch >= '0' and self.ch <= '9'):
buf += unicode(self.ch)
self.NextCh()
state = 13
elif self.ch == '.':
apx += 1
buf += unicode(self.ch)
self.NextCh()
state = 15
else:
self.t.kind = 4
done = True
elif state == 14:
if self.ch == '*':
apx = 0
buf += unicode(self.ch)
self.NextCh()
state = 2
elif self.ch == '_':
apx += 1
buf += unicode(self.ch)
self.NextCh()
state = 14
elif self.ch == '+':
apx += 1
buf += unicode(self.ch)
self.NextCh()
state = 3
else:
self.t.kind = Scanner.noSym
done = True
elif state == 15:
if (self.ch >= '0' and self.ch <= '9'):
apx = 0
buf += unicode(self.ch)
self.NextCh()
state = 4
elif self.ch == 'E':
apx = 0
buf += unicode(self.ch)
self.NextCh()
state = 5
elif self.ch == '.':
apx += 1
buf += unicode(self.ch)
self.NextCh()
state = 8
else:
self.t.kind = 4
done = True
elif state == 16:
self.t.kind = 8
done = True
elif state == 17:
self.t.kind = 11
done = True
elif state == 18:
if (self.ch >= '0' and self.ch <= '9'
or self.ch >= 'A' and self.ch <= 'Z'
or self.ch == 'a'
or self.ch >= 'c' and self.ch <= 'z'):
apx = 0
buf += unicode(self.ch)
self.NextCh()
state = 1
elif self.ch == '*':
apx = 0
buf += unicode(self.ch)
self.NextCh()
state = 2
elif self.ch == '_':
apx += 1
buf += unicode(self.ch)
self.NextCh()
state = 19
elif self.ch == '+':
apx += 1
buf += unicode(self.ch)
self.NextCh()
state = 3
elif self.ch == 'b':
apx = 0
buf += unicode(self.ch)
self.NextCh()
state = 20
else:
self.t.kind = 1
self.t.val = buf
self.CheckLiteral()
return self.t
elif state == 19:
if self.ch == '*':
apx = 0
buf += unicode(self.ch)
self.NextCh()
state = 2
elif self.ch == '_':
apx += 1
buf += unicode(self.ch)
self.NextCh()
state = 21
elif self.ch == '+':
apx += 1
buf += unicode(self.ch)
self.NextCh()
state = 3
else:
self.t.kind = 10
done = True
elif state == 20:
if (self.ch >= '0' and self.ch <= '9'
or self.ch >= 'A' and self.ch <= 'Z'
or self.ch >= 'a' and self.ch <= 'b'
or self.ch >= 'd' and self.ch <= 'z'):
buf += unicode(self.ch)
self.NextCh()
state = 1
elif ord(self.ch) == 0:
buf += unicode(self.ch)
self.NextCh()
state = 10
elif self.ch == 'c':
buf += unicode(self.ch)
self.NextCh()
state = 22
else:
self.t.kind = 1
self.t.val = buf
self.CheckLiteral()
return self.t
elif state == 21:
if self.ch == '*':
apx = 0
buf += unicode(self.ch)
self.NextCh()
state = 23
elif self.ch == '_':
apx += 1
buf += unicode(self.ch)
self.NextCh()
state = 14
elif self.ch == '+':
apx += 1
buf += unicode(self.ch)
self.NextCh()
state = 3
else:
self.t.kind = Scanner.noSym
done = True
elif state == 22:
if (self.ch >= '0' and self.ch <= '9'
or self.ch >= 'A' and self.ch <= 'Z'
or self.ch >= 'a' and self.ch <= 'z'):
buf += unicode(self.ch)
self.NextCh()
state = 1
elif self.ch == '+':
buf += unicode(self.ch)
self.NextCh()
state = 16
else:
self.t.kind = 1
self.t.val = buf
self.CheckLiteral()
return self.t
elif state == 23:
if self.ch == '*':
buf += unicode(self.ch)
self.NextCh()
state = 17
else:
self.t.kind = 2
done = True
self.t.val = buf
return self.t
def Scan( self ):
self.t = self.t.next
self.pt = self.t.next
return self.t
def Peek( self ):
self.pt = self.pt.next
while self.pt.kind > self.maxT:
self.pt = self.pt.next
return self.pt
def ResetPeek( self ):
self.pt = self.t | PypiClean |
/DLTA-AI-1.1.tar.gz/DLTA-AI-1.1/DLTA_AI_app/mmdetection/mmdet/models/necks/hrfpn.py | import torch
import torch.nn as nn
import torch.nn.functional as F
from mmcv.cnn import ConvModule
from mmcv.runner import BaseModule
from torch.utils.checkpoint import checkpoint
from ..builder import NECKS
@NECKS.register_module()
class HRFPN(BaseModule):
"""HRFPN (High Resolution Feature Pyramids)
paper: `High-Resolution Representations for Labeling Pixels and Regions
<https://arxiv.org/abs/1904.04514>`_.
Args:
in_channels (list): number of channels for each branch.
out_channels (int): output channels of feature pyramids.
num_outs (int): number of output stages.
pooling_type (str): pooling for generating feature pyramids
from {MAX, AVG}.
conv_cfg (dict): dictionary to construct and config conv layer.
norm_cfg (dict): dictionary to construct and config norm layer.
with_cp (bool): Use checkpoint or not. Using checkpoint will save some
memory while slowing down the training speed.
stride (int): stride of 3x3 convolutional layers
init_cfg (dict or list[dict], optional): Initialization config dict.
"""
def __init__(self,
in_channels,
out_channels,
num_outs=5,
pooling_type='AVG',
conv_cfg=None,
norm_cfg=None,
with_cp=False,
stride=1,
init_cfg=dict(type='Caffe2Xavier', layer='Conv2d')):
super(HRFPN, self).__init__(init_cfg)
assert isinstance(in_channels, list)
self.in_channels = in_channels
self.out_channels = out_channels
self.num_ins = len(in_channels)
self.num_outs = num_outs
self.with_cp = with_cp
self.conv_cfg = conv_cfg
self.norm_cfg = norm_cfg
self.reduction_conv = ConvModule(
sum(in_channels),
out_channels,
kernel_size=1,
conv_cfg=self.conv_cfg,
act_cfg=None)
self.fpn_convs = nn.ModuleList()
for i in range(self.num_outs):
self.fpn_convs.append(
ConvModule(
out_channels,
out_channels,
kernel_size=3,
padding=1,
stride=stride,
conv_cfg=self.conv_cfg,
act_cfg=None))
if pooling_type == 'MAX':
self.pooling = F.max_pool2d
else:
self.pooling = F.avg_pool2d
def forward(self, inputs):
"""Forward function."""
assert len(inputs) == self.num_ins
outs = [inputs[0]]
for i in range(1, self.num_ins):
outs.append(
F.interpolate(inputs[i], scale_factor=2**i, mode='bilinear'))
out = torch.cat(outs, dim=1)
if out.requires_grad and self.with_cp:
out = checkpoint(self.reduction_conv, out)
else:
out = self.reduction_conv(out)
outs = [out]
for i in range(1, self.num_outs):
outs.append(self.pooling(out, kernel_size=2**i, stride=2**i))
outputs = []
for i in range(self.num_outs):
if outs[i].requires_grad and self.with_cp:
tmp_out = checkpoint(self.fpn_convs[i], outs[i])
else:
tmp_out = self.fpn_convs[i](outs[i])
outputs.append(tmp_out)
        return tuple(outputs)
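# --- Usage sketch (illustrative only, not part of the original module) ---
# Channel counts and spatial sizes are placeholders in the style of HRNet
# multi-resolution outputs; each input must match an entry of in_channels.
#
# import torch
# neck = HRFPN(in_channels=[18, 36, 72, 144], out_channels=256, num_outs=5)
# feats = [
#     torch.randn(1, 18, 64, 64),
#     torch.randn(1, 36, 32, 32),
#     torch.randn(1, 72, 16, 16),
#     torch.randn(1, 144, 8, 8),
# ]
# outs = neck(feats)  # tuple of 5 feature maps, each with 256 channels
| PypiClean |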
/NetworkSim-0.2.2.tar.gz/NetworkSim-0.2.2/examples/system_clocks.ipynb | ```
cd ..
from NetworkSim.simulation.tools.clock import TransmitterDataClock
from NetworkSim.simulation.tools.clock import ControlClock
import simpy
dc1 = TransmitterDataClock()
cc1 = ControlClock()
def clock(env):
while True:
print(f'Data ticks at {env.now}')
yield env.timeout(dc1.clock_cycle)
def control_clock(env):
while True:
print(f'Control ticks at {env.now}')
yield env.timeout(cc1.clock_cycle)
dc1.model.get_max_data_packet_num_on_ring()
dc1.model.network.num_nodes
cc1.model.get_max_control_packet_num_on_ring()
dc1.get_clock_cycle()
cc1.get_clock_cycle()
env = simpy.Environment()
env.process(clock(env))
env.process(control_clock(env))
env.run(until=100)
```
| PypiClean |
/DBUtils-3.0.3.tar.gz/DBUtils-3.0.3/dbutils/persistent_pg.py | from . import __version__
from .steady_pg import SteadyPgConnection
try:
# Prefer the pure Python version of threading.local.
# The C implementation turned out to be problematic with mod_wsgi,
# since it does not keep the thread-local data between requests.
from _threading_local import local
except ImportError:
# Fall back to the default version of threading.local.
from threading import local
class PersistentPg:
"""Generator for persistent classic PyGreSQL connections.
After you have created the connection pool, you can use
connection() to get thread-affine, steady PostgreSQL connections.
"""
version = __version__
def __init__(
self, maxusage=None, setsession=None,
closeable=False, threadlocal=None, *args, **kwargs):
"""Set up the persistent PostgreSQL connection generator.
maxusage: maximum number of reuses of a single connection
(0 or None means unlimited reuse)
When this maximum usage number of the connection is reached,
the connection is automatically reset (closed and reopened).
setsession: optional list of SQL commands that may serve to prepare
the session, e.g. ["set datestyle to ...", "set time zone ..."]
closeable: if this is set to true, then closing connections will
be allowed, but by default this will be silently ignored
threadlocal: an optional class for representing thread-local data
that will be used instead of our Python implementation
(threading.local is faster, but cannot be used in all cases)
args, kwargs: the parameters that shall be used to establish
the PostgreSQL connections using class PyGreSQL pg.DB()
"""
self._maxusage = maxusage
self._setsession = setsession
self._closeable = closeable
self._args, self._kwargs = args, kwargs
self.thread = (threadlocal or local)()
def steady_connection(self):
"""Get a steady, non-persistent PyGreSQL connection."""
return SteadyPgConnection(
self._maxusage, self._setsession, self._closeable,
*self._args, **self._kwargs)
def connection(self):
"""Get a steady, persistent PyGreSQL connection."""
try:
con = self.thread.connection
except AttributeError:
con = self.steady_connection()
self.thread.connection = con
        return con
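# --- Usage sketch (illustrative only, not part of the original module) ---
# Arguments after `threadlocal` are passed straight to PyGreSQL's pg.DB(),
# so the dbname/user values below are placeholders.
#
# persist = PersistentPg(maxusage=1000,
#                        setsession=['set datestyle to german'],
#                        dbname='mydb', user='postgres')
# db = persist.connection()  # steady, thread-affine pg.DB connection
# db.query('select 1')
| PypiClean |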
/Flask-DebugToolbar-0.13.1.tar.gz/Flask-DebugToolbar-0.13.1/src/flask_debugtoolbar/static/codemirror/mode/lua/lua.js |
CodeMirror.defineMode("lua", function(config, parserConfig) {
var indentUnit = config.indentUnit;
function prefixRE(words) {
return new RegExp("^(?:" + words.join("|") + ")", "i");
}
function wordRE(words) {
return new RegExp("^(?:" + words.join("|") + ")$", "i");
}
var specials = wordRE(parserConfig.specials || []);
// long list of standard functions from lua manual
var builtins = wordRE([
"_G","_VERSION","assert","collectgarbage","dofile","error","getfenv","getmetatable","ipairs","load",
"loadfile","loadstring","module","next","pairs","pcall","print","rawequal","rawget","rawset","require",
"select","setfenv","setmetatable","tonumber","tostring","type","unpack","xpcall",
"coroutine.create","coroutine.resume","coroutine.running","coroutine.status","coroutine.wrap","coroutine.yield",
"debug.debug","debug.getfenv","debug.gethook","debug.getinfo","debug.getlocal","debug.getmetatable",
"debug.getregistry","debug.getupvalue","debug.setfenv","debug.sethook","debug.setlocal","debug.setmetatable",
"debug.setupvalue","debug.traceback",
"close","flush","lines","read","seek","setvbuf","write",
"io.close","io.flush","io.input","io.lines","io.open","io.output","io.popen","io.read","io.stderr","io.stdin",
"io.stdout","io.tmpfile","io.type","io.write",
"math.abs","math.acos","math.asin","math.atan","math.atan2","math.ceil","math.cos","math.cosh","math.deg",
"math.exp","math.floor","math.fmod","math.frexp","math.huge","math.ldexp","math.log","math.log10","math.max",
"math.min","math.modf","math.pi","math.pow","math.rad","math.random","math.randomseed","math.sin","math.sinh",
"math.sqrt","math.tan","math.tanh",
"os.clock","os.date","os.difftime","os.execute","os.exit","os.getenv","os.remove","os.rename","os.setlocale",
"os.time","os.tmpname",
"package.cpath","package.loaded","package.loaders","package.loadlib","package.path","package.preload",
"package.seeall",
"string.byte","string.char","string.dump","string.find","string.format","string.gmatch","string.gsub",
"string.len","string.lower","string.match","string.rep","string.reverse","string.sub","string.upper",
"table.concat","table.insert","table.maxn","table.remove","table.sort"
]);
var keywords = wordRE(["and","break","elseif","false","nil","not","or","return",
"true","function", "end", "if", "then", "else", "do",
"while", "repeat", "until", "for", "in", "local" ]);
var indentTokens = wordRE(["function", "if","repeat","do", "\\(", "{"]);
var dedentTokens = wordRE(["end", "until", "\\)", "}"]);
var dedentPartial = prefixRE(["end", "until", "\\)", "}", "else", "elseif"]);
function readBracket(stream) {
var level = 0;
while (stream.eat("=")) ++level;
stream.eat("[");
return level;
}
function normal(stream, state) {
var ch = stream.next();
if (ch == "-" && stream.eat("-")) {
if (stream.eat("["))
return (state.cur = bracketed(readBracket(stream), "comment"))(stream, state);
stream.skipToEnd();
return "comment";
}
if (ch == "\"" || ch == "'")
return (state.cur = string(ch))(stream, state);
if (ch == "[" && /[\[=]/.test(stream.peek()))
return (state.cur = bracketed(readBracket(stream), "string"))(stream, state);
if (/\d/.test(ch)) {
stream.eatWhile(/[\w.%]/);
return "number";
}
if (/[\w_]/.test(ch)) {
stream.eatWhile(/[\w\\\-_.]/);
return "variable";
}
return null;
}
function bracketed(level, style) {
return function(stream, state) {
var curlev = null, ch;
while ((ch = stream.next()) != null) {
if (curlev == null) {if (ch == "]") curlev = 0;}
else if (ch == "=") ++curlev;
else if (ch == "]" && curlev == level) { state.cur = normal; break; }
else curlev = null;
}
return style;
};
}
function string(quote) {
return function(stream, state) {
var escaped = false, ch;
while ((ch = stream.next()) != null) {
if (ch == quote && !escaped) break;
escaped = !escaped && ch == "\\";
}
if (!escaped) state.cur = normal;
return "string";
};
}
return {
startState: function(basecol) {
return {basecol: basecol || 0, indentDepth: 0, cur: normal};
},
token: function(stream, state) {
if (stream.eatSpace()) return null;
var style = state.cur(stream, state);
var word = stream.current();
if (style == "variable") {
if (keywords.test(word)) style = "keyword";
else if (builtins.test(word)) style = "builtin";
else if (specials.test(word)) style = "variable-2";
}
if ((style != "comment") && (style != "string")){
if (indentTokens.test(word)) ++state.indentDepth;
else if (dedentTokens.test(word)) --state.indentDepth;
}
return style;
},
indent: function(state, textAfter) {
var closing = dedentPartial.test(textAfter);
return state.basecol + indentUnit * (state.indentDepth - (closing ? 1 : 0));
}
};
});
CodeMirror.defineMIME("text/x-lua", "lua"); | PypiClean |
/MindsDB-23.8.3.0.tar.gz/MindsDB-23.8.3.0/mindsdb/integrations/handlers/reddit_handler/reddit_tables.py | import pandas as pd
from mindsdb.integrations.libs.api_handler import APITable
from mindsdb_sql.parser import ast
from mindsdb.integrations.utilities.sql_utils import extract_comparison_conditions
class CommentTable(APITable):
def select(self, query: ast.Select) -> pd.DataFrame:
'''Select data from the comment table and return it as a pandas DataFrame.
Args:
query (ast.Select): The SQL query to be executed.
Returns:
pandas.DataFrame: A pandas DataFrame containing the selected data.
'''
reddit = self.handler.connect()
submission_id = None
conditions = extract_comparison_conditions(query.where)
for condition in conditions:
if condition[0] == '=' and condition[1] == 'submission_id':
submission_id = condition[2]
break
if submission_id is None:
raise ValueError('Submission ID is missing in the SQL query')
submission = reddit.submission(id=submission_id)
submission.comments.replace_more(limit=None)
result = []
for comment in submission.comments.list():
data = {
'id': comment.id,
'body': comment.body,
'author': comment.author.name if comment.author else None,
'created_utc': comment.created_utc,
'score': comment.score,
'permalink': comment.permalink,
'ups': comment.ups,
'downs': comment.downs,
'subreddit': comment.subreddit.display_name,
}
result.append(data)
result = pd.DataFrame(result)
self.filter_columns(result, query)
return result
def get_columns(self):
'''Get the list of column names for the comment table.
Returns:
list: A list of column names for the comment table.
'''
return [
'id',
'body',
'author',
'created_utc',
'permalink',
'score',
'ups',
'downs',
'subreddit',
]
def filter_columns(self, result: pd.DataFrame, query: ast.Select = None):
columns = []
if query is not None:
for target in query.targets:
if isinstance(target, ast.Star):
columns = self.get_columns()
break
elif isinstance(target, ast.Identifier):
columns.append(target.value)
if len(columns) > 0:
result = result[columns]
class SubmissionTable(APITable):
def select(self, query: ast.Select) -> pd.DataFrame:
'''Select data from the submission table and return it as a pandas DataFrame.
Args:
query (ast.Select): The SQL query to be executed.
Returns:
pandas.DataFrame: A pandas DataFrame containing the selected data.
'''
reddit = self.handler.connect()
subreddit_name = None
sort_type = None
conditions = extract_comparison_conditions(query.where)
for condition in conditions:
if condition[0] == '=' and condition[1] == 'subreddit':
subreddit_name = condition[2]
elif condition[0] == '=' and condition[1] == 'sort_type':
sort_type = condition[2]
elif condition[0] == '=' and condition[1] == 'items':
items = int(condition[2])
if not sort_type:
sort_type = 'hot'
if not subreddit_name:
return pd.DataFrame()
if sort_type == 'new':
submissions = reddit.subreddit(subreddit_name).new(limit=items)
elif sort_type == 'rising':
submissions = reddit.subreddit(subreddit_name).rising(limit=items)
elif sort_type == 'controversial':
submissions = reddit.subreddit(subreddit_name).controversial(limit=items)
elif sort_type == 'top':
submissions = reddit.subreddit(subreddit_name).top(limit=items)
else:
submissions = reddit.subreddit(subreddit_name).hot(limit=items)
result = []
for submission in submissions:
data = {
'id': submission.id,
'title': submission.title,
'author': submission.author.name if submission.author else None,
'created_utc': submission.created_utc,
'score': submission.score,
'num_comments': submission.num_comments,
'permalink': submission.permalink,
'url': submission.url,
'ups': submission.ups,
'downs': submission.downs,
'num_crossposts': submission.num_crossposts,
'subreddit': submission.subreddit.display_name,
'selftext': submission.selftext,
}
result.append(data)
result = pd.DataFrame(result)
self.filter_columns(result, query)
return result
def get_columns(self):
'''Get the list of column names for the submission table.
Returns:
list: A list of column names for the submission table.
'''
return [
'id',
'title',
'author',
'created_utc',
'permalink',
'num_comments',
'score',
'ups',
'downs',
'num_crossposts',
'subreddit',
'selftext'
]
def filter_columns(self, result: pd.DataFrame, query: ast.Select = None):
columns = []
if query is not None:
for target in query.targets:
if isinstance(target, ast.Star):
columns = self.get_columns()
break
elif isinstance(target, ast.Identifier):
columns.append(target.parts[-1])
else:
raise NotImplementedError
else:
columns = self.get_columns()
columns = [name.lower() for name in columns]
if len(result) == 0:
result = pd.DataFrame([], columns=columns)
else:
for col in set(columns) & set(result.columns) ^ set(columns):
result[col] = None
result = result[columns]
if query is not None and query.limit is not None:
return result.head(query.limit.value)
return result | PypiClean |
/KeSi-1.6.0-py3-none-any.whl/kesi/susia/POJ.py | import unicodedata
import re
from kesi.butkian.kongiong import KHIN_SIANN_HU
from kesi.susia.kongke import thiah, tshiau_tuasiosia, SuSiaTshoNgoo
def tsuanPOJ(bun):
si_khinsiann = bun.startswith(KHIN_SIANN_HU)
if si_khinsiann:
bun = bun.replace(KHIN_SIANN_HU, '')
try:
siann, un, tiau, tuasiosia = thiah(bun)
except SuSiaTshoNgoo:
return bun
poj = kapPOJ(siann, un, tiau)
kiatko = tshiau_tuasiosia(tuasiosia, poj)
if si_khinsiann:
kiatko = '--{}'.format(kiatko)
return kiatko
def kapPOJ(siann, un, tiau):
return unicodedata.normalize(
'NFC',
臺羅數字調轉白話字.轉白話字(siann, un, tiau)
)
class 臺羅數字調轉白話字():
@classmethod
def 轉白話字(cls, 聲, 韻, 調):
白話字聲 = cls.轉白話字聲(聲)
白話字韻 = cls.轉白話字韻(韻)
白話字調 = cls.轉白話字調(調)
白話字傳統調韻 = cls.白話字韻標傳統調(白話字韻, 白話字調)
return (
白話字聲 +
白話字傳統調韻
)
@classmethod
def 轉白話字聲(cls, 聲):
白話字聲 = None
if 聲 == 'ts':
白話字聲 = 'ch'
elif 聲 == 'tsh':
白話字聲 = 'chh'
else:
白話字聲 = 聲
return 白話字聲
@classmethod
def 轉白話字韻(cls, un):
un = (
un
.replace('nn', 'ⁿ')
.replace('oo', 'o͘')
.replace('ua', 'oa')
.replace('ue', 'oe')
.replace('ing', 'eng')
.replace('ik', 'ek')
)
return un
@classmethod
def 轉白話字調(cls, tiau):
        # tone 9: the Tâi-lô double acute (a̋, U+030B) becomes the POJ breve (ă, U+0306)
        return tiau.replace('\u030b', '\u0306')
@classmethod
def 白話字韻標傳統調(cls, 白話字韻無調, 調):
該標調的字 = ''
if 'o͘' in 白話字韻無調:
該標調的字 = 'o͘'
elif re.search('(iau)|(oai)', 白話字韻無調):
            # triphthong: the tone mark always goes on the 'a'
該標調的字 = 'a'
elif re.search('[aeiou]{2}', 白話字韻無調):
            # diphthong
if 白話字韻無調[0] == 'i':
該標調的字 = 白話字韻無調[1]
elif 白話字韻無調[1] == 'i':
該標調的字 = 白話字韻無調[0]
elif len(白話字韻無調) == 2:
# xx
該標調的字 = 白話字韻無調[0]
elif 白話字韻無調[-1] == 'ⁿ' and 白話字韻無調[-2:] != 'hⁿ':
# xxⁿ
該標調的字 = 白話字韻無調[0]
else:
# xxn, xxng, xxhⁿ
該標調的字 = 白話字韻無調[1]
elif re.search('[aeiou]', 白話字韻無調):
            # single vowel
該標調的字 = 白話字韻無調[0]
elif 'ng' in 白話字韻無調:
# ng, mng
該標調的字 = 'n'
elif 'm' in 白話字韻無調:
該標調的字 = 'm'
結果 = cls.加上白話字調符(白話字韻無調, 該標調的字, 調)
return 結果
@classmethod
def 加上白話字調符(cls, 白話字韻無調, 標調字母, 調):
return 白話字韻無調.replace(標調字母, 標調字母 + 調, 1) | PypiClean |
/Flask-QiniuStorage-0.9.5.zip/Flask-QiniuStorage-0.9.5/README.rst | Flask-QiniuStorage
==================
A Flask extension for `Qiniu Cloud Storage <http://www.qiniu.com/>`_ (Qiniu Storage for Flask).
Installation
------------
.. code-block:: python
pip install Flask-QiniuStorage
Configuration
-------------
+------------------------+------------------------------+
| Config key             | Description                  |
+========================+==============================+
| QINIU_ACCESS_KEY       | Qiniu Access Key             |
+------------------------+------------------------------+
| QINIU_SECRET_KEY       | Qiniu Secret Key             |
+------------------------+------------------------------+
| QINIU_BUCKET_NAME      | Qiniu bucket name            |
+------------------------+------------------------------+
| QINIU_BUCKET_DOMAIN    | Domain bound to the bucket   |
+------------------------+------------------------------+
Usage
-----
.. code-block:: python
from flask import Flask
from flask_qiniustorage import Qiniu
    QINIU_ACCESS_KEY = 'Qiniu Access Key'
    QINIU_SECRET_KEY = 'Qiniu Secret Key'
    QINIU_BUCKET_NAME = 'Qiniu bucket name'
    QINIU_BUCKET_DOMAIN = 'Domain bound to the bucket'
app = Flask(__name__)
app.config.from_object(__name__)
qiniu_store = Qiniu(app)
    # or
# qiniu_store = Qiniu()
# qiniu_store.init_app(app)
    # Save a file to Qiniu
@app.route('/save')
def save():
data = 'data to save'
filename = 'filename'
ret, info = qiniu_store.save(data, filename)
return str(ret)
    # Delete a file from the Qiniu bucket
@app.route('/delete')
def delete():
filename = 'filename'
ret, info = qiniu_store.delete(filename)
return str(ret)
    # Get the public URL for a given filename
@app.route('/url')
def url():
filename = 'filename'
return qiniu_store.url(filename)
See *tests.py* for reference.
Return values
-------------
The **ret** and **info** values returned by **save** and **delete** are the return values of the
corresponding APIs in the `qiniu python-sdk <https://github.com/qiniu/python-sdk>`_; print them after successful and failed operations to inspect their contents:
.. code-block:: python
    print(ret)
    print(info)
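A successful ``save`` typically returns a ``ret`` similar to the following (the values are illustrative)::
    {'hash': 'Fh8xVqod2MQ1mocfI4S4KpRL6D98', 'key': 'filename'}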
Testing
-------
$ python tests.py
License
-------
The MIT License (MIT). See the **LICENSE** file for details. | PypiClean
/LargeNumbers-0.1.8.tar.gz/LargeNumbers-0.1.8/README.md | <!---
Copyright 2022 Hamidreza Sadeghi. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Large Numbers
This repository provides a library for performing the four basic operations of addition, subtraction, multiplication and division on large and very large numbers. We have tried to make the library as fast and efficient as possible.
Note: exponentiation (powers) is not supported yet, but it is planned for future versions.
## Installation
To install, you can use `pip` as follows.
<br/>
```
pip install LargeNumbers
```
<br/>
To install from source, clone the repository, change into the project directory, and install the library using `pip`.
<br/>
```
git clone https://github.com/HRSadeghi/LargeNumbers.git
cd LargeNumbers
pip install .
```
## Usage
The easiest way to use this library is as follows:
<br/>
```python
from LargeNumbers.LargeNumber import LargeNumber, largeNumberFormat
a = LargeNumber('125763678041689463179.45761346709461437894')
b = LargeNumber('-746011541145.47464169741644487000085')
# Negation
print(-a)
print(-b)
# Addition and Subtraction
print(a+b)
print(a-b)
print(-a+b)
# Multiplication
print(a*b)
# Division
largeNumberFormat.precision = 100
largeNumberFormat.return_fracation = False
print(a/b)
```
<br/>
<br/>
In the above code snippet, the number of digits produced by a division may be very large, so a maximum can be set using `largeNumberFormat.precision`. If the division produces more digits than `largeNumberFormat.precision` allows, the letter `L` is appended to the number (this letter has no effect on further calculations).
```python
from LargeNumbers.LargeNumber import LargeNumber, largeNumberFormat
largeNumberFormat.precision = 2
a = LargeNumber('1')
b = LargeNumber('7')
print(a/b)
```
<br/>
You can also define one of the numbers as a `string`, `int` or `float`:
```python
from LargeNumbers.LargeNumber import LargeNumber, largeNumberFormat
a = LargeNumber('125763678041689463179.45761346709461437894')
b = '-746011541145.47464169741644487000085'
# Ops
print(a+b)
print(a-b)
print(a*b)
print(a/b)
a = '125763678041689463179.45761346709461437894'
b = LargeNumber('-746011541145.47464169741644487000085')
# Ops
print()
print(a+b)
print(a-b)
print(a*b)
print(a/b)
```
<br>
However, if the operand is a plain `string`, you cannot apply unary negation to it:
<br>
```python
from LargeNumbers.LargeNumber import LargeNumber, largeNumberFormat
a = '125763678041689463179.45761346709461437894'
b = LargeNumber('-746011541145.47464169741644487000085')
print(-a+b)
# In this case, you will encounter the following error
"""
TypeError: bad operand type for unary -: 'str'
"""
```
<br/>
You can also give input numbers as fractions:
<br/>
```python
from LargeNumbers.LargeNumber import LargeNumber, largeNumberFormat
a = LargeNumber('1/2')
b = LargeNumber('-3/14')
# Ops (return the result as a fraction)
largeNumberFormat.return_fracation = True
print(a+b)
print(a-b)
print(a*b)
print(a/b)
##
# Ops (return the result as a decimal)
largeNumberFormat.precision = 5
largeNumberFormat.return_fracation = False
print()
print(a+b)
print(a-b)
print(a*b)
print(a/b)
##
```
<br/>
It is also possible to mix decimal and fractional inputs and get the output as a fraction or as a decimal:
<br/>
```python
from LargeNumbers.LargeNumber import LargeNumber, largeNumberFormat
a = LargeNumber('2.142')
b = LargeNumber('-3/14')
# Ops (return the result as a fraction)
largeNumberFormat.return_fracation = True
print(a+b)
print(a-b)
print(a*b)
print(a/b)
##
a = LargeNumber('1.134')
b = LargeNumber('-1.57')
# Ops (return the result as a decimal)
largeNumberFormat.return_fracation = True
print()
print(a+b)
print(a-b)
print(a*b)
print(a/b)
##
```
## Recurring decimal (repeating decimal)
Numbers such as $\dfrac{1}{3}$ or $\dfrac{1}{7}$ do not have a finite decimal representation, which poses a problem for division. They can, however, be written in periodic (repeating) form: $\dfrac{1}{3}$ as $0.\overline{3}$ and $\dfrac{1}{7}$ as $0.\overline{142857}$.
Here, the letter `R` marks the beginning of the period. Thus $0.\overline{3}$ is written as `0.R3`, $0.\overline{7}$ as `0.R7`, and $0.12\overline{42}$ as `0.12R42`.
With this representation, the four operations of addition, subtraction, multiplication and division can be applied directly to repeating decimals.
```python
from LargeNumbers.LargeNumber import LargeNumber, largeNumberFormat
largeNumberFormat.return_repeating_form = True
largeNumberFormat.return_fracation = False
largeNumberFormat.precision = 30
a = LargeNumber('0.R81')
b = LargeNumber('0.134R1')
# Ops
print(a+b)
print(a-b)
print(a*b)
print(a/b)
##
a = LargeNumber('0.12R81')
b = LargeNumber('0.665')
# Ops
print()
print(a+b)
print(a-b)
print(a*b)
print(a/b)
##
a = LargeNumber('1/7')
b = LargeNumber('0.665')
# Ops
print()
print(a+b)
print(a-b)
print(a*b)
print(a/b)
##
```
In the above code snippet, `largeNumberFormat.return_repeating_form` controls whether results are shown in repeating (recurring) form. If the period requires more digits than `largeNumberFormat.precision` allows, the result is not shown in repeating form and an `L` appears at the end of the number instead (this letter has no effect on further calculations).
```python
from LargeNumbers.LargeNumber import LargeNumber, largeNumberFormat
largeNumberFormat.return_repeating_form = True
largeNumberFormat.return_fracation = False
largeNumberFormat.precision = 6
a = LargeNumber('1')
b = LargeNumber('7')
print(a/b)
largeNumberFormat.precision = 5
print()
print(a/b)
``` | PypiClean |
/Espynoza-0.2.5.tar.gz/Espynoza-0.2.5/ESP/handlers/NoiseDetect.py | import machine
import time
import BaseHandler
####################################################################################################
class Handler (BaseHandler.Handler):
'''
Target : ESP8266,
Description : Read input connected to a digital sound input module, detect noise
Parameters : Period: The sample frequency, in milliseconds
Params:
- Name of the pin to read. Is also used as Tag in MQTT topic
- Threshold: the minimal count of pulses that are considered noise
Returns : State of the sound, > 0 if noisy. Only the first 'silence' detected is notified, then we keep quiet.
Noise, on the other hand, is always sent as a value.
'''
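    # Hedged configuration sketch (format inferred from how f_Params is
    # unpacked below): each entry is (pin_name, threshold), e.g.
    #   f_Params = [('D5', 20)]  ->  watch pin 'D5' and report noise when
    #   more than 20 edges are counted within one second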
def __init__(self, p_Context):
BaseHandler.Handler.__init__(self, p_Context)
self.f_StartTime = time.ticks_ms()
self.f_Count = {}
self.f_OldState = {}
for l_PinParams in self.f_Params:
l_Pin = self.f_Pins[l_PinParams[0]]
l_Pin.irq(trigger=machine.Pin.IRQ_RISING | machine.Pin.IRQ_FALLING, handler=self.IRQ)
self.f_Count [str(l_Pin)] = 0
self.f_OldState[str(l_Pin)] = False
#########################
def periodic(self, p_Now):
        # wait until a full one-second sampling window has elapsed
        if (1000 - time.ticks_diff(p_Now, self.f_StartTime)) > 0:
return
for l_PinParams, l_Threshold in self.f_Params:
l_PinString = str(self.f_Pins[l_PinParams])
l_NewState = self.f_Count[l_PinString] > l_Threshold
if l_NewState:
self.f_MQTT.publish('1/' + l_PinParams, self.f_Count[l_PinString])
elif self.f_OldState[l_PinString]:
self.f_MQTT.publish('1/' + l_PinParams, 0)
self.f_OldState[l_PinString] = l_NewState
self.f_Count[l_PinString] = 0
self.f_StartTime = p_Now
#########################
    def IRQ(self, p_Pin):
        # count every edge (rising or falling) seen on the sound-module pin
self.f_Count[str(p_Pin)] += 1 | PypiClean |
/ItunesLibrarian-1.0.4-py3-none-any.whl/ituneslibrarian/modify.py | import os.path
# import urllib.request
import ituneslibrarian.utils as utils
from mutagen.easyid3 import EasyID3
from mutagen.mp3 import MP3
def library(library):
for track in library:
title = track["Name"]
tokens = []
# location = urllib.request.unquote(
# track["Location"][7:].replace("%20", " "))
location = track["Location"][7:].replace("%20", " ")
filename, file_extension = os.path.splitext(location)
utils.notice_print(title)
if os.path.exists(location):
if file_extension == ".mp3":
audio = MP3(location, ID3=EasyID3)
audio["artistsort"] = ['']
audio["titlesort"] = ['']
audio["albumsort"] = ['']
if "title" in audio.keys():
if audio["title"][0] != title:
utils.warning_print('different titles\n\tTitle: ' +
str(audio["title"]) + "\n" + '\tName: ' + title)
title_selection = utils.prompt(
'Which one would you like to keep? ', ["1", "2", "s"], 0)
if title_selection == "2":
audio["title"] = [title]
elif title_selection == "1":
title = audio["title"][0]
else:
utils.warning_print("no title")
title_duplication = utils.prompt(
'Would you like to clone the name? ', ["1", "2"], 0)
if title_duplication == "1":
audio["title"] = [title]
elif title_duplication == "2":
audio["title"] = []
if "artist" in audio.keys():
if "albumartist" in audio.keys():
if audio["artist"] != audio["albumartist"]:
utils.warning_print('different artists\n\tArtist: ' + str(
audio["artist"]) + "\n" + '\tAlbum artist: ' + str(audio["albumartist"]))
artist_selection = utils.prompt(
'Which one would you like to keep? ', ["1", "2", "s"], 2)
if artist_selection == "1":
audio["albumartist"] = audio["artist"]
elif artist_selection == "2":
audio["artist"] = audio["albumartist"]
else:
utils.warning_print("no album artist")
artist_duplication = utils.prompt(
'Would you like to substitute < no album artist > with < ' + audio["artist"][0] + ' >? ', ["1", "2", "s"], 1)
if artist_duplication == "1":
audio["albumartist"] = audio["artist"]
elif artist_duplication == "2":
audio["albumartist"] = []
else:
utils.warning_print("no artist")
title_components = title.split("-")
title_components_dir = {}
i = 0
noconflicts = {"title": False, "artist": False}
if len(title_components) > 1:
utils.warning_print("splittable name")
for comp in title_components:
comp = comp.strip()
title_components_dir[i] = comp
if "title" in audio.keys() and len(audio["title"]) > 0 and comp.lower() == audio["title"][0].lower():
noconflicts["title"] = comp
del title_components_dir[i]
if "artist" in audio.keys() and len(audio["artist"]) > 0 and comp.lower() == audio["artist"][0].lower():
noconflicts["artist"] = comp
del title_components_dir[i]
print("\t" + str(i) + " - " + comp)
i += 1
                        print(len(title_components_dir))
if len(title_components_dir) == 1:
suggestion = 0
else:
suggestion = False
if noconflicts["title"] is False:
newtitle = utils.prompt(
'Which term is the title? ', [str(val) for val in list(title_components_dir.keys())] + ["s"], suggestion)
if newtitle != "s":
audio["title"] = title_components[
int(newtitle)].strip()
if noconflicts["artist"] is False:
newartist = utils.prompt(
'Which term is the artist? ', [str(val) for val in list(title_components_dir.keys())] + ["s"], suggestion)
if newartist != "s":
audio["artist"] = title_components[
int(newartist)].strip()
audio.save()
else:
utils.error_print(
"Wrong file extension. Extension: " + file_extension)
else:
utils.error_print("File not found. Location: " + location)
utils.notice_print("All done.\n\n") | PypiClean |
/MAVR-0.93.tar.gz/MAVR-0.93/Pipelines/Sanger.py |
import os
from collections import OrderedDict
import numpy as np
from Bio import SeqIO
from Pipelines.Abstract import Pipeline
from RouToolPa.Collections.General import IdList, TwoLvlDict
from RouToolPa.Routines import MatplotlibRoutines
class SangerPipeline(Pipeline):
def __init__(self, workdir="./", max_threads=1):
Pipeline.__init__(self, workdir=workdir, max_threads=max_threads)
self.dirs = {"fastq": {
"raw": [],
"trimmed": [],
},
"fasta": {
"raw": [],
"trimmed": [],
},
"qual_plot": {
"raw": [],
"trimmed": [],
},
}
self.sanger_extention_list = [".ab1"]
@staticmethod
def is_sanger_file(filename):
if not os.path.isdir(filename) and ((filename[-4:] == ".ab1") or (filename[-7:] == ".ab1.gz") or (filename[-7:] == ".ab1.bz2")):
return True
return False
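    # Hedged usage sketch (paths and thresholds are illustrative):
    #   SangerPipeline(workdir="sanger_out").handle_sanger_data(
    #       "raw_ab1_dir", "run1", min_mean_qual=20, min_median_qual=20, min_len=50)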
def handle_sanger_data(self, input_dir, output_prefix, outdir=None, read_subfolders=False,
min_mean_qual=0, min_median_qual=0, min_len=50):
if outdir:
self.workdir = outdir
self.init_dirs()
sanger_filelist = self.make_list_of_path_to_files(input_dir,
expression=self.is_sanger_file,
recursive=read_subfolders,
return_absolute_paths=True)
stat_dict = TwoLvlDict()
record_dict = OrderedDict()
trimmed_record_dict = OrderedDict()
excluded_list = IdList()
excluded_counter = 0
low_quality_counter = 0
too_short_counter = 0
merged_raw_fastq = "%s/%s.raw.fastq" % (self.workdir, output_prefix)
merged_raw_fasta = "%s/%s.raw.fasta" % (self.workdir, output_prefix)
merged_trimmed_fastq = "%s/%s.trimmed.fastq" % (self.workdir, output_prefix)
merged_trimmed_fasta = "%s/%s.trimmed.fasta" % (self.workdir, output_prefix)
for filename in sanger_filelist:
filename_list = self.split_filename(filename)
record_raw_fastq = "%s/fastq/raw/%s.raw.fastq" % (self.workdir, filename_list[1])
record_raw_fasta = "%s/fasta/raw/%s.raw.fasta" % (self.workdir, filename_list[1])
record_raw_qual_plot_prefix = "%s/qual_plot/raw/%s.raw.qual" % (self.workdir, filename_list[1])
record_trimmed_fastq = "%s/fastq/trimmed/%s.trimmed.fastq" % (self.workdir, filename_list[1])
record_trimmed_fasta = "%s/fasta/trimmed/%s.trimmed.fasta" % (self.workdir, filename_list[1])
record_trimmed_qual_plot_prefix = "%s/qual_plot/trimmed/%s.trimmed.qual" % (self.workdir, filename_list[1])
record = SeqIO.read(self.metaopen(filename, "rb"), format="abi")
record_dict[record.id] = record
SeqIO.write(record, record_raw_fastq, format="fastq")
SeqIO.write(record, record_raw_fasta, format="fasta")
            # _abi_trim is a private Biopython helper implementing Mott's trimming algorithm
            trimmed_record = SeqIO.AbiIO._abi_trim(record)
stat_dict[record.id] = OrderedDict({
"raw_len": len(record),
"raw_mean_qual": np.mean(record.letter_annotations["phred_quality"]),
"raw_median_qual": np.median(record.letter_annotations["phred_quality"]),
"trimmed_len": len(trimmed_record),
"trimmed_mean_qual": np.mean(trimmed_record.letter_annotations["phred_quality"]),
"trimmed_median_qual": np.median(trimmed_record.letter_annotations["phred_quality"]),
"retained": "-",
})
MatplotlibRoutines.draw_bar_plot(record.letter_annotations["phred_quality"], record_raw_qual_plot_prefix,
extentions=["png"], xlabel="Position", ylabel="Phred quality",
title="Per base quality", min_value=None, max_value=None, new_figure=True,
figsize=(3 * (int(len(record) / 100) + 1), 3), close_figure=True)
if stat_dict[record.id]["trimmed_len"] >= min_len:
                # apply quality filtering when either threshold is set
                if min_median_qual or min_mean_qual:
if (stat_dict[record.id]["trimmed_median_qual"] >= min_median_qual) and (stat_dict[record.id]["trimmed_mean_qual"] >= min_mean_qual):
stat_dict[record.id]["retained"] = "+"
else:
low_quality_counter += 1
else:
stat_dict[record.id]["retained"] = "+"
else:
too_short_counter += 1
if stat_dict[record.id]["retained"] == "-":
                excluded_counter += 1
                excluded_list.append(record.id)
continue
SeqIO.write(trimmed_record, record_trimmed_fastq, format="fastq")
SeqIO.write(trimmed_record, record_trimmed_fasta, format="fasta")
MatplotlibRoutines.draw_bar_plot(trimmed_record.letter_annotations["phred_quality"],
record_trimmed_qual_plot_prefix,
extentions=["png"], xlabel="Position", ylabel="Phred quality",
title="Per base quality", min_value=None, max_value=None, new_figure=True,
figsize=(3 * (int(len(record) / 100) + 1), 3),
close_figure=True)
trimmed_record_dict[record.id] = trimmed_record
SeqIO.write(self.record_from_dict_generator(record_dict), merged_raw_fastq, format="fastq")
SeqIO.write(self.record_from_dict_generator(record_dict), merged_raw_fasta, format="fasta")
SeqIO.write(self.record_from_dict_generator(trimmed_record_dict), merged_trimmed_fastq, format="fastq")
SeqIO.write(self.record_from_dict_generator(trimmed_record_dict), merged_trimmed_fasta, format="fasta")
excluded_list.write("%s.excluded.ids" % output_prefix)
stat_dict.write(out_filename="%s.stats" % output_prefix)
print("Excluded: %i" % excluded_counter)
print("\tToo short( < %i ): %i" % (min_len, too_short_counter))
print("\tLow quality( median < %i or mean < %i ): %i" % (min_median_qual, min_mean_qual, low_quality_counter)) | PypiClean |
/Hector_Observations_Pipeline-1.4-py3-none-any.whl/hop/hexabundle_allocation/allocate_hexabundles.py | import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import csv
import time
start_time = time.time()
from problem_operations.conflicts.functions import find_all_blocked_magnets,\
create_list_of_fully_blocked_magnets
from problem_operations.conflicts.blocked_magnet import print_fully_blocked_magnets
from problem_operations.extract_data import create_list_of_all_magnets_from_file,get_file
from problem_operations.plots import create_magnet_pickup_areas, draw_magnet_pickup_areas, draw_all_magnets
from problem_operations.position_ordering import create_position_ordering_array
from problem_operations.file_arranging import arrange_guidesFile, merge_hexaAndGuides, create_robotFileArray, \
positioningArray_adjust_and_mergetoFile, finalFiles
from hector.plate import HECTOR_plate
from problem_operations.offsets import hexaPositionOffset
# clusterNum = 2
# tileNum = 0
# limit = 86
# allBatchOfTiles = ['G9','G12','G15']
# galaxyIDrecord = {}
# # for batch in allBatchOfTiles:
# # if batch == 'G9':
# # limit = 104
# # elif batch == 'G12':
# # limit = 86
# # elif batch == 'G15':
# # limit = 95
# for clusterNum in range(2,3):
# if (clusterNum == 1):
# limit = 15
# elif (clusterNum == 2):
# limit = 29
# elif (clusterNum == 3):
# limit = 24
# elif (clusterNum == 4):
# limit = 10
# elif (clusterNum == 5):
# limit = 26
# elif (clusterNum == 6):
# limit = 16
# for tileNum in range(7,10):
def allocate_hexas():
# fileNameGuides = ('GAMA_'+batch+'/Configuration/HECTORConfig_Guides_GAMA_'+batch+'_tile_%03d.txt' % (tileNum))
fileNameGuides = ('For_Ayoan_14p5_exclusion_Clusters/Cluster_%d/Configuration/HECTORConfig_Guides_Cluster_%d_tile_%03d.txt' % (clusterNum, clusterNum, tileNum))
# proxy file to arrange guides in required format to merge with hexa probes
proxyGuideFile = 'galaxy_fields/newfile.txt'
# Adding ID column and getting rid of the header line of Guides cluster to add to the hexa cluster
problem_operations.file_arranging.arrange_guidesFile(fileNameGuides, proxyGuideFile)
# fileNameHexa = ('GAMA_'+batch+'/Configuration/HECTORConfig_Hexa_GAMA_'+batch+'_tile_%03d.txt' % (tileNum))
fileNameHexa = ('For_Ayoan_14p5_exclusion_Clusters/Cluster_%d/Configuration/HECTORConfig_Hexa_Cluster_%d_tile_%03d.txt' % (clusterNum, clusterNum, tileNum))
plate_file = ('For_Ayoan_14p5_exclusion_Clusters/Cluster_%d/Output/Hexa_and_Guides_Cluster_%d_tile_%03d.txt' % (clusterNum, clusterNum, tileNum))
# plate_file = get_file('GAMA_'+batch+'/Output/Hexa_and_Guides_GAMA_'+batch+'_tile_%03d.txt' % (tileNum))
# Adding guides cluster txt file to hexa cluster txt file
problem_operations.file_arranging.merge_hexaAndGuides(fileNameHexa, proxyGuideFile, plate_file, clusterNum, tileNum)
# extracting all the magnets and making a list of them from the plate_file
all_magnets = problem_operations.extract_data.create_list_of_all_magnets_from_file(get_file(plate_file))
#### Offset functions- still a work in progress- need to determine input source and add column to output file
all_magnets = problem_operations.offsets.hexaPositionOffset(all_magnets)
# create magnet pickup areas for all the magnets
problem_operations.plots.create_magnet_pickup_areas(all_magnets)
#************** # creating plots and drawing pickup areas
plt.clf()
plt.close()
HECTOR_plate().draw_circle('r')
problem_operations.plots.draw_magnet_pickup_areas(all_magnets, '--c')
#**************
# test for collision and detect magnets which have all pickup directions blocked
conflicted_magnets = problem_operations.conflicts.functions.find_all_blocked_magnets(all_magnets)
# create a list of the fully blocked magnets
fully_blocked_magnets = problem_operations.conflicts.functions.create_list_of_fully_blocked_magnets(conflicted_magnets)
# print the fully blocked magnets out in terminal and record in conflicts file
conflictsRecord = 'galaxy_fields/Conflicts_Index.txt'
problem_operations.conflicts.blocked_magnet.print_fully_blocked_magnets(fully_blocked_magnets,conflictsRecord, fileNameHexa)
conflictFile = 'galaxy_fields/unresolvable_conflicts.txt'
flagsFile = 'galaxy_fields/Flags.txt'
#*** Choose former method OR median method OR larger bundle prioritized method for hexabundle allocation ***
positioning_array,galaxyIDrecord = problem_operations.position_ordering.create_position_ordering_array(all_magnets, fully_blocked_magnets, \
conflicted_magnets, galaxyIDrecord, clusterNum, tileNum, conflictFile, flagsFile)
# draw all the magnets in the plots created earlier
figureFile = ('figures/Cluster_%d/savedPlot_cluster_%d_tile_%03d.pdf' % (clusterNum,clusterNum,tileNum))
problem_operations.plots.draw_all_magnets(all_magnets,clusterNum,tileNum,figureFile) #***********
# checking positioning_array prints out all desired parameters
#print(positioning_array)
# insert column heading and print only rectangular magnet rows in the csv file
newrow = ['Magnet', 'Label', 'Center_x', 'Center_y', 'rot_holdingPosition', 'rot_platePlacing', 'order', 'Pickup_option', 'ID','Index', 'Hexabundle']
newrow_circular = ['Magnet', 'Label', 'Center_x', 'Center_y', 'holding_position_ang', 'plate_placement_ang', 'order', 'Pickup_option', 'ID', 'Index', 'Hexabundle']
# final two output files
outputFile = ('For_Ayoan_14p5_exclusion_Clusters/Cluster_%d/Output_with_Positioning_array/Hexa_and_Guides_with_PositioningArray_Cluster_%d_tile_%03d.txt' \
% (clusterNum, clusterNum, tileNum))
# ('GAMA_'+batch+'/Output_with_Positioning_array/Hexa_and_Guides_with_PositioningArray_GAMA_'+batch+'_tile_%03d.txt' % (tileNum)
robotFile = ('For_Ayoan_14p5_exclusion_Clusters/Cluster_%d/Output_for_Robot/Robot_Cluster_%d_tile_%03d.txt' \
% (clusterNum, clusterNum, tileNum))
#('GAMA_'+batch+'/Output_for_Robot/Robot_GAMA_'+batch+'_tile_%03d.txt' % (tileNum)
# creating robotFile array and storing it in robot file
positioning_array, robotFilearray = problem_operations.file_arranging.create_robotFileArray(positioning_array,robotFile,newrow)
# adjusting the positioning array to merge only selected parameters to the output file
positioning_array, positioning_array_circular = problem_operations.file_arranging.positioningArray_adjust_and_mergetoFile(positioning_array, plate_file, outputFile, newrow,newrow_circular)
# produce final files with consistent layout and no extra commas
problem_operations.file_arranging.finalFiles(outputFile, robotFile)
# just to check each tile's whole operation time
print("\t \t ----- %s seconds -----" % (time.time() - start_time))
# Comment out all ***** marked plot functions above(lines 81-84,105s)
# to run whole batch of tiles fast without plots | PypiClean |
/Naver_Series-1.1.3.Beta.tar.gz/Naver_Series-1.1.3.Beta/README.md | Naver Series
---
> 🙏sorry for about immature english
<h3>Installation</h3>
---
requires requests, beautifulsoup4
`python3 -m pip install Naver-Series`
> https://pypi.org/project/Naver-Series/1.1.0.Beta/
---
**option**
| Symbol | Description |
|--------|-------------------------------------------------------------------------------------|
| * | If this character is included, there is no need to use it. Or not contain the value |
<h3>1. search</h3>
***
**params**
| Name | Type | Description |
|---------|---------------------------|-------------|
| keyword | str | |
| *focus | 'comic', 'novel', 'ebook' | sort type |
```py
#request example
import NaverSeries
NaverSeries.search('쇼미더', focus='comic')
```
**response**
| Name | Type |
|----------|-------------|
| contents | list\<dict> |
**contents**
| Name\<str> | Value-Type | Description |
|------------|------------|------------------------|
| title | str | name of book |
| id | int | id of book (productNo) |
| author | list\<str> | authors of book |
```py
#response example
{
'contents':
[
{
'title': '쇼미더 엔터(총 201화/완결)',
'id': 4189091,
'author': ['베베꼬인']
},
{
'title': '쇼미더 엔터 [단행본](총 8권/완결)',
'id': 5332770,
'author': ['베베꼬인']
},
{
'title': '쇼미더럭키짱!(총 58화/미완결)',
'id': 6733269,
'author': ['김성모', '박태준']
},
{
'title': '쇼 미 더 스타크래프트 (스타크래프트로 배우는 군사·경제·정치)',
'id': 3430966,
'author': ['이성원 저']
}
]
}
```
<br>
<h3>2. get Info</h3>
***
**params**
| Name | Type | Description |
|------|-----|-------------|
| id | int | book id |
```py
#request example
import NaverSeries
book = NaverSeries.getInfo(5133669)
print(book)
```
**response**
| Name\<str> | Value-Type | Description |
|----------------|------------|---------------------|
| title | str | name of book |
| *description | str | description of book |
| img | str | thumbnail of book |
| *total_episode | int | size of all episode |
| *author | list\<str> | authors of book |
| rating | float | rating of book |
| url | str | link of book |
```py
#response example
{
'title': '전지적 독자 시점 [독점]',
'description': "'이건 내가 아는 그 전개다'\n한순간에 세계가 멸망하고, 새로운 세상이 펼쳐졌다.\n오직 나만이 완주했던 소설 세계에서 평범했던 독자의 새로운 삶이 시작된다.",
'img': 'https://comicthumb-phinf.pstatic.net/20200812_154/pocket_1597221311633UO5eI_JPEG/__1000x1500_v2.jpg?type=m260',
'total_episode': 88,
'author': ['슬리피-C', '싱숑', 'UMI'],
'rating': 9.9,
'url': 'https://series.naver.com/comic/detail.series?productNo=5133669'
}
``` | PypiClean |
/BinTut-0.3.3.tar.gz/BinTut-0.3.3/README.rst | BinTut
@@@@@@
.. image:: https://img.shields.io/pypi/v/bintut.svg
:target: https://pypi.python.org/pypi/BinTut
Dynamic or live demonstration of classical exploitation techniques
of typical memory corruption vulnerabilities,
from debugging to payload generation and exploitation,
for educational purposes :yum:.
What's BinTut
=============
BinTut is a set of tutorials, **as well as** exercises.
Tutorials
---------
See `Get Started`_ for usage information.
If you are a fan of Faiz_, ``Burst Mode`` or ``Single Mode`` should
sound familiar and inspiring.
Burst Mode
++++++++++
Watch and replay to obtain a general understanding of the process.
Use ``-b / --burst`` to control the interval (in seconds).
Note that ``-b0`` means ``Single Mode``, which is the default.
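For example, to replay the ``plain`` course with a half-second interval
(illustrative; any listed course name works here)::
    bintut --burst 0.5 plain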
Single Mode
+++++++++++
Play and examine various contents
such as the stack, registers or memory addresses,
carefully and step by step,
to acquire comprehensive and detailed knowledge of the process.
Use ``Enter`` or ``Ctrl + D`` to step.
You can execute normal GDB_ commands via the prompt.
But note that BinTut won't synchronize the display
when you execute state-changing commands,
e.g. ``stepi`` or ``nexti``,
which are discouraged for the time being.
Another piece of bad news is that readline_ does not work :scream:,
and I can't figure out the reason :scream:.
Exercises
---------
Write exploits that work outside debuggers
when you understand the principles and techniques
via watching and replaying (i.e. rewatching),
careful **playing** (i.e., **Single Mode**),
and most importantly,
**reading the source code responsible for exploit generation**,
which resides in a file named ``exploits.py``.
Installation
============
Notice
------
If pip_ is used to install BinTut,
make sure that you use the pip_ version
corresponding to the Python_ version shipped with GDB_.
For more details, see `#1`_.
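One way to check which Python your GDB embeds
(so that the matching pip_ can be chosen)
is to ask GDB itself; something like this should work::
    gdb -batch -ex 'python import sys; print(sys.version)'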
``pip install bintut`` may or may not work for the time being.
Therefore it's recommended to just clone this repository
and run without installation
as long as necessary libraries are installed
by ``pip install -r requirements.txt``.
Warning
-------
BinTut does not work inside virtualenv at present.
Tested Platforms
----------------
`Arch GNU/Linux`_
+++++++++++++++++
Current version of `Arch GNU/Linux`_ ships GDB_ with Python_ 3,
in which I developed BinTut.
The latest release version should work fine.
- Install ``lib32-glibc``
::
sudo pacman -S lib32-glibc
- Install Python_ 3 and ``pip3``.
::
sudo pacman -S python python-pip
- Install BinTut using ``pip3``
::
sudo pip3 install bintut
- You are ready!
::
bintut -b0.1 jmp-esp
`Fedora GNU/Linux`_
+++++++++++++++++++
The latest Fedora Workstation ships GDB_ with Python_ 3;
this setup has been tested,
and BinTut is known to work properly,
just as on `Arch GNU/Linux`_.
- Install ``glibc.i686`` to support 32-bit programs if needed.
::
sudo dnf install glibc.i686
- Install ``BinTut`` from PyPI.
::
sudo pip3 install bintut
- Give it a try.
::
bintut -b0.1 frame-faking
`Debian GNU/Linux`_
+++++++++++++++++++
GDB_ from the stable branch of `Debian GNU/Linux`_ ships with Python_ 2.
Latest source from Git works with minor problems.
- Add support to 32-bit programs if necessary.
::
sudo dpkg --add-architecture i386
sudo apt-get update
sudo apt-get install libc6:i386
- Clone the latest source code from Git and install requirements.
::
git clone https://github.com/NoviceLive/bintut.git
cd bintut
sudo apt-get install python-pip gdb
pip2 install -r requirements.txt
- Run it without installation.
::
python2 ./bintut.py -b0.1 frame-faking
`Kali GNU/Linux`_
+++++++++++++++++
GDB_ from the latest rolling version of `Kali GNU/Linux`_ ships with Python_ 3.
- Enable ``i386`` support according to aforementioned instructions.
- Install ``pip3``
::
apt-get install python3-pip
- Install the latest BinTut release using ``pip3``
::
pip3 install bintut
- Start hacking!
::
bintut -b0.1 jmp-esp
Requirements
------------
GDB_
++++
Python_ scripting support is required.
BinTut is developed with Python_ 3,
but it's intended to be Python_ 2 compatible.
Therefore, when Python_ 2 yells at you,
feel free to create an issue or send me a pull request.
Known unresolved issues existing on Python_ 2
*********************************************
- Can't display disassembly after returning to shellcode.
- Can't print the payload for some courses.
Ropper_
+++++++
Show information about binary files and find gadgets to
build rop chains for different architectures.
pyelftools_
+++++++++++
Python library for analyzing ELF files
and DWARF debugging information.
Pat_
++++
Customizable Lazy Exploit Pattern Utility.
Colorama_
+++++++++
Simple cross-platform colored terminal text in Python.
Click_
++++++
Python composable command line utility.
.. _`Get Started`:
Get Started
===========
See ``bintut --help`` and give it a shot
via ``bintut --burst 0.1 frame-faking``.
::
./bintut.py --help
Usage: bintut.py [OPTIONS] [COURSE]
Teach You A Binary Exploitation For Great Good.
Options:
-V, --version Show the version and exit.
-l, --list List available courses.
-6, --x64 Use x64 courses.
-A, --aslr Enable ASLR.
-b, --burst FLOAT Use this burst mode interval. [default: 0]
-v, --verbose Be verbose.
-q, --quiet Be quiet.
-h, --help Show this message and exit.
Available Courses
=================
Other courses might be added later.
`Stack-based buffer overflow`_
------------------------------
1. plain
++++++++
Return to plain shellcode.
Linux x86 / x64.
NX: Disabled.
ASLR: Disabled.
Stack Protector: Disabled.
2. `nop-slide`_
+++++++++++++++
Return to NOPs plus shellcode.
Linux x86 / x64.
NX: Disabled.
ASLR: Disabled.
Stack Protector: Disabled.
This course is not demonstrative enough
and shall be updated when the author finds a scenario
where `nop-slide`_ really stands out.
3. jmp-esp
++++++++++
Return to shellcode via JMP ESP / RSP.
Linux x86 / x64.
NX: Disabled.
ASLR: Disabled.
Stack Protector: Disabled.
4. off-by-one NULL
++++++++++++++++++
Variant of ``plain`` `stack-based buffer overflow`_.
Linux x86 / x64.
NX: Disabled.
ASLR: Disabled.
Stack Protector: Disabled.
5. ret2lib_
+++++++++++
Return to functions.
Linux x86.
NX: **Enabled**.
ASLR: Disabled.
Stack Protector: Disabled.
.. _`Notes for x64`:
Notes for x64
*************
On both Linux and Windows, the `ABI of x64`_, unlike that of x86,
passes the first integer arguments (six on Linux, four on Windows)
in registers, which cannot be controlled
without resorting to certain gadgets.
Therefore, this technique may be discussed in the section on ROP_.
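As a hedged sketch (all addresses below are made-up placeholders,
not taken from any real binary), a 64-bit return-into-lib chain
typically needs a ``pop rdi; ret`` gadget to load the first argument register::
    import struct
    POP_RDI_RET = 0x400686  # hypothetical gadget address
    BIN_SH = 0x601048       # hypothetical address of a "/bin/sh" string
    SYSTEM = 0x400560       # hypothetical address of system@plt
    payload = b'A' * 40                        # padding up to the saved RIP
    payload += struct.pack('<Q', POP_RDI_RET)
    payload += struct.pack('<Q', BIN_SH)       # popped into RDI
    payload += struct.pack('<Q', SYSTEM)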
6. frame-faking
+++++++++++++++
Return to chained functions via LEAVE RET gadget.
Linux x86.
NX: **Enabled**.
ASLR: Disabled.
Stack Protector: Disabled.
Notes for x64
*************
See `Notes for x64`_.
Bug Reports
===========
Create `issues <https://github.com/NoviceLive/bintut/issues>`_.
BinTut might or might not work on your system,
but bug reports with necessary information are always welcome.
Tips
----
Remember to include ``bintut --version`` in your report.
You can just submit the verbose log (``stderr``) if out of words,
e.g., ``bintut -v -b0.1 frame-faking 2>log.txt``.
TODO List & You Can Contribute
==============================
- Improve the code if you find something that can be done better.
The codebase of BinTut can always be improved by those
who have a deeper understanding of Python than the author.
Also, there are hardcoded behaviors which can be generalized.
- Change color scheme to red highlight when content changes.
  Currently, our color scheme remains fixed,
  using predefined colors,
  which is not eye-catching or obvious
  when we want to observe significant changes
  in certain registers or specific memory locations.
Here is an example of such change,
the least-significant-**byte** of saved EBP / RBP
being cleared due to an off-by-one NULL write.
  Ref. That is what you would expect in OllyDbg,
  and many other debuggers probably behave in this manner as well.
Ref. Some GDB_ enhancement projects have already implemented this.
- Synchronize the display when executing state-changing commands.
- Add course variants that do not allow NULL bytes.
For example, add variant courses
using ``strcpy`` instead of ``fread`` to trigger overflow,
in order to demonstrate the techniques
  to survive in restrictive environments,
  which is what the real world usually looks like.
- Use a better combination of chained functions for ``frame-faking``.
What follows is the current choice.
Yes, two consecutive ``/bin/sh`` and ``exit``.
::
elif post == 'frame-faking':
payload = (
Faked(offset=offset, address=addr) +
Faked(b'system', ['/bin/sh']) +
Faked(b'execl', ['/bin/sh', '/bin/sh', 0]) +
Faked(b'exit', [0]))
- Support demonstration on Windows and macOS.
References
==========
- `Smashing The Stack For Fun And Profit <http://phrack.org/issues/49/14.html#article>`_
- `The Frame Pointer Overwrite <http://phrack.org/issues/55/8.html#article>`_
- `Advanced return-into-lib(c) exploits (PaX case study) <http://phrack.org/issues/58/4.html#article>`_
.. _Arch GNU/Linux: https://www.archlinux.org/
.. _Fedora GNU/Linux: https://getfedora.org/
.. _Debian GNU/Linux: https://www.debian.org/
.. _Kali GNU/Linux: https://www.kali.org/
.. _pip: https://pip.pypa.io/
.. _Python: https://www.python.org/
.. _Capstone: http://www.capstone-engine.org/
.. _filebytes: https://github.com/sashs/filebytes
.. _#1: https://github.com/NoviceLive/bintut/issues/1
.. _GDB: http://www.gnu.org/software/gdb/
.. _Ropper: https://github.com/sashs/Ropper
.. _pyelftools: https://github.com/eliben/pyelftools
.. _Pat: https://github.com/NoviceLive/pat
.. _Colorama: https://github.com/tartley/colorama
.. _Click: https://github.com/mitsuhiko/click
.. _Stack-based buffer overflow: https://en.wikipedia.org/wiki/Stack_buffer_overflow
.. _nop-slide: https://en.wikipedia.org/wiki/NOP_slide
.. _ret2lib: https://en.wikipedia.org/wiki/Return-to-libc_attack
.. _ROP: https://en.wikipedia.org/wiki/Return-oriented_programming
.. _ABI of x64: https://en.wikipedia.org/wiki/X86_calling_conventions#x86-64_calling_conventions
.. _readline: https://docs.python.org/3/library/readline.html
.. _Faiz: https://en.wikipedia.org/wiki/Kamen_Rider_555
| PypiClean |
/Lu_Project_4_C8123-0.0.2.tar.gz/Lu_Project_4_C8123-0.0.2/Lu_Project_4_C8123/base/metrics.py | import numpy as np
import torch
from torch import Tensor
from torch.nn.functional import nll_loss
from ..other.uncertain_metrics import uncertain
class BaseMetric:
""" Abstract metric class, for record history data.
define the interface for all metric classes.
every subclass should implement 'record'.
"""
def __init__(self, need_output=False):
self._history = {
"output": [],
"losses": [],
"predict": [],
"correct": [],
"uncertainty": [],
}
self.need_output = need_output
def record(self, output: Tensor, target: Tensor) -> float:
""" record info to self._history for every batch
:param output: Tensor, model output, after softmax
:param target: Tensor, label for samples
:return loss: float, batch loss, until now
"""
raise NotImplementedError
def result(self) -> dict:
""" get and clear history """
history = self._history
self._history = {
"output": [],
"losses": [],
"predict": [],
"correct": [],
"uncertainty": [],
}
return history
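# Hedged usage sketch (shapes and values are illustrative):
#   metric = ClassificationMetric()
#   logits = torch.randn(4, 10).log_softmax(dim=1)  # nll_loss expects log-probabilities
#   loss, acc = metric.record(logits, torch.tensor([0, 1, 2, 3]))
#   history = metric.result()  # returns and clears the accumulated history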
class ClassificationMetric(BaseMetric):
""" Metric for Classification.
    TODO: Metric should be split into several classes; otherwise changing one metric forces changes to both History and Metric.
"""
def __init__(self, need_output=False):
super(ClassificationMetric, self).__init__(need_output)
def record(self, output: Tensor, target: Tensor):
""" record metric: acc loss predict, correct, uncertainty
:param output: Tensor, model output, after softmax
:param target: Tensor, label for samples
"""
with torch.no_grad():
if self.need_output:
self._history["outputs"].append(output.tolist())
losses = nll_loss(output, target, reduction='none')
self._history["losses"] += losses.tolist()
predict = output.argmax(dim=1)
self._history["predict"] += predict.tolist()
correct = target.eq(predict.view_as(target))
self._history["correct"] += correct.tolist()
uncertainty = uncertain(output.cpu().numpy())
self._history["uncertainty"] += uncertainty.tolist()
# cal loss,acc
epoch_loss = np.mean(self._history["losses"])
tmp_correct = self._history["correct"]
epoch_acc = np.sum(tmp_correct) / len(tmp_correct)
return epoch_loss, epoch_acc | PypiClean |
/DI_engine-0.4.9-py3-none-any.whl/dizoo/smac/envs/smac_map.py | from pysc2.maps import lib
import os
class SMACMap(lib.Map):
directory = os.path.join(os.path.dirname(__file__), "maps/SMAC_Maps")
download = "https://github.com/oxwhirl/smac#smac-maps"
players = 2
step_mul = 8
game_steps_per_episode = 0
# Copied from smac/env/starcraft2/maps/smac_maps.py
map_param_registry = {
"3m": {
"n_agents": 3,
"n_enemies": 3,
"limit": 60,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 0,
"map_type": "marines",
},
"8m": {
"n_agents": 8,
"n_enemies": 8,
"limit": 120,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 0,
"map_type": "marines",
},
"25m": {
"n_agents": 25,
"n_enemies": 25,
"limit": 150,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 0,
"map_type": "marines",
},
"5m_vs_6m": {
"n_agents": 5,
"n_enemies": 6,
"limit": 70,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 0,
"map_type": "marines",
},
"8m_vs_9m": {
"n_agents": 8,
"n_enemies": 9,
"limit": 120,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 0,
"map_type": "marines",
},
"10m_vs_11m": {
"n_agents": 10,
"n_enemies": 11,
"limit": 150,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 0,
"map_type": "marines",
},
"27m_vs_30m": {
"n_agents": 27,
"n_enemies": 30,
"limit": 180,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 0,
"map_type": "marines",
},
"MMM": {
"n_agents": 10,
"n_enemies": 10,
"limit": 150,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 3,
"map_type": "MMM",
},
"MMM2": {
"n_agents": 10,
"n_enemies": 12,
"limit": 180,
"a_race": "T",
"b_race": "T",
"unit_type_bits": 3,
"map_type": "MMM",
},
"2s3z": {
"n_agents": 5,
"n_enemies": 5,
"limit": 120,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 2,
"map_type": "stalkers_and_zealots",
},
"3s5z": {
"n_agents": 8,
"n_enemies": 8,
"limit": 150,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 2,
"map_type": "stalkers_and_zealots",
},
"infestor_viper": {
"n_agents": 2,
"n_enemies": 9,
"limit": 150,
"a_race": "Z",
"b_race": "Z",
"unit_type_bits": 2,
"map_type": "infestor_viper"
},
"3s5z_vs_3s6z": {
"n_agents": 8,
"n_enemies": 9,
"limit": 170,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 2,
"map_type": "stalkers_and_zealots",
},
"3s_vs_3z": {
"n_agents": 3,
"n_enemies": 3,
"limit": 150,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 0,
"map_type": "stalkers",
},
"3s_vs_4z": {
"n_agents": 3,
"n_enemies": 4,
"limit": 200,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 0,
"map_type": "stalkers",
},
"3s_vs_5z": {
"n_agents": 3,
"n_enemies": 5,
"limit": 250,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 0,
"map_type": "stalkers",
},
"1c3s5z": {
"n_agents": 9,
"n_enemies": 9,
"limit": 180,
"a_race": "P",
"b_race": "P",
"unit_type_bits": 3,
"map_type": "colossi_stalkers_zealots",
},
"2m_vs_1z": {
"n_agents": 2,
"n_enemies": 1,
"limit": 150,
"a_race": "T",
"b_race": "P",
"unit_type_bits": 0,
"map_type": "marines",
},
"corridor": {
"n_agents": 6,
"n_enemies": 24,
"limit": 400,
"a_race": "P",
"b_race": "Z",
"unit_type_bits": 0,
"map_type": "zealots",
},
"6h_vs_8z": {
"n_agents": 6,
"n_enemies": 8,
"limit": 150,
"a_race": "Z",
"b_race": "P",
"unit_type_bits": 0,
"map_type": "hydralisks",
},
"2s_vs_1sc": {
"n_agents": 2,
"n_enemies": 1,
"limit": 300,
"a_race": "P",
"b_race": "Z",
"unit_type_bits": 0,
"map_type": "stalkers",
},
"so_many_baneling": {
"n_agents": 7,
"n_enemies": 32,
"limit": 100,
"a_race": "P",
"b_race": "Z",
"unit_type_bits": 0,
"map_type": "zealots",
},
"bane_vs_bane": {
"n_agents": 24,
"n_enemies": 24,
"limit": 200,
"a_race": "Z",
"b_race": "Z",
"unit_type_bits": 2,
"map_type": "bane",
},
"2c_vs_64zg": {
"n_agents": 2,
"n_enemies": 64,
"limit": 400,
"a_race": "P",
"b_race": "Z",
"unit_type_bits": 0,
"map_type": "colossus",
},
}
# register every map in the registry as a pysc2 Map subclass so that
# pysc2 can look it up by name
for name in map_param_registry.keys():
    globals()[name] = type(name, (SMACMap, ), dict(filename=name))
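# Hedged usage sketch (values taken from the registry above):
#   get_map_params("3s5z")["n_agents"]  -> 8
#   get_map_params("3s5z")["map_type"]  -> "stalkers_and_zealots"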
def get_map_params(map_name):
return map_param_registry[map_name] | PypiClean |
/FastGets-0.3.5.tar.gz/FastGets-0.3.5/fastgets/stats/instance.py |
import time
import datetime
from ..core.client import get_client
from ..utils import utc2datetime
class InstanceStats(object):
@classmethod
def check_rate_limit(cls, instance_id, second_rate_limit):
now = datetime.datetime.now()
key = '{}:rate_limit'.format(instance_id)
field = now.strftime('%Y%m%d%H%M%S')
        # True means the per-second limit has been exceeded
        if second_rate_limit < get_client().hincrby(key, field):
return True
else:
return False
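    # Hedged usage sketch:
    #   if InstanceStats.check_rate_limit('instance-1', second_rate_limit=10):
    #       pass  # over the per-second limit; the caller should back off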
@classmethod
def set_task_active(cls, instance_id, pipe=None):
client = pipe or get_client()
        client.set('{}:task_active_at'.format(instance_id), int(time.time()))  # second-level precision is enough
@classmethod
def incr(cls, instance_id, name, pipe=None):
# name: total success process_error crawl_error
client = pipe or get_client()
client.hincrby('{}:num_stats'.format(instance_id), name)
@classmethod
def add_task_for_time_cost(cls, task, pipe=None):
# name: crawl process
client = pipe or get_client()
        # accumulate the total time spent on crawling and on processing separately
client.hincrbyfloat('{}:time_stats'.format(task.instance_id), 'crawl', task.crawl_seconds)
client.hincrbyfloat('{}:time_stats'.format(task.instance_id), 'process', task.process_seconds)
task_json = task.to_json()
key = '{}:crawl_seconds_ranking'.format(task.instance_id)
client.zadd(key, task_json, -task.crawl_seconds) # 确保大的排前面
# todo 这里直接存储整个 json ,性能肯能有问题
client.zremrangebyrank(key, 10, -1) # 只保存 TOP 10
key = '{}:process_seconds_ranking'.format(task.instance_id)
client.zadd(key, task_json, -task.process_seconds)
client.zremrangebyrank(key, 10, -1)
@classmethod
def get_top_tasks(cls, instance_id):
from ..task import Task
        # TODO: could not think of a better name
client = get_client()
with client.pipeline() as pipe:
pipe.zrange('{}:crawl_seconds_ranking'.format(instance_id), 0, 10)
pipe.zrange('{}:process_seconds_ranking'.format(instance_id), 0, 10)
crawl_tasks, process_tasks = pipe.execute()
crawl_tasks = [
Task.from_json(each)
for each in crawl_tasks
]
process_tasks = [
Task.from_json(each)
for each in process_tasks
]
return crawl_tasks, process_tasks
@classmethod
def get(cls, instance_id):
with get_client().pipeline() as pipe:
pipe.get('{}:task_active_at'.format(instance_id))
pipe.llen('{}:running_pool'.format(instance_id))
pipe.hgetall('{}:num_stats'.format(instance_id))
pipe.hgetall('{}:time_stats'.format(instance_id))
task_active_at, running_num, num_stats, time_stats = pipe.execute()
if task_active_at:
task_active_at = utc2datetime(int(task_active_at))
        running_num = int(running_num or 0)
num_stats = num_stats or {}
total_num = int(num_stats.get('total') or 0)
success_num = int(num_stats.get('success') or 0)
crawl_error_num = int(num_stats.get('crawl_error') or 0)
process_error_num = int(num_stats.get('process_error') or 0)
avg_crawl_seconds = None
avg_process_seconds = None
if success_num:
avg_crawl_seconds = float(time_stats['crawl']) / success_num
avg_process_seconds = float(time_stats['process']) / success_num
return task_active_at, total_num, running_num, success_num, crawl_error_num, process_error_num, \
avg_crawl_seconds, avg_process_seconds | PypiClean |
/MPT5.0.3.1-0.3.1.tar.gz/MPT5.0.3.1-0.3.1/src/MPT5/Mainpro.py |
from Allimp import *
import locale
from Config.Init import *
import GUI.window2 as window2
import GUI.Start as Strt
class mainApp(wx.App):
def OnInit(self):
self.locale = None
wx.Locale.AddCatalogLookupPathPrefix(LOCALE_PATH)
self.config = self.GetConfig()
lang = self.config.Read("Language")
langu_dic = LANGUAGE_LIST
self.UpdateLanguage(langu_dic[int(lang)])
self.SetAppName('Temp5')
if self.config.Read('Splash') != '':
splash = Strt.MySplashScreen(window2)
splash.Show(True)
else:
frame = window2.MainWin()
if self.config.Read('WinSize') != '(-1, -1)':
SIZE = wx.Size(eval(self.config.Read(u'WinSize')))
else:
SIZE = (wx.GetDisplaySize()[0],wx.GetDisplaySize()[1]-30)
frame.SetSize(SIZE)
frame.SetPosition((0,0))
#frame.EnableFullScreenView(True)
frame.Show()
return True
def GetConfig(self):
config = wx.FileConfig(appName='Temp5',localFilename=CONFIG_PATH+'option.ini',globalFilename=CONFIG_PATH+'system1.ini')
return config
def UpdateLanguage(self, lang):
supportedLangs = {"English": wx.LANGUAGE_ENGLISH,
"Farsi": wx.LANGUAGE_FARSI,
"French": wx.LANGUAGE_FRENCH,
"German": wx.LANGUAGE_GERMAN,
"Spanish": wx.LANGUAGE_SPANISH,
"Turkish": wx.LANGUAGE_TURKISH,
}
if self.locale:
assert sys.getrefcount(self.locale) <= 2
del self.locale
        # use .get() to avoid a KeyError for unsupported language names
        if supportedLangs.get(lang):
self.locale = wx.Locale(supportedLangs[lang])
if self.locale.IsOk():
self.locale.AddCatalog("Temp5fa")
# self.locale.AddCatalog("Temp5fr")
self.locale.AddCatalog("Temp5de")
# self.locale.AddCatalog("Temp5sp")
self.locale.AddCatalog("Temp5tr")
else:
self.locale = None
else:
            wx.MessageBox("Language support not found. Please send us an email to request support for a new language!")
def main(argv):
#print(argv)
    # '-c' keeps output on the console; otherwise stdout/stderr are redirected
    if len(argv) > 0 and argv[0] == '-c':
        app = mainApp()
    else:
        app = mainApp(redirect=True)
locale.setlocale(locale.LC_ALL, '')
app.MainLoop()
if __name__ == '__main__':
main(sys.argv[1:]) | PypiClean |