/Akhet-2.0.tar.gz/Akhet-2.0/docs/demo/details.rst

Details
%%%%%%%
development.ini
===============
The config file contains the following settings which aren't in Pyramid's
built-in scaffolds:
* mako.directories: necessary when using Mako, to set the template search
path. (Theoretically you don't need this if you specify renderers by asset
spec rather than by relative path, but I couldn't get that to work.)
* cache.\*: Beaker cache settings. These are not actually necessary because
the demo never uses a cache, but they're here for demonstration.
* session.\*: Beaker session settings. These are necessary if you use sessions
or flash messages.

Beaker supports several kinds of session persistence: in-memory, files,
memcached, database, etc. The demo's configuration uses memory mode, which
holds the sessions in memory until the application quits. It also contains
commented-out settings for file-based sessions, which are Pylons' default.
Experienced developers seem to be choosing memcached mode nowadays.
Memory sessions disappear when the server is restarted, and work only with
multithreaded servers, not multiprocess servers. File-based sessions are
persistent, but add the complications of a directory and permissions and
maintenance. Memcached avoids all these problems, and it also scales to
multiple parallel servers, which can all share a memcached session.
If you copy the session configuration to your application, do change
"session.secret" to a random string. This is used to help ensure the integrity
of the session, to prevent people from hijacking it.
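
A minimal sketch of such a session block, using Beaker's standard setting
names (the key and secret values here are placeholders)::

    # Beaker sessions, memory mode
    session.type = memory
    session.key = akhet_demo
    session.secret = CHANGE-ME-TO-A-LONG-RANDOM-STRING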
Init module and main function
=============================
The main function, in addition to the minimal Pyramid configuration, activates
Beaker sessions and caching, and sets up templates, subscribers, routes, and a
static route. The Beaker setup passes the ``settings`` dict to Beaker; that's
how your settings are read. Pyramid configures Mako the same way behind the
scenes, passing the settings to it. The "add_renderer" line tells Pyramid to
recognize filenames ending in ".html" as Mako templates. We'll look at the
subscribers include in a minute.
Activating static routes involves an include line and a "config.add_static_route"
call.
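
For orientation, here is a condensed sketch of such a main function. Module
and route names follow the demo, but treat this as illustrative rather than
the demo's exact code::

    from pyramid.config import Configurator

    def main(global_config, **settings):
        config = Configurator(settings=settings)
        config.include("pyramid_beaker")  # sessions and caching from settings
        # Recognize *.html files as Mako templates (Pyramid 1.x factory)
        config.add_renderer(".html", "pyramid.mako_templating.renderer_factory")
        config.include(".subscribers")    # renderer globals, URL generator
        config.include("akhet.static")    # provides config.add_static_route
        config.add_static_route("akhet_demo", "static")
        config.add_route("home", "/")
        config.scan()
        return config.make_wsgi_app()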
Helpers
=======
The demo provides a Pylons-like helpers module,
*akhet_demo/lib/helpers.py*. You can put utility functions here for use in
your templates. The module contains imports for WebHelpers' HTML tag helpers,
but they're commented out. (WebHelpers is a Python package containing generic
functions for use in web applications and other applications.) I'm tempted to
actually use the tag helpers in the site template but haven't done so yet.
Most of WebHelpers works with Pyramid, including the popular
``webhelpers.html`` subpackage, ``webhelpers.text``, and ``webhelpers.number``.
You'll have to add a WebHelpers dependency to your application if you want to
use it. The only part of WebHelpers that doesn't work with Pyramid is the
``webhelpers.pylonslib`` subpackage, which depends on Pylons' special globals.
Note that ``webhelpers.paginate`` requires a slightly different configuration
with Pyramid than with Pylons, because ``pylons.url`` is not available. You'll
have to supply a URL generator, perhaps using one of the convenience classes
included in WebHelpers 1.3. Paginate's URL generator is *not* Akhet's URL
generator: it's a different kind of class specific to the paginator's needs.
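
For example, with the WebOb-oriented convenience class from WebHelpers 1.3
(a sketch; the function and variable names are illustrative)::

    from webhelpers.paginate import Page, PageURL_WebOb

    def paginate_records(request, records):
        page_url = PageURL_WebOb(request)  # builds ?page=N URLs from the request
        return Page(records,
                    page=int(request.params.get("page", 1)),
                    items_per_page=20,
                    url=page_url)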
Subscribers
===========
*akhet_demo/subscribers.py* is unique to the demo. It sets up a URL generator
and configures several Pylons-like globals for the template namespace. The only
thing you need in here is the includeme function, which the application's main
function invokes via the ``config.include(".subscribers")`` line.
The ``add_renderer_globals`` subscriber configures the following variables for
the template namespace:
* ``h``: the helpers module.
* ``r``: an alias for ``request``.
* ``url``: the URL generator.
It has commented code to configure "settings", "session", and "c" variables if
you want those.
For completeness, here are the system variables Pyramid 1.3 adds to the
template namespace:
* ``context``: the context.
* ``renderer_name``: the name of the renderer.
* ``renderer_info``: a ``RendererHelper`` object (defined in ``pyramid.renderers``).
* ``request``: the request.
* ``view``: the view. (A function or instance.)
As a reminder, everything here is local to the current request. The URL
generator is attached to the request object, and the renderer globals are set
just before the renderer is invoked. These variables are all discarded at the
end of the request.
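
A skeletal version of the pattern looks roughly like this; the ContextFound
subscriber and the ``URLGenerator`` signature are assumptions about how the
demo wires things up, so check *subscribers.py* for the authoritative code::

    from pyramid.events import BeforeRender, ContextFound
    from akhet.urlgenerator import URLGenerator
    from . import helpers

    def create_url_generator(event):
        # attach a per-request URL generator (assumed Akhet 2 signature)
        request = event.request
        request.url_generator = URLGenerator(request.context, request)

    def add_renderer_globals(event):
        request = event["request"]
        event["h"] = helpers
        event["r"] = request
        event["url"] = request.url_generator

    def includeme(config):
        config.add_subscriber(create_url_generator, ContextFound)
        config.add_subscriber(add_renderer_globals, BeforeRender)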
Views
=====
The views module has a base class called ``Handler`` (but it's not related to
"pyramid_handlers"). The index view demonstrates logging, optionally sets a
flash message, and invokes a Mako template renderer.
The demo pushes a flash message by calling ``self.request.session.flash()``
with the message text. By default this puts the message on the "info" queue,
and it's displayed using an "info" CSS class. You can push the message onto a
different queue by specifying the queue name as a second argument. But that's
only useful if the template pops the messages from that queue by name;
otherwise they'll never be displayed. It's customary to name the queues
according to the Python logging hierarchy: debug, info (notice), warn(ing),
error, critical. The default stylesheet defines CSS classes with distinct
styling for several of these levels.
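
In code, the push side and the pop side look roughly like this (queue names
follow the convention above; they aren't required values)::

    # In a view:
    request.session.flash("Record saved.")              # default "info" queue
    request.session.flash("Check your input.", "warn")  # named queue

    # In the site template or a view, pop and display:
    for message in request.session.pop_flash():         # default queue
        ...
    for message in request.session.pop_flash("warn"):   # named queue
        ...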
.. include:: ../links.rst
/LiliDB-1.0.3.tar.gz/LiliDB-1.0.3/README.md

# LiliDB
*Lili* ("small" in Toki Pona) is a small key-value database library.<br>
No third-party dependencies, pure Python and safe to use!<br>
# Install
```sh
# GIT+PIP
pip install git+https://github.com/ZSendokame/LiliDB.git
# PIP
pip install LiliDB
```
# How to use
```py
import lilidb
db = lilidb.Database('database.json')
db.set('key', 'value')
db.dump()
with db:
    db.set('with', 'Open, save and close file.')

# You can use threads!
# And make mistakes... if an exception occurs, it still saves automatically.
```
/Chives-0.0.1-py3-none-any.whl/chives/info.py

# import logging
# logging.basicConfig(filename='logging.log',
# format='%(asctime)s %(message)s',
# filemode="w", level=logging.DEBUG)
# Updates:
# 1. The fields to scrape are configurable: f1, f2, ..., fxxx
# 2. The number of items per fetch is configurable: 'pz': 2000,
# 3. Results can be exported to Excel: spider.py
# 4. Whether to break out of the remaining pages is configurable
# 5. Tick (intraday) data can be toggled on/off, with a filter condition 'ft': 1 (all, more than 100 lots, etc.),
# 6. Tick data is only available for the Chinese stock market, not other regions
# 7. Details that cannot be parsed: home page, watchlist, featured quotes, UK stock market, futures market
# 8. Parsed correctly: STAR Market, SH/SZ stocks, SH/SZ boards, SH/SZ indexes, NEEQ, SH/SZ-HK Connect, HK stock market,
#    US stock market, global indexes, fund market, bond market, forex market, options market, gold market
# 9. Most of the logic is concentrated in crawList(self, tList); consider splitting it out later.
# 10. The basic-info section now includes Guba news and comments
# 11. Scraping only tick data takes about 1.5 hours, but record raises: pymongo.errors.DocumentTooLarge: 'update' command document too large
# 12. The requests_get() function now has a cache-pool design
# 2020-02-22:
# A/B share price-ratio and A/H share price-ratio fields are parsed incorrectly; options fields are parsed incorrectly;
# Global indexes have one unparsed field, the latest quote time (all quote types share one table structure)
#
#
loginParam = {
'com': {
'pn': 1,
'pz': 5000,
'po': 1,
'np': 1,
'fltt': 2,
'invt': 2,
'fid': 'f3',
'fs' :'m:1+t:23',
'fields': 'f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f11,f12,f13,f14,f15,f16,f17,f18,f19,f20,f21,f22,f23,f24,f25,f26,f27,f28,f29,f30,f31,f32,f33,f34,f35,f36,f37,f38,f39,f40,f41,f42,f43,f44,f45,f46,f47,f48,f49,f50,f62,f128,f136,f115,f152', # extends indefinitely?
},
'fs': {
'pageindex': 1,
'pagesize': 5000, # max 300
'sort': 1,
'ft': 1, # filter condition: >= ? lots
'code': 688159,
'market': 1,
'dpt': 'wzfscj',
'ut': '7eea3edcaed734bea9cbfc24409ed989',
},
'trends2': {
'iscr': 0,
'ndays': 1,
'fields1': 'f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f11,f12,f13',
'fields2': 'f51,f52,f53,f54,f55,f56,f57,f58',
'secid': '1.600789',
},
'kline': {
'fqt': 0,
'end': '20500101',
'klt': 0,
'lmt': 120,
'fields1': 'f1,f2,f3,f4,f5',
'fields2': 'f51,f52,f53,f54,f55,f56,f57,f58',
'secid': '1.600789',
},
'fflow': {
'lmt': 100,
'klt': 101,
'fields1': 'f1,f2,f3,f7',
'fields2': 'f51,f52,f53,f54,f55,f56,f57,f58,f59,f60,f61,f62,f63,f64,f65',
'secid': '1.600789',
},
'info': {
'fltt': 2,
'invt': 2,
'volt': 2,
'fields': 'f43,f57,f58,f169,f170,f46,f44,f51,f168,f47,f164,f163,f116,f60,f45,f52,f50,f48,f167,f117,f71,f161,f49,f530,f135,f136,f137,f138,f139,f141,f142,f144,f145,f147,f148,f140,f143,f146,f149,f55,f62,f162,f92,f173,f104,f105,f84,f85,f183,f184,f185,f186,f187,f188,f189,f190,f191,f192,f107,f111,f86,f177,f78,f110,f262,f263,f264,f267,f268,f250,f251,f252,f253,f254,f255,f256,f257,f258,f266,f269,f270,f271,f273,f274,f275,f127,f199,f128,f198,f259,f260,f261,f171,f277,f278,f279,f288',
'secid': '1.600789',
},
'guba': {
'ps': 36,
'sorttype': 1,
'version': 200,
'code': 688159,
'product': 'Guba',
'plat': 'Web',
},
}
loginUrls = {
'proxyWeb': 'https://www.xicidaili.com/nn/',
'proxies': {
'http': 'http://104.224.154.185:8118',
'https': 'https://104.224.154.185:8118',
},
'url':'http://17.push2.eastmoney.com/api/qt/clist/get?',
'base':'http://quote.eastmoney.com',
'fs':'http://push2ex.eastmoney.com/getStockFenShi?',
'trends2':'http://push2his.eastmoney.com/api/qt/stock/trends2/get?',
'kline':'http://74.push2his.eastmoney.com/api/qt/stock/kline/get?',
'fflow':'http://push2.eastmoney.com/api/qt/stock/fflow/kline/get?',
'daykline':'http://push2his.eastmoney.com/api/qt/stock/fflow/daykline/get?',
'info':'http://push2.eastmoney.com/api/qt/stock/get?',
'guba':'http://gbapi.eastmoney.com/webarticlelist/api/Article/Articlelist?',
}
loginHeaders = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.86 Safari/537.36',
'Host': '17.push2.eastmoney.com',
'Accept-Encoding': 'gzip, deflate',
'Accept-Language': 'zh-CN,zh;q=0.8',
'Connection':'keep-alive'
}
loginCookie = ''
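
# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original module): how the dicts above
# combine into a single request. The project's requests_get() wrapper
# presumably does roughly this, plus the cache pool mentioned above.
import requests  # assumed dependency for this sketch


def fetch_stock_list():
    """Fetch one page of the stock list using the 'com' parameter set."""
    # loginUrls['url'] already ends with '?'; requests tolerates the extra '?'.
    resp = requests.get(loginUrls['url'], params=loginParam['com'],
                        headers=loginHeaders, timeout=10)
    return resp.json()
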
mainjs='''/*! QuoteCenter version @2.2.0 Copyright Eastmoney */!function(e){var t={};function r(f){if(t[f])return t[f].exports;var o=t[f]={i:f,l:!1,exports:{}};return e[f].call(o.exports,o,o.exports,r),o.l=!0,o.exports}r.m=e,r.c=t,r.d=function(e,t,f){r.o(e,t)||Object.defineProperty(e,t,{enumerable:!0,get:f})},r.r=function(e){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},r.t=function(e,t){if(1&t&&(e=r(e)),8&t)return e;if(4&t&&"object"==typeof e&&e&&e.__esModule)return e;var f=Object.create(null);if(r.r(f),Object.defineProperty(f,"default",{enumerable:!0,value:e}),2&t&&"string"!=typeof e)for(var o in e)r.d(f,o,function(t){return e[t]}.bind(null,o));return f},r.n=function(e){var t=e&&e.__esModule?function(){return e["default"]}:function(){return e};return r.d(t,"a",t),t},r.o=function(e,t){return Object.prototype.hasOwnProperty.call(e,t)},r.p="//hqres.eastmoney.com/EMQuote_Center3.0/",r(r.s=101)}([,,,,,,,function(e,t){e.exports=function r(e,t,f){e=e||{};var o,n=typeof t,a=1;"undefined"!==n&&"boolean"!==n||(f="boolean"===n&&t,t=e,e=this);"object"!=typeof t&&"[object Function]"!==Object.prototype.toString.call(t)&&(t={});for(;a<=2;){if(null!=(o=1===a?e:t))for(var i in o){var s=e[i],l=o[i];e!==l&&(f&&l&&"object"==typeof l&&!l.nodeType?e[i]=r(s||(null!=l.length?[]:{}),l,f):l!==undefined&&(e[i]=l))}a++}return e}},,,,function(e,t){e.exports=jQuery},,,,,,,,function(e,t,r){var f=r(7),o=r(20);function n(e,t,r){"string"!=typeof r&&(r="...");var f=0,o=0;str_cut=new String;for(var n=0;n<e.length;n++)a=e.charAt(n),f++,escape(a).length>4&&f++,f<=t&&o++;return f<=t?e.toString():e.substr(0,o).concat(r)}function i(e,t,r){if(t=t||"yyyy-MM-dd HH:mm:ss","string"==typeof e&&(e=new Date(e.replace(/-/g,"/").replace("T"," ").split("+")[0])),isNaN(e.getTime()))return r||"";var f={"M+":e.getMonth()+1,"d+":e.getDate(),"h+":e.getHours()%12==0?12:e.getHours()%12,"H+":e.getHours(),"m+":e.getMinutes(),"s+":e.getSeconds(),"q+":Math.floor((e.getMonth()+3)/3),S:e.getMilliseconds()};for(var o in/(y+)/.test(t)&&(t=t.replace(RegExp.$1,(e.getFullYear()+"").substr(4-RegExp.$1.length))),/(E+)/.test(t)&&(t=t.replace(RegExp.$1,(RegExp.$1.length>1?RegExp.$1.length>2?"星期":"周":"")+{0:"日",1:"一",2:"二",3:"三",4:"四",5:"五",6:"六"}[e.getDay()+""])),f)new RegExp("("+o+")").test(t)&&(t=t.replace(RegExp.$1,1==RegExp.$1.length?f[o]:("00"+f[o]).substr((""+f[o]).length)));return t}function s(e){return"object"==typeof HTMLElement?e instanceof HTMLElement:e&&"object"==typeof e&&1===e.nodeType&&"string"==typeof e.nodeName}function l(e){var t=parseFloat(e);if(!isNaN(t)){var r=t<0?-1:t>0?1:0;return t<0&&(t*=-1),t>0&&t<1e4||t<0&&t>-1e4?(t*r).toString():t>0&&t<1e6||t<0&&t>-1e6?(t/=1e4).toFixed(2)*r+"万":t>0&&t<1e7||t<0&&t>-1e7?(t/=1e4).toFixed(1)*r+"万":t>0&&t<1e8||t<0&&t>-1e8?(t/=1e4).toFixed(0)*r+"万":t>0&&t<1e10||t<0&&t>-1e10?(t/=1e8).toFixed(2)*r+"亿":t>0&&t<1e11||t<0&&t>-1e11?(t/=1e8).toFixed(1)*r+"亿":t>0&&t<1e12||t<0&&t>-1e12?(t/=1e8).toFixed(0)*r+"亿":t>0&&t<1e14||t<0&&t>-1e14?(t/=1e12).toFixed(2)+"万亿":t>0&&t<1e15||t<0&&t>-1e15?(t/=1e12).toFixed(1)*r+"万亿":t>0&&t<1e16||t<0&&t>-1e16?(t/=1e12).toFixed(0)*r+"万亿":t}return"-"}e.exports={extend:f,isDOM:s,ObjectCache:o,formatDate:i,getQueryString:function(e){var t=new RegExp("(^|&)"+e+"=([^&]*)(&|$)","i"),r=window.location.search.substr(1).match(t);if(null!=r)return unescape(r[2]);return null},cutstr:n,getMarketCode:function(e){var 
t=sc.substring(0,1),r=sc.substring(0,3);return"5"==t||"6"==t||"9"==t?"1":"009"==r||"126"==r||"110"==r?"1":"2"},getStockStatus:function(e){switch(parseInt(e)){case-2:return"额度不可用";case-1:return"已停牌";case 0:return"额度可用";case 1:return"已收盘";case 2:return"午盘";case 3:return"休市";case 4:return"早盘清空";case 5:return"限制买入";case 6:return"限制卖出";case 7:return"暂停交易";case 8:return"上涨熔断5%";case 9:return"上涨熔断7%";case 10:return"下跌熔断5%";case 11:return"下跌熔断7%";default:return"不存在/额度不可用"}},getColor:function(){var e=0;arguments[1]?e=parseFloat(arguments[0])-parseFloat(arguments[1]):arguments[0]&&(e=parseFloat(arguments[0]));return e>0?"red":e<0?"green":""},fixMarket:function(e){var t=e.substr(0,1),r=e.substr(0,3);return"5"==t||"6"==t||"9"==t?"1":"009"==r||"126"==r||"110"==r||"201"==r||"202"==r||"203"==r||"204"==r?"1":"2"},numbericFormat:l,blinker:function(e){var t,r=f({doms:[],color:{up:["#FFDDDD","#FFEEEE",""],down:["#b4f7af","#ccffcc",""],others:["#b2c3ea","#cedaf5",""]},interval:300,blinktime:150,circle:2},e),o=this;o.raise=!1,o.loop=0;for(var n=[],a=0;a<r.doms.length;a++){var i=r.doms[a];s(i)?n.push(i):"string"==typeof r.doms[a]&&(i=mini(r.doms[a]))&&n.push(i)}t=setInterval(function(){if(o.raise){for(var e=o.comparer>0?r.color.up:o.comparer<0?r.color.down:r.color.others,t=0;t<e.length*r.circle;t++)setTimeout(function(){for(var t=0;t<n.length;t++)n[t].style["background-color"]=e[o.loop];o.loop++,o.loop=o.loop%e.length},r.blinktime*t);o.raise=!1}},r.interval),this.stop=function(){clearInterval(t)}},toFixedFun:function(e,t){var r=parseFloat(e),f="-",o=t||2;(r>=0||r<=0)&&(f=r.toFixed(o));return f},addPercent:function(e){var t,r=parseFloat(e);t=0==r?r.toFixed(2)+"%":r?r.toFixed(2)+"%":e;return t}},Number.prototype.toFixedFit=function(e){var t=this.toPrecision(e+1);return t.substr(0,t.indexOf(".")+e)},String.prototype.cutstr=function(e,t){return n(this,e,t)},String.prototype.isPositive=function(){var e=this;if("string"==typeof e.toLowerCase())return e=e.replace("%",""),new RegExp("^([\\-\\+]?\\d+(\\.\\d+)?)$").test(e)?!new RegExp("^-").test(e):Number.NaN},String.prototype.numbericFormat=function(){return l(this.toString())},Date.prototype.pattern=function(e){return i(this,e)}},function(e,t,r){var f=r(7);function o(e){e&&f(this,e||{},!1),this.getOrAdd=function(e,t){return"undefined"==typeof this[e]&&(this[e]=t),this[e]},this.set=function(e,t){return void 0!==t&&(this[e]=t),this[e]},this.remove=function(e){var t=this[e];if("function"==typeof t)return t;try{delete this[e]}catch(r){this[e]=undefined}return t},this.clear=function(){for(var e in this)this.hasOwnProperty(e)&&this.remove(e);return this}}e.exports=o,o["default"]=new o},,,,,,,,,,,,,,,,,,,,,function(module,exports){var define=!1,toString,isArray,escMap,escFunc,escRE;window.JSON||(window.JSON={parse:function(sJSON){return eval("("+sJSON+")")},stringify:(toString=Object.prototype.toString,isArray=Array.isArray||function(e){return"[object Array]"===toString.call(e)},escMap={'"':'\\"',"\\":"\\\\","\b":"\\b","\f":"\\f","\n":"\\n","\r":"\\r","\t":"\\t"},escFunc=function(e){return escMap[e]||"\\u"+(e.charCodeAt(0)+65536).toString(16).substr(1)},escRE=/[\\"\u0000-\u001F\u2028\u2029]/g,function e(t){if(null==t)return"null";if("number"==typeof t)return isFinite(t)?t.toString():"null";if("boolean"==typeof t)return t.toString();if("object"==typeof t){if("function"==typeof t.toJSON)return e(t.toJSON());if(isArray(t)){for(var r="[",f=0;f<t.length;f++)r+=(f?", ":"")+e(t[f]);return r+"]"}if("[object Object]"===toString.call(t)){var o=[];for(var n in 
t)t.hasOwnProperty(n)&&o.push(e(n)+": "+e(t[n]));return"{"+o.join(", ")+"}"}}return'"'+t.toString().replace(escRE,escFunc)+'"'})}),module.exports=window.JSON},,,,,,,,,,,,,,,function(e,t,r){
/*!
* jQuery hashchange event - v1.3 - 7/21/2010
* http://benalman.com/projects/jquery-hashchange-plugin/
*
* Copyright (c) 2010 "Cowboy" Ben Alman
* Dual licensed under the MIT and GPL licenses.
* http://benalman.com/about/license/
*/
e.exports=function(e,t){var r,f="hashchange",o=document,n=e.event.special,a=o.documentMode,i="on"+f in t&&(a===undefined||a>7);function s(e){return"#"+(e=e||location.href).replace(/^[^#]*#?(.*)$/,"$1")}e.fn[f]=function(e){return e?this.on(f,e):this.trigger(f)},e.fn[f].delay=50,n[f]=e.extend(n[f],{setup:function(){if(i)return!1;e(r.start)},teardown:function(){if(i)return!1;e(r.stop)}}),r=function(){var r,n,a,l={},u=s(),c=function(e){return e},d=c,m=c;function h(){var o=s(),n=m(u);o!==u?(d(u=o,n),e(t).trigger(f)):n!==u&&(location.href=location.href.replace(/#.*/,"")+n),r=setTimeout(h,e.fn[f].delay)}return l.start=function(){r||h()},l.stop=function(){r&&clearTimeout(r),r=undefined},!i&&(l.start=function(){n||(a=(a=e.fn[f].src)&&a+s(),n=e('<iframe tabindex="-1" title="empty"/>').hide().one("load",function(){a||d(s()),h()}).attr("src",a||"javascript:0").insertAfter("body")[0].contentWindow,o.onpropertychange=function(){try{"title"===event.propertyName&&(n.document.title=o.title)}catch(e){}})},l.stop=c,m=function(){return s(n.location.href)},d=function(t,r){var a=n.document,i=e.fn[f].domain;t!==r&&(a.title=o.title,a.open(),i&&a.write('<script>document.domain="'+i+'"<\/script>'),a.close(),n.location.hash=t)}),l}()}(r(11),window)},,,,,,,,,,,,,function(e,t,r){(function(t){r(41),r(70);r(72),r(73);var f=r(74),o=r(77),n=r(78),a=r(81),i=r(82);function l(e,r){this.config=n[e.type],this.mdName=r||"";var f=this.config.head;this.tableHead=f;var a={container:"",orderDur:!1,index:1,pagesize:this.config.sumcount};this.ops=t.extend(a,e),this.page=new o,this.codearr=[],this.mycontent="",this.myfs="",this._myFavor_list=null}r(85),l.prototype.Bankuai=function(e,r,f){this.fs=r,this.getBankuai(e,r,f),0!=this.ops.thclick&&(t(".dataTable thead th").css("cursor","default"),this.addEvent(e,r,f)),this.addEventSelect(e,r,f),this.add(e),this.del(e)},l.prototype.pageClick=function(e,t,r,f){var o=this;o.page.onPage=function(n){o.ops.index=n,n>r&&(o.ops.index=r),o.getDataBank(e,t,f)}},l.prototype.hoverFn=function(){t("#digest-industry").show(),t("#digest-concept").hide(),t("#digest-region").hide(),t("#digest-industry").hover(function(){t("#digest-industry").show(),t("#digest-concept").hide(),t("#digest-region").hide()}),t("#digest-concept").hover(function(){t("#digest-industry").hide(),t("#digest-concept").show(),t("#digest-region").hide()}),t("#digest-region").hover(function(){t("#digest-industry").hide(),t("#digest-concept").hide(),t("#digest-region").show()})},l.prototype.getBankuai=function(e,t,r){0!=this.getParam("sr")&&1!=this.getParam("sr")||(this.ops.orderDur=!0),-1==this.getParam("sr")&&(this.ops.orderDur=!1),this.getParam("st")&&(this.ops.order=a[this.getParam("st")]);var f=window.location.href.split("#")[1];if("region_board"==f||"concept_board"==f||"industry_board"==f)0==this.ops.orderDur&&(this.tableHead[10].title="领涨股票",this.tableHead[10].key="f128",this.tableHead[11].key="f136",this.tableHead[11].color="f136",this.createHead(e,t),this.getDataBank(e,t,r)),1==this.ops.orderDur&&(this.tableHead[10].title="领跌股票",this.tableHead[10].key="f128",this.tableHead[11].key="f136",this.tableHead[11].color="f222",this.createHead(e,t),this.getDataBank(e,t,r));else{var o={PB:"市净率",MarketValue:"总市值",FlowCapitalValue:"流通市值",FlowCapitalValue:"流通市值",ChangePercent60Day:"60日涨跌幅",ChangePercent360Day:"年初至今涨跌幅",Speed:"涨速",FiveMinuteChangePercent:"5分钟涨跌",VolumeRate:"量比"};if(this.getParam("st")){for(var n=0;n<this.tableHead.length;n++){var 
i=this.tableHead[n].title;"总市值"!=i&&"流通市值"!=i&&"60日涨跌幅"!=i&&"年初至今涨跌幅"!=i&&"涨速"!=i&&"5分钟涨跌"!=i&&"量比"!=i||(this.tableHead[n].show=!1),i==o[this.getParam("st")]&&(this.tableHead[n].show=!0)}this.createHead(e,t),this.getDataBank(e,t,r)}else this.createHead(e,t),this.getDataBank(e,t,r)}},l.prototype.createHead=function(e,r){for(var f=this.tableHead,o=t('<tr role="row"></tr>'),n=0;n<f.length;n++){var a=f[n];if(a&&1==a.show){if("序号"==a.title||"加自选"==a.title||0==a.order)var i=t('<th style="" class="listview-col-'+a.name+' sorting_disabled" rowspan="1" colspan="1" aria-label="'+a.title+'"><span style="color:#333">'+a.title+'</span><b class="icon"></b></th>');else if(a.key==this.ops.order)if(1==this.ops.orderDur)if("zgValue"==a.name)i=t('<th style="" class="listview-col-'+a.name+' sorting_asc" rowspan="1" colspan="1" aria-label="'+a.title+'"><span><span title="转股价值=正股价/转股价*100" class="tooltip-f">转股价值<em class="help-icon"></em></span></span><b class="icon"></b></th>');else if("zgYjb"==a.name)i=t('<th style="" class="listview-col-'+a.name+' sorting_asc" rowspan="1" colspan="1" aria-label="'+a.title+'"><span><span title="转股溢价率 = (转债最新价 – 转股价值)/ 转股价值" class="tooltip-f">转股溢价率<em class="help-icon"></em></span></span><b class="icon"></b></th>');else if("czYjl"==a.name)i=t('<th style="" class="listview-col-'+a.name+' sorting_asc" rowspan="1" colspan="1" aria-label="'+a.title+'"><span><span title="纯债溢价率 = (转债最新价 – 纯债价值)/ 纯债价值" class="tooltip-f">纯债溢价率<em class="help-icon"></em></span></span><b class="icon"></b></th>');else if("hsCfj"==a.name)i=t('<th style="" class="listview-col-'+a.name+' sorting_asc" rowspan="1" colspan="1" aria-label="'+a.title+'"><span><span title="满足回售触发条件时,可转债持有人有权将其持有的可转债全部或部分按债券面值加上当期应计利息的价格回售给公司" class="tooltip-f">回售触发价<em class="help-icon"></em></span></span><b class="icon"></b></th>');else if("qsCfj"==a.name)i=t('<th style="" class="listview-col-'+a.name+' sorting_asc" rowspan="1" colspan="1" aria-label="'+a.title+'"><span><span title="满足赎回触发条件时,公司有权按照债券面值加当期应计利息的价格赎回全部或部分未转股的可转债" class="tooltip-f">强赎触发价<em class="help-icon"></em></span></span><b class="icon"></b></th>');else if("dqShj"==a.name)i=t('<th style="" class="listview-col-'+a.name+' sorting_asc" rowspan="1" colspan="1" aria-label="'+a.title+'"><span><span title="公司有权以债券发行说明书中规定的到期赎回价买回其发行在外债券" class="tooltip-f">到期赎回价<em class="help-icon"></em></span></span><b class="icon"></b></th>');else i=t('<th style="" class="listview-col-'+a.name+' sorting_asc" rowspan="1" colspan="1" aria-label="'+a.title+'"><span>'+a.title+'</span><b class="icon"></b></th>');else if("zgValue"==a.name)i=t('<th style="" class="listview-col-'+a.name+' sorting_desc" rowspan="1" colspan="1" aria-label="'+a.title+'"><span><span title="转股价值=正股价/转股价*100" class="tooltip-f">转股价值<em class="help-icon"></em></span></span><b class="icon"></b></th>');else if("zgYjb"==a.name)i=t('<th style="" class="listview-col-'+a.name+' sorting_desc" rowspan="1" colspan="1" aria-label="'+a.title+'"><span><span title="转股溢价率 = (转债最新价 – 转股价值)/ 转股价值" class="tooltip-f">转股溢价率<em class="help-icon"></em></span></span><b class="icon"></b></th>');else if("czYjl"==a.name)i=t('<th style="" class="listview-col-'+a.name+' sorting_desc" rowspan="1" colspan="1" aria-label="'+a.title+'"><span><span title="纯债溢价率 = (转债最新价 – 纯债价值)/ 纯债价值" class="tooltip-f">纯债溢价率<em class="help-icon"></em></span></span><b class="icon"></b></th>');else if("hsCfj"==a.name)i=t('<th style="" class="listview-col-'+a.name+' sorting_desc" rowspan="1" colspan="1" aria-label="'+a.title+'"><span><span 
title="满足回售触发条件时,可转债持有人有权将其持有的可转债全部或部分按债券面值加上当期应计利息的价格回售给公司" class="tooltip-f">回售触发价<em class="help-icon"></em></span></span><b class="icon"></b></th>');else if("qsCfj"==a.name)i=t('<th style="" class="listview-col-'+a.name+' sorting_desc" rowspan="1" colspan="1" aria-label="'+a.title+'"><span><span title="满足赎回触发条件时,公司有权按照债券面值加当期应计利息的价格赎回全部或部分未转股的可转债" class="tooltip-f">强赎触发价<em class="help-icon"></em></span></span><b class="icon"></b></th>');else if("dqShj"==a.name)i=t('<th style="" class="listview-col-'+a.name+' sorting_desc" rowspan="1" colspan="1" aria-label="'+a.title+'"><span><span title="公司有权以债券发行说明书中规定的到期赎回价买回其发行在外债券" class="tooltip-f">到期赎回价<em class="help-icon"></em></span></span><b class="icon"></b></th>');else i=t('<th style="" class="listview-col-'+a.name+' sorting_desc" rowspan="1" colspan="1" aria-label="'+a.title+'"><span>'+a.title+'</span><b class="icon"></b></th>');else if("zgValue"==a.name)i=t('<th style="" class="listview-col-'+a.name+' sorting" rowspan="1" colspan="1" aria-label="'+a.title+'"><span><span title="转股价值=正股价/转股价*100" class="tooltip-f">转股价值<em class="help-icon"></em></span></span><b class="icon"></b></th>');else if("zgYjb"==a.name)i=t('<th style="" class="listview-col-'+a.name+' sorting" rowspan="1" colspan="1" aria-label="'+a.title+'"><span><span title="转股溢价率 = (转债最新价 – 转股价值)/ 转股价值" class="tooltip-f">转股溢价率<em class="help-icon"></em></span></span><b class="icon"></b></th>');else if("czYjl"==a.name)i=t('<th style="" class="listview-col-'+a.name+' sorting" rowspan="1" colspan="1" aria-label="'+a.title+'"><span><span title="纯债溢价率 = (转债最新价 – 纯债价值)/ 纯债价值" class="tooltip-f">纯债溢价率<em class="help-icon"></em></span></span><b class="icon"></b></th>');else if("hsCfj"==a.name)i=t('<th style="" class="listview-col-'+a.name+' sorting" rowspan="1" colspan="1" aria-label="'+a.title+'"><span><span title="满足回售触发条件时,可转债持有人有权将其持有的可转债全部或部分按债券面值加上当期应计利息的价格回售给公司" class="tooltip-f">回售触发价<em class="help-icon"></em></span></span><b class="icon"></b></th>');else if("qsCfj"==a.name)i=t('<th style="" class="listview-col-'+a.name+' sorting" rowspan="1" colspan="1" aria-label="'+a.title+'"><span><span title="满足赎回触发条件时,公司有权按照债券面值加当期应计利息的价格赎回全部或部分未转股的可转债" class="tooltip-f">强赎触发价<em class="help-icon"></em></span></span><b class="icon"></b></th>');else if("dqShj"==a.name)i=t('<th style="" class="listview-col-'+a.name+' sorting" rowspan="1" colspan="1" aria-label="'+a.title+'"><span><span title="公司有权以债券发行说明书中规定的到期赎回价买回其发行在外债券" class="tooltip-f">到期赎回价<em class="help-icon"></em></span></span><b class="icon"></b></th>');else i=t('<th style="" class="listview-col-'+a.name+' sorting" rowspan="1" colspan="1" aria-label="'+a.title+'"><span>'+a.title+'</span><b class="icon"></b></th>');i.attr("data-key",a.key),o.append(i)}}t(e).find("table thead").html(""),t(e).find("table thead").append(o)},l.prototype.getDataBank=function(e,r,o){var n=this,a=this.ops,i=this.config,s={fid:a.order,po:a.orderDur?"0":"1",pn:a.index,pz:i.sumcount,fs:r,fields:i.fields};f(s,function(f){var i=f.data;if(i){var s=i.diff,l=i.total,u=Math.ceil(l/a.pagesize);if(a.sumPage=u,u>1)n.page.setOps({index:a.index,sumPage:a.sumPage}),n.pageClick(e,r,u,o),t(".dataTables_paginate").show();else t(".dataTables_paginate").hide();n.setBody(e,s,o)}else s=[],t(".dataTables_paginate").hide(),n.setBody(e,s,o)})},l.prototype.addEvent=function(e,r,f){var o=this;t(e).find("thead").off(),t(e).find("thead").on("click","th",function(){var 
n=t(this).attr("data-key");if(n&&"addzixuan"!=n){n==o.ops.order?o.ops.orderDur=!o.ops.orderDur:(o.ops.order=n,o.ops.orderDur=!0);var a=window.location.href.split("#")[1];"region_board"==a||"concept_board"==a||"industry_board"==a?(0==o.ops.orderDur&&(o.tableHead[10].title="领涨股票",o.tableHead[10].key="f128",o.tableHead[11].key="f136",o.tableHead[11].color="f136",o.createHead(e,r),o.getDataBank(e,r,f)),1==o.ops.orderDur&&(o.tableHead[10].title="领跌股票",o.tableHead[10].key="f207",o.tableHead[11].key="f222",o.tableHead[11].color="f222",o.createHead(e,r),o.getDataBank(e,r,f))):(o.createHead(e,r),o.getDataBank(e,r,f))}})},l.prototype.addEventSelect=function(e,r){var f=this;t("#custom-fields").change(function(){for(var o=t(this).val(),n={PB:"市净率",MarketValue:"总市值",FlowCapitalValue:"流通市值",ChangePercent60Day:"60日涨跌幅",ChangePercent360Day:"年初至今涨跌幅",Speed:"涨速",FiveMinuteChangePercent:"5分钟涨跌",VolumeRate:"量比"},a=0;a<f.tableHead.length;a++){var i=f.tableHead[a].title;"市净率"!=i&&"总市值"!=i&&"流通市值"!=i&&"60日涨跌幅"!=i&&"年初至今涨跌幅"!=i&&"涨速"!=i&&"5分钟涨跌"!=i&&"量比"!=i||(f.tableHead[a].show=!1),i==n[o]&&(f.tableHead[a].show=!0)}f.createHead(e,r),f.createBody(e,r)})},l.prototype.setBody=function(e,t,r){var f=this;if(f.body=t,1==f.ops.zixuan){if(r)o(r);else i.getDefaultStocks().then(function(e){o(e)});function o(t){if(t){f.zixuanList=t.split(",");for(var r=0;r<f.body.length;r++){var o=f.body[r],n=o.f13+"."+o.f12,a=!1;a=!!f.in_array(n,f.zixuanList),o.isZisuan=a}f.createBody(e)}}}f.createBody(e)},l.prototype.add=function(e){t(e+"-table").on("click",".addzx",function(){var e=t(this),r=t(this).attr("data-myscode");i.add(r).then(function(t){1==t&&(e.addClass("hide"),e.removeClass("show"),e.siblings(".delzx").addClass("show"),e.siblings(".delzx").removeClass("hide"))})}),t("#optional-blocks-wrapper").off(),t("#optional-blocks-wrapper").on("click",".addzx",function(e){var r=t(this),f=t(this).attr("data-myscode");i.add(f).then(function(e){1==e&&(r.addClass("hide"),r.removeClass("show"),r.siblings(".delzx").addClass("show"),r.siblings(".delzx").removeClass("hide"))}),e.stopPropagation()})},l.prototype.del=function(e){t(e+"-table").on("click",".delzx",function(){var e=t(this),r=t(this).attr("data-myscode");i.del(r).then(function(t){e.addClass("hide"),e.removeClass("show"),e.siblings(".addzx").addClass("show"),e.siblings(".addzx").removeClass("hide")})}),t("#optional-blocks-wrapper").on("click",".delzx",function(){var e=t(this),r=t(this).attr("data-myscode");i.del(r).then(function(t){e.addClass("hide"),e.removeClass("show"),e.siblings(".addzx").addClass("show"),e.siblings(".addzx").removeClass("hide")})})},l.prototype.getKcbHQData=function(e){var r={secids:e,pz:20,pi:0,ut:"bd1d9ddb04089700cf9c27f6f7426281",po:1,fields:"f2,f3,f4,f5,f6,f12",fltt:2,invt:2,np:1};return t.ajax({type:"get",data:r,url:"http://push2.eastmoney.com/api/qt/ulist/get?",dataType:"jsonp",jsonp:"cb"})},l.prototype.getParam=function(e){var t=location.search,r={};if(""!=t)for(var f,o,n=(t=t.substring(1,t.length)).split("&"),a=0;a<n.length;a++)f=n[a].substring(0,n[a].indexOf("=")),o=n[a].substring(n[a].indexOf("=")+1,n[a].length),r[f]=o;return"undefined"!=typeof r[e]?r[e]:null},l.prototype.formatData=function(){for(var e=this.body,t=["f2","f4","f15","f16","f17","f18","f28","f31","f32"].join(",")+",",r=["f3","f7","f8","f9","f10","f11","f22","f23","f24","f25","f33"].join(",")+",",f=["f115"].join(",")+",",o=0,n=e.length;o<n;o++){var a=e[o],i=Math.pow(10,a.f1);for(var s in 
a)if(t.indexOf(s+",")>-1&&(a[s]=(a[s]/i).toFixed(a.f1)),r.indexOf(s+",")>-1&&(a[s]=(a[s]/100).toFixed(2)),f.indexOf(s+",")>-1){var l=Math.pow(10,a.f152);a[s]=(a[s]/l).toFixed(a.f152)}}},l.prototype.createBody=function(e){var r=t(e+"-table").find("tbody");0==r.length&&(r=t(e).find("tbody"));var f=t(r);f.html("");for(var o=this.body,n=this.tableHead,a=0;a<o.length;a++){var i=o[a],s=t("<tr></tr>");a%2==0?s.addClass("odd"):s.addClass("even");for(var l=0;l<n.length;l++){var u=n[l];if(u&&1==u.show){var c="",d=t("<td></td>");if("名称"==u.title||"领涨股"==u.title||"领跌股"==u.title||"主力流入最大股"==u.title)d=t("<td class='mywidth'></td>");if("板块名称"==u.title||"领涨股票"==u.title||"领跌股票"==u.title)d=t("<td class='mywidth3'></td>");if("名称"==u.title&&"usindex_name"==u.name)if(5==i.f107)d=t("<td class='mywidth' style='text-align:left;padding-left:5px;'><em class='circle' title='已收盘'>●</em></td>");else if(3==i.f107)d=t("<td class='mywidth' style='text-align:left;padding-left:5px;'><em class='circle' title='盘中休息'>●</em></td>");else d=t("<td class='mywidth' style='text-align:left;padding-left:5px;'><em class='trading' title='交易中'>●</em></td>");if("最新价"==u.title||"涨跌幅"==u.title)d=t("<td class='mywidth2'></td>");if("相关链接"==u.title&&"Links"==u.name)d=t('<td class=" listview-col-Links"><a href="http://guba.eastmoney.com/list,'+i.f12+'.html">股吧</a> <a href="http://data.eastmoney.com/zjlx/'+i.f12+'.html">资金流</a> <a href="http://data.eastmoney.com/stockdata/'+i.f12+'.html">数据</a></td>');if("相关链接"==u.title&&"Links_abgu"==u.name)d=t('<td class=" listview-col-Links"><a href="http://guba.eastmoney.com/list,'+i.f201+'.html">股吧</a> <a href="http://data.eastmoney.com/zjlx/'+i.f201+'.html">资金流</a> <a href="http://data.eastmoney.com/stockdata/'+i.f201+'.html">数据</a></td>');if("相关链接"==u.title&&"neeq_stocks"==u.name)d=t('<td class=" listview-col-neeq_stocks"><a href="http://guba.eastmoney.com/list,'+i.f12+'.html">股吧</a> <a href="http://so.eastmoney.com/Web/s?keyword='+i.f14+'">资讯</a></td>');if("相关链接"==u.title&&"hk_stocks"==u.name)if(3==i.f19||4==i.f19)d=t('<td class=" listview-col-Links"><a href="http://guba.eastmoney.com/list,hk'+i.f12+'.html">股吧</a> <a href="http://so.eastmoney.com/Web/s?keyword='+i.f14+'">资讯</a> <a href="http://emweb.securities.eastmoney.com/PC_HKF10/CoreReading/index?type=web&code='+i.f12+'&color=b">档案</a></td>');else d=t('<td class=" listview-col-Links"> <a href="http://so.eastmoney.com/Web/s?keyword='+i.f14+'">资讯</a> </td>');if("相关链接"==u.title&&"concept_board"==u.name)d=t('<td class=" listview-col-Links"><a href="http://guba.eastmoney.com/list,'+i.f12+'.html">股吧</a> <a href="http://data.eastmoney.com/bkzj/'+i.f12+'.html">资金流</a> <a href="http://data.eastmoney.com/report/'+i.f12.substr(3,3)+'yb.html">研报</a></td>');if("相关链接"==u.title&&"fullscreen_Links"==u.name)d=t('<td class=" listview-col-Links"><a href="http://guba.eastmoney.com/list,'+i.f12+'.html">股吧</a> <a href="http://data.eastmoney.com/kzz/detail/'+i.f12+'.html">详细</a> </td>');if("相关链接"==u.title&&"fundcloseend_Links"==u.name)d=t('<td class=" listview-col-Links"><a href="http://fund.eastmoney.com/'+i.f12+'.html">估算图</a> <a href="http://guba.eastmoney.com/list,of'+i.f12+'.html">基金吧</a> <a href="http://fund.eastmoney.com/f10/'+i.f12+'.html">档案</a></td>');if("涨跌家数"==u.title&&"zhangdiejia"==u.name)d=t('<td><span style="color: red;">'+i.f104+'</span>/<span style="color: green;">'+i.f105+"</span></td>");if("买量"==u.title&&"buycount"==u.name)d=t('<td style="color: red;"></td>');if("上涨家数"==u.title)d=t('<td style="color: red;"></td>');if("下跌家数"==u.title)d=t('<td 
style="color: green;"></td>');if("卖量"==u.title&&"sellcount"==u.name)d=t('<td style="color: green;"></td>');if("港股吧"==u.title)d=t('<td class=" listview-col-Links"><a href="http://guba.eastmoney.com/list,hk'+i.f12+'.html">港股吧</a></td>');if("A股吧"==u.title)d=t('<td class=" listview-col-Links"><a href="http://guba.eastmoney.com/topic,'+i.f191+'.html">A股吧</a></td>');if("名称"==u.title&&"mgsc_name"==u.name)d=t("<td class='text_overflow' title='"+i.f14+"'></td>");var m=i.f13+"."+i.f12;if("加自选"==u.title)if(0==i.isZisuan)d=t('<td class=" listview-col-add"><a class="addzx show" target="_self" onclick="return false;" href="javascript:void(0);" data-myscode='+m+'></a><a class="delzx hide" target="_self" onclick="return false;" href="javascript:void(0);" data-myscode='+m+"></a></td>");else if(1==i.isZisuan)d=t('<td class=" listview-col-add"><a class="addzx hide" target="_self" onclick="return false;" href="javascript:void(0);" data-myscode='+m+'></a><a class="delzx show" target="_self" onclick="return false;" href="javascript:void(0);" data-myscode='+m+"></a></td>");else d=t('<td class=" listview-col-add"><a class="addzx show" target="_self" onclick="return false;" href="javascript:void(0);" data-myscode='+m+'></a><a class="delzx hide" target="_self" onclick="return false;" href="javascript:void(0);" data-myscode='+m+"></a></td>");if(u.type)"seq"==u.type&&d.text(a+1+this.ops.pagesize*(this.ops.index-1));else{var h="";if(u.key&&(h=i[u.key]),u.cb&&(h=u.cb(i[u.key],i)),u.newcb&&(h=u.newcb(i[u.key],i[u.fixedkey])),u.suffix&&(h+=u.suffix),u.color!==undefined){c=t("<span></span>");var b,y,w=u.color;"number"==typeof w?(b=i[u.key],y=w):0==w.indexOf("_")?(b=i[u.key],y=i[w.substr(1)]):(b=i[w],y=0);var k=b-y;k>0?c.addClass("red"):k<0?c.addClass("green"):c.addClass("fair")}if(u.href){for(var p=u.data,g=u.href,x=0;x<p.length;x++){var N=RegExp("\\{\\{"+x+"\\}\\}"),_=i[p[x]];"128"==_&&(_="116"),g=g.replace(N,_)}c=t(g)}c?(c.text(h),d.append(c),u.wenhaoFlag&&"欧洲斯托克50"==i[u.key]&&d.append(t(u.wenhaoFlag))):d.text(h)}s.append(d)}}f.append(s)}},l.prototype.in_array=function(e,t){for(s=0;s<t.length;s++)if(thisEntry=t[s].toString(),thisEntry==e)return!0;return!1},e.exports=l}).call(this,r(11))},function(e,t,r){var f=r(11);r(71),e.exports=function(e){function t(t){var r=e.data(t,"tooltip");r.showTimer&&(clearTimeout(r.showTimer),r.showTimer=null),r.hideTimer&&(clearTimeout(r.hideTimer),r.hideTimer=null)}function r(t,r){var f=e.data(t,"tooltip"),o=f.options;if(r&&(o.content=r),f.tip){var n="function"==typeof o.content?o.content.call(t):o.content;f.tip.children(".tooltip-content").html(n),o.onUpdate.call(t,n)}}e.fn.tooltip=function(t,f){return"string"==typeof t?e.fn.tooltip.methods[t](this,f):(t=t||{},this.each(function(){var f,o,n=e.data(this,"tooltip");n?e.extend(n.options,t):(e.data(this,"tooltip",{options:e.extend({},e.fn.tooltip.defaults,e.fn.tooltip.parseOptions(this),t)}),e(this).addClass("tooltip-f")),f=this,o=e.data(f,"tooltip").options,e(f).unbind(".tooltip").bind(o.showEvent+".tooltip",function(t){e(f).tooltip("show",t)}).bind(o.hideEvent+".tooltip",function(t){e(f).tooltip("hide",t)}).bind("mousemove.tooltip",function(t){o.trackMouse&&(o.trackMouseX=t.pageX,o.trackMouseY=t.pageY,e(f).tooltip("reposition"))}),r(this)}))},e.fn.tooltip.methods={options:function(t){return e.data(t[0],"tooltip").options},tip:function(t){return e.data(t[0],"tooltip").tip},arrow:function(e){return e.tooltip("tip").children(".tooltip-arrow-outer,.tooltip-arrow")},show:function(f,o){return f.each(function(){!function(f,o){var 
n=e.data(f,"tooltip"),a=n.options,i=n.tip;i||(i=e('<div tabindex="-1" class="tooltip"><div class="tooltip-content"></div><div class="tooltip-arrow-outer"></div><div class="tooltip-arrow"></div></div>').appendTo("body"),n.tip=i,r(f)),t(f),n.showTimer=setTimeout(function(){e(f).tooltip("reposition"),i.show(),a.onShow.call(f,o);var t=i.children(".tooltip-arrow-outer"),r=i.children(".tooltip-arrow"),n="border-"+a.position+"-color";t.add(r).css({borderTopColor:"",borderBottomColor:"",borderLeftColor:"",borderRightColor:""}),t.css(n,i.css(n)),r.css(n,i.css("backgroundColor"))},a.showDelay)}(this,o)})},hide:function(r,f){return r.each(function(){!function(r,f){var o=e.data(r,"tooltip");o&&o.tip&&(t(r),o.hideTimer=setTimeout(function(){o.tip.hide(),o.options.onHide.call(r,f)},o.options.hideDelay))}(this,f)})},update:function(e,t){return e.each(function(){r(this,t)})},reposition:function(t){return t.each(function(){!function(t){var r=e.data(t,"tooltip");if(r&&r.tip){var f=r.options,o=r.tip,n={left:-1e5,top:-1e5};if(e(t).is(":visible"))if(n=i(f.position),"top"==f.position&&n.top<0?n=i("bottom"):"bottom"==f.position&&n.top+o._outerHeight()>e(window)._outerHeight()+e(document).scrollTop()&&(n=i("top")),n.left<0)"left"==f.position?n=i("right"):(e(t).tooltip("arrow").css("left",o._outerWidth()/2+n.left),n.left=0);else if(n.left+o._outerWidth()>e(window)._outerWidth()+e(document)._scrollLeft())if("right"==f.position)n=i("left");else{var a=n.left;n.left=e(window)._outerWidth()+e(document)._scrollLeft()-o._outerWidth(),e(t).tooltip("arrow").css("left",o._outerWidth()/2-(n.left-a))}o.css({left:n.left,top:n.top,zIndex:f.zIndex!=undefined?f.zIndex:e.fn.window?e.fn.window.defaults.zIndex++:""}),f.onPosition.call(t,n.left,n.top)}function i(r){var n,a;f.position=r||"bottom",o.removeClass("tooltip-top tooltip-bottom tooltip-left tooltip-right").addClass("tooltip-"+f.position);var i=e.isFunction(f.deltaX)?f.deltaX.call(t,f.position):f.deltaX,s=e.isFunction(f.deltaY)?f.deltaY.call(t,f.position):f.deltaY;if(f.trackMouse)l=e(),n=f.trackMouseX+i,a=f.trackMouseY+s;else{var l=e(t);n=l.offset().left+i,a=l.offset().top+s}switch(f.position){case"right":n+=l._outerWidth()+12+(f.trackMouse?12:0),a-=(o._outerHeight()-l._outerHeight())/2;break;case"left":n-=o._outerWidth()+12+(f.trackMouse?12:0),a-=(o._outerHeight()-l._outerHeight())/2;break;case"top":n-=(o._outerWidth()-l._outerWidth())/2,a-=o._outerHeight()+12+(f.trackMouse?12:0);break;case"bottom":n-=(o._outerWidth()-l._outerWidth())/2,a+=l._outerHeight()+12+(f.trackMouse?12:0)}return{left:n,top:a}}}(this)})},destroy:function(r){return r.each(function(){!function(r){var f=e.data(r,"tooltip");if(f){t(r);var o=f.options;f.tip&&f.tip.remove(),o._title&&e(r).attr("title",o._title),e.removeData(r,"tooltip"),e(r).unbind(".tooltip").removeClass("tooltip-f"),o.onDestroy.call(r)}}(this)})}},e.fn.tooltip.parseOptions=function(t){var r=e(t),f=e.extend({},e.parser.parseOptions(t,["position","showEvent","hideEvent","content",{trackMouse:"boolean",deltaX:"number",deltaY:"number",showDelay:"number",hideDelay:"number"}]),{_title:r.attr("title")});return r.attr("title",""),f.content||(f.content=f._title),f},e.fn.tooltip.defaults={position:"bottom",content:null,trackMouse:!1,deltaX:0,deltaY:0,showEvent:"mouseenter",hideEvent:"mouseleave",showDelay:200,hideDelay:100,onShow:function(e){},onHide:function(e){},onUpdate:function(e){},onPosition:function(e,t){},onDestroy:function(){}}}(f)},function(e,t,r){(function(t){var r;e.exports=((r=t).easyui={indexOfArray:function(e,t,r){for(var 
f=0,o=e.length;f<o;f++)if(r==undefined){if(e[f]==t)return f}else if(e[f][t]==r)return f;return-1},removeArrayItem:function(e,t,r){if("string"==typeof t){for(var f=0,o=e.length;f<o;f++)if(e[f][t]==r)return void e.splice(f,1)}else{var n=this.indexOfArray(e,t);-1!=n&&e.splice(n,1)}},addArrayItem:function(e,t,r){var f=this.indexOfArray(e,t,r?r[t]:undefined);-1==f?e.push(r||t):e[f]=r||t},getArrayItem:function(e,t,r){var f=this.indexOfArray(e,t,r);return-1==f?null:e[f]},forEach:function(e,t,r){for(var f=[],o=0;o<e.length;o++)f.push(e[o]);for(;f.length;){var n=f.shift();if(0==r(n))return;if(t&&n.children)for(o=n.children.length-1;o>=0;o--)f.unshift(n.children[o])}}},r.parser={auto:!0,onComplete:function(e){},plugins:["draggable","droppable","resizable","pagination","tooltip","linkbutton","menu","menubutton","splitbutton","switchbutton","progressbar","tree","textbox","passwordbox","filebox","combo","combobox","combotree","combogrid","combotreegrid","tagbox","numberbox","validatebox","searchbox","spinner","numberspinner","timespinner","datetimespinner","calendar","datebox","datetimebox","slider","layout","panel","datagrid","propertygrid","treegrid","datalist","tabs","accordion","window","dialog","form"],parse:function(e){for(var t=[],f=0;f<r.parser.plugins.length;f++){var o=r.parser.plugins[f],n=r(".easyui-"+o,e);n.length&&(n[o]?n.each(function(){r(this)[o](r.data(this,"options")||{})}):t.push({name:o,jq:n}))}if(t.length&&window.easyloader){var a=[];for(f=0;f<t.length;f++)a.push(t[f].name);easyloader.load(a,function(){for(var f=0;f<t.length;f++){var o=t[f].name;t[f].jq.each(function(){r(this)[o](r.data(this,"options")||{})})}r.parser.onComplete.call(r.parser,e)})}else r.parser.onComplete.call(r.parser,e)},parseValue:function(e,t,f,o){o=o||0;var n=r.trim(String(t||""));return"%"==n.substr(n.length-1,1)?(n=parseFloat(n.substr(0,n.length-1)),n=e.toLowerCase().indexOf("width")>=0?Math.floor((f.width()-o)*n/100):Math.floor((f.height()-o)*n/100)):n=parseInt(n)||undefined,n},parseOptions:function(e,t){var f=r(e),o={},n=r.trim(f.attr("data-options"));if(n&&("{"!=n.substring(0,1)&&(n="{"+n+"}"),o=new Function("return "+n)()),r.map(["width","height","left","top","minWidth","maxWidth","minHeight","maxHeight"],function(t){var f=r.trim(e.style[t]||"");f&&(-1==f.indexOf("%")&&(f=parseInt(f),isNaN(f)&&(f=undefined)),o[t]=f)}),t){for(var a={},i=0;i<t.length;i++){var s=t[i];if("string"==typeof s)a[s]=f.attr(s);else for(var l in s){var u=s[l];"boolean"==u?a[l]=f.attr(l)?"true"==f.attr(l):undefined:"number"==u&&(a[l]="0"==f.attr(l)?0:parseFloat(f.attr(l))||undefined)}}r.extend(o,a)}return o}},r(function(){var e=r('<div style="position:absolute;top:-1000px;width:100px;height:100px;padding:5px"></div>').appendTo("body");r._boxModel=100!=e.outerWidth(),e.remove(),e=r('<div style="position:fixed"></div>').appendTo("body"),r._positionFixed="fixed"==e.css("position"),e.remove(),!window.easyloader&&r.parser.auto&&r.parser.parse()}),r.fn._outerWidth=function(e){return e==undefined?this[0]==window?this.width()||document.body.clientWidth:this.outerWidth()||0:this._size("width",e)},r.fn._outerHeight=function(e){return e==undefined?this[0]==window?this.height()||document.body.clientHeight:this.outerHeight()||0:this._size("height",e)},r.fn._scrollLeft=function(e){return e==undefined?this.scrollLeft():this.each(function(){r(this).scrollLeft(e)})},r.fn._propAttr=r.fn.prop||r.fn.attr,void(r.fn._size=function(e,t){return"string"==typeof 
e?"clear"==e?this.each(function(){r(this).css({width:"",minWidth:"",maxWidth:"",height:"",minHeight:"",maxHeight:""})}):"fit"==e?this.each(function(){f(this,"BODY"==this.tagName?r("body"):r(this).parent(),!0)}):"unfit"==e?this.each(function(){f(this,r(this).parent(),!1)}):t==undefined?n(this[0],e):this.each(function(){n(this,e,t)}):this.each(function(){t=t||r(this).parent(),r.extend(e,f(this,t,e.fit)||{});var n=o(this,"width",t,e),a=o(this,"height",t,e);n||a?r(this).addClass("easyui-fluid"):r(this).removeClass("easyui-fluid")});function f(e,t,f){if(!t.length)return!1;var o=r(e)[0],n=t[0],a=n.fcount||0;return f?(o.fitted||(o.fitted=!0,n.fcount=a+1,r(n).addClass("panel-noscroll"),"BODY"==n.tagName&&r("html").addClass("panel-fit")),{width:r(n).width()||1,height:r(n).height()||1}):(o.fitted&&(o.fitted=!1,n.fcount=a-1,0==n.fcount&&(r(n).removeClass("panel-noscroll"),"BODY"==n.tagName&&r("html").removeClass("panel-fit"))),!1)}function o(e,t,f,o){var n=r(e),a=t,i=a.substr(0,1).toUpperCase()+a.substr(1),s=r.parser.parseValue("min"+i,o["min"+i],f),l=r.parser.parseValue("max"+i,o["max"+i],f),u=r.parser.parseValue(a,o[a],f),c=String(o[a]||"").indexOf("%")>=0;if(isNaN(u))n._size(a,""),n._size("min"+i,s),n._size("max"+i,l);else{var d=Math.min(Math.max(u,s||0),l||99999);c||(o[a]=d),n._size("min"+i,""),n._size("max"+i,""),n._size(a,d)}return c||o.fit}function n(e,t,f){var o=r(e);if(f==undefined)return f=parseInt(e.style[t]),isNaN(f)?undefined:(r._boxModel&&(f+=n()),f);function n(){return t.toLowerCase().indexOf("width")>=0?o.outerWidth()-o.width():o.outerHeight()-o.height()}""===f?o.css(t,""):(r._boxModel&&(f-=n())<0&&(f=0),o.css(t,f+"px"))}}))}).call(this,r(11))},function(e,t){e.exports=function(){return Math.floor(100*Math.random()+1)}},function(e,t){e.exports=function(e){if(0==e)return e;if(e==undefined||""==e||isNaN(e))return"";var t="";if(e>=1e8||e<=-1e8)e/=1e8,t="亿";else{if(!(e>=1e4||e<=-1e4))return e;e/=1e4,t="万"}return e.toFixed(2).toString()+t}},function(e,t,r){(function(t){var f=r(75),o=r(76);e.exports=function(e,r){(n=window.location.href).split("#")[1];var n=o.getEnvPath("tsApi")+"api/qt/clist/get",a={pn:0,pz:20,po:1,np:1,ut:"bd1d9ddb04089700cf9c27f6f7426281",fltt:2,invt:2};a=t.extend(a,e),t.ajax({url:n,method:"GET",data:f.objToPars(a),dataType:"jsonp",jsonp:"cb",success:function(e){r(e)},error:function(e){}})}}).call(this,r(11))},function(e,t){e.exports={objToPars:function(e){var t=[];for(var r in e)t.push(r+"="+e[r]);return t.join("&")}}},function(e,t,r){r(19);e.exports={development:{commonApi:"//push2.eastmoney.com/",tsApi:"//"+(Math.floor(99*Math.random())+1)+".push2.eastmoney.com/"},production:{commonApi:"//push2.eastmoney.com/",tsApi:"//"+(Math.floor(99*Math.random())+1)+".push2.eastmoney.com/"},test:{commonApi:"http://61.129.249.233:18665/",tsApi:"http://61.129.249.233:18665/"},getEnvPath:function(e,t){if(t)return t;if(!this.production[e])return this.production.commonApi;var r=function(e){try{for(var t=window.location.search.substring(1).split("&"),r=0;r<t.length;r++){var f=t[r].split("=");if(f[0]==e)return f[1]}return!1}catch(o){return!1}}("hq-env");return r&&("development"===r||"production"===r||"test"===r)?this[r][e]:this.production[e]||""}}},function(e,t,r){(function(t){function r(e){this.ops={container:".dataTables_paginate",sumPage:20,index:10},this.ops=t.extend(this.ops,e),this.box=t(this.ops.container),this.onPage=function(){},this.addEvn()}r.prototype.init=function(){this.box.html("");for(var 
e=this.ops.index,t=[{title:e,index:e,cls:"current"}],r=e,f=e-3;--r>f;)r>0&&t.unshift({title:r,index:r});r>=2&&t.unshift({title:"…",index:""}),r>=1&&t.unshift({title:1,index:1}),t.unshift({title:"上一页",index:e-1>0?e-1:1});var o=this.ops.sumPage,n=e;for(f=e+3,n>o&&(n=o);++n<f;)n<=o&&t.push({title:n,index:n});n<=o-1&&t.push({title:"…",index:""}),n<=o&&t.push({title:o,index:o}),t.push({title:"下一页",index:e+1>o?o:e+1}),this.btns=t},r.prototype.setOps=function(e){this.ops=t.extend(this.ops,e),this.ops.sumPage>1&&(this.init(),this.create())},r.prototype.addEvn=function(){var e=this;this.box.off(),this.box.on("click",".paginate_button",function(){var r=t(this).attr("data-index");return r&&e.onPage(r/1),!1}),this.box.on("click",".paginte_go",function(){var r=t(".paginate_input").val();r&&e.onPage(r/1)})},r.prototype.create=function(){var e=this.box;e.html("");for(var r,f=this.btns,o=0,n=f.length;o<n;o++){var a=f[o];if(0==o){var i=t('<a class="previous paginate_button">上一页</a>');i.attr("data-index",a.index),1==a.index&&i.addClass("disabled"),e.append(i)}else if(o==n-1){var s=t('<a class="next paginate_button">下一页</a>');s.attr("data-index",a.index),this.ops.index===this.ops.sumPage&&s.addClass("disabled"),e.append(s)}else{r||(r=t("<span></span>").addClass("paginate_page"),e.append(r));var l=t('<a class="paginate_button"></a>');l.text(a.title),l.attr("data-index",a.index),a.cls&&l.addClass(a.cls),"…"==a.title&&l.addClass("disabled"),r.append(l)}}var u=t('<span class="paginate_of"> 转到</span><input class="paginate_input" type="text"><a class="paginte_go">GO</a>');e.append(u),e.find("input").val(this.ops.index)},e.exports=r}).call(this,r(11))},function(e,t,r){var f=r(79);e.exports={futures:f.futures,stocks:f.stocks,bankuai:f.bankuai,zijinliu:f.zijinliu,zijinliuGegu:f.zijinliuGegu,hushenAStock:f.hushenAStock,hushenStockKcb:f.hushenStockKcb,shhkBoard:f.shhkBoard,staqnetBoard:f.staqnetBoard,indexsh:f.indexsh,neeqstocks:f.neeqstocks,hkstocks:f.hkstocks,hkshstocks:f.hkshstocks,abgu:f.abgu,ahgu:f.ahgu,ahgu2:f.ahgu2,ahgu3:f.ahgu3,hkindex:f.hkindex,usstocks:f.usstocks,usindex:f.usindex,globalamerica:f.globalamerica,globalamerica2:f.globalamerica2,globalamericaOz:f.globalamericaOz,globalamerica3:f.globalamerica3,globalamerica4:f.globalamerica4,conceptboard:f.conceptboard,conceptboardDatil:f.conceptboardDatil,hsgt:f.hsgt,qhsc_jq:f.qhsc_jq,fundcloseend:f.fundcloseend,fundcloseend2:f.fundcloseend2,bond:f.bond,bondnew:f.bondnew,forex:f.forex,forex2:f.forex2,qqsc:f.qqsc,gold:f.gold,gold2:f.gold2,hsbk:f.hsbk,hsbkd:f.hsbkd,hsbklz:f.hsbklz,hszs:f.hszs,hszs2:f.hszs2,mgsc:f.mgsc,mgsc3:f.mgsc3,bondmine:f.bondmine,indexqh:f.indexqh,indexqhNew:f.indexqhNew,zjs:f.zjs,gjqh:f.gjqh,hjqh:f.hjqh,whpj:f.whpj,ggsc:f.ggsc,Zqzqzs:f.Zqzqzs,opsnewgu:f.opsnewgu,hkadr:f.hkadr,hkstocks_cp_asc:f.hkstocks_cp_asc,sh_a_cp_desc:f.sh_a_cp_desc,AB5MinChangePercent:f.AB5MinChangePercent,ABTurnoverRate:f.ABTurnoverRate,ABVolumeRate:f.ABVolumeRate,ABAmplitude:f.ABAmplitude,ABAmount:f.ABAmount,ABPE:f.ABPE,ABPB:f.ABPB,ABFlowCapitalValue:f.ABFlowCapitalValue,ABMarketValue:f.ABMarketValue,AB60DayChangePercent:f.AB60DayChangePercent,AB360DayChangePercent:f.AB360DayChangePercent,ABSpeed:f.ABSpeed,BoardsCPAsc:f.BoardsCPAsc,BoardsCPAscd:f.BoardsCPAscd,BoardsTurnoverRate:f.BoardsTurnoverRate,BoardsSpeed:f.BoardsSpeed,BoardsAmount:f.BoardsAmount,CommonVolume:f.CommonVolume,CommonAmount:f.CommonAmount,BoardsPosition:f.BoardsPosition,MainCaptialFlow:f.MainCaptialFlow,FFRanking:f.FFRanking,BoardsCaptialFlow:f.BoardsCaptialFlow,fullscreenlist:f.fullscreenlist}},funct
href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f28",order:!0,show:!0},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},color:"f3",order:!0,show:!0,name:"ChangePercent"},{title:"涨跌额",key:"f4",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"f4",order:!0,show:!0,name:"Change"},{title:"买入价",key:"f31",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f28",order:!0,show:!0,name:"Change"},{title:"卖出价",key:"f32",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f28",order:!0,show:!0,name:"Change"},{title:"买量",key:"f211",color:"red",order:!0,show:!0,name:"buycount",cb:function(e,t){return f.formatNumber(e)}},{title:"卖量",key:"f212",color:"green",order:!0,show:!0,name:"sellcount",cb:function(e,t){return f.formatNumber(e)}},{title:"总量",key:"f5",color:"",order:!0,show:!0,name:"Change",cb:function(e,t){return f.formatNumber(e)}},{title:"现量",key:"f30",color:"",order:!0,show:!0,name:"Change",cb:function(e,t){return f.formatNumber(e)}},{title:"持仓量",key:"f108",show:!0,order:!0,cb:function(e,t){return f.formatNumber(e)}},{title:"日增",key:"f163",order:!0,color:"f163",cb:function(e,t){return f.formatNumber(e)},show:!0,name:"Amount"},{title:"昨结算",key:"f28",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},order:!0,show:!0,name:"PreviousClose"}],sumcount:20},indexqhNew:{fields:"f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f26,f22,f28,f11,f62,f128,f136,f115,f152,f31,f32,f108,f163,f211,f212,f5,f30",head:[{title:"序号",type:"seq",show:!0,name:"number"},{title:"代码",key:"f12",order:!0,show:!0,name:"Code"},{title:"名称",key:"f14",order:!0,show:!0},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f28",order:!0,show:!0},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},color:"f3",order:!0,show:!0,name:"ChangePercent"},{title:"涨跌额",key:"f4",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"f4",order:!0,show:!0,name:"Change"},{title:"买入价",key:"f31",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f28",order:!0,show:!0,name:"Change"},{title:"卖出价",key:"f32",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f28",order:!0,show:!0,name:"Change"},{title:"买量",key:"f211",color:"red",order:!0,show:!0,name:"buycount",cb:function(e,t){return f.formatNumber(e)}},{title:"卖量",key:"f212",color:"green",order:!0,show:!0,name:"sellcount",cb:function(e,t){return f.formatNumber(e)}},{title:"总量",key:"f5",color:"",order:!0,show:!0,name:"Change",cb:function(e,t){return f.formatNumber(e)}},{title:"现量",key:"f30",color:"",order:!0,show:!0,name:"Change",cb:function(e,t){return f.formatNumber(e)}},{title:"持仓量",key:"f108",show:!0,order:!0,cb:function(e,t){return f.formatNumber(e)}},{title:"日增",key:"f163",order:!0,color:"f163",cb:function(e,t){return f.formatNumber(e)},show:!0,name:"Amount"},{title:"昨结算",key:"f28",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},order:!0,show:!0,name:"PreviousClose"}],sumcount:20},zjs:{fields:"f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f26,f28,f22,f11,f62,f128,f136,f115,f152,f34,f35,f108",head:[{title:"序号",type:"seq",show:!0,name:"number"},{title:"代码",key:"f12",order:!0,href:"<a 
href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"Code"},{title:"名称",key:"f14",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"Name"},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f28",order:!0,show:!0,name:"Close"},{title:"涨跌额",key:"f4",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"f4",order:!0,show:!0,name:"Change"},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},color:"f3",order:!0,show:!0,name:"ChangePercent"},{title:"今开",key:"f17",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f28",order:!0,show:!0,name:"Open"},{title:"最高",key:"f15",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f28",order:!0,show:!0,name:"Hign"},{title:"最低",key:"f16",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f28",order:!0,show:!0,name:"Low"},{title:"昨结",key:"f28",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},order:!0,show:!0,name:"PreviousClose"},{title:"成交量",key:"f5",order:!0,cb:function(e,t){return f.formatNumber(e)},show:!0,name:"Volume"},{title:"成交额",key:"f6",order:!0,cb:function(e,t){return f.formatNumber(e)},show:!0,name:"Amount"},{title:"买盘(外盘)",key:"f34",show:!0,order:!0},{title:"卖盘(内盘)",key:"f35",show:!0,order:!0},{title:"持仓量",key:"f108",show:!0,order:!0}],sumcount:20},gjqh:{fields:"f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f26,f22,f28,f11,f62,f128,f136,f115,f152,f34,f35,f108",head:[{title:"序号",type:"seq",show:!0,name:"number"},{title:"代码",key:"f12",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"Code"},{title:"名称",key:"f14",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"Name"},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!0,show:!0,name:"Close"},{title:"涨跌额",key:"f4",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"f4",order:!0,show:!0,name:"Change"},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},color:"f3",order:!0,show:!0,name:"ChangePercent"},{title:"今开",key:"f17",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!0,show:!0,name:"Open"},{title:"最高",key:"f15",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!0,show:!0,name:"Hign"},{title:"最低",key:"f16",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!0,show:!0,name:"Low"},{title:"昨结",key:"f28",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},order:!0,show:!0,name:"PreviousClose"},{title:"成交量",key:"f5",order:!0,cb:function(e,t){return f.formatNumber(e)},show:!0,name:"Volume"},{title:"买盘(外盘)",key:"f34",show:!0,order:!0},{title:"卖盘(内盘)",key:"f35",show:!0,order:!0},{title:"持仓量",key:"f108",show:!0,order:!0}],sumcount:20},hjqh:{fields:"f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f26,f22,f28,f11,f62,f128,f136,f115,f152,f34,f35,f108,f211,f212",head:[{title:"序号",type:"seq",show:!0,name:"number"},{title:"代码",key:"f12",order:!1,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"Code"},{title:"名称",key:"f14",order:!1,href:"<a 
href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"Name"},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"f3",order:!1,show:!0,name:"Close"},{title:"涨跌额",key:"f4",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"f4",order:!1,show:!0,name:"Change"},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},color:"f3",order:!1,show:!0,name:"ChangePercent"},{title:"今开",key:"f17",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!1,show:!0,name:"Open"},{title:"最高",key:"f15",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!1,show:!0,name:"Hign"},{title:"最低",key:"f16",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!1,show:!0,name:"Low"},{title:"昨结",key:"f28",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},order:!1,show:!0,name:"PreviousClose"},{title:"成交量(手)",key:"f5",order:!1,cb:function(e,t){return f.formatNumber(e)},show:!0,name:"Volume"},{title:"买量(手)",key:"f211",color:"red",order:!1,show:!0,name:"buycount",cb:function(e,t){return f.formatNumber(e)}},{title:"卖量(手)",key:"f212",color:"green",order:!1,show:!0,name:"sellcount",cb:function(e,t){return f.formatNumber(e)}},{title:"持仓(手)",key:"f108",show:!0,order:!1}],sumcount:5},hkstocks_cp_asc:{fields:"f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f26,f22,f28,f11,f62,f128,f136,f115,f152,f34,f35,f108",head:[{title:"名称",key:"f14",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f13","f12","f14"],show:!0,name:"Name",newcb:function(e){return f.txtPoint(e)}},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!0,show:!0,name:"Close"},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},color:"f3",order:!0,show:!0,name:"ChangePercent"},{title:"涨跌额",key:"f4",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"f4",order:!0,show:!0,name:"Change"}],sumcount:5},CommonVolume:{fields:"f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f26,f22,f28,f11,f62,f128,f136,f115,f152,f34,f35,f108",head:[{title:"名称",key:"f14",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f13","f12","f14"],show:!0,name:"Name",newcb:function(e){return f.txtPoint(e)}},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!0,show:!0,name:"Close"},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},color:"f3",order:!0,show:!0,name:"ChangePercent"},{title:"成交量",key:"f5",order:!0,cb:function(e,t){return f.formatNumber(e)},show:!0,name:"Volume"}],sumcount:5},CommonAmount:{fields:"f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f26,f22,f28,f11,f62,f128,f136,f115,f152,f34,f35,f108",head:[{title:"名称",key:"f14",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f13","f12","f14"],show:!0,name:"Name",newcb:function(e){return f.txtPoint(e)}},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!0,show:!0,name:"Close"},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return 
f.formatNumberIndexZdf(e,t)},color:"f3",order:!0,show:!0,name:"ChangePercent"},{title:"成交额",key:"f6",order:!0,cb:function(e,t){return f.formatNumber(e)},show:!0,name:"Amount"}],sumcount:5},sh_a_cp_desc:{fields:"f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f26,f22,f28,f11,f62,f128,f136,f115,f152,f34,f35,f108",head:[{title:"名称",key:"f14",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f13","f12","f14"],show:!0,name:"Name",newcb:function(e){return f.txtPoint(e)}},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!0,show:!0,name:"Close"},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},color:"f3",order:!0,show:!0,name:"ChangePercent"},{title:"涨跌额",key:"f4",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"f4",order:!0,show:!0,name:"Change"},{title:"加自选",key:"addzixuan",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"Links"}],sumcount:5},AB5MinChangePercent:{fields:"f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f26,f22,f28,f11,f62,f128,f136,f115,f152,f34,f35,f108",head:[{title:"名称",key:"f14",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f13","f12","f14"],show:!0,name:"Name",newcb:function(e){return f.txtPoint(e)}},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!0,show:!0,name:"Close"},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},color:"f3",order:!0,show:!0,name:"ChangePercent"},{title:"五分钟涨跌幅",key:"f11",color:"f11",order:!0,fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},show:!0},{title:"加自选",key:"addzixuan",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"Links"}],sumcount:5},ABTurnoverRate:{fields:"f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f26,f22,f28,f11,f62,f128,f136,f115,f152,f34,f35,f108",head:[{title:"名称",key:"f14",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f13","f12","f14"],show:!0,name:"Name",newcb:function(e){return f.txtPoint(e)}},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!0,show:!0,name:"Close"},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},color:"f3",order:!0,show:!0,name:"ChangePercent"},{title:"换手率",key:"f8",order:!0,cb:function(e,t){return f.formatNumber2(e)},show:!0,name:"TurnoverRate"},{title:"加自选",key:"addzixuan",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"Links"}],sumcount:5},ABVolumeRate:{fields:"f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f26,f22,f28,f11,f62,f128,f136,f115,f152,f34,f35,f108",head:[{title:"名称",key:"f14",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f13","f12","f14"],show:!0,name:"Name",newcb:function(e){return f.txtPoint(e)}},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!0,show:!0,name:"Close"},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return 
f.formatNumberIndexZdf(e,t)},color:"f3",order:!0,show:!0,name:"ChangePercent"},{title:"量比",key:"f10",order:!0,cb:function(e,t){return f.formatNumber4(e)},show:!0,name:"TurnoverRate"},{title:"加自选",key:"addzixuan",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"Links"}],sumcount:5},ABAmplitude:{fields:"f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f26,f22,f28,f11,f62,f128,f136,f115,f152,f34,f35,f108",head:[{title:"名称",key:"f14",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f13","f12","f14"],show:!0,name:"Name",newcb:function(e){return f.txtPoint(e)}},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!0,show:!0,name:"Close"},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},color:"f3",order:!0,show:!0,name:"ChangePercent"},{title:"振幅",key:"f7",order:!0,cb:function(e,t){return f.formatNumber2(e)},show:!0,name:"Amplitude"},{title:"加自选",key:"addzixuan",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"Links"}],sumcount:5},ABAmount:{fields:"f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f26,f22,f28,f11,f62,f128,f136,f115,f152,f34,f35,f108",head:[{title:"名称",key:"f14",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f13","f12","f14"],show:!0,name:"Name",newcb:function(e){return f.txtPoint(e)}},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!0,show:!0,name:"Close"},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},color:"f3",order:!0,show:!0,name:"ChangePercent"},{title:"成交额",key:"f6",order:!0,cb:function(e,t){return f.formatNumber(e)},show:!0,name:"Amount"},{title:"加自选",key:"addzixuan",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"Links"}],sumcount:5},ABPE:{fields:"f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f26,f22,f28,f11,f62,f128,f136,f115,f152,f34,f35,f108",head:[{title:"名称",key:"f14",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f13","f12","f14"],show:!0,name:"Name",newcb:function(e){return f.txtPoint(e)}},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!0,show:!0,name:"Close"},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},color:"f3",order:!0,show:!0,name:"ChangePercent"},{title:"市盈率",key:"f9",order:!0,show:!0,cb:function(e,t){return f.formatNumberSyl(e)},name:"PERation"},{title:"加自选",key:"addzixuan",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"Links"}],sumcount:5},ABPB:{fields:"f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f26,f22,f28,f11,f62,f128,f136,f115,f152,f34,f35,f108",head:[{title:"名称",key:"f14",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f13","f12","f14"],show:!0,name:"Name",newcb:function(e){return f.txtPoint(e)}},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!0,show:!0,name:"Close"},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return 
f.formatNumberIndexZdf(e,t)},color:"f3",order:!0,show:!0,name:"ChangePercent"},{title:"市净率",key:"f23",order:!0,show:!0,cb:function(e,t){return f.formatNumberSyl(e)},name:"PB"},{title:"加自选",key:"addzixuan",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"Links"}],sumcount:5},ABFlowCapitalValue:{fields:"f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f26,f22,f28,f11,f62,f128,f136,f115,f152,f34,f35,f108",head:[{title:"名称",key:"f14",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f13","f12","f14"],show:!0,name:"Name",newcb:function(e){return f.txtPoint(e)}},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!0,show:!0,name:"Close"},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},color:"f3",order:!0,show:!0,name:"ChangePercent"},{title:"流通市值",key:"f21",order:!0,cb:function(e,t){return f.formatNumber(e)},show:!0},{title:"加自选",key:"addzixuan",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"Links"}],sumcount:5},ABMarketValue:{fields:"f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f26,f22,f28,f11,f62,f128,f136,f115,f152,f34,f35,f108",head:[{title:"名称",key:"f14",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f13","f12","f14"],show:!0,name:"Name",newcb:function(e){return f.txtPoint(e)}},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!0,show:!0,name:"Close"},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},color:"f3",order:!0,show:!0,name:"ChangePercent"},{title:"总市值",key:"f20",order:!0,cb:function(e,t){return f.formatNumber(e)},show:!0},{title:"加自选",key:"addzixuan",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"Links"}],sumcount:5},AB60DayChangePercent:{fields:"f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f26,f22,f28,f11,f62,f128,f136,f115,f152,f34,f35,f108",head:[{title:"名称",key:"f14",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f13","f12","f14"],show:!0,name:"Name",newcb:function(e){return f.txtPoint(e)}},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!0,show:!0,name:"Close"},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},color:"f3",order:!0,show:!0,name:"ChangePercent"},{title:"60日涨跌幅",key:"f24",order:!0,fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},color:"f24",show:!0},{title:"加自选",key:"addzixuan",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"Links"}],sumcount:5},AB360DayChangePercent:{fields:"f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f26,f22,f28,f11,f62,f128,f136,f115,f152,f34,f35,f108",head:[{title:"名称",key:"f14",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f13","f12","f14"],show:!0,name:"Name",newcb:function(e){return f.txtPoint(e)}},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return 
f.formatNumberFlag(e,t)},color:"_f18",order:!0,show:!0,name:"Close"},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},color:"f3",order:!0,show:!0,name:"ChangePercent"},{title:"年初至今涨跌幅",key:"f25",order:!0,suffix:"%",color:"f25",show:!0},{title:"加自选",key:"addzixuan",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"Links"}],sumcount:5},ABSpeed:{fields:"f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f26,f22,f28,f11,f62,f128,f136,f115,f152,f34,f35,f108",head:[{title:"名称",key:"f14",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f13","f12","f14"],show:!0,name:"Name",newcb:function(e){return f.txtPoint(e)}},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"_f18",order:!0,show:!0,name:"Close"},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},color:"f3",order:!0,show:!0,name:"ChangePercent"},{title:"涨速",key:"f22",order:!0,suffix:"%",color:"f22",show:!0},{title:"加自选",key:"addzixuan",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"Links"}],sumcount:5},BoardsCPAsc:{fields:"f1,f2,f3,f14,f12,f13,f62,f128,f136,f152",head:[{title:"名称",key:"f14",order:!1,href:"<a href='//quote.eastmoney.com/center/boardlist.html#boards-{{0}}1' title='{{1}}'></a>",data:["f12","f14"],show:!0,newcb:function(e){return f.txtPoint(e)}},{title:"涨跌幅",key:"f3",color:"f3",order:!1,fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},show:!0},{title:"主力净流入",key:"f62",color:"f62",order:!1,cb:function(e,t){return f.formatNumberIndex(e)},show:!0},{title:"领涨股",key:"f128",order:!1,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f141","f140","f128"],show:!0,newcb:function(e){return f.txtPoint(e)}},{title:"涨跌幅",key:"f136",color:"f136",order:!1,fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},show:!0}],sumcount:5},BoardsCPAscd:{fields:"f1,f2,f3,f14,f12,f13,f62,f128,f136,f152,f207,f208,f209,f222",head:[{title:"名称",key:"f14",order:!1,href:"<a href='//quote.eastmoney.com/center/boardlist.html#boards-{{0}}1' title='{{1}}'></a>",data:["f12","f14"],show:!0,newcb:function(e){return f.txtPoint(e)}},{title:"涨跌幅",key:"f3",color:"f3",order:!1,fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},show:!0},{title:"主力净流入",key:"f62",color:"f62",order:!1,cb:function(e,t){return f.formatNumberIndex(e)},show:!0},{title:"领跌股",key:"f207",order:!1,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f209","f208","f207"],show:!0,newcb:function(e){return f.txtPoint(e)}},{title:"涨跌幅",key:"f222",color:"f222",order:!1,fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},show:!0}],sumcount:5},BoardsTurnoverRate:{fields:"f1,f2,f3,f8,f14,f12,f13,f62,f128,f136,f152",head:[{title:"名称",key:"f14",order:!1,href:"<a href='//quote.eastmoney.com/center/boardlist.html#boards-{{0}}1' title='{{1}}'></a>",data:["f12","f14"],show:!0,newcb:function(e){return f.txtPoint(e)}},{title:"涨跌幅",key:"f3",color:"f3",order:!1,fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},show:!0},{title:"换手率",key:"f8",order:!0,cb:function(e,t){return f.formatNumber2(e)},show:!0,name:"TurnoverRate"},{title:"领涨股",key:"f128",order:!1,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' 
title='{{2}}'></a>",data:["f141","f140","f128"],show:!0,newcb:function(e){return f.txtPoint(e)}},{title:"涨跌幅",key:"f136",color:"f136",order:!1,fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},show:!0}],sumcount:5},BoardsSpeed:{fields:"f1,f2,f3,f8,f14,f12,f13,f22,f62,f128,f136,f152",head:[{title:"名称",key:"f14",order:!1,href:"<a href='//quote.eastmoney.com/center/boardlist.html#boards-{{0}}1' title='{{1}}'></a>",data:["f12","f14"],show:!0,newcb:function(e){return f.txtPoint(e)}},{title:"涨跌幅",key:"f3",color:"f3",order:!1,fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},show:!0},{title:"涨速",key:"f22",color:"f22",order:!0,cb:function(e,t){return f.formatNumber2(e)},show:!0,name:"TurnoverRate"},{title:"领涨股",key:"f128",order:!1,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f141","f140","f128"],show:!0,newcb:function(e){return f.txtPoint(e)}},{title:"涨跌幅",key:"f136",color:"f136",order:!1,fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},show:!0}],sumcount:6},BoardsAmount:{fields:"f1,f2,f3,f6,f8,f14,f12,f13,f22,f62,f128,f136,f152",head:[{title:"名称",key:"f14",order:!1,href:"<a href='//quote.eastmoney.com/center/boardlist.html#boards-{{0}}1' title='{{1}}'></a>",data:["f12","f14"],show:!0,newcb:function(e){return f.txtPoint(e)}},{title:"涨跌幅",key:"f3",color:"f3",order:!1,fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},show:!0},{title:"成交额",key:"f6",order:!0,cb:function(e,t){return f.formatNumber(e)},show:!0,name:"Amount"},{title:"领涨股",key:"f128",order:!1,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f141","f140","f128"],show:!0,newcb:function(e){return f.txtPoint(e)}},{title:"涨跌幅",key:"f136",color:"f136",order:!1,fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},show:!0}],sumcount:6},BoardsPosition:{fields:"f1,f2,f3,f6,f8,f14,f12,f13,f22,f62,f128,f136,f152,f184,f225,f226",head:[{title:"名称",key:"f14",order:!1,href:"<a href='//quote.eastmoney.com/center/boardlist.html#boards-{{0}}1' title='{{1}}'></a>",data:["f12","f14"],show:!0,newcb:function(e){return f.txtPoint(e)}},{title:"涨跌幅",key:"f3",color:"f3",order:!1,fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},show:!0},{title:"增仓占比",key:"f184",color:"f184",order:!0,cb:function(e,t){return f.formatNumberZCZB(e)},show:!0,name:"zczb"},{title:"排名",key:"f225",order:!0,cb:function(e,t){return f.formatNumber(e)},show:!0,name:"paim"},{title:"排行变化",key:"f226",color:"f226",order:!0,cb:function(e,t){return f.formatPaiming(e)},show:!0,name:"pmchange"}],sumcount:6},MainCaptialFlow:{fields:"f1,f2,f3,f14,f12,f13,f62,f128,f136,f152,f184",head:[{title:"名称",key:"f14",order:!1,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f13","f12","f14"],show:!0,newcb:function(e){return f.txtPoint(e)}},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"f3",order:!1,show:!0},{title:"涨跌幅",key:"f3",color:"f3",order:!1,fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},show:!0},{title:"主力净流入",key:"f62",color:"f62",order:!1,cb:function(e,t){return f.formatNumberIndex(e)},show:!0},{title:"净占比",key:"f184",color:"f184",order:!1,cb:function(e,t){return f.formatNumberJZB(e)},show:!0}],sumcount:5},FFRanking:{fields:"f1,f2,f3,f14,f12,f13,f62,f128,f136,f152,f184,f223",head:[{title:"名称",key:"f14",order:!1,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' 
title='{{2}}'></a>",data:["f13","f12","f14"],show:!0,newcb:function(e){return f.txtPoint(e)}},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"f3",order:!1,show:!0},{title:"净占比",key:"f184",color:"f184",order:!1,cb:function(e,t){return f.formatNumberJZB(e)},show:!0},{title:"排名",key:"f223",order:!0,cb:function(e,t){return f.formatNumber(e)},show:!0,name:"paim"},{title:"涨跌幅",key:"f3",color:"f3",order:!1,fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},show:!0}],sumcount:5},BoardsCaptialFlow:{fields:"f1,f2,f3,f6,f8,f14,f12,f13,f22,f62,f128,f136,f152,f204,f184",head:[{title:"名称",key:"f14",order:!1,href:"<a href='//quote.eastmoney.com/center/boardlist.html#boards-{{0}}1' title='{{1}}'></a>",data:["f12","f14"],show:!0,newcb:function(e){return f.txtPoint(e)}},{title:"涨跌幅",key:"f3",color:"f3",order:!1,fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},show:!0},{title:"主力净流入",key:"f62",color:"f62",order:!1,cb:function(e,t){return f.formatNumberIndex(e)},show:!0},{title:"净占比",key:"f184",color:"f184",order:!1,cb:function(e,t){return f.formatNumberJZB(e)},show:!0},{title:"主力流入最大股",key:"f204",order:!1,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f206","f205","f204"],show:!0,newcb:function(e){return f.txtPoint(e)}}],sumcount:5},zijinliuGegu:{fields:"f1,f2,f3,f14,f12,f13,f62,f128,f136,f152,f184",head:[{title:"名称",key:"f14",order:!1,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}' title='{{2}}'></a>",data:["f13","f12","f14"],show:!0,newcb:function(e){return f.txtPoint(e)}},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"f3",order:!1,show:!0},{title:"涨跌幅",key:"f3",color:"f3",order:!1,fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},show:!0},{title:"主力净流入",key:"f62",color:"f62",order:!1,cb:function(e,t){return f.formatNumberIndex(e)},show:!0},{title:"净占比",key:"f184",color:"f184",order:!1,cb:function(e,t){return f.formatNumberJZB(e)},show:!0}],sumcount:5},fullscreenlist:{fields:"f1,f152,f2,f3,f12,f13,f14,f227,f228,f229,f230,f231,f232,f233,f234,f235,f236,f237,f238,f239,f240,f241,f242,f26,f243",head:[{title:"序号",type:"seq",show:!0,name:"number"},{title:"转债代码",key:"f12",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"Code"},{title:"转债名称",key:"f14",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"Name"},{title:"最新价",key:"f2",fixedkey:"f1",newcb:function(e,t){return f.formatNumberFlag(e,t)},color:"f3",order:!0,show:!0,name:"Close"},{title:"涨跌幅",key:"f3",fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},color:"f3",order:!0,show:!0,name:"ChangePercent"},{title:"相关链接",key:"",order:!1,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f13","f12"],show:!0,name:"fullscreen_Links"},{title:"正股代码",key:"f232",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f233","f232"],show:!0,name:"Code"},{title:"正股名称",key:"f234",order:!0,href:"<a href='//quote.eastmoney.com/unify/r/{{0}}.{{1}}'></a>",data:["f233","f232"],show:!0,name:"Name"},{title:"最新价",key:"f229",fixedkey:"f1",cb:function(e,t){return f.formatNumberJiaZ(e,2)},color:"f230",order:!0,show:!0,name:"Close"},{title:"涨跌幅",key:"f230",fixedkey:"f152",newcb:function(e,t){return 
f.formatNumberIndexZdf(e,t)},color:"f230",order:!0,show:!0,name:"ChangePercent"},{title:"转股价",key:"f235",order:!0,cb:function(e,t){return f.formatNumberSyl(e)},show:!0,name:"Volume"},{title:"转股价值",key:"f236",order:!0,cb:function(e,t){return f.formatNumberJiaZ(e,4)},show:!0,name:"zgValue"},{title:"转股溢价率",key:"f237",order:!0,color:"f237",fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},show:!0,name:"zgYjb"},{title:"纯债溢价率",key:"f238",color:"f238",order:!0,fixedkey:"f152",newcb:function(e,t){return f.formatNumberIndexZdf(e,t)},show:!0,name:"czYjl"},{title:"回售触发价",key:"f239",order:!0,cb:function(e,t){return f.formatNumberSyl(e)},show:!0,name:"hsCfj"},{title:"强赎触发价",key:"f240",order:!0,cb:function(e,t){return f.formatNumberSyl(e)},show:!0,name:"qsCfj"},{title:"到期赎回价",key:"f241",order:!0,cb:function(e,t){return f.formatNumberSyl(e)},show:!0,name:"dqShj"},{title:"纯债价值",key:"f227",order:!0,cb:function(e,t){return f.formatNumberSyl(e)},show:!0,name:"Volume"},{title:"开始转股日",key:"f242",order:!0,show:!0,name:"shtime",cb:function(e,t){return f.formatNumber3(e)}},{title:"上市日期",key:"f26",order:!0,show:!0,name:"shtime",cb:function(e,t){return f.formatNumber3(e)}},{title:"申购日期",key:"f243",order:!0,show:!0,name:"shtime",cb:function(e,t){return f.formatNumber3(e)}},{title:"加自选",key:"addzixuan",order:!1,show:!0,name:"Links"}],sumcount:50}}},function(e,t){e.exports={formatNumber:function(e){if(0==e)return"0.00";var t="";try{var r=Math.abs(e/1);"number"==typeof r&&(t=r>1e4&&r<1e8?(e/1e4).toFixed(2)+"万":r>1e8?(e/1e8).toFixed(2)+"亿":0==r?"-":e.toFixed(0))}catch(f){t="-"}return t},formatNumberHSGGLB:function(e){if(0==e)return"0.00";try{return e.toFixed(2)}catch(t){return"-"}},formatNumberIndex:function(e){if("-"==e)return"-";if(0==e)return"0.00";try{var t="number"==typeof t&&t>=0?t:NaN;try{var r=Math.abs(e/1);if("number"==typeof e)return r>=0&&r<1?e.toFixed(2):r>=1&&r<100?e.toFixed(2):r>=100&&r<1e3?e.toFixed(1):r>=1e3&&r<1e4?e.toFixed(0):r>=1e4&&r<1e6?(e/=1e4).toFixed(t||2)+"万":r>=1e6&&r<1e7?(e/=1e4).toFixed(t||1)+"万":r>=1e7&&r<1e8?(e/=1e4).toFixed(t||0)+"万":r>=1e8&&r<1e10?(e/=1e8).toFixed(t||2)+"亿":r>=1e10&&r<1e11?(e/=1e8).toFixed(t||1)+"亿":r>=1e11&&r<1e12?(e/=1e8).toFixed(t||0)+"亿":r>=1e12&&r<1e13?(e/=1e12).toFixed(t||1)+"万亿":r>=1e13||r<0&&r>-1e16?(e/=1e12).toFixed(t||0)+"万亿":e.toFixed(0)}catch(f){e="-"}return e}catch(f){return"-"}},kcbMyformatNum:function(e){if("-"==e)return"-";if(e==undefined||""==e||isNaN(e)||"-"==e)return"";var t="";try{return e>=0&&e<=99.999999999?t=parseFloat(e).toFixed(2):e>=100&&e<=999?t=parseFloat(e).toFixed(1):e>=1e3&&(t=e>1e4&&e<1e8?(e/1e4).toFixed(2)+"万":e>1e8?(e/1e8).toFixed(2)+"亿":0==e?"-":e.toFixed(0)),e<0&&((e=Math.abs(e))>=0&&e<=99?t=parseFloat(e).toFixed(2):e>=100&&e<=999?t=parseFloat(e).toFixed(1):e>=1e3&&(t=parseFloat(e).toFixed(0)),t="-"+t),t.toString()+""}catch(r){return"-"}},formatNumber_0:function(e){var t="";try{var r=Math.abs(e/1);"number"==typeof r&&(t=r>1e4&&r<1e8?(e/1e4).toFixed(0)+"万":r>1e8?(e/1e8).toFixed(2)+"亿":e)}catch(f){t="-"}return t},formatNumber2:function(e){var t="";try{var r=Math.abs(e/1);"number"==typeof r&&(t=r.toFixed(2))}catch(f){t="-"}return e>=0?t+"%":"-"},formatNumberJZB:function(e){var t="";try{var r=Math.abs(e/1);"number"==typeof r&&(t=r.toFixed(2))}catch(f){t="-"}return e>=0?t+"%":e<0?"-"+t+"%":"-"},formatNumberZCZB:function(e){var t="";try{var r=Math.abs(e/1);"number"==typeof r&&(t=r.toFixed(2))}catch(f){t="-"}return e>=0?t+"%":e<0?"-"+t+"%":"-"},formatNumberSyl:function(e){return 
e>0?e.toFixed(2):e<0?e.toFixed(2):"-"},formatNumberJiaZ:function(e,t){return e>0?e.toFixed(t):e<0?e.toFixed(t):"-"},formatNumber3:function(e){var t=e;return"number"==typeof t?t.toString().substr(0,4)+"-"+t.toString().substr(4,2)+"-"+t.toString().substr(6,2):"string"==typeof t?t:"-"},formatNumber4:function(e){var t="";try{var r=Math.abs(e/1);"number"==typeof r&&(t=r.toFixed(2))}catch(f){t="-"}return e>0?t:e<0?"-"+t:0==e?"0.00":"-"},formatNumberTime:function(e){var t=new Date(1e3*e),r=t.getFullYear(),f=t.getMonth()+1,o=t.getDate(),n=t.getHours(),a=t.getMinutes(),i=t.getSeconds();return r+"-"+s(f)+"-"+s(o)+" "+s(n)+":"+s(a)+":"+s(i);function s(e){return e<10?"0"+e:e}},formatNumberZde:function(e){var t="";try{var r=Math.abs(e/1);"number"==typeof r&&(t=r.toFixed(2))}catch(f){t="-"}return e>0?t:e<0?"-"+t:0==e?"0.00":"-"},formatPaiming:function(e){if(0==e)return"-";var t="";try{var r=Math.abs(e/1);"number"==typeof r&&(t=r>1e4&&r<1e8?(e/1e4).toFixed(2)+"万":r>1e8?(e/1e8).toFixed(2)+"亿":0==r?"-":e.toFixed(0))}catch(f){t="-"}return t>0&&(t="+"+t),t},formatNumberFlag:function(e,t){var r="";try{var f=Math.abs(e/1);"number"==typeof f&&(r=f.toFixed(t))}catch(o){r="-"}return e>0?r:e<0?"-"+r:0==e?(0).toFixed(t):"-"},formatNumberFlag2:function(e,t){var r="";try{var f=Math.abs(e/1);"number"==typeof f&&(r=f.toFixed(t))}catch(o){r="-"}return e>0?r+"%":e<0?"-"+r+"%":"-"},formatNumberIndexZdf:function(e,t){var r="";try{var f=Math.abs(e/1);"number"==typeof f&&(r=f.toFixed(t))}catch(o){r="-"}return e>0?r+"%":e<0?"-"+r+"%":0==e?"0.00%":"-"},txtPoint:function(e){var t=e.length,r=-1,f=0;strChar=[];for(var o=0;o<t;o++)r=e.charCodeAt(o),strChar[f]=e.charAt(o),r>=0&&r<=128?f+=1:(strChar[f+1]="",f+=2);return f>8&&(e=strChar.slice(0,6).join("")+".."),e},formatNumberQhsc:function(e){var t={1:"上交所",8:"中金所",113:"上期所",114:"大商所",115:"郑商所",116:"港交所",142:"上期能源"},r="";return e&&(r=t[e]?t[e]:"国际期货"),r}}},function(e,t){e.exports={ChangePercent:"f3",Amount:"f6",Volume:"f5",TurnoverRate:"f8",Speed:"f22",ChangePercent360Day:"f25",ChangePercent60Day:"f24",MarketValue:"f20",FlowCapitalValue:"f21",PB:"f23",PERation:"f115",Amplitude:"f7",VolumeRate:"f10",FiveMinuteChangePercent:"f11",PERation:"f9"}},function(e,t,r){(function(t){var f=!!r(83).get(),o=function(e){var r="mystock_web";return f||(r="mystock_webanonym"),t.ajax({type:"GET",url:"http://myfavor1.eastmoney.com/"+r+"?cb=?",data:e,dataType:"jsonp"})},n=function(e){var r="mystock_web";return f||(r="mystock_webanonym"),t.ajax({type:"GET",url:"http://myfavor1.eastmoney.com/"+r,data:e,dataType:"jsonp",jsonp:"cb"})};e.exports={getDefaultStocks:function(){return o({f:"gsaandcheck"}).then(function(e){return window._myFavor_list=e.data.list,e.data.list})},add:function(e){return n({f:"asz",sc:e}).then(function(e){return 1==e.result})},del:function(e){return n({f:"dsz",sc:e}).then(function(e){return 1==e.result})},get:function(e){return n({f:"gsaandcheck",sc:e}).then(function(e){return"True"==e.data.check})},newget:function(e){t.ajax({type:"get",data:"",url:"https://myfavor.eastmoney.com/mystock?f=gsaandcheck&t=undefined&cb=jsonp1553678369631",dataType:"jsonp",jsonp:"js",success:function(e){}})}}}).call(this,r(11))},function(e,t,r){var f=r(84),o={get:function(){if(f.get("ut")&&f.get("ct")&&f.get("uidal")){var e={vtype:null,state:null,name:""};if(f.get("vtpst")&&"|"!=f.get("vtpst")){var 
t=f.get("vtpst").split("|");if(t.length>1){if("0"==t[1]||"3"==t[1])switch(t[0]){case"301":e.vtype=1,e.name="理财师";break;case"302":e.vtype=2,e.name="非理财师";break;case"303":e.vtype=3,e.name="企业"}switch(t[1]){case"0":e.state=0;break;case"1":e.state=11;break;case"2":e.state=12;break;case"3":e.state=13;break;case"8":e.state=18;break;case"9":e.state=19}}}return{id:f.get("uidal").substring(0,16),nick:f.get("uidal").substring(16),jiav:e}}return null},logOut:function(e){var t=new Date;document.cookie="pi=;path=/;domain=eastmoney.com;expires="+t.toGMTString(),document.cookie="ct=;path=/;domain=eastmoney.com;expires="+t.toGMTString(),document.cookie="ut=;path=/;domain=eastmoney.com;expires="+t.toGMTString(),document.cookie="uidal=;path=/;domain=eastmoney.com;expires="+t.toGMTString(),e&&e()},isLogin:function(){return!!this.get()}};e.exports=o},function(e,t){var r={get:function(e){var t=document.cookie.match(new RegExp("(^| )"+e+"=([^;]*)(;|$)"));return null!=t?decodeURIComponent(t[2]):null},set:function(e,t,r,f){var o=new Date;o.setDate(o.getDate()+r),document.cookie=e+"="+escape(t)+";expires="+o.toGMTString()+";path=/;domain="+f},del:function(e,t){var r=new Date((new Date).getTime()-1);document.cookie=t?e+"=;path=/;expires="+r.toGMTString()+";domain="+t:e+"=;path=/;expires="+r.toGMTString()}};e.exports=r},function(e,t){},function(e){e.exports={ops:{order:"f3",orderDur:!1,type:"bankuai",thclick:!1},ops2:{order:"f62",orderDur:!1,type:"zijinliu",thclick:!1},ops3:{order:"f62",orderDur:!0,type:"zijinliu",thclick:!1},ops4:{order:"f3",orderDur:!1,type:"hushenAStock",zixuan:!0},opsKcb:{order:"f3",orderDur:!1,type:"hushenStockKcb",zixuan:!0},opsnewgu:{order:"f26",orderDur:!1,type:"opsnewgu",zixuan:!0},abgu:{order:"f199",orderDur:!1,type:"abgu",zixuan:!0},ahgu:{order:"f3",orderDur:!1,type:"ahgu",zixuan:!0,thclick:!1},ahgu2:{order:"f3",orderDur:!1,type:"ahgu2",zixuan:!0},ahgu3:{order:"f3",orderDur:!1,type:"ahgu3",zixuan:!0,thclick:!1},ops5:{order:"f26",orderDur:!1,type:"shhkBoard",zixuan:!0},ops6:{order:"f3",orderDur:!1,type:"staqnetBoard"},ops7:{order:"f3",orderDur:!1,type:"indexsh"},ops8:{order:"f3",orderDur:!1,type:"indexsz"},ops9:{order:"f3",orderDur:!1,type:"neeqstocks"},hkstocks:{order:"f3",orderDur:!1,type:"hkstocks",zixuan:!0},hkshstocks:{order:"f3",orderDur:!1,type:"hkshstocks",zixuan:!0},hkindex:{order:"f3",orderDur:!1,type:"hkindex"},hkindexNXZ:{order:"f6",orderDur:!1,type:"hkindex"},usstocks:{order:"f3",orderDur:!1,type:"usstocks"},usindex:{order:"f3",orderDur:!1,type:"usindex"},globalamerica:{order:"f3",orderDur:!1,type:"globalamerica"},globalamerica2:{orderDur:!1,type:"globalamerica2",thclick:!1},globalamericaOz:{orderDur:!1,type:"globalamericaOz",thclick:!1},globalamerica3:{orderDur:!1,type:"globalamerica3",thclick:!1},globalamerica4:{orderDur:!1,type:"globalamerica4",thclick:!1},conceptboard:{order:"f3",orderDur:!1,type:"conceptboard"},conceptboardDatil:{order:"f3",orderDur:!1,type:"conceptboardDatil",zixuan:!0},hsgt:{order:"f3",orderDur:!1,type:"hsgt",zixuan:!0,thclick:!1},qhsc_jq:{order:"f3",orderDur:!1,type:"qhsc_jq",zixuan:!0,thclick:!1},fundcloseend:{order:"f3",orderDur:!1,type:"fundcloseend"},fundcloseend2:{order:"f3",orderDur:!1,type:"fundcloseend2",thclick:!1},fundcloseend2d:{order:"f3",orderDur:!0,type:"fundcloseend2",thclick:!1},bond:{order:"f3",orderDur:!1,type:"bond"},bondnew:{order:"f6",orderDur:!1,type:"bondnew"},forex:{order:"f3",orderDur:!1,type:"forex"},forex2:{order:"f3",orderDur:!1,type:"forex2",thclick:!1},qqsc:{order:"f3",orderDur:!1,type:"qqsc"},gold:{order:"f3",orderDur:!1,t
ype:"gold"},gold2:{order:"f3",orderDur:!1,type:"gold2",thclick:!1},hsbk:{order:"f3",orderDur:!1,type:"hsbk",thclick:!1},hsbkd:{order:"f3",orderDur:!0,type:"hsbkd",thclick:!1},hsbklz:{order:"f3",orderDur:!1,type:"hsbklz",thclick:!1},hszs:{order:"f3",orderDur:!1,type:"hszs",thclick:!1},hszs2:{order:"",orderDur:!1,type:"hszs2",thclick:!1},ggsc:{order:"f3",orderDur:!1,type:"ggsc",zixuan:!0,thclick:!1},ggscd:{order:"f3",orderDur:!0,type:"ggsc",zixuan:!0,thclick:!1},mgsc:{order:"f3",orderDur:!1,type:"mgsc",thclick:!1},mgscd:{order:"f3",orderDur:!0,type:"mgsc",thclick:!1},mgsc3:{order:"f3",orderDur:!1,type:"mgsc3",thclick:!1},bondmine:{order:"f3",orderDur:!1,type:"bondmine",thclick:!1},indexqh:{order:"f14",orderDur:!0,type:"indexqh"},indexqhNew:{order:"f14",orderDur:!0,type:"indexqhNew"},indexqh_gjs:{order:"f14",orderDur:!0,type:"indexqh"},zjs:{order:"f3",orderDur:!1,type:"zjs"},gjqh:{order:"f3",orderDur:!1,type:"gjqh"},hjqh:{order:"f3",orderDur:!1,type:"hjqh",thclick:!1},whpj:{order:"f3",orderDur:!1,type:"whpj"},Zqzqzs:{order:"f3",orderDur:!1,type:"Zqzqzs",thclick:!1},hkadr:{order:"f3",orderDur:!1,type:"hkadr"},fullscreenlist:{order:"f243",orderDur:!1,type:"fullscreenlist",zixuan:!0}}},,,,,,,,,,,function(e,t,r){(function(t){r(41),r(70);r(72),r(73);var f=r(98),o=r(77),n=r(99),a=r(82);function i(e){var r=e.type,f=n[r].head;this.tableHead=f;var a={orderBy:"zdf",sort:"desc",pageSize:n[r].sumcount,pageIndex:0,index:1};this.ops=t.extend(a,e),this.page=new o,this.codearr=[],this.mycontent="",this.myfs=""}r(85),i.prototype.Qihuo=function(e,r,f){this.getQihuo(e,r,f),0!=this.ops.thclick&&(t(".dataTable thead th").css("cursor","default"),this.addEvent(e,r,f))},i.prototype.pageClick=function(e,t,r,f){var o=this;o.page.onPage=function(n){o.ops.index=n,n>r&&(o.ops.index=r),o.getDataBank(e,t,f)}},i.prototype.hoverFn=function(){t("#digest-industry").show(),t("#digest-concept").hide(),t("#digest-region").hide(),t("#digest-industry").hover(function(){t("#digest-industry").show(),t("#digest-concept").hide(),t("#digest-region").hide()}),t("#digest-concept").hover(function(){t("#digest-industry").hide(),t("#digest-concept").show(),t("#digest-region").hide()}),t("#digest-region").hover(function(){t("#digest-industry").hide(),t("#digest-concept").hide(),t("#digest-region").show()})},i.prototype.getQihuo=function(e,t,r){0!=this.getParam("sr")&&1!=this.getParam("sr")||(this.ops.orderDur=!0),-1==this.getParam("sr")&&(this.ops.orderDur=!1);var f=window.location.href.split("#")[1];if("region_board"==f||"concept_board"==f||"industry_board"==f)0==this.ops.orderDur&&(this.tableHead[10].title="领涨股票",this.tableHead[10].key="f128",this.tableHead[11].key="f136",this.tableHead[11].color="f136",this.createHead(e,t),this.getDataBank(e,t,r)),1==this.ops.orderDur&&(this.tableHead[10].title="领跌股票",this.tableHead[10].key="f128",this.tableHead[11].key="f136",this.tableHead[11].color="f222",this.createHead(e,t),this.getDataBank(e,t,r));else{var o={PB:"市净率",MarketValue:"总市值",FlowCapitalValue:"流通市值",FlowCapitalValue:"流通市值",ChangePercent60Day:"60日涨跌幅",ChangePercent360Day:"年初至今涨跌幅",Speed:"涨速",FiveMinuteChangePercent:"5分钟涨跌",VolumeRate:"量比"};if(this.getParam("st")){for(var n=0;n<this.tableHead.length;n++){var a=this.tableHead[n].title;"总市值"!=a&&"流通市值"!=a&&"60日涨跌幅"!=a&&"年初至今涨跌幅"!=a&&"涨速"!=a&&"5分钟涨跌"!=a&&"量比"!=a||(this.tableHead[n].show=!1),a==o[this.getParam("st")]&&(this.tableHead[n].show=!0)}this.createHead(e,t),this.getDataBank(e,t,r)}else this.createHead(e,t),this.getDataBank(e,t,r)}},i.prototype.createHead=function(e,r){for(var 
"S","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0568","show":true,"name":"","title":"深成500","href":"/center/boardlist.html#boards2-90.BK0568","target":null,"order":142,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0804","show":true,"name":"","title":"深股通","href":"/center/boardlist.html#boards2-90.BK0804","target":null,"order":143,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0643","show":true,"name":"","title":"上海自贸","href":"/center/boardlist.html#boards2-90.BK0643","target":null,"order":144,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0851","show":true,"name":"","title":"纾困概念","href":"/center/boardlist.html#boards2-90.BK0851","target":null,"order":145,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0597","show":true,"name":"","title":"水利建设","href":"/center/boardlist.html#boards2-90.BK0597","target":null,"order":146,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0709","show":true,"name":"","title":"赛马概念","href":"/center/boardlist.html#boards2-90.BK0709","target":null,"order":147,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0617","show":true,"name":"","title":"石墨烯","href":"/center/boardlist.html#boards2-90.BK0617","target":null,"order":148,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0614","show":true,"name":"","title":"食品安全","href":"/center/boardlist.html#boards2-90.BK0614","target":null,"order":149,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0511","show":true,"name":"","title":"ST概念","href":"/center/boardlist.html#boards2-90.BK0511","target":null,"order":150,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0669","show":true,"name":"","title":"生态农业","href":"/center/boardlist.html#boards2-90.BK0669","target":null,"order":151,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0970","show":true,"name":"","title":"生物识别","href":"/center/boardlist.html#boards2-90.BK0970","target":null,"order":152,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0548","show":true,"name":"","title":"生物疫苗","href":"/center/boardlist.html#boards2-90.BK0548","target":null,"order":153,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0642","show":true,"name":"","title":"手游概念","href":"/center/boardlist.html#boards2-90.BK0642","target":null,"order":154,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0743","show":true,"name":"","title":"深证100R","href":"/center/boardlist.html#boards2-90.BK0743","target":null,"order":155,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0612","show":true,"name":"","title":"上证180_","href":"/center/boardlist.html#boards2-90.BK0612","target":null,"order":156,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0705","show":true,"name":"","title":"上证380","href":"/center/boardlist.html#boards2-90.BK0705","target":null,"order":157,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0611","show":true,"name":"","title":"上证50_","href":"/center/boardlist.html#boards2-90.BK0611","target":null,"order":158,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0883","show":true,"name":"","title":"数字货币","href":"/center/boardlist.html#boards2-90.BK0883","target":null,"order":159,"groupKey":"","className":null,"
hot":false,"next":null},{"key":"boards2-90.BK0861","show":true,"name":"","title":"数字孪生","href":"/center/boardlist.html#boards2-90.BK0861","target":null,"order":160,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0549","show":true,"name":"","title":"深圳特区","href":"/center/boardlist.html#boards2-90.BK0549","target":null,"order":161,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0633","show":true,"name":"","title":"送转预期","href":"/center/boardlist.html#boards2-90.BK0633","target":null,"order":162,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0836","show":true,"name":"","title":"数字中国","href":"/center/boardlist.html#boards2-90.BK0836","target":null,"order":163,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0805","show":true,"name":"","title":"钛白粉","href":"/center/boardlist.html#boards2-90.BK0805","target":null,"order":164,"groupKey":"T","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0632","show":true,"name":"","title":"土地流转","href":"/center/boardlist.html#boards2-90.BK0632","target":null,"order":165,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0592","show":true,"name":"","title":"铁路基建","href":"/center/boardlist.html#boards2-90.BK0592","target":null,"order":166,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0863","show":true,"name":"","title":"透明工厂","href":"/center/boardlist.html#boards2-90.BK0863","target":null,"order":167,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0843","show":true,"name":"","title":"天然气","href":"/center/boardlist.html#boards2-90.BK0843","target":null,"order":168,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0644","show":true,"name":"","title":"特斯拉","href":"/center/boardlist.html#boards2-90.BK0644","target":null,"order":169,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0841","show":true,"name":"","title":"体外诊断","href":"/center/boardlist.html#boards2-90.BK0841","target":null,"order":170,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0708","show":true,"name":"","title":"体育产业","href":"/center/boardlist.html#boards2-90.BK0708","target":null,"order":171,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0625","show":true,"name":"","title":"通用航空","href":"/center/boardlist.html#boards2-90.BK0625","target":null,"order":172,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0898","show":true,"name":"","title":"胎压监测","href":"/center/boardlist.html#boards2-90.BK0898","target":null,"order":173,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0588","show":true,"name":"","title":"太阳能","href":"/center/boardlist.html#boards2-90.BK0588","target":null,"order":174,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0880","show":true,"name":"","title":"UWB概念","href":"/center/boardlist.html#boards2-90.BK0880","target":null,"order":175,"groupKey":"U","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0885","show":true,"name":"","title":"VPN","href":"/center/boardlist.html#boards2-90.BK0885","target":null,"order":176,"groupKey":"V","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0831","show":true,"name":"","title":"万达概念","href":"/center/boardlist.html#boards2-90.BK0831","target":null,"order":177,"groupKey":"W","className":null,"hot":false,"next":null},{"k
ey":"boards2-90.BK0940","show":true,"name":"","title":"网红直播","href":"/center/boardlist.html#boards2-90.BK0940","target":null,"order":178,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0915","show":true,"name":"","title":"WiFi","href":"/center/boardlist.html#boards2-90.BK0915","target":null,"order":179,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0655","show":true,"name":"","title":"网络安全","href":"/center/boardlist.html#boards2-90.BK0655","target":null,"order":180,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0554","show":true,"name":"","title":"物联网","href":"/center/boardlist.html#boards2-90.BK0554","target":null,"order":181,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0509","show":true,"name":"","title":"网络游戏","href":"/center/boardlist.html#boards2-90.BK0509","target":null,"order":182,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0704","show":true,"name":"","title":"无人机","href":"/center/boardlist.html#boards2-90.BK0704","target":null,"order":183,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0802","show":true,"name":"","title":"无人驾驶","href":"/center/boardlist.html#boards2-90.BK0802","target":null,"order":184,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0895","show":true,"name":"","title":"维生素","href":"/center/boardlist.html#boards2-90.BK0895","target":null,"order":185,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0960","show":true,"name":"","title":"无线充电","href":"/center/boardlist.html#boards2-90.BK0960","target":null,"order":186,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0893","show":true,"name":"","title":"无线耳机","href":"/center/boardlist.html#boards2-90.BK0893","target":null,"order":187,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0813","show":true,"name":"","title":"雄安新区","href":"/center/boardlist.html#boards2-90.BK0813","target":null,"order":188,"groupKey":"X","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0523","show":true,"name":"","title":"新材料","href":"/center/boardlist.html#boards2-90.BK0523","target":null,"order":189,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0834","show":true,"name":"","title":"乡村振兴","href":"/center/boardlist.html#boards2-90.BK0834","target":null,"order":190,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0913","show":true,"name":"","title":"消毒剂","href":"/center/boardlist.html#boards2-90.BK0913","target":null,"order":191,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0695","show":true,"name":"","title":"小金属","href":"/center/boardlist.html#boards2-90.BK0695","target":null,"order":192,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0825","show":true,"name":"","title":"新零售","href":"/center/boardlist.html#boards2-90.BK0825","target":null,"order":193,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0833","show":true,"name":"","title":"小米概念","href":"/center/boardlist.html#boards2-90.BK0833","target":null,"order":194,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0722","show":true,"name":"","title":"虚拟现实","href":"/center/boardlist.html#boards2-90.BK0722","target":null,"order":195,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0493","show":tr
ue,"name":"","title":"新能源","href":"/center/boardlist.html#boards2-90.BK0493","target":null,"order":196,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0900","show":true,"name":"","title":"新能源车","href":"/center/boardlist.html#boards2-90.BK0900","target":null,"order":197,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0519","show":true,"name":"","title":"稀缺资源","href":"/center/boardlist.html#boards2-90.BK0519","target":null,"order":198,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0600","show":true,"name":"","title":"新三板","href":"/center/boardlist.html#boards2-90.BK0600","target":null,"order":199,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0578","show":true,"name":"","title":"稀土永磁","href":"/center/boardlist.html#boards2-90.BK0578","target":null,"order":200,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0912","show":true,"name":"","title":"远程办公","href":"/center/boardlist.html#boards2-90.BK0912","target":null,"order":201,"groupKey":"Y","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0712","show":true,"name":"","title":"一带一路","href":"/center/boardlist.html#boards2-90.BK0712","target":null,"order":202,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0556","show":true,"name":"","title":"移动支付","href":"/center/boardlist.html#boards2-90.BK0556","target":null,"order":203,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0914","show":true,"name":"","title":"医废处理","href":"/center/boardlist.html#boards2-90.BK0914","target":null,"order":204,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0663","show":true,"name":"","title":"油改概念","href":"/center/boardlist.html#boards2-90.BK0663","target":null,"order":205,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0677","show":true,"name":"","title":"粤港自贸","href":"/center/boardlist.html#boards2-90.BK0677","target":null,"order":206,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0579","show":true,"name":"","title":"云计算","href":"/center/boardlist.html#boards2-90.BK0579","target":null,"order":207,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0563","show":true,"name":"","title":"油价相关","href":"/center/boardlist.html#boards2-90.BK0563","target":null,"order":208,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0570","show":true,"name":"","title":"预亏预减","href":"/center/boardlist.html#boards2-90.BK0570","target":null,"order":209,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0653","show":true,"name":"","title":"养老概念","href":"/center/boardlist.html#boards2-90.BK0653","target":null,"order":210,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0823","show":true,"name":"","title":"养老金","href":"/center/boardlist.html#boards2-90.BK0823","target":null,"order":211,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0889","show":true,"name":"","title":"医疗美容","href":"/center/boardlist.html#boards2-90.BK0889","target":null,"order":212,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0668","show":true,"name":"","title":"医疗器械","href":"/center/boardlist.html#boards2-90.BK0668","target":null,"order":213,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0606","show":true,"name":"","title":"油气设服","h
ref":"/center/boardlist.html#boards2-90.BK0606","target":null,"order":214,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0610","show":true,"name":"","title":"央视50_","href":"/center/boardlist.html#boards2-90.BK0610","target":null,"order":215,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0847","show":true,"name":"","title":"影视概念","href":"/center/boardlist.html#boards2-90.BK0847","target":null,"order":216,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0603","show":true,"name":"","title":"页岩气","href":"/center/boardlist.html#boards2-90.BK0603","target":null,"order":217,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0903","show":true,"name":"","title":"云游戏","href":"/center/boardlist.html#boards2-90.BK0903","target":null,"order":218,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0571","show":true,"name":"","title":"预盈预增","href":"/center/boardlist.html#boards2-90.BK0571","target":null,"order":219,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0635","show":true,"name":"","title":"中超概念","href":"/center/boardlist.html#boards2-90.BK0635","target":null,"order":220,"groupKey":"Z","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0628","show":true,"name":"","title":"智慧城市","href":"/center/boardlist.html#boards2-90.BK0628","target":null,"order":221,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0886","show":true,"name":"","title":"智慧政务","href":"/center/boardlist.html#boards2-90.BK0886","target":null,"order":222,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0718","show":true,"name":"","title":"证金持股","href":"/center/boardlist.html#boards2-90.BK0718","target":null,"order":223,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0594","show":true,"name":"","title":"长江三角","href":"/center/boardlist.html#boards2-90.BK0594","target":null,"order":224,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0907","show":true,"name":"","title":"转基因","href":"/center/boardlist.html#boards2-90.BK0907","target":null,"order":225,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0641","show":true,"name":"","title":"智能穿戴","href":"/center/boardlist.html#boards2-90.BK0641","target":null,"order":226,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0656","show":true,"name":"","title":"智能电视","href":"/center/boardlist.html#boards2-90.BK0656","target":null,"order":227,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0581","show":true,"name":"","title":"智能电网","href":"/center/boardlist.html#boards2-90.BK0581","target":null,"order":228,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0680","show":true,"name":"","title":"智能家居","href":"/center/boardlist.html#boards2-90.BK0680","target":null,"order":229,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0640","show":true,"name":"","title":"智能机器","href":"/center/boardlist.html#boards2-90.BK0640","target":null,"order":230,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0801","show":true,"name":"","title":"增强现实","href":"/center/boardlist.html#boards2-90.BK0801","target":null,"order":231,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0817","show":true,"name":"","title":"昨日触板","href":"/center/boardlist.html
#boards2-90.BK0817","target":null,"order":232,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0882","show":true,"name":"","title":"猪肉概念","href":"/center/boardlist.html#boards2-90.BK0882","target":null,"order":233,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0816","show":true,"name":"","title":"昨日连板","href":"/center/boardlist.html#boards2-90.BK0816","target":null,"order":234,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0815","show":true,"name":"","title":"昨日涨停","href":"/center/boardlist.html#boards2-90.BK0815","target":null,"order":235,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0839","show":true,"name":"","title":"知识产权","href":"/center/boardlist.html#boards2-90.BK0839","target":null,"order":236,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0822","show":true,"name":"","title":"租售同权","href":"/center/boardlist.html#boards2-90.BK0822","target":null,"order":237,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0662","show":true,"name":"","title":"在线教育","href":"/center/boardlist.html#boards2-90.BK0662","target":null,"order":238,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0692","show":true,"name":"","title":"在线旅游","href":"/center/boardlist.html#boards2-90.BK0692","target":null,"order":239,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0615","show":true,"name":"","title":"中药","href":"/center/boardlist.html#boards2-90.BK0615","target":null,"order":240,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0701","show":true,"name":"","title":"中证500","href":"/center/boardlist.html#boards2-90.BK0701","target":null,"order":241,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0528","show":true,"name":"","title":"转债标的","href":"/center/boardlist.html#boards2-90.BK0528","target":null,"order":242,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0980","show":true,"name":"","title":"债转股","href":"/center/boardlist.html#boards2-90.BK0980","target":null,"order":243,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0505","show":true,"name":"","title":"中字头","href":"/center/boardlist.html#boards2-90.BK0505","target":null,"order":244,"groupKey":"","className":null,"hot":false,"next":null}]},{"key":"region_board","show":true,"name":"","title":"地域板块","href":"/center/boardlist.html#region_board","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[{"key":"boards2-90.BK0149","show":true,"name":"","title":"安徽板块","href":"/center/boardlist.html#boards2-90.BK0149","target":null,"order":0,"groupKey":"A","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0150","show":true,"name":"","title":"北京板块","href":"/center/boardlist.html#boards2-90.BK0150","target":null,"order":1,"groupKey":"B","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0170","show":true,"name":"","title":"重庆板块","href":"/center/boardlist.html#boards2-90.BK0170","target":null,"order":2,"groupKey":"C","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0151","show":true,"name":"","title":"福建板块","href":"/center/boardlist.html#boards2-90.BK0151","target":null,"order":3,"groupKey":"F","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0153","show":true,"name":"","title":"广东板块","href":"/center/boardlist.html#boards2-90.BK0153","target":null,"order":4,"gro
upKey":"G","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0152","show":true,"name":"","title":"甘肃板块","href":"/center/boardlist.html#boards2-90.BK0152","target":null,"order":5,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0154","show":true,"name":"","title":"广西板块","href":"/center/boardlist.html#boards2-90.BK0154","target":null,"order":6,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0173","show":true,"name":"","title":"贵州板块","href":"/center/boardlist.html#boards2-90.BK0173","target":null,"order":7,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0155","show":true,"name":"","title":"河北板块","href":"/center/boardlist.html#boards2-90.BK0155","target":null,"order":8,"groupKey":"H","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0157","show":true,"name":"","title":"湖北板块","href":"/center/boardlist.html#boards2-90.BK0157","target":null,"order":9,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0146","show":true,"name":"","title":"黑龙江","href":"/center/boardlist.html#boards2-90.BK0146","target":null,"order":10,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0158","show":true,"name":"","title":"湖南板块","href":"/center/boardlist.html#boards2-90.BK0158","target":null,"order":11,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0156","show":true,"name":"","title":"河南板块","href":"/center/boardlist.html#boards2-90.BK0156","target":null,"order":12,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0176","show":true,"name":"","title":"海南板块","href":"/center/boardlist.html#boards2-90.BK0176","target":null,"order":13,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0148","show":true,"name":"","title":"吉林板块","href":"/center/boardlist.html#boards2-90.BK0148","target":null,"order":14,"groupKey":"J","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0159","show":true,"name":"","title":"江苏板块","href":"/center/boardlist.html#boards2-90.BK0159","target":null,"order":15,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0160","show":true,"name":"","title":"江西板块","href":"/center/boardlist.html#boards2-90.BK0160","target":null,"order":16,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0161","show":true,"name":"","title":"辽宁板块","href":"/center/boardlist.html#boards2-90.BK0161","target":null,"order":17,"groupKey":"L","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0175","show":true,"name":"","title":"内蒙古","href":"/center/boardlist.html#boards2-90.BK0175","target":null,"order":18,"groupKey":"N","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0162","show":true,"name":"","title":"宁夏板块","href":"/center/boardlist.html#boards2-90.BK0162","target":null,"order":19,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0163","show":true,"name":"","title":"青海板块","href":"/center/boardlist.html#boards2-90.BK0163","target":null,"order":20,"groupKey":"Q","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0169","show":true,"name":"","title":"四川板块","href":"/center/boardlist.html#boards2-90.BK0169","target":null,"order":21,"groupKey":"S","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0164","show":true,"name":"","title":"山东板块","href":"/center/boardlist.html#boards2-90.BK0164","target":null,"order":22,"groupKey":"","className":null,"hot":false,"next"
:null},{"key":"boards2-90.BK0145","show":true,"name":"","title":"上海板块","href":"/center/boardlist.html#boards2-90.BK0145","target":null,"order":23,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0165","show":true,"name":"","title":"陕西板块","href":"/center/boardlist.html#boards2-90.BK0165","target":null,"order":24,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0167","show":true,"name":"","title":"山西板块","href":"/center/boardlist.html#boards2-90.BK0167","target":null,"order":25,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0166","show":true,"name":"","title":"天津板块","href":"/center/boardlist.html#boards2-90.BK0166","target":null,"order":26,"groupKey":"T","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0147","show":true,"name":"","title":"新疆板块","href":"/center/boardlist.html#boards2-90.BK0147","target":null,"order":27,"groupKey":"X","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0174","show":true,"name":"","title":"西藏板块","href":"/center/boardlist.html#boards2-90.BK0174","target":null,"order":28,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0171","show":true,"name":"","title":"云南板块","href":"/center/boardlist.html#boards2-90.BK0171","target":null,"order":29,"groupKey":"Y","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0172","show":true,"name":"","title":"浙江板块","href":"/center/boardlist.html#boards2-90.BK0172","target":null,"order":30,"groupKey":"Z","className":null,"hot":false,"next":null}]},{"key":"industry_board","show":true,"name":"","title":"行业板块","href":"/center/boardlist.html#industry_board","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[{"key":"boards2-90.BK0735","show":true,"name":"","title":"安防设备","href":"/center/boardlist.html#boards2-90.BK0735","target":null,"order":0,"groupKey":"A","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0546","show":true,"name":"","title":"玻璃陶瓷","href":"/center/boardlist.html#boards2-90.BK0546","target":null,"order":1,"groupKey":"B","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0474","show":true,"name":"","title":"保险","href":"/center/boardlist.html#boards2-90.BK0474","target":null,"order":2,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0733","show":true,"name":"","title":"包装材料","href":"/center/boardlist.html#boards2-90.BK0733","target":null,"order":3,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0729","show":true,"name":"","title":"船舶制造","href":"/center/boardlist.html#boards2-90.BK0729","target":null,"order":4,"groupKey":"C","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0537","show":true,"name":"","title":"材料行业","href":"/center/boardlist.html#boards2-90.BK0537","target":null,"order":5,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0428","show":true,"name":"","title":"电力行业","href":"/center/boardlist.html#boards2-90.BK0428","target":null,"order":6,"groupKey":"D","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0736","show":true,"name":"","title":"电信运营","href":"/center/boardlist.html#boards2-90.BK0736","target":null,"order":7,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0738","show":true,"name":"","title":"多元金融","href":"/center/boardlist.html#boards2-90.BK0738","target":null,"order":8,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0447","show":true,"name":"","
title":"电子信息","href":"/center/boardlist.html#boards2-90.BK0447","target":null,"order":9,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0459","show":true,"name":"","title":"电子元件","href":"/center/boardlist.html#boards2-90.BK0459","target":null,"order":10,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0451","show":true,"name":"","title":"房地产","href":"/center/boardlist.html#boards2-90.BK0451","target":null,"order":11,"groupKey":"F","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0436","show":true,"name":"","title":"纺织服装","href":"/center/boardlist.html#boards2-90.BK0436","target":null,"order":12,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0425","show":true,"name":"","title":"工程建设","href":"/center/boardlist.html#boards2-90.BK0425","target":null,"order":13,"groupKey":"G","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0484","show":true,"name":"","title":"国际贸易","href":"/center/boardlist.html#boards2-90.BK0484","target":null,"order":14,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0732","show":true,"name":"","title":"贵金属","href":"/center/boardlist.html#boards2-90.BK0732","target":null,"order":15,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0450","show":true,"name":"","title":"港口水运","href":"/center/boardlist.html#boards2-90.BK0450","target":null,"order":16,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0421","show":true,"name":"","title":"高速公路","href":"/center/boardlist.html#boards2-90.BK0421","target":null,"order":17,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0479","show":true,"name":"","title":"钢铁行业","href":"/center/boardlist.html#boards2-90.BK0479","target":null,"order":18,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0440","show":true,"name":"","title":"工艺商品","href":"/center/boardlist.html#boards2-90.BK0440","target":null,"order":19,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0427","show":true,"name":"","title":"公用事业","href":"/center/boardlist.html#boards2-90.BK0427","target":null,"order":20,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0728","show":true,"name":"","title":"环保工程","href":"/center/boardlist.html#boards2-90.BK0728","target":null,"order":21,"groupKey":"H","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0731","show":true,"name":"","title":"化肥行业","href":"/center/boardlist.html#boards2-90.BK0731","target":null,"order":22,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0538","show":true,"name":"","title":"化工行业","href":"/center/boardlist.html#boards2-90.BK0538","target":null,"order":23,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0480","show":true,"name":"","title":"航天航空","href":"/center/boardlist.html#boards2-90.BK0480","target":null,"order":24,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0471","show":true,"name":"","title":"化纤行业","href":"/center/boardlist.html#boards2-90.BK0471","target":null,"order":25,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0456","show":true,"name":"","title":"家电行业","href":"/center/boardlist.html#boards2-90.BK0456","target":null,"order":26,"groupKey":"J","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0739","show":true,"name":"","title":"金属制品","href":"/center/boardlist.html
#boards2-90.BK0739","target":null,"order":27,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0545","show":true,"name":"","title":"机械行业","href":"/center/boardlist.html#boards2-90.BK0545","target":null,"order":28,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0429","show":true,"name":"","title":"交运设备","href":"/center/boardlist.html#boards2-90.BK0429","target":null,"order":29,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0422","show":true,"name":"","title":"交运物流","href":"/center/boardlist.html#boards2-90.BK0422","target":null,"order":30,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0485","show":true,"name":"","title":"旅游酒店","href":"/center/boardlist.html#boards2-90.BK0485","target":null,"order":31,"groupKey":"L","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0420","show":true,"name":"","title":"民航机场","href":"/center/boardlist.html#boards2-90.BK0420","target":null,"order":32,"groupKey":"M","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0437","show":true,"name":"","title":"煤炭采选","href":"/center/boardlist.html#boards2-90.BK0437","target":null,"order":33,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0476","show":true,"name":"","title":"木业家具","href":"/center/boardlist.html#boards2-90.BK0476","target":null,"order":34,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0477","show":true,"name":"","title":"酿酒行业","href":"/center/boardlist.html#boards2-90.BK0477","target":null,"order":35,"groupKey":"N","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0433","show":true,"name":"","title":"农牧饲渔","href":"/center/boardlist.html#boards2-90.BK0433","target":null,"order":36,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0730","show":true,"name":"","title":"农药兽药","href":"/center/boardlist.html#boards2-90.BK0730","target":null,"order":37,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0481","show":true,"name":"","title":"汽车行业","href":"/center/boardlist.html#boards2-90.BK0481","target":null,"order":38,"groupKey":"Q","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0473","show":true,"name":"","title":"券商信托","href":"/center/boardlist.html#boards2-90.BK0473","target":null,"order":39,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0737","show":true,"name":"","title":"软件服务","href":"/center/boardlist.html#boards2-90.BK0737","target":null,"order":40,"groupKey":"R","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0454","show":true,"name":"","title":"塑胶制品","href":"/center/boardlist.html#boards2-90.BK0454","target":null,"order":41,"groupKey":"S","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0424","show":true,"name":"","title":"水泥建材","href":"/center/boardlist.html#boards2-90.BK0424","target":null,"order":42,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0457","show":true,"name":"","title":"输配电气","href":"/center/boardlist.html#boards2-90.BK0457","target":null,"order":43,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0438","show":true,"name":"","title":"食品饮料","href":"/center/boardlist.html#boards2-90.BK0438","target":null,"order":44,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0482","show":true,"name":"","title":"商业百货","href":"/center/boardlist.html#boards2-90.BK0482","target":null,"orde
r":45,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0464","show":true,"name":"","title":"石油行业","href":"/center/boardlist.html#boards2-90.BK0464","target":null,"order":46,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0448","show":true,"name":"","title":"通讯行业","href":"/center/boardlist.html#boards2-90.BK0448","target":null,"order":47,"groupKey":"T","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0486","show":true,"name":"","title":"文化传媒","href":"/center/boardlist.html#boards2-90.BK0486","target":null,"order":48,"groupKey":"W","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0740","show":true,"name":"","title":"文教休闲","href":"/center/boardlist.html#boards2-90.BK0740","target":null,"order":49,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0475","show":true,"name":"","title":"银行","href":"/center/boardlist.html#boards2-90.BK0475","target":null,"order":50,"groupKey":"Y","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0726","show":true,"name":"","title":"园林工程","href":"/center/boardlist.html#boards2-90.BK0726","target":null,"order":51,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0727","show":true,"name":"","title":"医疗行业","href":"/center/boardlist.html#boards2-90.BK0727","target":null,"order":52,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0458","show":true,"name":"","title":"仪器仪表","href":"/center/boardlist.html#boards2-90.BK0458","target":null,"order":53,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0478","show":true,"name":"","title":"有色金属","href":"/center/boardlist.html#boards2-90.BK0478","target":null,"order":54,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0465","show":true,"name":"","title":"医药制造","href":"/center/boardlist.html#boards2-90.BK0465","target":null,"order":55,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0734","show":true,"name":"","title":"珠宝首饰","href":"/center/boardlist.html#boards2-90.BK0734","target":null,"order":56,"groupKey":"Z","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0539","show":true,"name":"","title":"综合行业","href":"/center/boardlist.html#boards2-90.BK0539","target":null,"order":57,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0725","show":true,"name":"","title":"装修装饰","href":"/center/boardlist.html#boards2-90.BK0725","target":null,"order":58,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0910","show":true,"name":"","title":"专用设备","href":"/center/boardlist.html#boards2-90.BK0910","target":null,"order":59,"groupKey":"","className":null,"hot":false,"next":null},{"key":"boards2-90.BK0470","show":true,"name":"","title":"造纸印刷","href":"/center/boardlist.html#boards2-90.BK0470","target":null,"order":60,"groupKey":"","className":null,"hot":false,"next":null}]}]},{"key":"hsindex","show":true,"name":"","title":"沪深指数","href":"/center/hszs.html","target":"_self","order":4,"groupKey":"","className":"","hot":false,"next":[{"key":"index_sh","show":true,"name":"","title":"上证系列指数","href":"/center/gridlist.html#index_sh","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"index_sz","show":true,"name":"","title":"深证系列指数","href":"/center/gridlist.html#index_sz","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"index_components","show":true,"name":"","titl
e":"指数成份","href":"/center/gridlist.html#index_components","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]},{"key":"index_zzzs","show":true,"name":"","title":"中证系列指数","href":"/center/gridlist.html#index_zzzs","target":"_self","order":3,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"neeq_market","show":true,"name":"","title":"新三板","href":"/center/gridlist.html#neeq_stocks","target":"_self","order":5,"groupKey":"","className":"","hot":false,"next":[{"key":"neeq_stocks","show":true,"name":"","title":"全部","href":"/center/gridlist.html#neeq_stocks","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"neeq_innovate","show":true,"name":"","title":"创新层","href":"/center/gridlist.html#neeq_innovate","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"neeq_basic","show":true,"name":"","title":"基础层","href":"/center/gridlist.html#neeq_basic","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]},{"key":"neeq_agreement","show":true,"name":"","title":"协议","href":"/center/gridlist.html#neeq_agreement","target":"_self","order":3,"groupKey":"","className":"","hot":false,"next":[]},{"key":"neeq_marketmaking","show":true,"name":"","title":"做市","href":"/center/gridlist.html#neeq_marketmaking","target":"_self","order":4,"groupKey":"","className":"","hot":false,"next":[]},{"key":"neeq_bidding","show":true,"name":"","title":"竞价","href":"/center/gridlist.html#neeq_bidding","target":"_self","order":5,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"hs_hk_market","show":true,"name":"","title":"沪深港通","href":"/center/hsgt.html","target":"_self","order":6,"groupKey":"","className":"","hot":false,"next":[{"key":"sh_hk_board","show":true,"name":"","title":"沪股通","href":"/center/gridlist.html#sh_hk_board","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"sz_hk_board","show":true,"name":"","title":"深股通","href":"/center/gridlist.html#sz_hk_board","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"hk_sh_stocks","show":true,"name":"","title":"港股通(沪)","href":"/center/gridlist.html#hk_sh_stocks","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]},{"key":"hk_sz_stocks","show":true,"name":"","title":"港股通(深)","href":"/center/gridlist.html#hk_sz_stocks","target":"_self","order":3,"groupKey":"","className":"","hot":false,"next":[]},{"key":"ah_comparison","show":true,"name":"","title":"AH股比价","href":"/center/gridlist.html#ah_comparison","target":"_self","order":4,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"hk_market","show":true,"name":"","title":"港股市场","href":"/center/ggsc.html","target":"_self","order":7,"groupKey":"","className":"","hot":false,"next":[{"key":"hk_stocks","show":true,"name":"","title":"全部港股","href":"/center/gridlist.html#hk_stocks","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"hk_mainboard","show":true,"name":"","title":"港股主板","href":"/center/gridlist.html#hk_mainboard","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"hk_gem","show":true,"name":"","title":"港股创业板","href":"/center/gridlist.html#hk_gem","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]},{"key":"hk_wellknown","show":true,"name":"","title":"知名港股","href":"/center/gridlist.html#hk_wellknown","target":"_self","order":3,"groupKey":"","className":"","hot":false,"next":[]},{"key":"hk_bluechips"
,"show":true,"name":"","title":"蓝筹股","href":"/center/gridlist.html#hk_bluechips","target":"_self","order":4,"groupKey":"","className":"","hot":false,"next":[]},{"key":"hk_redchips","show":true,"name":"","title":"红筹股","href":"/center/gridlist.html#hk_redchips","target":"_self","order":5,"groupKey":"","className":"","hot":false,"next":[]},{"key":"hk_redchips_components","show":true,"name":"","title":"红筹成分股","href":"/center/gridlist.html#hk_redchips_components","target":"_self","order":5,"groupKey":"","className":"","hot":false,"next":[]},{"key":"hk_stateowned","show":true,"name":"","title":"国企股","href":"/center/gridlist.html#hk_stateowned","target":"_self","order":7,"groupKey":"","className":"","hot":false,"next":[]},{"key":"hk_stateowned_components","show":true,"name":"","title":"国企成分股","href":"/center/gridlist.html#hk_stateowned_components","target":"_self","order":8,"groupKey":"","className":"","hot":false,"next":[]},{"key":"hk_components","show":true,"name":"","title":"港股通成份股","href":"/center/gridlist.html#hk_components","target":"_self","order":9,"groupKey":"","className":"","hot":false,"next":[]},{"key":"hsi_large_components","show":true,"name":"","title":"恒生综合大型成份股","href":"/center/gridlist.html#hsi_large_components","target":"_self","order":10,"groupKey":"","className":"","hot":false,"next":[]},{"key":"hsi_medium_components","show":true,"name":"","title":"恒生综合中型成份股","href":"/center/gridlist.html#hsi_medium_components","target":"_self","order":11,"groupKey":"","className":"","hot":false,"next":[]},{"key":"hkah_comparison","show":true,"name":"","title":"AH股比价","href":"/center/gridlist.html#hkah_comparison","target":"_self","order":12,"groupKey":"","className":"","hot":false,"next":[]},{"key":"hk_adr","show":true,"name":"","title":"ADR","href":"/center/gridlist.html#hk_adr","target":"_self","order":13,"groupKey":"","className":"","hot":false,"next":[]},{"key":"hk_index","show":true,"name":"","title":"香港指数","href":"/center/gridlist.html#hk_index","target":"_self","order":14,"groupKey":"","className":"","hot":false,"next":[]},{"key":"hk_warrants","show":true,"name":"","title":"港股窝轮","href":"/center/gridlist.html#hk_warrants","target":"_self","order":15,"groupKey":"","className":"","hot":false,"next":[]},{"key":"hk_CBBCs","show":true,"name":"","title":"港股牛熊证","href":"/center/gridlist.html#hk_CBBCs","target":"_self","order":16,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"us_market","show":true,"name":"","title":"美股市场","href":"/center/mgsc.html","target":"_self","order":8,"groupKey":"","className":"","hot":false,"next":[{"key":"us_stocks","show":true,"name":"","title":"全部美股","href":"/center/gridlist.html#us_stocks","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"us_wellknown","show":true,"name":"","title":"知名美股","href":"/center/gridlist.html#us_wellknown","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[{"key":"us_technology","show":true,"name":"","title":"科技类","href":"/center/gridlist.html#us_technology","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"us_financial","show":true,"name":"","title":"金融类","href":"/center/gridlist.html#us_financial","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"us_medicine_food","show":true,"name":"","title":"医药食品类","href":"/center/gridlist.html#us_medicine_food","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]},{"key":"us_media","show":true,"name":"","title":"媒体类","href":"/c
enter/gridlist.html#us_media","target":"_self","order":3,"groupKey":"","className":"","hot":false,"next":[]},{"key":"us_automotive_energy","show":true,"name":"","title":"汽车能源类","href":"/center/gridlist.html#us_automotive_energy","target":"_self","order":4,"groupKey":"","className":"","hot":false,"next":[]},{"key":"us_manufacture_retail","show":true,"name":"","title":"制造零售类","href":"/center/gridlist.html#us_manufacture_retail","target":"_self","order":5,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"us_chinese","show":true,"name":"","title":"中国概念股","href":"/center/gridlist.html#us_chinese","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]},{"key":"us_index","show":true,"name":"","title":"美股指数","href":"/center/gridlist.html#us_index","target":"_self","order":3,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"uks_market","show":true,"name":"","title":"英股市场","href":"/center/yg.html","target":"_self","order":9,"groupKey":"","className":"","hot":false,"next":[{"key":"stocks_all","show":true,"name":"","title":"全部英股","href":"/center/gridlist2.html#stocks_all","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"stocks_mt_156_7","show":true,"name":"","title":"GDR","href":"/center/gridlist2.html#stocks_mt_156_7","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]},{"key":"stocks_b_MK0794","show":true,"name":"","title":"知名英股","href":"/center/gridlist2.html#stocks_b_MK0794","target":"_self","order":3,"groupKey":"","className":"","hot":false,"next":[]},{"key":"stocks_mt_155_3","show":true,"name":"","title":"英国ETF","href":"/center/gridlist2.html#stocks_mt_155_3","target":"_self","order":4,"groupKey":"","className":"","hot":false,"next":[]},{"key":"stocks_mt_341","show":true,"name":"","title":"英国指数","href":"/center/gridlist2.html#stocks_mt_341","target":"_self","order":5,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"global_index","show":true,"name":"","title":"全球指数","href":"/center/qqzs.html","target":"_self","order":10,"groupKey":"","className":"","hot":false,"next":[{"key":"global_asia","show":true,"name":"","title":"亚洲股市","href":"/center/gridlist.html#global_asia","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"global_euro","show":true,"name":"","title":"欧洲股市","href":"/center/gridlist.html#global_euro","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"global_asia","show":false,"name":"","title":"亚洲股市","href":"/center/gridlist.html#global_asia","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"global_america","show":true,"name":"","title":"美洲股市","href":"/center/gridlist.html#global_america","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]},{"key":"global_america","show":false,"name":"","title":"美洲股市","href":"/center/gridlist.html#global_america","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]},{"key":"global_australia","show":true,"name":"","title":"澳洲股市","href":"/center/gridlist.html#global_australia","target":"_self","order":3,"groupKey":"","className":"","hot":false,"next":[]},{"key":"global_qtzs","show":true,"name":"","title":"其他指数","href":"/center/gridlist.html#global_qtzs","target":"_self","order":3,"groupKey":"","className":"","hot":false,"next":[]},{"key":"global_euro","show":false,"name":"","title":" 
欧洲股市","href":"/center/gridlist.html#global_euro","target":"_self","order":3,"groupKey":"","className":"","hot":false,"next":[]},{"key":"global_australia","show":false,"name":"","title":" 澳洲股市","href":"/center/gridlist.html#global_australia","target":"_self","order":4,"groupKey":"","className":"","hot":false,"next":[]},{"key":"global_qtzs","show":false,"name":"","title":" 其他指数","href":"/center/gridlist.html#global_qtzs","target":"_self","order":5,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"futures_market","show":true,"name":"","title":"期货市场","href":"/center/futures.html","target":"_self","order":11,"groupKey":"","className":"","hot":false,"next":[{"key":"futures_113","show":true,"name":"","title":"上期所","href":"/center/gridlist2.html#futures_113","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[{"key":"futures_113_1","show":true,"name":"","title":"沪铜","href":"/center/gridlist2.html#futures_113_1","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_113_14","show":true,"name":"","title":"沪锡","href":"/center/gridlist2.html#futures_113_14","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_113_5","show":true,"name":"","title":"沪金","href":"/center/gridlist2.html#futures_113_5","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_113_13","show":true,"name":"","title":"沪镍","href":"/center/gridlist2.html#futures_113_13","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_113_10","show":true,"name":"","title":"橡胶","href":"/center/gridlist2.html#futures_113_10","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_113_7","show":true,"name":"","title":"螺纹钢","href":"/center/gridlist2.html#futures_113_7","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_113_6","show":true,"name":"","title":"沪银","href":"/center/gridlist2.html#futures_113_6","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_113_8","show":true,"name":"","title":"线材","href":"/center/gridlist2.html#futures_113_8","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_113_4","show":true,"name":"","title":"沪铅","href":"/center/gridlist2.html#futures_113_4","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_113_3","show":true,"name":"","title":"沪锌","href":"/center/gridlist2.html#futures_113_3","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_113_11","show":true,"name":"","title":"石油沥青","href":"/center/gridlist2.html#futures_113_11","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_113_2","show":true,"name":"","title":"沪铝","href":"/center/gridlist2.html#futures_113_2","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_113_9","show":true,"name":"","title":"燃油","href":"/center/gridlist2.html#futures_113_9","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_113_12","show":true,"name":"","title":"热轧卷板","href":"/center/gridlist2.html#futures_113_12","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_113_15","show":true,"name":"","title":"纸浆","href":"/center/gridlist2.html#futures_113_15","target":"_self","order
":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_113_16","show":true,"name":"","title":"不锈钢","href":"/center/gridlist2.html#futures_113_16","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"futures_142","show":true,"name":"","title":"上期能源","href":"/center/gridlist2.html#futures_142","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[{"key":"futures_142_1","show":true,"name":"","title":"原油","href":"/center/gridlist2.html#futures_142_1","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_142_2","show":true,"name":"","title":"20号胶","href":"/center/gridlist2.html#futures_142_2","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"futures_114","show":true,"name":"","title":"大商所","href":"/center/gridlist2.html#futures_114","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[{"key":"futures_114_1","show":true,"name":"","title":"玉米","href":"/center/gridlist2.html#futures_114_1","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_114_2","show":true,"name":"","title":"豆一","href":"/center/gridlist2.html#futures_114_2","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_114_3","show":true,"name":"","title":"豆二","href":"/center/gridlist2.html#futures_114_3","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_114_4","show":true,"name":"","title":"豆粕","href":"/center/gridlist2.html#futures_114_4","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_114_5","show":true,"name":"","title":"豆油","href":"/center/gridlist2.html#futures_114_5","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_114_6","show":true,"name":"","title":"棕榈油","href":"/center/gridlist2.html#futures_114_6","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_114_7","show":true,"name":"","title":"聚乙烯","href":"/center/gridlist2.html#futures_114_7","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_114_8","show":true,"name":"","title":"聚氯乙烯","href":"/center/gridlist2.html#futures_114_8","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_114_9","show":true,"name":"","title":"焦炭","href":"/center/gridlist2.html#futures_114_9","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_114_10","show":true,"name":"","title":"焦煤","href":"/center/gridlist2.html#futures_114_10","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_114_11","show":true,"name":"","title":"纤维板","href":"/center/gridlist2.html#futures_114_11","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_114_12","show":true,"name":"","title":"胶合板","href":"/center/gridlist2.html#futures_114_12","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_114_13","show":true,"name":"","title":"铁矿石","href":"/center/gridlist2.html#futures_114_13","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_114_14","show":true,"name":"","title":"鸡蛋","href":"/center/gridlist2.html#futures_114_14","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":
"futures_114_15","show":true,"name":"","title":"聚丙烯","href":"/center/gridlist2.html#futures_114_15","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_114_16","show":true,"name":"","title":"玉米淀粉","href":"/center/gridlist2.html#futures_114_16","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_114_17","show":true,"name":"","title":"乙二醇","href":"/center/gridlist2.html#futures_114_17","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_114_18","show":true,"name":"","title":"粳米","href":"/center/gridlist2.html#futures_114_18","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_114_19","show":true,"name":"","title":"苯乙烯","href":"/center/gridlist2.html#futures_114_19","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"futures_115","show":true,"name":"","title":"郑商所","href":"/center/gridlist2.html#futures_115","target":"_self","order":3,"groupKey":"","className":"","hot":false,"next":[{"key":"futures_115_1","show":true,"name":"","title":"强麦","href":"/center/gridlist2.html#futures_115_1","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_115_2","show":true,"name":"","title":"普麦","href":"/center/gridlist2.html#futures_115_2","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_115_3","show":true,"name":"","title":"一号棉花","href":"/center/gridlist2.html#futures_115_3","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_115_4","show":true,"name":"","title":"白糖","href":"/center/gridlist2.html#futures_115_4","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_115_5","show":true,"name":"","title":"PTA","href":"/center/gridlist2.html#futures_115_5","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_115_6","show":true,"name":"","title":"菜油","href":"/center/gridlist2.html#futures_115_6","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_115_7","show":true,"name":"","title":"早籼稻","href":"/center/gridlist2.html#futures_115_7","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_115_8","show":true,"name":"","title":"甲醇","href":"/center/gridlist2.html#futures_115_8","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_115_9","show":true,"name":"","title":"玻璃","href":"/center/gridlist2.html#futures_115_9","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_115_10","show":true,"name":"","title":"菜籽","href":"/center/gridlist2.html#futures_115_10","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_115_11","show":true,"name":"","title":"菜粕","href":"/center/gridlist2.html#futures_115_11","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_115_13","show":true,"name":"","title":"粳稻","href":"/center/gridlist2.html#futures_115_13","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_115_14","show":true,"name":"","title":"晚籼稻","href":"/center/gridlist2.html#futures_115_14","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_115_15","show":true,"name":"","title":"硅铁","h
ref":"/center/gridlist2.html#futures_115_15","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_115_16","show":true,"name":"","title":"锰硅","href":"/center/gridlist2.html#futures_115_16","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_115_17","show":true,"name":"","title":"动力煤","href":"/center/gridlist2.html#futures_115_17","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_115_18","show":true,"name":"","title":"棉纱","href":"/center/gridlist2.html#futures_115_18","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_115_19","show":true,"name":"","title":"苹果","href":"/center/gridlist2.html#futures_115_19","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_115_20","show":true,"name":"","title":"红枣","href":"/center/gridlist2.html#futures_115_20","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_115_21","show":true,"name":"","title":"尿素","href":"/center/gridlist2.html#futures_115_21","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_115_22","show":true,"name":"","title":"纯碱","href":"/center/gridlist2.html#futures_115_22","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"futures_cffex","show":true,"name":"","title":"中金所","href":"/center/gridlist.html#futures_cffex","target":"_self","order":4,"groupKey":"","className":"","hot":false,"next":[{"key":"futures_cffex-_TS","show":true,"name":"","title":"2年期国债","href":"/center/gridlist.html#futures_cffex-_TS","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_cffex-_TF","show":true,"name":"","title":"5年期国债","href":"/center/gridlist.html#futures_cffex-_TF","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_cffex-_T","show":true,"name":"","title":"10年期国债","href":"/center/gridlist.html#futures_cffex-_T","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_cffex-_IH","show":true,"name":"","title":"上证50","href":"/center/gridlist.html#futures_cffex-_IH","target":"_self","order":3,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_cffex-_IC","show":true,"name":"","title":"中证500","href":"/center/gridlist.html#futures_cffex-_IC","target":"_self","order":4,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_cffex-_IF","show":true,"name":"","title":"沪深300","href":"/center/gridlist.html#futures_cffex-_IF","target":"_self","order":5,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"futures_hkex","show":true,"name":"","title":"港交所","href":"/center/gjs.html#futures_hkex","target":"_self","order":5,"groupKey":"","className":"","hot":false,"next":[{"key":"hk_index_futures","show":true,"name":"","title":"指数期货","href":"/center/gridlist.html#hk_index_futures","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"hk_stock_futures","show":true,"name":"","title":"股票期货","href":"/center/gridlist.html#hk_stock_futures","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"HCF_CUS","show":true,"name":"","title":"外汇期货","href":"/center/gridlist.html#HCF_CUS","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]},{"key":"HMFS_LRP","show":true,"name":"","title":"商品期货","href":"/center
/gridlist.html#HMFS_LRP","target":"_self","order":3,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"futures_global","show":true,"name":"","title":"国际期货","href":"/center/gridlist.html#futures_global","target":"_self","order":6,"groupKey":"","className":"","hot":false,"next":[{"key":"futures_global-cobot","show":true,"name":"","title":"CME农业期货","href":"/center/gridlist.html#futures_global-cobot","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_global-nymex","show":true,"name":"","title":"CME能源期货","href":"/center/gridlist.html#futures_global-nymex","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_global-comex","show":true,"name":"","title":"CME金属期货","href":"/center/gridlist.html#futures_global-comex","target":"_self","order":3,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_global-nybot","show":true,"name":"","title":"ICE农业期货","href":"/center/gridlist.html#futures_global-nybot","target":"_self","order":4,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_global-lme","show":true,"name":"","title":"伦敦金属交易所","href":"/center/gridlist.html#futures_global-lme","target":"_self","order":5,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_global-ipe","show":true,"name":"","title":"ICE能源期货","href":"/center/gridlist.html#futures_global-ipe","target":"_self","order":6,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_global-tocom","show":true,"name":"","title":"东京商品期货","href":"/center/gridlist.html#futures_global-tocom","target":"_self","order":7,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_global-sgx","show":true,"name":"","title":"新加坡交易所","href":"/center/gridlist.html#futures_global-sgx","target":"_self","order":8,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_global-mdex","show":true,"name":"","title":"马来西亚交易所","href":"/center/gridlist.html#futures_global-mdex","target":"_self","order":9,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"futures_finance","show":true,"name":"","title":"金融期货","href":"/center/gridlist.html#futures_finance","target":"_self","order":13,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_energy","show":true,"name":"","title":"能源化工","href":"/center/gridlist.html#futures_energy","target":"_self","order":14,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_metal","show":true,"name":"","title":"金属钢材","href":"/center/gridlist.html#futures_metal","target":"_self","order":15,"groupKey":"","className":"","hot":false,"next":[]},{"key":"futures_farmproduce","show":true,"name":"","title":"农产品食品原料","href":"/center/gridlist.html#futures_farmproduce","target":"_self","order":16,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"fund_market","show":true,"name":"","title":"基金市场","href":"/center/jjsc.html","target":"_self","order":12,"groupKey":"","className":"","hot":false,"next":[{"key":"fund_close_end","show":true,"name":"","title":"封闭基金行情","href":"/center/gridlist.html#fund_close_end","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"fund_etf","show":true,"name":"","title":"ETF基金行情","href":"/center/gridlist.html#fund_etf","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"fund_lof","show":true,"name":"","title":"LOF基金行情","href":"/center/gridlist.html#fund_lof","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]},{"ke
y":"fund_open_nav","show":true,"name":"","title":"开放式基金净值","href":"http://fund.eastmoney.com/data/fundranking.html","target":"_blank","order":3,"groupKey":"","className":"","hot":false,"next":[{"key":"fund_stocks","show":true,"name":"","title":"全部","href":"http://fund.eastmoney.com/data/fundranking.html","target":"_blank","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"fund_open_stock","show":true,"name":"","title":"股票型","href":"http://fund.eastmoney.com/data/fundranking.html#tgp","target":"_blank","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"fund_open_blend","show":true,"name":"","title":"混合型","href":"http://fund.eastmoney.com/data/fundranking.html#thh","target":"_blank","order":2,"groupKey":"","className":"","hot":false,"next":[]},{"key":"fund_open_bond","show":true,"name":"","title":"债券型","href":"http://fund.eastmoney.com/data/fundranking.html#tzq","target":"_blank","order":3,"groupKey":"","className":"","hot":false,"next":[]},{"key":"fund_open_lof","show":true,"name":"","title":"LOF","href":"http://fund.eastmoney.com/data/fundranking.html#tlof","target":"_blank","order":4,"groupKey":"","className":"","hot":false,"next":[]},{"key":"fund_open_eft","show":true,"name":"","title":"ETF","href":"http://fund.eastmoney.com/data/fundranking.html#tzs","target":"_blank","order":5,"groupKey":"","className":"","hot":false,"next":[]},{"key":"fund_open_qdii","show":true,"name":"","title":"QDII","href":"http://fund.eastmoney.com/data/fundranking.html#tqdii","target":"_blank","order":6,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"fund_close_nav","show":true,"name":"","title":"封闭式基金净值","href":"http://fund.eastmoney.com/CXXFBS_bzdm.html","target":"_blank","order":4,"groupKey":"","className":"","hot":false,"next":[]},{"key":"fund_monetary","show":true,"name":"","title":"货币型基金收益","href":"http://fund.eastmoney.com/HBJJ_dwsy.html","target":"_blank","order":5,"groupKey":"","className":"","hot":false,"next":[]},{"key":"fund_open_estimate","show":true,"name":"","title":"开放式净值估算","href":"http://fund.eastmoney.com/fund.html","target":"_blank","order":6,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"bond_market","show":true,"name":"","title":"债券市场","href":"/center/zqsc.html","target":"_self","order":13,"groupKey":"","className":"","hot":false,"next":[{"key":"bond_index","show":true,"name":"","title":"债券指数","href":"/center/gridlist.html#bond_index","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"bond_sh","show":true,"name":"","title":"上海债券","href":"/center/gridlist.html#bond_sh","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[{"key":"bond_national_sh","show":true,"name":"","title":"沪国债","href":"/center/gridlist.html#bond_national_sh","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"bond_enterprise_sh","show":true,"name":"","title":"沪企债","href":"/center/gridlist.html#bond_enterprise_sh","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"bond_convertible_sh","show":true,"name":"","title":"沪转债","href":"/center/gridlist.html#bond_convertible_sh","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"convertible_comparison","show":true,"name":"","title":"可转债比价表","href":"/center/fullscreenlist.html#convertible_comparison","target":"_blank","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"bond_sz","show":true,"name":"","title":"深圳债券","href":"/center/grid
list.html#bond_sz","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[{"key":"bond_national_sz","show":true,"name":"","title":"深国债","href":"/center/gridlist.html#bond_national_sz","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"bond_enterprise_sz","show":true,"name":"","title":"深企债","href":"/center/gridlist.html#bond_enterprise_sz","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"bond_convertible_sz","show":true,"name":"","title":"深转债","href":"/center/gridlist.html#bond_convertible_sz","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"bond_sh_buyback","show":true,"name":"","title":"上证回购","href":"/center/gridlist.html#bond_sh_buyback","target":"_self","order":3,"groupKey":"","className":"","hot":false,"next":[]},{"key":"bond_sz_buyback","show":true,"name":"","title":"深证回购","href":"/center/gridlist.html#bond_sz_buyback","target":"_self","order":4,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"forex_market","show":true,"name":"","title":"外汇市场","href":"/center/whsc.html","target":"_self","order":14,"groupKey":"","className":"","hot":false,"next":[{"key":"forex_all","show":true,"name":"","title":"所有汇率","href":"/center/gridlist.html#forex_all","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"forex_basic","show":true,"name":"","title":"基本汇率","href":"/center/gridlist.html#forex_basic","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"forex_cross","show":true,"name":"","title":"交叉汇率","href":"/center/gridlist.html#forex_cross","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]},{"key":"forex_cny","show":true,"name":"","title":"人民币品种","href":"/center/gridlist.html#forex_cny","target":"_self","order":3,"groupKey":"","className":"","hot":false,"next":[{"key":"forex_cnyc","show":true,"name":"","title":"人民币汇率中间价","href":"/center/gridlist.html#forex_cnyc","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"forex_cnyi","show":true,"name":"","title":"人民币询价","href":"/center/gridlist.html#forex_cnyi","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"forex_cnyb","show":true,"name":"","title":"人民币竞价","href":"/center/gridlist.html#forex_cnyb","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]},{"key":"forex_cnh","show":true,"name":"","title":"离岸人民币外币","href":"/center/gridlist.html#forex_cnh","target":"_self","order":3,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"forex_exchange","show":true,"name":"","title":"外汇牌价","href":"/center/gridlist.html#forex_exchange_icbc","target":"_self","order":4,"groupKey":"","className":"","hot":false,"next":[{"key":"forex_exchange_icbc","show":true,"name":"","title":"工商银行报价","href":"/center/gridlist.html#forex_exchange_icbc","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"forex_exchange_abc","show":true,"name":"","title":"农业银行报价","href":"/center/gridlist.html#forex_exchange_abc","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"forex_exchange_boc","show":true,"name":"","title":"中国银行报价","href":"/center/gridlist.html#forex_exchange_boc","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]},{"key":"forex_exchange_ccb","show":true,"name":"","title":"建设银行报价","href":"/center/gridlist.html#forex_exchange_ccb","target":"_sel
f","order":3,"groupKey":"","className":"","hot":false,"next":[]},{"key":"forex_exchange_bcm","show":true,"name":"","title":"交通银行报价","href":"/center/gridlist.html#forex_exchange_bcm","target":"_self","order":4,"groupKey":"","className":"","hot":false,"next":[]},{"key":"forex_exchange_cmb","show":true,"name":"","title":"招商银行报价","href":"/center/gridlist.html#forex_exchange_cmb","target":"_self","order":4,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"forex_exchange_icbc","show":false,"name":"","title":"工商银行报价","href":"/center/gridlist.html#forex_exchange_icbc","target":"_self","order":5,"groupKey":"","className":"","hot":false,"next":[]},{"key":"forex_exchange_abc","show":false,"name":"","title":"农业银行报价","href":"/center/gridlist.html#forex_exchange_abc","target":"_self","order":6,"groupKey":"","className":"","hot":false,"next":[]},{"key":"forex_exchange_boc","show":false,"name":"","title":"中国银行报价","href":"/center/gridlist.html#forex_exchange_boc","target":"_self","order":7,"groupKey":"","className":"","hot":false,"next":[]},{"key":"forex_exchange_ccb","show":false,"name":"","title":"建设银行报价","href":"/center/gridlist.html#forex_exchange_ccb","target":"_self","order":8,"groupKey":"","className":"","hot":false,"next":[]},{"key":"forex_exchange_bcm","show":false,"name":"","title":"交通银行报价","href":"/center/gridlist.html#forex_exchange_bcm","target":"_self","order":8,"groupKey":"","className":"","hot":false,"next":[]},{"key":"forex_exchange_cmb","show":false,"name":"","title":"招商银行报价","href":"/center/gridlist.html#forex_exchange_cmb","target":"_self","order":9,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"options_market","show":true,"name":"","title":"期权市场","href":"/center/qqsc.html","target":"_self","order":15,"groupKey":"","className":"","hot":false,"next":[{"key":"options_sh50etf","show":true,"name":"","title":"上交所","href":"/center/gridlist.html#options_sh50etf_all","target":"_self","order":-2,"groupKey":"","className":"","hot":false,"next":[{"key":"options_sh50etf","show":true,"name":"","title":"上交期权","href":"/center/gridlist.html#options_sh50etf_all","target":"_self","order":-1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"options_sh50etf_all","show":false,"name":"","title":"全部期权","href":"/center/gridlist.html#options_sh50etf_all","target":"_self","order":-1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"options_sh50etf_call","show":false,"name":"","title":"认购期权","href":"/center/gridlist.html#options_sh50etf_call","target":"_self","order":-1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"options_sh50etf_put","show":false,"name":"","title":"认沽期权","href":"/center/gridlist.html#options_sh50etf_put","target":"_self","order":-1,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"options_sh50etf_all","show":false,"name":"","title":"上交期权","href":"/center/gridlist.html#options_sh50etf_all","target":"_self","order":-1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"options_sh50etf_call","show":false,"name":"","title":"认购期权","href":"/center/gridlist.html#options_sh50etf_call","target":"_self","order":-1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"options_sh50etf_put","show":false,"name":"","title":"认沽期权","href":"/center/gridlist.html#options_sh50etf_put","target":"_self","order":-1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"options_szetf_all","show":true,"name":"","title":"深交所","href":"/center/gridlist.html#options_szetf_all","target":"_self","order":-1,"groupKey":"","className":"",
"hot":false,"next":[{"key":"options_sz300etf_all","show":true,"name":"","title":"深交期权","href":"/center/gridlist.html#options_sz300etf_all","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"options_cffex_all","show":true,"name":"","title":"中金所","href":"/center/gridlist.html#options_cffex_all","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[{"key":"options_cffex_io","show":true,"name":"","title":"沪深300指数","href":"/center/gridlist.html#options_cffex_io","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"options_shfe_all","show":true,"name":"","title":"上期所","href":"/center/gridlist.html#options_shfe_all","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[{"key":"options_cu_all","show":true,"name":"","title":"铜","href":"/center/gridlist.html#options_cu_all","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"options_ru_all","show":true,"name":"","title":"橡胶","href":"/center/gridlist.html#options_ru_all","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]},{"key":"options_au_all","show":true,"name":"","title":"黄金","href":"/center/gridlist.html#options_au_all","target":"_self","order":3,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"options_dce_all","show":true,"name":"","title":"大商所","href":"/center/gridlist.html#options_dce_all","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[{"key":"options_beanpulp_all","show":true,"name":"","title":"豆粕","href":"/center/gridlist.html#options_beanpulp_all","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"options_c_all","show":true,"name":"","title":"玉米","href":"/center/gridlist.html#options_c_all","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"options_t_all","show":true,"name":"","title":"铁矿石","href":"/center/gridlist.html#options_t_all","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"options_czce_all","show":true,"name":"","title":"郑商所","href":"/center/gridlist.html#options_czce_all","target":"_self","order":3,"groupKey":"","className":"","hot":false,"next":[{"key":"options_sugar_all","show":true,"name":"","title":"白糖","href":"/center/gridlist.html#options_sugar_all","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"options_cf_all","show":true,"name":"","title":"棉花","href":"/center/gridlist.html#options_cf_all","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"options_cf_ta","show":true,"name":"","title":"PTA期权","href":"/center/gridlist.html#options_cf_ta","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]},{"key":"options_cf_ma","show":true,"name":"","title":"甲醇","href":"/center/gridlist.html#options_cf_ma","target":"_self","order":3,"groupKey":"","className":"","hot":false,"next":[]},{"key":"options_cf_cp","show":true,"name":"","title":"菜粕","href":"/center/gridlist.html#options_cf_cp","target":"_self","order":4,"groupKey":"","className":"","hot":false,"next":[]}]},{"key":"options_hkex","show":true,"name":"","title":"香港期交所","href":"/center/gridlist.html#options_uscny_all","target":"_self","order":4,"groupKey":"","className":"","hot":false,"next":[{"key":"options_uscny_all","show":true,"name":"","title":"美元兑人民币","href":"/center/gridlist.html#options_uscny_all","target":"_self","order":0,"groupKey":"","className":""
,"hot":false,"next":[]}]}]},{"key":"gold_market","show":true,"name":"","title":"黄金市场","href":"/center/hjsc.html","target":"_self","order":16,"groupKey":"","className":"","hot":false,"next":[{"key":"gold_sh_spotgoods","show":true,"name":"","title":"上海黄金现货","href":"/center/gridlist.html#gold_sh_spotgoods","target":"_self","order":0,"groupKey":"","className":"","hot":false,"next":[]},{"key":"gold_sh_futures","show":true,"name":"","title":"上海黄金期货","href":"/center/gridlist.html#gold_sh_futures","target":"_self","order":1,"groupKey":"","className":"","hot":false,"next":[]},{"key":"nobalmetal_spotgoods","show":true,"name":"","title":"国际贵金属现货","href":"/center/gridlist.html#nobalmetal_spotgoods","target":"_self","order":2,"groupKey":"","className":"","hot":false,"next":[]},{"key":"nobalmetal_futures","show":true,"name":"","title":"国际贵金属期货","href":"/center/gridlist.html#nobalmetal_futures","target":"_self","order":3,"groupKey":"","className":"","hot":false,"next":[]}]}]'''
timejs='''/******/ (function(modules) { // webpackBootstrap
/******/ // The module cache
/******/ var installedModules = {};
/******/
/******/ // The require function
/******/ function __webpack_require__(moduleId) {
/******/
/******/ // Check if module is in cache
/******/ if(installedModules[moduleId]) {
/******/ return installedModules[moduleId].exports;
/******/ }
/******/ // Create a new module (and put it into the cache)
/******/ var module = installedModules[moduleId] = {
/******/ i: moduleId,
/******/ l: false,
/******/ exports: {}
/******/ };
/******/
/******/ // Execute the module function
/******/ modules[moduleId].call(module.exports, module, module.exports, __webpack_require__);
/******/
/******/ // Flag the module as loaded
/******/ module.l = true;
/******/
/******/ // Return the exports of the module
/******/ return module.exports;
/******/ }
/******/
/******/
/******/ // expose the modules object (__webpack_modules__)
/******/ __webpack_require__.m = modules;
/******/
/******/ // expose the module cache
/******/ __webpack_require__.c = installedModules;
/******/
/******/ // define getter function for harmony exports
/******/ __webpack_require__.d = function(exports, name, getter) {
/******/ if(!__webpack_require__.o(exports, name)) {
/******/ Object.defineProperty(exports, name, { enumerable: true, get: getter });
/******/ }
/******/ };
/******/
/******/ // define __esModule on exports
/******/ __webpack_require__.r = function(exports) {
/******/ if(typeof Symbol !== 'undefined' && Symbol.toStringTag) {
/******/ Object.defineProperty(exports, Symbol.toStringTag, { value: 'Module' });
/******/ }
/******/ Object.defineProperty(exports, '__esModule', { value: true });
/******/ };
/******/
/******/ // create a fake namespace object
/******/ // mode & 1: value is a module id, require it
/******/ // mode & 2: merge all properties of value into the ns
/******/ // mode & 4: return value when already ns object
/******/ // mode & 8|1: behave like require
/******/ __webpack_require__.t = function(value, mode) {
/******/ if(mode & 1) value = __webpack_require__(value);
/******/ if(mode & 8) return value;
/******/ if((mode & 4) && typeof value === 'object' && value && value.__esModule) return value;
/******/ var ns = Object.create(null);
/******/ __webpack_require__.r(ns);
/******/ Object.defineProperty(ns, 'default', { enumerable: true, value: value });
/******/ if(mode & 2 && typeof value != 'string') for(var key in value) __webpack_require__.d(ns, key, function(key) { return value[key]; }.bind(null, key));
/******/ return ns;
/******/ };
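/******/ 	// Usage sketch (illustrative, not emitted verbatim by this bundle): a
/******/ 	// dynamic import of a CommonJS module compiles to something like
/******/ 	// __webpack_require__.t("./mod", 7): bit 1 requires the module, bit 2
/******/ 	// merges its properties onto the namespace object, and bit 4 would
/******/ 	// return the value directly were it already an ES module namespace.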
/******/
/******/ // getDefaultExport function for compatibility with non-harmony modules
/******/ __webpack_require__.n = function(module) {
/******/ var getter = module && module.__esModule ?
/******/ function getDefault() { return module['default']; } :
/******/ function getModuleExports() { return module; };
/******/ __webpack_require__.d(getter, 'a', getter);
/******/ return getter;
/******/ };
/******/
/******/ // Object.prototype.hasOwnProperty.call
/******/ __webpack_require__.o = function(object, property) { return Object.prototype.hasOwnProperty.call(object, property); };
/******/
/******/ // __webpack_public_path__
/******/ __webpack_require__.p = "";
/******/
/******/
/******/ // Load entry module and return exports
/******/ return __webpack_require__(__webpack_require__.s = "./jssrc/timetrade.ts");
/******/ })
/************************************************************************/
/******/ ({
/***/ "./jssrc/timetrade.ts":
/*!****************************!*\
!*** ./jssrc/timetrade.ts ***!
\****************************/
/*! no static exports found */
/***/ (function(module, exports, __webpack_require__) {
/**
 * Time-and-sales (tick-by-tick trades, 分时成交) page script
 */
function GetQueryString(name) {
var reg = new RegExp("(^|&)" + name + "=([^&]*)(&|$)");
var r = window.location.search.toLowerCase().substr(1).match(reg);
if (r != null)
return unescape(r[2]);
return null;
}
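// Example: with location.search === "?code=600000&market=1",
// GetQueryString("code") -> "600000" and GetQueryString("market") -> "1"
// (the whole query string is lowercased before matching).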
window['stockEnity'] = {
"stockCode": GetQueryString("code") || "",
"stockmarket": GetQueryString("market") || "",
"stockId": GetQueryString("id") || GetQueryString("code") + GetQueryString("market")
};
__webpack_require__(/*! ../src/modules/old_f1 */ "./src/modules/old_f1/index.js");
setTimeout(function () {
qphf.getIndexQuote(15);
}, 500);
var HV = new HistoryViews("historyest", {
def: "",
set: "",
lns: 11
});
var _cpyno = "c1";
/***/ }),
/***/ "./src/modules/ie_sse_polyfill/index.js":
/*!**********************************************!*\
!*** ./src/modules/ie_sse_polyfill/index.js ***!
\**********************************************/
/*! no static exports found */
/***/ (function(module, exports) {
/*
* EventSource polyfill version 0.9.7
* Supported by sc AmvTek srl
* :email: [email protected]
*/
;(function (global) {
if (global.EventSource && !global._eventSourceImportPrefix){
return;
}
var evsImportName = (global._eventSourceImportPrefix||'')+"EventSource";
var EventSource = function (url, options) {
if (!url || typeof url != 'string') {
throw new SyntaxError('Not enough arguments');
}
this.URL = url;
this.setOptions(options);
var evs = this;
setTimeout(function(){evs.poll()}, 0);
};
EventSource.prototype = {
CONNECTING: 0,
OPEN: 1,
CLOSED: 2,
defaultOptions: {
loggingEnabled: false,
loggingPrefix: "eventsource",
interval: 500, // milliseconds
bufferSizeLimit: 256*1024, // bytes
silentTimeout: 300000, // milliseconds
getArgs:{
'evs_buffer_size_limit': 256*1024
},
xhrHeaders:{
'Accept': 'text/event-stream',
'Cache-Control': 'no-cache',
'X-Requested-With': 'XMLHttpRequest'
}
},
setOptions: function(options){
var defaults = this.defaultOptions;
var option;
// set all default options...
for (option in defaults){
if ( defaults.hasOwnProperty(option) ){
this[option] = defaults[option];
}
}
// override with what is in options
for (option in options){
if (option in defaults && options.hasOwnProperty(option)){
this[option] = options[option];
}
}
// if getArgs option is enabled
// ensure evs_buffer_size_limit corresponds to bufferSizeLimit
if (this.getArgs && this.bufferSizeLimit) {
this.getArgs['evs_buffer_size_limit'] = this.bufferSizeLimit;
}
// if console is not available, force loggingEnabled to false
if (typeof console === "undefined" || typeof console.log === "undefined") {
this.loggingEnabled = false;
}
},
log: function(message) {
if (this.loggingEnabled) {
console.log("[" + this.loggingPrefix +"]:" + message)
}
},
poll: function() {
try {
if (this.readyState == this.CLOSED) {
return;
}
this.cleanup();
this.readyState = this.CONNECTING;
this.cursor = 0;
this.cache = '';
this._xhr = new this.XHR(this);
this.resetNoActivityTimer();
}
catch (e) {
// in an attempt to silence the errors
this.log('There were errors inside the poll try-catch');
this.dispatchEvent('error', { type: 'error', data: e.message });
}
},
pollAgain: function (interval) {
// schedule poll to be called after interval milliseconds
var evs = this;
evs.readyState = evs.CONNECTING;
evs.dispatchEvent('error', {
type: 'error',
data: "Reconnecting "
});
this._pollTimer = setTimeout(function(){evs.poll()}, interval||0);
},
cleanup: function() {
this.log('evs cleaning up');
if (this._pollTimer){
clearTimeout(this._pollTimer); // created with setTimeout, so clear with clearTimeout
this._pollTimer = null;
}
if (this._noActivityTimer){
clearTimeout(this._noActivityTimer);
this._noActivityTimer = null;
}
if (this._xhr){
this._xhr.abort();
this._xhr = null;
}
},
resetNoActivityTimer: function(){
if (this.silentTimeout){
if (this._noActivityTimer){
clearTimeout(this._noActivityTimer);
}
var evs = this;
this._noActivityTimer = setTimeout(
function(){ evs.log('Timeout! silentTimeout:'+evs.silentTimeout); evs.pollAgain(); },
this.silentTimeout
);
}
},
close: function () {
this.readyState = this.CLOSED;
this.log('Closing connection. readyState: '+this.readyState);
this.cleanup();
},
_onxhrdata: function() {
var request = this._xhr;
if (request.isReady() && !request.hasError() ) {
// reset the timer, as we have activity
this.resetNoActivityTimer();
// move this EventSource to OPEN state...
if (this.readyState == this.CONNECTING) {
this.readyState = this.OPEN;
this.dispatchEvent('open', { type: 'open' });
}
var buffer = request.getBuffer();
if (buffer.length > this.bufferSizeLimit) {
this.log('buffer.length > this.bufferSizeLimit');
this.pollAgain();
}
if (this.cursor == 0 && buffer.length > 0){
// skip byte order mark \uFEFF character if it starts the stream
if (buffer.substring(0,1) == '\uFEFF'){
this.cursor = 1;
}
}
var lastMessageIndex = this.lastMessageIndex(buffer);
if (lastMessageIndex[0] >= this.cursor){
var newcursor = lastMessageIndex[1];
var toparse = buffer.substring(this.cursor, newcursor);
this.parseStream(toparse);
this.cursor = newcursor;
}
// if request is finished, reopen the connection
if (request.isDone()) {
this.log('request.isDone(). reopening the connection');
this.pollAgain(this.interval);
}
}
else if (this.readyState !== this.CLOSED) {
this.log('this.readyState !== this.CLOSED');
this.pollAgain(this.interval);
//MV: Unsure why an error was previously dispatched
}
},
parseStream: function(chunk) {
// normalize line separators (\r\n,\r,\n) to \n
// remove white spaces that may precede \n
chunk = this.cache + this.normalizeToLF(chunk);
var events = chunk.split('\n\n');
var i, j, eventType, datas, line, retry, parts;
for (i=0; i < (events.length - 1); i++) {
eventType = 'message';
datas = [];
parts = events[i].split('\n');
for (j=0; j < parts.length; j++) {
line = this.trimWhiteSpace(parts[j]);
if (line.indexOf('event') == 0) {
eventType = line.replace(/event:?\s*/, '');
}
else if (line.indexOf('retry') == 0) {
retry = parseInt(line.replace(/retry:?\s*/, ''));
if(!isNaN(retry)) {
this.interval = retry;
}
}
else if (line.indexOf('data') == 0) {
datas.push(line.replace(/data:?\s*/, ''));
}
else if (line.indexOf('id:') == 0) {
this.lastEventId = line.replace(/id:?\s*/, '');
}
else if (line.indexOf('id') == 0) { // this resets the id
this.lastEventId = null;
}
}
if (datas.length) {
// dispatch a new event
var event = new MessageEvent(eventType, datas.join('\n'), window.location.origin, this.lastEventId);
this.dispatchEvent(eventType, event);
}
}
this.cache = events[events.length - 1];
},
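// Example chunk accepted by parseStream (a sketch of the SSE wire format):
//   "retry: 3000\ndata: {\"f43\":10.05}\n\n"
// sets this.interval to 3000 and dispatches a 'message' event whose
// event.data is the raw JSON text.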
dispatchEvent: function (type, event) {
var handlers = this['_' + type + 'Handlers'];
if (handlers) {
for (var i = 0; i < handlers.length; i++) {
handlers[i].call(this, event);
}
}
if (this['on' + type]) {
this['on' + type].call(this, event);
}
},
addEventListener: function (type, handler) {
if (!this['_' + type + 'Handlers']) {
this['_' + type + 'Handlers'] = [];
}
this['_' + type + 'Handlers'].push(handler);
},
removeEventListener: function (type, handler) {
var handlers = this['_' + type + 'Handlers'];
if (!handlers) {
return;
}
for (var i = handlers.length - 1; i >= 0; --i) {
if (handlers[i] === handler) {
handlers.splice(i, 1);
break;
}
}
},
_pollTimer: null,
_noActivityTimer: null,
_xhr: null,
lastEventId: null,
cache: '',
cursor: 0,
onerror: null,
onmessage: null,
onopen: null,
readyState: 0,
// ===================================================================
// helpers functions
// those are attached to prototype to ease reuse and testing...
urlWithParams: function (baseURL, params) {
var encodedArgs = [];
if (params){
var key, urlarg;
var urlize = encodeURIComponent;
for (key in params){
if (params.hasOwnProperty(key)) {
urlarg = urlize(key)+'='+urlize(params[key]);
encodedArgs.push(urlarg);
}
}
}
if (encodedArgs.length > 0){
if (baseURL.indexOf('?') == -1)
return baseURL + '?' + encodedArgs.join('&');
return baseURL + '&' + encodedArgs.join('&');
}
return baseURL;
},
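// e.g. urlWithParams('/sse', {a: 1, b: 'x y'}) -> '/sse?a=1&b=x%20y';
// when the base URL already contains '?', parameters are appended with '&'.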
lastMessageIndex: function(text) {
var ln2 =text.lastIndexOf('\n\n');
var lr2 = text.lastIndexOf('\r\r');
var lrln2 = text.lastIndexOf('\r\n\r\n');
if (lrln2 > Math.max(ln2, lr2)) {
return [lrln2, lrln2+4];
}
return [Math.max(ln2, lr2), Math.max(ln2, lr2) + 2];
},
trimWhiteSpace: function(str) {
// to remove whitespaces left and right of string
var reTrim = /^(\s|\u00A0)+|(\s|\u00A0)+$/g;
return str.replace(reTrim, '');
},
normalizeToLF: function(str) {
// replace \r and \r\n with \n
return str.replace(/\r\n|\r/g, '\n');
}
};
if (!isOldIE()){
EventSource.isPolyfill = "XHR";
// EventSource will send request using XMLHttpRequest
EventSource.prototype.XHR = function(evs) {
var request = new XMLHttpRequest();
this._request = request;
evs._xhr = this;
// set handlers
request.onreadystatechange = function(){
if (request.readyState > 1 && evs.readyState != evs.CLOSED) {
if (request.status == 200 || (request.status>=300 && request.status<400)){
evs._onxhrdata();
}
else {
request._failed = true;
evs.readyState = evs.CLOSED;
evs.dispatchEvent('error', {
type: 'error',
data: "The server responded with "+request.status
});
evs.close();
}
}
};
request.onprogress = function () {
};
request.open('GET', evs.urlWithParams(evs.URL, evs.getArgs), true);
var headers = evs.xhrHeaders; // maybe null
for (var header in headers) {
if (headers.hasOwnProperty(header)){
request.setRequestHeader(header, headers[header]);
}
}
if (evs.lastEventId) {
request.setRequestHeader('Last-Event-Id', evs.lastEventId);
}
request.send();
};
EventSource.prototype.XHR.prototype = {
useXDomainRequest: false,
_request: null,
_failed: false, // true if we have had errors...
isReady: function() {
return this._request.readyState >= 2;
},
isDone: function() {
return (this._request.readyState == 4);
},
hasError: function() {
return (this._failed || (this._request.status >= 400));
},
getBuffer: function() {
var rv = '';
try {
rv = this._request.responseText || '';
}
catch (e){}
return rv;
},
abort: function() {
if ( this._request ) {
this._request.abort();
}
}
};
}
else {
EventSource.isPolyfill = "IE_8-9";
// patch EventSource defaultOptions
var defaults = EventSource.prototype.defaultOptions;
defaults.xhrHeaders = null; // no headers will be sent
defaults.getArgs['evs_preamble'] = 2048 + 8;
// EventSource will send request using Internet Explorer XDomainRequest
EventSource.prototype.XHR = function(evs) {
var request = new XDomainRequest();
this._request = request;
// set handlers
request.onprogress = function(){
request._ready = true;
evs._onxhrdata();
};
request.onload = function(){
this._loaded = true;
evs._onxhrdata();
};
request.onerror = function(){
this._failed = true;
evs.readyState = evs.CLOSED;
evs.dispatchEvent('error', {
type: 'error',
data: "XDomainRequest error"
});
};
request.ontimeout = function(){
this._failed = true;
evs.readyState = evs.CLOSED;
evs.dispatchEvent('error', {
type: 'error',
data: "XDomainRequest timed out"
});
};
// XDomainRequest does not allow setting custom headers
// If EventSource has enabled the use of GET arguments
// we add parameters to URL so that server can adapt the stream...
var reqGetArgs = {};
if (evs.getArgs) {
// copy evs.getArgs in reqGetArgs
var defaultArgs = evs.getArgs;
for (var key in defaultArgs) {
if (defaultArgs.hasOwnProperty(key)){
reqGetArgs[key] = defaultArgs[key];
}
}
if (evs.lastEventId){
reqGetArgs['evs_last_event_id'] = evs.lastEventId;
}
}
// send the request
request.open('GET', evs.urlWithParams(evs.URL,reqGetArgs));
request.send();
};
EventSource.prototype.XHR.prototype = {
useXDomainRequest: true,
_request: null,
_ready: false, // true when progress events are dispatched
_loaded: false, // true when request has been loaded
_failed: false, // true if when request is in error
isReady: function() {
return this._request._ready;
},
isDone: function() {
return this._request._loaded;
},
hasError: function() {
return this._request._failed;
},
getBuffer: function() {
var rv = '';
try {
rv = this._request.responseText || '';
}
catch (e){}
return rv;
},
abort: function() {
if ( this._request){
this._request.abort();
}
}
};
}
function MessageEvent(type, data, origin, lastEventId) {
this.bubbles = false;
this.cancelBubble = false;
this.cancelable = false;
this.data = data || null;
this.origin = origin || '';
this.lastEventId = lastEventId || '';
this.type = type || 'message';
}
function isOldIE () {
//return true if we are in IE8 or IE9
return !!(window.XDomainRequest && (window.XMLHttpRequest && new XMLHttpRequest().responseType === undefined));
}
global[evsImportName] = EventSource;
})(window);
/***/ }),
/***/ "./src/modules/old_f1/index.js":
/*!*************************************!*\
!*** ./src/modules/old_f1/index.js ***!
\*************************************/
/*! no static exports found */
/***/ (function(module, exports, __webpack_require__) {
__webpack_require__(/*! ../ie_sse_polyfill */ "./src/modules/ie_sse_polyfill/index.js");
var template = __webpack_require__(/*! ../template */ "./src/modules/template/index.js");
// var merge = require('lodash/merge');
var merge = _.merge; // lodash deep merge (from the global _), used for SSE delta updates
$(function () {
drawTable();
setInterval(refresh, 30 * 1000);
//setInterval(loadQuote, 10*1000);
loadQuote();
JuadeIe7();
filterFun();
clickFun();
// isRong();
var suggest = new suggest2017({
inputid: "search_box",
offset: { left: -91, top: 5 }
});
});
function refresh() {
var option = argFun();
drawTable(option);
// loadQuote();
}
// Distinguish IE7 and below from other browsers
function JuadeIe7() {
// document.all without document.querySelector identifies IE7 or older: poll instead of SSE
if (document.all && !document.querySelector) {
setInterval(loadQuote, 30 * 1000);
} else {
sseHeadData();
}
}
function argFun() {
var option = {};
$(".bigOrder dd input[type=radio]").each(function () {
if ($(this).prop("checked")) {
// option.gtvolume = $(this).val();
var gtvolume = $(this).val();
var ftMap = { '100': 2, '200': 3, '500': 4, '1000': 5, '2000': 6, '5000': 7, '10000': 8 };
option.ft = ftMap[gtvolume] || 1;
}
});
option.sort = $("#reverseInput").val();
option.pageindex = Number($(".curPage").html()) - 1;
//console.log(option)
return option;
}
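// ft encodes the server-side big-order filter implied by the radio values above
// (an inference from this mapping, not a documented API): 1 = all trades,
// 2..8 = only trades of at least 100/200/500/1000/2000/5000/10000 lots.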
// cyan=green=1=#00ffff, purple=red=2=#ff00ff; check the trade direction first, then
// whether the amount exceeds 200,000; 1 = up, -1 = down, 0 = unchanged
// newflag: a reverse-sort refresh does not need the pagination redrawn
// time-and-sales (tick) table
function drawTable(option, newflag) {
var _default = {
"pageindex": 0, "id": stockEnity.stockId, "sort": 1, "ft": 1
}
var _option = $.extend(_default, option);
_option.code = getCode();
_option.market = getMarket();
if (_option.pageindex == -1) { return; }
// var url = "http://61.152.230.32:1981/getStockFenShi?pagesize=144&ut=7eea3edcaed734bea9cbfc24409ed989&dpt=wzfscj"
var url = "http://push2ex.eastmoney.com/getStockFenShi?pagesize=144&ut=7eea3edcaed734bea9cbfc24409ed989&dpt=wzfscj"
$.ajax({
url: url,
dataType: "jsonp",
jsonp: "cb",
data: _option,
success: function (json) {
if (json.data) {
// console.log('tick timing')
// console.log(json)
var count = Math.ceil(json.data.tc / 144);
var kcbFlag = json.data.ct;
if(count == 0) {
count = 1;
}
// console.log(count)
var data = json.data.data, pc = Dealprice(json.data.cp);
var list1 = [], list2 = [], list3 = [], list4 = [];
var html_1 = "", html_2 = "", html_3 = "", html_4 = "";
var dir = "", color = "", otherColor = "", chje = "";
if (!data || data.length == 0 || typeof data == "string") { return; }
for (var i = 0; i < data.length; i++) {
var preItems = data[i-1];
var items = data[i];
color = Dealprice(items.p) - pc == 0 ? "" : Dealprice(items.p) - pc > 0 ? "red" : Dealprice(items.p) - pc < 0 ? "green" : "";
chje = items.v * 100 * Dealprice(items.p);
if (items.bs == "1") {
otherColor = chje > 200000 ? "blue" : "green";
} else if (items.bs == "2") {
otherColor = chje > 200000 ? "purple" : "red";
} else {
otherColor = "";
}
// direction arrow relative to the previous tick
if (items.bs != "4") {
if(preItems) {
if (items.p > preItems.p) {
dir = "red";
} else if (items.p < preItems.p) {
dir = "green";
} else {
dir = "";
}
}else {
dir = "";
}
} else {
dir = "";
items.v = "-";
color = "";
}
// STAR Market (科创板) reports lots differently, so format the volume specially
var new_cjs = items.v;
if(kcbFlag == 1) {
new_cjs = kcbMyformatNum(new_cjs);
}
if (i < 36) {
list1.push({
"time": DealTime(items.t),
"close": Dealprice(items.p),
"coloseColor": color,
"num": new_cjs,
"fontColor": otherColor,
"dir": dir
});
}
if (i > 35 && i < 72) {
list2.push({
"time": DealTime(items.t),
"close": Dealprice(items.p),
"coloseColor": color,
"num": new_cjs,
"fontColor": otherColor,
"dir": dir
});
}
if (i > 71 && i < 108) {
list3.push({
"time": DealTime(items.t),
"close": Dealprice(items.p),
"coloseColor": color,
"num": new_cjs,
"fontColor": otherColor,
"dir": dir
});
}
if (i > 107 && i < 144) {
list4.push({
"time": DealTime(items.t),
"close": Dealprice(items.p),
"coloseColor": color,
"num": new_cjs,
"fontColor": otherColor,
"dir": dir
});
}
}
var html_1 = template("tmp", { list: list1 });
var html_2 = template("tmp", { list: list2 });
var html_3 = template("tmp", { list: list3 });
var html_4 = template("tmp", { list: list4 });
$("#listTable1 tbody").html(html_1);
$("#listTable2 tbody").html(html_2);
$("#listTable3 tbody").html(html_3);
$("#listTable4 tbody").html(html_4);
if(newflag !== false) {
pageFun(count, _option.pageindex);
}
}
}
})
}
// STAR Market volume formatting (万 = 10^4, 亿 = 10^8)
function kcbMyformatNum(num) {
if (num == '-') {
return '-';
}
if (num == undefined || isNaN(num)) {
return '';
}
var hz = '';
var num2 = '';
try {
if (num >= 0 && num <= 99.999999999) {
num2 = parseFloat(num).toFixed(2);
} else if (num >= 100 && num <= 999) {
num2 = parseFloat(num).toFixed(1);
} else if (num >= 1000) {
// num2 = parseFloat(num).toFixed(0);
if (num > 10000 && num < 100000000) {
num2 = (num / 10000).toFixed(2) + "万";
} else if (num > 100000000) {
num2 = (num / 100000000).toFixed(2) + "亿";
} else {
// 1000 <= num <= 10000 (the num == 0 case can never reach this branch)
num2 = num.toFixed(0);
}
}
return num2.toString() + hz;
} catch (error) {
return '-';
}
}
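// Examples: kcbMyformatNum(12) -> "12.00", kcbMyformatNum(123456) -> "12.35万",
// kcbMyformatNum(250000000) -> "2.50亿", kcbMyformatNum('-') -> '-'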
// format price: the API delivers prices multiplied by 1000
function Dealprice(val) {
try {
val = val/1000;
return val.toFixed(2);
} catch(e) {
return '-'
}
}
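// e.g. Dealprice(10050) -> "10.05"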
// format time: HHMMSS integer (leading zero dropped) -> "HH:MM:SS"
function DealTime(val) {
try {
val = '' + val;
if(val.length == 5) {
val = '0' + val;
}
return val.substr(0,2) + ':' + val.substr(2,2) + ':' + val.substr(4,2)
} catch(e) {
return '-'
}
}
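// e.g. DealTime(93005) -> "09:30:05" (5-digit morning times gain a leading zero)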
// pagination controls
function pageFun(count, page) {
count = Number(count);
page = Number(page);
if (count == 0) {
page = 0;
}
var obj = {
"curPage": page,
"prev": page == 0 ? 0 : page - 1 ,
"next": page == count ? count : page + 1,
"count": count
};
var lstxt = obj.curPage + 1;
// ie7: store targets in a custom "mypage" attribute rather than "value"
$(".prevPage").attr("mypage", obj.prev);
// $(".prevPage").attr("value", obj.prev);
$(".nextPage").attr("mypage", obj.next);
$(".lastPage").attr("mypage", obj.count-1);
$(".curPage").text(lstxt);
$(".count").text(obj.count);
if (count == page+1) {
$(".lastPage").removeClass("canClick");
$(".arrowR").hide();
$(".nextPage").removeClass("canClick");
} else {
$(".lastPage").addClass("canClick");
$(".nextPage").addClass("canClick");
$(".arrowR").show();
}
if (page+1 == 1) {
$(".fistPage").removeClass("canClick");
$(".prevPage").removeClass("canClick");
$(".arrowL").hide();
} else {
$(".fistPage").addClass("canClick");
$(".prevPage").addClass("canClick");
$(".arrowL").show();
}
if (count == 0) {
$(".liPage").removeClass("canClick");
$(".arrowR").hide();
$(".arrowL").hide();
}
}
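// e.g. pageFun(5, 0) renders page "1" of "5", stores 0-based targets in the
// buttons' "mypage" attributes, and disables the first/prev controls.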
// big-order filter and reverse-sort controls
function filterFun() {
$(".bigOrder input[type=radio]").change(function () {
var option = {};
// console.log('big-order filter:', $(this).val())
// option.gtvolume = $(this).val();
var gtvolume = $(this).val();
var ftMap = { '100': 2, '200': 3, '500': 4, '1000': 5, '2000': 6, '5000': 7, '10000': 8 };
option.ft = ftMap[gtvolume] || 1;
option.sort = $("#reverseInput").val();
drawTable(option);
});
$(".reverseDiv input[type=checkbox]").change(function () {
var option = {};
if ($(this).prop("checked")) {
$(this).val("2");
} else {
$(this).val("1");
}
option.sort = $("#reverseInput").val();
//option.page = Number($(".curPage").html());
//argFun()
var _option = $.extend(argFun(), option);
drawTable(_option, false);
});
}
// click handlers
function clickFun() {
$(".goBtn").click(function () {
var val = Number($(".go").val());
var curPage = Number($(".PageNavBtm .curPage").text());
var lastPage = Number($(".PageNavBtm .lastPage").attr("mypage"));
if (val == "" || val > lastPage+1 || val < 1 || isNaN(val) || val == curPage) { $(".go").val(""); return; }
var option = {};
option.pageindex = val-1;
var _option = $.extend(argFun(), option);
drawTable(_option);
$(".go").val("");
});
$(".liPage").click(function () {
$(".go").val("");
if ($(this).hasClass("canClick")) {
var val = $(this).attr("mypage");
var option = {};
option.pageindex = val;
var _option = $.extend(argFun(), option);
drawTable(_option);
}
});
$(".go").keydown(function (e) {
var keyCode = e.keyCode ? e.keyCode : e.which ? e.which : e.charCode; // compatible with IE, Firefox, Chrome
if (keyCode == 13) {
$(".goBtn").trigger("click");
return false;
}
});
$(".mobile-icon").mouseover(function () {
$(".QRcode-div").show();
});
$(".mobile-icon").mouseout(function () {
$(".QRcode-div").hide();
});
}
// quick quote: one-shot fetch of the header fields
function loadQuote() {
var options = {
"id": stockEnity.stockId
}
var secid = getMarket() + '.' + getCode();
// var _url = "//nuff.eastmoney.com/EM_Finance2015TradeInterface/JS.ashx?token=beb0a0047196124721f56b0f0ff5a27c";
var url = 'http://push2.eastmoney.com/api/qt/stock/get?ut=fa5fd1943c7b386f172d6893dbfba10b&invt=2&fltt=2&fields=f1,f43,f57,f58,f169,f170,f46,f44,f51,f168,f47,f116,f60,f45,f52,f50,f48,f167,f117,f71,f161,f49,f162,f107,f86,f177,f127,f199,f128,f80,f152,f110,f111,f107,f285,f286,f262,f263,f256,f257,f269,f270&secid=' + secid;
$.ajax({
url: url,
// data: options,
dataType: "jsonp",
jsonp: "cb",
success: function (json) {
// console.log(json)
var items = json.data;
if (!items || items.length == 0 || typeof items == "string") { return; }
if(items) {
formatHead(json);
renderContent(items);
renderRong(items);
renderGenming();
}
}
});
}
// SSE: live server-push updates for the header fields
function sseHeadData() {
var secid = getMarket() + '.' + getCode();
// production endpoint
var url = "http://" + (Math.floor(Math.random() * 99) + 1) +".push2.eastmoney.com/api/qt/stock/sse?ut=fa5fd1943c7b386f172d6893dbfba10b&invt=2&fltt=2&fields=f1,f43,f57,f58,f169,f170,f46,f44,f51,f168,f47,f116,f60,f45,f52,f50,f48,f167,f117,f71,f161,f49,f162,f107,f86,f177,f127,f199,f128,f80,f152,f110,f111,f107,f285,f286,f262,f263,f256,f257,f269,f270&secid=" + secid;
var evtSource = new EventSource(url);
evtSource.onmessage = function (msg) {
var obj = JSON.parse(msg.data);
if (obj.data) {
formatHead(obj);
}
}
}
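// The random 1-99 host prefix presumably spreads SSE connections across
// push2.eastmoney.com shards; each message carries the same fields as loadQuote().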
//head format
var headSourceData;
function formatHead(data) {
// console.log(data);
if(data.full == 1 && data.data){
headSourceData = data.data;
}
else if(data.full == 0 && data.data ){
if(headSourceData) {
headSourceData = merge(headSourceData, data.data);
}
}
renderHead(headSourceData);
}
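// The stream sends full == 1 with a complete field snapshot first, then
// full == 0 deltas holding only the changed fields, merged into headSourceData.
// Sketch: snapshot {f43: 10.00, f60: 9.90} followed by delta {f43: 10.05}
// renders a latest price of 10.05 against an unchanged previous close of 9.90.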
function renderHead(items) {
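// Field-to-element mapping, inferred from the element ids (pinyin abbreviations):
// zxj 最新价 latest (f43), zde 涨跌额 change (f169), zdf 涨跌幅 change% (f170),
// jk 今开 open (f46), zs 昨收 prev close (f60), zg 最高 high (f44), zd 最低 low (f45),
// zt 涨停 limit up (f51), dt 跌停 limit down (f52), hs 换手 turnover% (f168),
// lb 量比 volume ratio (f50), cjl 成交量 volume (f47), cje 成交额 amount (f48),
// sy 市盈 P/E (f162), sj 市净 P/B (f167), zsz 总市值 market cap (f116),
// ltsz 流通市值 float cap (f117); f152 is the decimal precision used by toFixed.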
var list = [
{ "name": "zxj", "item": items.f43 == "-" ? "-" : (items.f43).toFixed(items.f152), "color": getColor(items.f170) },
{ "name": "zde", "item": items.f169 == "-" ? "-" : (items.f169).toFixed(items.f152), "color": getColor(items.f170) },
{ "name": "zdf", "item": items.f170 == "-" ? "-" : (items.f170).toFixed(items.f152) + "%", "color": getColor(items.f170) },
{ "name": "jk", "item": items.f46 == "-" ? "-" : (items.f46).toFixed(items.f152), "color": getColor(items.f46 - items.f60) },
{ "name": "zs", "item": items.f60 == "-" ? "-" : (items.f60).toFixed(items.f152)},
{ "name": "zg", "item": items.f44 == "-" ? "-" : (items.f44).toFixed(items.f152), "color": getColor(items.f44 - items.f60) },
{ "name": "zd", "item": items.f45 == "-" ? "-" : (items.f45).toFixed(items.f152), "color": getColor(items.f45 - items.f60) },
{ "name": "zt", "item": items.f51 == "-" ? "-" : (items.f51).toFixed(2), "color": getColor(items.f51 - items.f60) },
{ "name": "dt", "item": items.f52 == "-" ? "-" : (items.f52).toFixed(2), "color": getColor(items.f52 - items.f60) },
{ "name": "hs", "item": items.f168 == "-" ? "-" : items.f168 + "%" },
{ "name": "lb", "item": items.f50 == "-" ? "-" : (items.f50).toFixed(2)},
{ "name": "cjl", "item": items.f47 == "-" ? "-" : NumbericFormat(items.f47) },
{ "name": "cje", "item": items.f48 == "-" ? "-" : NumbericFormat(items.f48) },
{ "name": "sy", "item": items.f162 == "-" ? "-" : (items.f162).toFixed(2)},
{ "name": "sj", "item": items.f167 == "-" ? "-" : (items.f167).toFixed(2)},
{ "name": "zsz", "item": items.f116 == "-" ? "-" : NumbericFormat(items.f116) },
{ "name": "ltsz", "item": items.f117 == "-" ? "-" : NumbericFormat(items.f117) }
];
for (var i = 0; i < list.length; i++) {
var name = $("#" + list[i].name);
name.text(list[i].item);
name.removeClass("red").removeClass("green").addClass(list[i].color);
}
// arrow color
onChangeDataRender(items.f170);
// trade timestamp
if(items.f86) {
var d = new Date(items.f86 * 1000);
var days = ['星期天', '星期一', '星期二', '星期三', '星期四', '星期五', '星期六'];
var whichDay = days[d.getDay()];
var youWant = d.getFullYear() + '-' + (("0" + (d.getMonth() + 1)).slice(-2)) + '-' + (("0" + (d.getDate())).slice(-2)) + ' ' + whichDay + ' ' + ("0" + (d.getHours())).slice(-2) + ':' + ("0" + (d.getMinutes())).slice(-2) + ':' + ("0" + (d.getSeconds())).slice(-2);
$("#day").html('(' + youWant + ')');
}
// shrink the font when the price has too many digits to fit
if(items.f43 >= 1000) {
$("#zxj").css("font-size", "23px")
}
}
function renderContent(items) {
var obj = {
"market": items.f107,
"code": items.f57,
"name": items.f58,
"id": items.f107 + items.f57,
"time": '111'
}
$(".n1").attr("href", "http://data.eastmoney.com/zlsj/detail/" + obj["code"] + "-1.html");
if(obj["name"]) {
$(".n2").text("查看" + obj["name"] + "买卖点");
$(".n3").text(obj["name"] + "股价预警")
$(".quote_title_0").text(obj["name"]);
}
$(".quote_title_1").text(obj["code"]);
var src = "//pifm3.eastmoney.com/EM_Finance2014PictureInterface/RC.ashx?url=http://m.quote.eastmoney.com/stock," + obj["code"] + ".shtml";
$(".QRcode-div img").attr("src", src);
//ie8
if(obj["code"] && obj["name"]) {
// $("title").text(obj["name"] + obj["code"] + "股票价格_行情_走势图—东方财富网");
document.title = obj["name"] + obj["code"] + "股票价格_行情_走势图—东方财富网";
}
}
function renderRong(items) {
// margin-trading ("融") and Stock Connect badges
// if(items.f177 & 64) {
// $(".rongi").show();
// $(".rongi a").attr("href", "http://data.eastmoney.com/rzrq/detail/" + items.f57 + ".html");
// } else {
// $(".rongi").hide();
// }
// if (items.f177 & 1024) {
// $("#sgt_icon").show();
// $("#hgt_icon").hide();
// } else if (items.f177 & 512){
// $("#sgt_icon").hide();
// $("#hgt_icon").show();
// }
if (items.f110 == 1 && items.f111 == 2) {
$("#jys-box").show().find("b ").text("沪主板");
$("#jys-box").attr("title", "该股票在沪主板上市");
$("#jys-box").find("a").attr("href", "//quote.eastmoney.com/center/gridlist.html#sh_a_board")
}
if (items.f110 == 0 && items.f111 == 6) {
$("#jys-box").show().find("b ").text("深主板");
$("#jys-box").attr("title", "该股票在深主板上市");
$("#jys-box").find("a").attr("href", "//quote.eastmoney.com/center/gridlist.html#sz_a_board")
}
if (items.f107 == 0 && items.f111 == 13) {
$("#jys-box").show().find("b ").text("中小板");
$("#jys-box").attr("title", "该股票在中小板上市");
$("#jys-box").find("a").attr("href", "//quote.eastmoney.com/center/gridlist.html#sme_board")
}
if (items.f107 == 0 && items.f111 == 80) {
$("#jys-box").show().find("b ").text("创业板");
$("#jys-box").attr("title", "该股票在创业板上市");
$("#jys-box").find("a").attr("href", "//quote.eastmoney.com/center/gridlist.html#gem_board")
}
if ((items.f177) & 512) {
$("#hgt_icon").show()
} else if ((items.f177) & 1024) {
$("#sgt_icon").show()
}
if(items.f107 == 1 && items.f111 == 23) {
$("#kcb").show()
}else {
$("#kcb").hide()
}
if ((items.f177) & 64) {
$("#rong").show();
$("#rong a").attr("href", "http://data.eastmoney.com/rzrq/detail/" + items.f57 + ".html");
} else {
$("#rong").hide()
}
if (items.f177 & 32768) {
$("#hlt").show().find("b ").text("沪伦通");
$("#hlt").attr("title", "该股票为沪伦通标的");
$("#hlt").find("a").attr("href", "http://quote.eastmoney.com/uk/" + items.f286 + "." + items.f285 + ".html");
$("#GDR").show().find("b ").text("GDR");
$("#GDR").attr("title", "该股票存在关联的GDR(全球存托凭证)");
$("#GDR").find("a").attr("href", "http://quote.eastmoney.com/uk/" + items.f286 + "." + items.f285 + ".html")
}
var marketzhai = items.f263 == "1" ? "sh" : "sz";
if (items.f262 && items.f262 != "-") {
$("#Z-box").show().find("b").text("可转债");
$("#Z-box").attr("title", "点击查看关联可转债行情");
$("#Z-box").find("a").attr("href", "http://quote.eastmoney.com/bond/" + marketzhai + items.f262 + ".html")
} else {
$("#Z-box").hide()
}
var marketH = items.f257;
if (items.f256 && items.f256 != "-") {
$("#H-box").show().find("b").text("H股");
$("#H-box").attr("title", "点击查看关联H股行情");
$("#H-box").find("a").attr("href", "http://quote.eastmoney.com/unify/r/" + marketH + "." + items.f256)
} else {
$("#H-box").hide()
}
var marketB = items.f270 == "1" ? "sh" : "sz";
if (items.f269 && items.f269 != "-") {
$("#B-box").show().find("b ").text("B股");
$("#B-box").attr("title", "点击查看关联B股行情");
$("#B-box").find("a").attr("href", "http://quote.eastmoney.com/" + marketB + items.f269 + ".html")
} else {
$("#B-box").hide()
}
}
//name-change history
function renderGenming() {
var _this = this;
var code = getCode();
$.ajax({
url: "http://dcfm.eastmoney.com/em_mutisvcexpandinterface/api/js/get?type=ABLS_MB&token=70f12f2f4f091e459a279469fe49eca5&filter=(scode=%27" + code + "%27)&st=changedate",
dataType: "jsonp",
scriptCharset: "utf-8",
accept: "json",
success: function(json) {
if (json.length > 1) {
$("#stock_change_name").show();
$("#stock_change_name .rongi").css("display", "block");
var html = "<span class='usedName'>" + json[0].sname + "</span> <span class='usedName'>>>></span>";
for (var i = 1; i < json.length - 1; i++) {
html += '<span class="usedName">' + json[i].sname + '<span class="hasDate">(' + json[i].changedate.substring(0, 9).replace(/\//g, "-") + ")</span></span><span class='usedName'>>>></span>"
}
html += '<span class="usedName">' + json[json.length - 1].sname + '<span class="hasDate">(' + json[json.length - 1].changedate.substring(0, 9).replace(/\//g, "-") + ")</span></span>";
$("#stock_change_name .shorthandInfo").html(html)
$("#stock_change_name").hover(function() {
$(".comNameExInfo").show();
$(".arrowUp").show();
}, function() {
$(".comNameExInfo").hide();
})
$(".closeBtn").click(function() {
$(".comNameExInfo").hide();
})
}
}
})
}
function onChangeDataRender(item) {
if (item != 0 && item != "-") {
if (item > 0) {
$("#arrow-find").removeClass("down-arrow").addClass("up-arrow");
} else {
$("#arrow-find").removeClass("up-arrow").addClass("down-arrow");
}
}
if(item == 0){
$("#arrow-find").removeClass("up-arrow").removeClass("down-arrow");
}
}
function getParam(name) {
var urlpara = location.search;
var par = {};
if (urlpara != "") {
urlpara = urlpara.substring(1, urlpara.length);
var para = urlpara.split("&");
var parname;
var parvalue;
for (var i = 0; i < para.length; i++) {
parname = para[i].substring(0, para[i].indexOf("="));
parvalue = para[i].substring(para[i].indexOf("=") + 1, para[i].length);
par[parname] = parvalue;
}
}
if (typeof (par[name]) != "undefined") {
return par[name];
} else {
return null;
}
}
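// Illustration (hypothetical URL, not from the original source): for
// "?code=600519&market=1", getParam("code") returns "600519" and
// getParam("market") returns "1"; a key that is absent returns null.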
//get code and market from URL parameters
function getCode() {
var code;
if(getParam('code')) {
code = getParam('code');
}else if(getParam('newcode')) {
code = getParam('newcode').split('.')[1];
} else if(getParam('id')) {
code = getParam('id').substr(0,6);
}
return code;
}
function getMarket() {
var market;
if(getParam('market')) {
var newmarket = getParam('market') == 2 ? 0 : 1;
market = newmarket;
}else if(getParam('newcode')) {
market = getParam('newcode').split('.')[0];
}else if(getParam('id')) {
market = getParam('id').substr(6);
var newmarket = market == 2 ? 0 : 1;
market = newmarket;
}
return market
}
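// Illustration (hypothetical URL): for "?id=6005191", getCode() returns
// "600519" (the first six characters) and getMarket() returns 1, since the
// trailing "1" is not 2 and therefore maps to market 1.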
function gw_now(time, id) {
var str = time.replace(/-/g, "/");
var _date = new Date(str);
var year = _date.getFullYear();
var month = gw_now_addzero(_date.getMonth() + 1);
var day = gw_now_addzero(_date.getDate());
var hour = gw_now_addzero(_date.getHours());
var minute = gw_now_addzero(_date.getMinutes());
var second = gw_now_addzero(_date.getSeconds());
var week = "";
switch (_date.getDay()) {
case 0: week = "星期天"; break
case 1: week = "星期一"; break
case 2: week = "星期二"; break
case 3: week = "星期三"; break
case 4: week = "星期四"; break
case 5: week = "星期五"; break
case 6: week = "星期六"; break
}
var Time = year + "-" + month + "-" + day + " " + week + " " + hour + ":" + minute + ":" + second;
$(id).html('(' + Time + ')')
}
function gw_now_addzero(temp) {
if (temp < 10) return "0" + temp;
else return temp;
}
function getColor(str) {
var context = str.toString();
context = context.replace("%", "");
if (context == 0 || isNaN(context)) {
return "";
} else if (context > 0) {
return "red";
} else {
return "green";
}
}
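// Illustration (sample inputs, not from the original source):
// getColor(1.5) -> "red", getColor("-2.3%") -> "green", getColor(0) -> "".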
//margin trading and SH/SZ Stock Connect (legacy data interface)
function isRong() {
var _url = "//nufm.dfcfw.com/EM_Finance2014NumericApplication/JS.aspx?type=CT&sty=MCSS&st=z&js=((x))&token=883df69e21fd8fe57ebf53dd71aae25f&cmd=" + stockEnity.stockId
$.ajax({
url: _url,
dataType: "jsonp",
jsonp: "cb",
success: function (json) {
if (typeof json !== "string") return;
var data = json.split(",");
if (data[4] == "深股通") {
$("#sgt_icon").show();
$("#hgt_icon").hide();
} else if (data[4] == "沪股通"){
$("#sgt_icon").hide();
$("#hgt_icon").show();
}
if (data[5] == "True") {
$(".rongi").show();
$(".rongi a").attr("href", "http://data.eastmoney.com/rzrq/detail/" + data[1] + ".html");
} else {
$(".rongi").hide();
}
}
})
}
function NumbericFormat(string) {
var context = Number(string);
if (!isNaN(context)) {
var item = parseInt(string);
if ((item > 0 && item < 1e4) || (item < 0 && item > -1e4)) {
return item;
} else if ((item > 0 && item < 1e6) || (item < 0 && item > -1e6)) {
item = item / 10000;
return item.toFixed(2) + "万";
} else if ((item > 0 && item < 1e7) || (item < 0 && item > -1e7)) {
item = item / 10000;
return item.toFixed(1) + "万";
} else if ((item > 0 && item < 1e8) || (item < 0 && item > -1e8)) {
item = item / 10000;
return item.toFixed(0) + "万";
} else if ((item > 0 && item < 1e10) || (item < 0 && item > -1e10)) {
item = item / 1e8;
return item.toFixed(2) + "亿";
} else if ((item > 0 && item < 1e11) || (item < 0 && item > -1e11)) {
item = item / 1e8;
return item.toFixed(1) + "亿";
} else if ((item > 0 && item < 1e12) || (item < 0 && item > -1e12)) {
item = item / 1e8;
return item.toFixed(0) + "亿";
} else if ((item > 0 && item < 1e14) || (item < 0 && item > -1e14)) {
item = item / 1e12;
return item.toFixed(2) + "万亿";
} else if ((item > 0 && item < 1e15) || (item < 0 && item > -1e15)) {
item = item / 1e12;
return item.toFixed(1) + "万亿";
} else if ((item > 0 && item < 1e16) || (item < 0 && item > -1e16)) {
item = item / 1e12;
return item.toFixed(0) + "万亿";
} else {
return item;
}
}
return context.toString();
}
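// Illustration of NumbericFormat's tiers (sample values are hypothetical,
// not from the original source):
// NumbericFormat(9999)      -> 9999        (below 1e4: returned as-is)
// NumbericFormat(123456)    -> "12.35万"   (scaled by 1e4)
// NumbericFormat(256000000) -> "2.56亿"    (scaled by 1e8)
// NumbericFormat(32e12)     -> "32.00万亿" (scaled by 1e12)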
/***/ }),
/***/ "./src/modules/template/index.js":
/*!***************************************!*\
!*** ./src/modules/template/index.js ***!
\***************************************/
/*! no static exports found */
/***/ (function(module, exports, __webpack_require__) {
var __WEBPACK_AMD_DEFINE_RESULT__;/*!art-template - Template Engine | http://aui.github.com/artTemplate/*/
!function(){function a(a){return a.replace(t,"").replace(u,",").replace(v,"").replace(w,"").replace(x,"").split(y)}function b(a){return"'"+a.replace(/('|\\)/g,"\\$1").replace(/\r/g,"\\r").replace(/\n/g,"\\n")+"'"}function c(c,d){function e(a){return m+=a.split(/\n/).length-1,k&&(a=a.replace(/\s+/g," ").replace(/<!--[\w\W]*?-->/g,"")),a&&(a=s[1]+b(a)+s[2]+"\n"),a}function f(b){var c=m;if(j?b=j(b,d):g&&(b=b.replace(/\n/g,function(){return m++,"$line="+m+";"})),0===b.indexOf("=")){var e=l&&!/^=[=#]/.test(b);if(b=b.replace(/^=[=#]?|[\s;]*$/g,""),e){var f=b.replace(/\s*\([^\)]+\)/,"");n[f]||/^(include|print)$/.test(f)||(b="$escape("+b+")")}else b="$string("+b+")";b=s[1]+b+s[2]}return g&&(b="$line="+c+";"+b),r(a(b),function(a){if(a&&!p[a]){var b;b="print"===a?u:"include"===a?v:n[a]?"$utils."+a:o[a]?"$helpers."+a:"$data."+a,w+=a+"="+b+",",p[a]=!0}}),b+"\n"}var g=d.debug,h=d.openTag,i=d.closeTag,j=d.parser,k=d.compress,l=d.escape,m=1,p={$data:1,$filename:1,$utils:1,$helpers:1,$out:1,$line:1},q="".trim,s=q?["$out='';","$out+=",";","$out"]:["$out=[];","$out.push(",");","$out.join('')"],t=q?"$out+=text;return $out;":"$out.push(text);",u="function(){var text=''.concat.apply('',arguments);"+t+"}",v="function(filename,data){data=data||$data;var text=$utils.$include(filename,data,$filename);"+t+"}",w="'use strict';var $utils=this,$helpers=$utils.$helpers,"+(g?"$line=0,":""),x=s[0],y="return new String("+s[3]+");";r(c.split(h),function(a){a=a.split(i);var b=a[0],c=a[1];1===a.length?x+=e(b):(x+=f(b),c&&(x+=e(c)))});var z=w+x+y;g&&(z="try{"+z+"}catch(e){throw {filename:$filename,name:'Render Error',message:e.message,line:$line,source:"+b(c)+".split(/\\n/)[$line-1].replace(/^\\s+/,'')};}");try{var A=new Function("$data","$filename",z);return A.prototype=n,A}catch(B){throw B.temp="function anonymous($data,$filename) {"+z+"}",B}}var d=function(a,b){return"string"==typeof b?q(b,{filename:a}):g(a,b)};d.version="3.0.0",d.config=function(a,b){e[a]=b};var e=d.defaults={openTag:"<%",closeTag:"%>",escape:!0,cache:!0,compress:!1,parser:null},f=d.cache={};d.render=function(a,b){return q(a)(b)};var g=d.renderFile=function(a,b){var c=d.get(a)||p({filename:a,name:"Render Error",message:"Template not found"});return b?c(b):c};d.get=function(a){var b;if(f[a])b=f[a];else if("object"==typeof document){var c=document.getElementById(a);if(c){var d=(c.value||c.innerHTML).replace(/^\s*|\s*$/g,"");b=q(d,{filename:a})}}return b};var h=function(a,b){return"string"!=typeof a&&(b=typeof a,"number"===b?a+="":a="function"===b?h(a.call(a)):""),a},i={"<":"<",">":">",'"':""","'":"'","&":"&"},j=function(a){return i[a]},k=function(a){return h(a).replace(/&(?![\w#]+;)|[<>"']/g,j)},l=Array.isArray||function(a){return"[object Array]"==={}.toString.call(a)},m=function(a,b){var c,d;if(l(a))for(c=0,d=a.length;d>c;c++)b.call(a,a[c],c,a);else for(c in a)b.call(a,a[c],c)},n=d.utils={$helpers:{},$include:g,$string:h,$escape:k,$each:m};d.helper=function(a,b){o[a]=b};var o=d.helpers=n.$helpers;d.onerror=function(a){var b="Template Error\n\n";for(var c in a)b+="<"+c+">\n"+a[c]+"\n\n";"object"==typeof console&&console.error(b)};var p=function(a){return d.onerror(a),function(){return"{Template Error}"}},q=d.compile=function(a,b){function d(c){try{return new i(c,h)+""}catch(d){return b.debug?p(d)():(b.debug=!0,q(a,b)(c))}}b=b||{};for(var g in e)void 0===b[g]&&(b[g]=e[g]);var h=b.filename;try{var i=c(a,b)}catch(j){return j.filename=h||"anonymous",j.name="Syntax Error",p(j)}return d.prototype=i.prototype,d.toString=function(){return 
i.toString()},h&&b.cache&&(f[h]=d),d},r=n.$each,s="break,case,catch,continue,debugger,default,delete,do,else,false,finally,for,function,if,in,instanceof,new,null,return,switch,this,throw,true,try,typeof,var,void,while,with,abstract,boolean,byte,char,class,const,double,enum,export,extends,final,float,goto,implements,import,int,interface,long,native,package,private,protected,public,short,static,super,synchronized,throws,transient,volatile,arguments,let,yield,undefined",t=/\/\*[\w\W]*?\*\/|\/\/[^\n]*\n|\/\/[^\n]*$|"(?:[^"\\]|\\[\w\W])*"|'(?:[^'\\]|\\[\w\W])*'|\s*\.\s*[$\w\.]+/g,u=/[^\w$]+/g,v=new RegExp(["\\b"+s.replace(/,/g,"\\b|\\b")+"\\b"].join("|"),"g"),w=/^\d[^,]*|,\d[^,]*/g,x=/^,+|,+$/g,y=/^$|,+/;e.openTag="{{",e.closeTag="}}";var z=function(a,b){var c=b.split(":"),d=c.shift(),e=c.join(":")||"";return e&&(e=", "+e),"$helpers."+d+"("+a+e+")"};e.parser=function(a){a=a.replace(/^\s/,"");var b=a.split(" "),c=b.shift(),e=b.join(" ");switch(c){case"if":a="if("+e+"){";break;case"else":b="if"===b.shift()?" if("+b.join(" ")+")":"",a="}else"+b+"{";break;case"/if":a="}";break;case"each":var f=b[0]||"$data",g=b[1]||"as",h=b[2]||"$value",i=b[3]||"$index",j=h+","+i;"as"!==g&&(f="[]"),a="$each("+f+",function("+j+"){";break;case"/each":a="});";break;case"echo":a="print("+e+");";break;case"print":case"include":a=c+"("+b.join(",")+");";break;default:if(/^\s*\|\s*[\w\$]/.test(e)){var k=!0;0===a.indexOf("#")&&(a=a.substr(1),k=!1);for(var l=0,m=a.split("|"),n=m.length,o=m[l++];n>l;l++)o=z(o,m[l]);a=(k?"=":"=#")+o}else a=d.helpers[c]?"=#"+c+"("+b.join(",")+");":"="+a}return a}, true?!(__WEBPACK_AMD_DEFINE_RESULT__ = (function(){return d}).call(exports, __webpack_require__, exports, module),
__WEBPACK_AMD_DEFINE_RESULT__ !== undefined && (module.exports = __WEBPACK_AMD_DEFINE_RESULT__)):undefined}();
/***/ })
/******/ });
//# sourceMappingURL=timetrade.js.map
# /AMAS_sb-1.0.1-py3-none-any.whl/AMAS/recommender.py
# import collections
import compress_pickle
import copy
import fnmatch
import itertools
import libsbml
import numpy as np
import os
import pandas as pd
import re
from AMAS import annotation_maker as am
from AMAS import constants as cn
from AMAS import iterator as it
from AMAS import tools
from AMAS import species_annotation as sa
from AMAS import reaction_annotation as ra
ELEMENT_TYPES = ['species', 'reaction']
class Recommender(object):
def __init__(self,
libsbml_fpath=None,
libsbml_cl=None,
model_specs=None):
"""
Parameters
----------
libsbml_cl: libsbml.SBMLDocument
A libsbml document class instance
libsbml_fpath: str
File path of an SBML model
model_specs: tuple/list
Iterable object of two tuples including model specifications
"""
# Document will be updated and saved if chosen.
self.sbml_document = None
# First of all, collect model information from libsbml model
# and send the information to create species/reaction annotations
fname = None
if libsbml_fpath:
spec_tuple, reac_tuple = self._parseSBML(libsbml_fpath)
# basically split fpath and use the last one
fname = libsbml_fpath.split('/')[-1]
elif libsbml_cl:
spec_tuple, reac_tuple = self._parseSBML(libsbml_cl)
elif model_specs:
spec_tuple = model_specs[0]
reac_tuple = model_specs[1]
else:
spec_tuple = None
reac_tuple = None
self.fname = fname
self.species = sa.SpeciesAnnotation(inp_tuple=spec_tuple)
self.reactions = ra.ReactionAnnotation(inp_tuple=reac_tuple)
# Below are elements to interact with user
self.current_type = None
self.just_displayed = None
self.selection = {val:dict() for val in ELEMENT_TYPES}
def getDataFrameFromRecommendation(self,
rec,
show_url=False):
"""
Get a pandas dataframe from
a single recommendation.
Parameters
----------
rec: cn.Recommendation
show_url: bool
If False, omit this column
Returns
-------
:pandas.DataFrame
"""
cands = [val[0] for val in rec.candidates]
match_scores = [val[1] for val in rec.candidates]
labels = rec.labels
# index starts from 1;
df = pd.DataFrame({'annotation':cands,
cn.DF_MATCH_SCORE_COL:match_scores,
'label':labels})
df.index.name = rec.id
if show_url:
urls = rec.urls
df['url'] = urls
return df
def getRecommendationFromDataFrame(self,
df):
"""
Convert dataframe back to
namedtuple Recommendation.
WARNING: it may not work with
empty dataframe, so be careful.
Parameters
----------
df: pandas.DataFrame
Returns
-------
Recommendation (namedtuple)
"""
cands_tups = list(zip(df['annotation'], df['match score']))
one_annotation = cands_tups[0][0]
# indicating species
if one_annotation[:4] == 'CHEB':
default_url = cn.CHEBI_DEFAULT_URL
url_digit = 6
# indicating reaction
elif one_annotation[:4] == 'RHEA':
default_url = cn.RHEA_DEFAULT_URL
url_digit = 5
return cn.Recommendation(df.index.name,
list(zip(df['annotation'], df['match score'])),
[default_url + val[url_digit:] for val in df['annotation']],
list(df['label']))
def getMarkdownFromRecommendation(self,
rec,
show_url=False):
"""
Get a markdown using
a cn.Recommendation or pandas.DataFrame.
Parameters
----------
rec: cn.Recommendation/pandas.DataFrame
show_url: bool
If False, omit this column
Returns
-------
:str
"""
if isinstance(rec, pd.DataFrame):
# deepcopy so the original dataframe isn't
# mutated when its index name is cleared below.
df = copy.deepcopy(rec)
idx_name = df.index.name.split(' ')
rec_id = idx_name[0]
else:
df = self.getDataFrameFromRecommendation(rec, show_url)
rec_id = rec.id
# In markdown, title is shown separately,
# so index name with element ID is removed;
df.index.name=None
df_str = df.to_markdown(tablefmt="grid", floatfmt=".03f", index=True)
# Centering and adding the title
len_first_line = len(df_str.split('\n')[0])
title_line = rec_id
title_line = title_line.center(len_first_line)
df_str = title_line + '\n' + df_str
return df_str
def getSpeciesRecommendation(self,
pred_str=None,
pred_id=None,
method='cdist',
mssc='top',
cutoff=0.0,
update=True,
get_df=False):
"""
Predict annotations of species using
the provided string or ID.
If pred_str is given, directly use the string;
if pred_id is given, determine the appropriate
name using the species ID.
Algorithmically, it is a special case of
self.getSpeciesListRecommendation.
Parameters
----------
pred_str: str
Species name to predict annotation with
pred_id: str
ID of species (search for name using it)
method: str
One of ['cdist', 'edist']
'cdist' represents Cosine Similarity
'edist' represents Edit Distance.
Default method is 'cdist'.
mssc: match score selection criteria
'top' will recommend candidates with
the highest match score above cutoff
'above' will recommend all candidates with
match scores above cutoff
cutoff: float
Cutoff value; only candidates with match score
at or above the cutoff will be recommended.
update: bool
If true, update existing species annotations
(i.e., replace or create new entries)
in self.species.candidates and self.species.formula
get_df: bool
If true, return a pandas.DataFrame.
If False, return a cn.Recommendation
Returns
-------
cn.Recommendation (namedtuple) / pandas.DataFrame
"""
if pred_str:
result = self.getSpeciesListRecommendation(pred_strs=[pred_str],
method=method,
mssc=mssc,
cutoff=cutoff,
update=update,
get_df=get_df)
elif pred_id:
result = self.getSpeciesListRecommendation(pred_ids=[pred_id],
method=method,
mssc=mssc,
cutoff=cutoff,
update=update,
get_df=get_df)
return result[0]
def getSpeciesIDs(self, pattern=None, regex=False):
"""
Returns Species IDs that match the pattern.
The pattern is given as glob
If none is given, returns all available
species that exist in the model.
Parameters
---------
pattern: str/None
string pattern
regex: bool
if True, use regex
if False, use glob
Returns
-------
list-str/None
None returned if no match was found
"""
# list of species ids
specs = list(self.species.names.keys())
# returns a list of ids that match the pattern; if pattern is None, return all
if pattern is None:
return specs
else:
if regex:
re_pattern = pattern
else:
re_pattern = fnmatch.translate(pattern)
matched = [re.match(re_pattern, val) for val in specs]
filt_matched = [val.group(0) for val in matched if val]
if len(filt_matched)>0:
return filt_matched
else:
return None
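# A minimal sketch of getSpeciesIDs pattern matching (IDs are hypothetical):
#
#   recom.getSpeciesIDs()                       # every species ID in the model
#   recom.getSpeciesIDs("S*")                   # glob: IDs starting with "S"
#   recom.getSpeciesIDs("S[0-9]+", regex=True)  # same idea with a regex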
def getSpeciesListRecommendation(self,
pred_strs=None,
pred_ids=None,
method='cdist',
mssc='top',
cutoff=0.0,
update=True,
get_df=False):
"""
Get annotation of multiple species,
given as a list (or an iterable object).
self.getSpeciesRecommendation is applied to
each element.
Parameters
----------
pred_strs: str-list
:Species names to predict annotations with
pred_ids: str-list
:Species IDs to predict annotations with
(model info should have been already loaded)
method: str
One of ['cdist', 'edist']
'cdist' represents Cosine Similarity
'edist' represents Edit Distance.
Default method is 'cdist'.
mssc: match score selection criteria
'top' will recommend candidates with
the highest match score above cutoff
'above' will recommend all candidates with
match scores above cutoff
cutoff: float
Cutoff value; only candidates with match score
at or above the cutoff will be recommended.
update: bool
:If true, update the current annotations
(i.e., replace or create new entries)
in self.species.candidates and self.species.formula
get_df: bool
If True, return a list of pandas.DataFrame.
If False, return a list of cn.Recommendation
Returns
-------
list-Recommendation (list-namedtuple) / list-pandas.DataFrame
"""
scoring_methods = {'edist': self.species.getEScores,
'cdist': self.species.getCScores}
if pred_strs:
ids_dict = {k:k for k in pred_strs}
inp_strs = pred_strs
elif pred_ids:
ids_dict = {k:self.species.getNameToUse(inp_id=k) \
for k in pred_ids}
inp_strs = [ids_dict[k] for k in ids_dict.keys()]
pred_res = scoring_methods[method](inp_strs=inp_strs,
mssc=mssc,
cutoff=cutoff)
result = []
for spec in ids_dict.keys():
urls = [cn.CHEBI_DEFAULT_URL + val[0][6:] for val in pred_res[ids_dict[spec]]]
labels = [cn.REF_CHEBI2LABEL[val[0]] for val in pred_res[ids_dict[spec]]]
one_recom = cn.Recommendation(spec,
[(val[0], np.round(val[1], cn.ROUND_DIGITS)) \
for val in pred_res[ids_dict[spec]]],
urls,
labels)
result.append(one_recom)
if update:
_ = self.species.updateSpeciesWithRecommendation(one_recom)
if get_df:
return [self.getDataFrameFromRecommendation(rec=val) \
for val in result]
else:
return result
def getReactionRecommendation(self, pred_id,
use_exist_species_annotation=False,
spec_res=None,
spec_method='cdist',
mssc='top',
cutoff=0.0,
update=True,
get_df=False):
"""
Predict annotations of a single reaction
using the provided ID.
Parameters
----------
pred_id: str
A single ID of reaction to annotate
use_exist_species_annotation: bool
If True, use existing species annotation
spec_res: list-cn.Recommendation
If provided, species will not be predicted
for remaining species
spec_method: str
Method to use if to directly predict species annotation;
if 'cdist' Cosine Similarity
if 'edist' Edit distance
mssc: match score selection criteria
'top' will recommend candidates with
the highest match score above cutoff
'above' will recommend all candidates with
match scores above cutoff
cutoff: float
Cutoff value; only candidates with match score
at or above the cutoff will be recommended.
update: bool
If true, update existing species annotations
(i.e., replace or create new entries)
in self.reactions.candidates
get_df: bool
If True, return a pandas DataFrame.
If False, return a cn.Recommendation
Returns
-------
Recommendation (namedtuple) / pandas.DataFrame
"""
result = self.getReactionListRecommendation(pred_ids=[pred_id],
use_exist_species_annotation=use_exist_species_annotation,
spec_res=spec_res,
spec_method=spec_method,
mssc=mssc,
cutoff=cutoff,
update=update,
get_df=get_df)
return result[0]
def getReactionIDs(self, pattern=None, by_species=False, regex=False):
"""
Get IDs of reactions based on
the pattern.
If by_species is True, it retrieves
all reaction with the species that match
the pattern;
if False, it searches based on the ID of
reactions
Parameters
---------
pattern: str
Pattern
by_species: bool
If True, find species with pattern
If False, find reaction IDs
regex: bool
If True, use regex expression
If False, convert it to regex.
"""
reacts = list(self.reactions.reaction_components.keys())
if pattern is None:
return reacts
# returns a list of ids that match the pattern
if regex:
re_pattern = pattern
else:
re_pattern = fnmatch.translate(pattern)
if by_species:
specs2use = self.getSpeciesIDs(pattern=re_pattern, regex=True)
if any(specs2use):
comp_items = list(self.reactions.reaction_components.items())
result = [val[0] for val in comp_items \
if any(set(val[1]).intersection(specs2use))]
# if no species match was found
else:
return None
else:
matched = [re.match(re_pattern, val) for val in reacts]
result = [val.group(0) for val in matched if val]
return result
def getReactionListRecommendation(self, pred_ids,
use_exist_species_annotation=False,
spec_res=None,
spec_method='cdist',
mssc='top',
cutoff=0.0,
update=True,
get_df=False):
"""
Get annotation of multiple reactions.
Instead of applying getReactionRecommendation
for each reaction,
it'll predict all component species first
and proceed (this will reduce computational cost).
Parameters
----------
pred_ids: str-list
For now, it only accommodates calling by reaction IDs.
use_exist_species_annotation: bool
If True, search existing annotation for species
and predict the remaining species
spec_res: list-cn.Recommendation
If provided, species will not be predicted
for remaining species
spec_method: str
Method to use if to directly predict species annotation;
if 'cdist' Cosine Similarity
if 'edist' Edit distance
mssc: match score selection criteria
'top' will recommend candidates with
the highest match score above cutoff
'above' will recommend all candidates with
match scores above cutoff
cutoff: float
Cutoff value; only candidates with match score
at or above the cutoff will be recommended.
update: bool
If true, update existing species annotations
(i.e., replace or create new entries)
in self.reactions.candidates
get_df: bool
If True, return a list of pandas DataFrames.
If False, return a list of cn.Recommendation
Returns
-------
list-Recommendation (list-namedtuple) / list-pandas.DataFrame
"""
# First, collect all species IDs to annotate
specs_to_annotate = list(set(itertools.chain(*[self.reactions.reaction_components[val] \
for val in pred_ids])))
if use_exist_species_annotation:
pred_formulas = {val:self.species.exist_annotation_formula[val] \
for val in specs_to_annotate \
if val in self.species.exist_annotation_formula.keys()}
else:
pred_formulas = {}
remaining_species = [val for val in specs_to_annotate if val not in pred_formulas.keys()]
# Get annotation of collected species
if len(remaining_species) > 0:
if spec_res:
spec_results = [val for val in spec_res if val.id in remaining_species]
else:
# No updates; use MSSC Top, cutoff 0.0.
spec_results = self.getSpeciesListRecommendation(pred_ids=remaining_species,
update=False,
method=spec_method)
for one_recom in spec_results:
chebis = [val[0] for val in one_recom.candidates]
forms = list(set([cn.REF_CHEBI2FORMULA[k] \
for k in chebis if k in cn.REF_CHEBI2FORMULA.keys()]))
pred_formulas[one_recom.id] = forms
# Predict reaction annotations.
pred_res = self.reactions.getRScores(spec_dict=pred_formulas,
reacs=pred_ids,
mssc=mssc,
cutoff=cutoff)
result = []
for reac in pred_res.keys():
urls = [cn.RHEA_DEFAULT_URL + val[0][5:] for val in pred_res[reac]]
labels = [cn.REF_RHEA2LABEL[val[0]] for val in pred_res[reac]]
one_recom = cn.Recommendation(reac,
[(val[0], np.round(val[1], cn.ROUND_DIGITS)) \
for val in pred_res[reac]],
urls,
labels)
result.append(one_recom)
if update:
self.reactions.candidates = pred_res
if get_df:
return [self.getDataFrameFromRecommendation(rec=val) \
for val in result]
else:
return result
def _parseSBML(self, sbml):
"""
Parse SBML file and return
two tuples, for species and reactions
respectively,
equivalent to model_specs.
Can use either libsbml.Document class
or a file path.
Parameters
----------
sbml: str(filepath)/libsbml.Document
Returns
-------
(tuple, tuple):
Two tuples to create species/reaction annotation classes
(species_tuple, reaction_tuple)
"""
if isinstance(sbml, str):
reader = libsbml.SBMLReader()
# Reading the model string file
with open(sbml, 'r') as f:
model_str = f.read()
self.sbml_document = reader.readSBMLFromString(model_str)
elif isinstance(sbml, libsbml.SBMLDocument):
self.sbml_document = sbml
model = self.sbml_document.getModel()
exist_spec_annotation = tools.extractExistingSpeciesAnnotation(model)
species_names = {val.getId():val.name for val in model.getListOfSpecies()}
species_tuple = (species_names, exist_spec_annotation)
#
reac_exist_annotation = tools.extractExistingReactionAnnotation(inp_model=model)
# Next, reaction components for each reaction
reac_components = {val.getId():list(set([k.species for k in val.getListOfReactants()]+\
[k.species for k in val.getListOfProducts()])) \
for val in model.getListOfReactions()}
reaction_tuple = (reac_components, reac_exist_annotation)
return species_tuple, reaction_tuple
def getSpeciesStatistics(self,
mssc='top',
cutoff=0.0,
model_mean=True):
"""
Get recall and precision
of species in a model, for both species and
reactions.
This method works only if there exists
annotation in model; otherwise
None will be returned.
In the result, values are
rounded to two decimal places.
Parameters
----------
mssc: str
match score selection criteria
'top' will recommend candidates with
the highest match score above cutoff
'above' will recommend all candidates with
match scores above cutoff
cutoff: float
Cutoff value; only candidates with match score
at or above the cutoff will be recommended.
model_mean: bool
If True, get single float values for recall/precision.
If False, get a dictionary for recall/precision.
Returns
-------
None/dict
Return None if there is nothing to evaluate
(i.e., if there is no existing model annotation)
"""
# get dictionary of formulas if they exist
refs = {val:self.species.exist_annotation_formula[val] \
for val in self.species.exist_annotation_formula.keys() \
if self.species.exist_annotation_formula[val]}
specs2eval = list(refs.keys())
if len(specs2eval) == 0:
return None
preds_comb = self.getSpeciesListRecommendation(pred_ids=specs2eval,
mssc=mssc,
cutoff=cutoff)
chebi_preds = {val.id:[k[0] for k in val.candidates] \
for val in preds_comb}
preds = {k:[cn.REF_CHEBI2FORMULA[val] for val in chebi_preds[k] \
if val in cn.REF_CHEBI2FORMULA.keys()] \
for k in chebi_preds.keys()}
recall = tools.getRecall(ref=refs, pred=preds, mean=model_mean)
precision = tools.getPrecision(ref=refs, pred=preds, mean=model_mean)
return {cn.RECALL: recall, cn.PRECISION: precision}
def getReactionStatistics(self,
model_mean=True,
mssc='top',
cutoff=0.0):
"""
Get recall and precision
of reactions in a model, for both species and
reactions.
This method works only if there exists
annotation in model; otherwise
None will be returned.
In the result, values are
rounded to two decimal places.
Parameters
----------
mssc: str
match score selection criteria
'top' will recommend candidates with
the highest match score above cutoff
'above' will recommend all candidates with
match scores above cutoff
cutoff: float
Cutoff value; only candidates with match score
at or above the cutoff will be recommended.
model_mean: bool
If True, get single float values for recall/precision.
If False, get a dictionary for recall/precision.
Returns
-------
None/dict
Return None if there is nothing to evaluate
(i.e., if there is no existing model annotation)
"""
# For reactions, component species should be
# predicted first.
refs = self.reactions.exist_annotation
if len(refs) == 0:
return None
specs2pred = list(set(itertools.chain(*([self.reactions.reaction_components[val] for val in refs.keys()]))))
## Use mssc top, cutoff 0.0.
preds_comb = self.getSpeciesListRecommendation(pred_ids=specs2pred,
mssc='top',
cutoff=0.0)
chebi_preds = {val.id:[k[0] for k in val.candidates] \
for val in preds_comb}
specs_predicted = {k:[cn.REF_CHEBI2FORMULA[val] for val in chebi_preds[k] \
if val in cn.REF_CHEBI2FORMULA.keys()] \
for k in chebi_preds.keys()}
reac_preds = self.reactions.getRScores(spec_dict=specs_predicted,
reacs=refs.keys(),
mssc='top',
cutoff=0.0)
preds = {k:[val[0] for val in reac_preds[k]] for k in reac_preds.keys()}
recall = tools.getRecall(ref=refs, pred=preds, mean=model_mean)
precision = tools.getPrecision(ref=refs, pred=preds, mean=model_mean)
return {cn.RECALL: recall, cn.PRECISION: precision}
def filterDataFrameByThreshold(self, df, min_score):
"""
Filter dataframe by min_score (threshold),
and returns the result;
Note that if no item meets the threshold,
it'll still return an empty dataframe.
Parameters
----------
df: pd.DataFrame
min_score: float (0.0-1.0)
Returns
-------
pd.DataFrame
"""
scores = df[cn.DF_MATCH_SCORE_COL]
filt_idx = scores[scores>=min_score].index
filt_df = df.loc[filt_idx, :]
return filt_df
def autoSelectAnnotation(self,
df,
cutoff=0.0,
mssc='top'):
"""
Choose annotations based on
the set threshold;
(1) if none meets the cutoff, return an empty frame
(2) if multiple meet the cutoff,
(a) if mssc is 'top':
(i) find the highest match score among them
(ii) return all whose match score equals that highest score
(b) if mssc is 'above':
(i) return all at or above the cutoff
Parameters
----------
df: pandas.DataFrame
cutoff: float (0.0-1.0)
Match score cutoff
mssc: str
Match score selection criteria;
either 'top' or 'above'.
Returns
-------
pandas.DataFrame
if nothing matched,
an empty dataframe is returned
"""
scores = df[cn.DF_MATCH_SCORE_COL]
# max_score: highest match score that exists
# min_score: cutoff
max_score = np.max(scores)
if max_score < cutoff:
# this will create an empty dataframe
filt_idx = scores[scores>=cutoff].index
elif mssc=='top':
filt_idx = scores[scores==max_score].index
else:
filt_idx = scores[scores>=cutoff].index
filt_df = df.loc[filt_idx, :]
return filt_df
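# A minimal sketch of the two MSSC modes of autoSelectAnnotation; assumes
# cn.DF_MATCH_SCORE_COL == 'match score' and uses hypothetical candidates:
#
#   df = pd.DataFrame({'annotation': ['CHEBI:15377', 'CHEBI:16234'],
#                      'match score': [0.9, 0.6],
#                      'label': ['water', 'hydroxide']})
#   recom.autoSelectAnnotation(df, cutoff=0.5, mssc='top')    # keeps only the 0.9 row
#   recom.autoSelectAnnotation(df, cutoff=0.5, mssc='above')  # keeps both rows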
def recommendAnnotation(self,
mssc='top',
cutoff=0.0,
optimize=False,
outtype='table'):
"""
Combine recommendSpecies and recommendReactions
methods; can optimize.
Parameters
----------
mssc: str
cutoff: float
optimize: bool
outtype: str
If 'table', returns a recommendation table;
if 'sbml', returns an updated SBML model.
Returns
-------
pandas.DataFrame / str
"""
pred_spec = self.getSpeciesListRecommendation(pred_ids=self.getSpeciesIDs(),
mssc=mssc,
cutoff=cutoff,
get_df=True)
pred_reac = self.getReactionListRecommendation(pred_ids=self.getReactionIDs(),
mssc=mssc,
cutoff=cutoff,
get_df=True)
if optimize:
res_tab = self.optimizePrediction(pred_spec=pred_spec,
pred_reac=pred_reac)
else:
s_df = self.getRecomTable(element_type='species',
recommended=pred_spec)
r_df = self.getRecomTable(element_type='reaction',
recommended=pred_reac)
res_tab = pd.concat([s_df, r_df],
ignore_index=True)
if outtype == 'table':
return res_tab
elif outtype == 'sbml':
res_sbml = self.getSBMLDocument(sbml_document=self.sbml_document,
chosen=res_tab,
auto_feedback=True)
return libsbml.writeSBMLToString(res_sbml)
def recommendReactions(self,
ids=None,
min_len=0,
mssc='top',
cutoff=0.0,
outtype='table'):
"""
Recommend annotations for one or more
reaction IDs and return a single dataframe
or an SBML string.
Parameters
----------
ids: str/list-str
If None, recommend all reactions.
min_len: int
Minimum number of reaction components
to be returned for results
mssc: str
match score selection criteria.
'top' or 'above'
cutoff: float
MSSC cutoff
outtype: str
Either 'table' or 'sbml'.
'table' will return a pandas.DataFrame
'sbml' will return an sbml string
Returns
-------
pandas.DataFrame / str / None
A recommendation table ('table'), an SBML string ('sbml'),
or None if no reaction passed the element filter.
"""
self.updateCurrentElementType('reaction')
if isinstance(ids, str):
reacs = [ids]
elif ids is None:
reacs = self.getReactionIDs()
else:
reacs = ids
filt_reacs = [val for val in reacs \
if len(self.reactions.reaction_components[val])>=min_len]
if len(filt_reacs) == 0:
print("No reaction after the element filter.\n")
return None
pred = self.getReactionListRecommendation(pred_ids=filt_reacs,
mssc=mssc,
cutoff=cutoff,
get_df=True)
res_table = self.getRecomTable(element_type='reaction',
recommended=pred)
if outtype == 'table':
return res_table
elif outtype == 'sbml':
res_sbml = self.getSBMLDocument(sbml_document=self.sbml_document,
chosen=res_table,
auto_feedback=True)
return libsbml.writeSBMLToString(res_sbml)
return None
def recommendSpecies(self,
ids=None,
min_len=0,
mssc='top',
cutoff=0.0,
outtype='table'):
"""
Recommend annotations for one or more
species IDs and return a single dataframe
or an SBML string.
Parameters
----------
ids: str/list-str
If None, will predict all
min_len: int
Minimum length of species name
to be returned for results
mssc: str
Match score selection criteria.
cutoff: float
Match score cutoff; candidates scoring below it are dropped.
outtype: str
Either 'table' or 'sbml'.
'table' will return a pandas.DataFrame
'sbml' will return an sbml string
Returns
-------
: pd.DataFrame/str/None
Either a recommendation table ('table'), an SBML string ('sbml'),
or None if no species passed the element filter.
"""
self.updateCurrentElementType('species')
if isinstance(ids, str):
specs = [ids]
elif ids is None:
specs = self.getSpeciesIDs()
else:
specs = ids
filt_specs = [val for val in specs \
if len(self.species.getNameToUse(val))>=min_len]
if len(filt_specs) == 0:
print("No species after the element filter.\n")
return None
pred = self.getSpeciesListRecommendation(pred_ids=filt_specs,
mssc=mssc,
cutoff=cutoff,
get_df=True)
res_table = self.getRecomTable(element_type='species',
recommended=pred)
if outtype == 'table':
return res_table
elif outtype == 'sbml':
res_sbml = self.getSBMLDocument(sbml_document=self.sbml_document,
chosen=res_table,
auto_feedback=True)
return libsbml.writeSBMLToString(res_sbml)
return None
def updateCurrentElementType(self, element_type):
"""
Updating self.current_type
indicator; updated when
recommendSpecies or recommendReactions
is called;
Parameters
----------
element_type: str
Either 'species' or 'reaction'
"""
self.current_type = element_type
def updateJustDisplayed(self, df_dict):
"""
Called every time a result
is shown to the user, i.e., by
recommendSpecies / recommendReactions /
selectAnnotation.
For now, the result is always a
dictionary of pandas.DataFrame.
Parameters
----------
df_dict: dict()
Dictionary of pandas.DataFrame
Returns
-------
None
"""
self.just_displayed = df_dict
def selectAnnotation(self, choice=None):
"""
Based on the previous recommendation,
determine the annotations to store.
If 'all' given in choice[1],
select all.
Parameters
----------
choice: tuple/list-tuple (str, int)
[(element ID, choice number)]
"""
# assumes self.just_displayed is {id: pd.DataFrame}
sel_id = choice[0]
sel_idx = choice[1]
df = self.just_displayed[choice[0]]
if sel_idx == 'all':
result = df
else:
if isinstance(sel_idx, int):
chosen = [sel_idx]
else:
chosen = sel_idx
result = df.loc[chosen, :]
# Now, update the selected annotation
self.updateSelection(sel_id, result)
print("Selection updated.")
return None
def updateSelection(self, sel_id, sel_df):
"""
Direct result of selectAnnotation;
filtered or non-filtered
dictionary of dataframes.
By calling SaveFile,
All selected annotations will be
saved as an .xml file.
Parameters
----------
sel_id: str
sel_df: pandas.DataFrame
"""
self.selection[self.current_type].update({sel_id: sel_df})
def displaySelection(self):
"""
To assist user,
display all selected
annotations from
self.selection.
"""
for one_type in ELEMENT_TYPES:
type_selection = self.selection[one_type]
for k in type_selection.keys():
print(self.getMarkdownFromRecommendation(type_selection[k])+"\n")
def getRecomTable(self,
element_type,
recommended):
"""
Extend the dataframe using
results obtained by
self.get....ListRecommendation.
A new, extended dataframe will be
returned; to be either
saved as CSV or shown to the user.
Parameters
----------
element_type: str
either 'species' or 'reaction'
recommended: list-pandas.DataFrame
result of get....ListRecommendation method
Returns
-------
:pandas.DataFrame
a single dataframe
(not a list of dataframes)
"""
etype = element_type
model = self.sbml_document.getModel()
TYPE_EXISTING_ATTR = {'species': self.species.exist_annotation,
'reaction': self.reactions.exist_annotation}
ELEMENT_FUNC = {'species': model.getSpecies,
'reaction': model.getReaction}
TYPE_LABEL = {'species': cn.REF_CHEBI2LABEL,
'reaction': cn.REF_RHEA2LABEL}
pd.set_option('display.max_colwidth', 255)
edfs = []
for one_edf in recommended:
element_id = one_edf.index.name
if one_edf.shape[0] == 0:
continue
annotations = list(one_edf['annotation'])
match_scores = list(one_edf[cn.DF_MATCH_SCORE_COL])
labels = list(one_edf['label'])
# if there is existing annotation among predicted candidates;
if element_id in TYPE_EXISTING_ATTR[etype].keys():
existings = [1 if val in TYPE_EXISTING_ATTR[etype][element_id] else 0 \
for idx, val in enumerate(one_edf['annotation'])]
upd_annotation = ['keep' if val in TYPE_EXISTING_ATTR[etype][element_id] else 'ignore' \
for idx, val in enumerate(one_edf['annotation'])]
annotation2add_raw = [val for val in TYPE_EXISTING_ATTR[etype][element_id] \
if val not in list(one_edf['annotation'])]
# only use existing annotation that exists in the label dictionaries
annotation2add = [val for val in annotation2add_raw \
if val in TYPE_LABEL[etype].keys()]
# if there doesn't exist existing annotation among predicted candidates;
else:
existings = [0] * len(annotations)
upd_annotation = ['ignore'] * len(annotations)
annotation2add = []
# handling existing annotations that were not predicted
for new_anot in annotation2add:
annotations.append(new_anot)
if etype=='reaction':
match_scores.append(self.getMatchScoreOfRHEA(element_id, new_anot))
labels.append(cn.REF_RHEA2LABEL[new_anot])
elif etype=='species':
match_scores.append(self.getMatchScoreOfCHEBI(element_id, new_anot))
labels.append(cn.REF_CHEBI2LABEL[new_anot])
existings.append(1)
upd_annotation.append('keep')
new_edf = pd.DataFrame({'type': [etype]*len(annotations),
'id': [element_id]*len(annotations),
'display name': [ELEMENT_FUNC[etype](element_id).name]*len(annotations),
'meta id': [ELEMENT_FUNC[etype](element_id).meta_id]*len(annotations),
'annotation': annotations,
'annotation label': labels,
cn.DF_MATCH_SCORE_COL: match_scores,
'existing': existings,
cn.DF_UPDATE_ANNOTATION_COL: upd_annotation})
edfs.append(new_edf)
res = pd.concat(edfs, ignore_index=True)
res.insert(0, 'file', self.fname)
return res
def getSBMLDocument(self,
sbml_document,
chosen,
auto_feedback=False):
"""
Create an updated SBML document
based on the feedback.
If auto_feedback is True,
replace 'ignore' with 'add'
and subsequently update the file.
Parameters
----------
sbml_document: libsbml.SBMLDocument
chosen: pandas.DataFrame
Returns
-------
str
SBML document
"""
model = sbml_document.getModel()
if auto_feedback:
chosen.replace('ignore', 'add', inplace=True)
ELEMENT_FUNC = {'species': model.getSpecies,
'reaction': model.getReaction}
element_types = list(np.unique(chosen['type']))
for one_type in element_types:
maker = am.AnnotationMaker(one_type)
ACTION_FUNC = {'delete': maker.deleteAnnotation,
'add': maker.addAnnotation}
df_type = chosen[chosen['type']==one_type]
uids = list(np.unique(df_type['id']))
meta_ids = {val:list(df_type[df_type['id']==val]['meta id'])[0] for val in uids}
# going through one id at a time
for one_id in uids:
orig_str = ELEMENT_FUNC[one_type](one_id).getAnnotationString()
df_id = df_type[df_type['id']==one_id]
dels = list(df_id[df_id[cn.DF_UPDATE_ANNOTATION_COL]=='delete'].loc[:, 'annotation'])
adds_raw = list(df_id[df_id[cn.DF_UPDATE_ANNOTATION_COL]=='add'].loc[:, 'annotation'])
# existing annotations to be kept
keeps = list(df_id[df_id[cn.DF_UPDATE_ANNOTATION_COL]=='keep'].loc[:, 'annotation'])
# TODO: remove RHEA
adds_raw = list(set(adds_raw + keeps))
adds = []
for one_add in adds_raw:
# if it is rhea, only store number
if one_add[:4].lower()=='rhea':
adds.append(one_add[5:])
# if it is else, store as it is
else:
adds.append(one_add)
# if type is 'reaction', need to map rhea terms back to ec/kegg terms to delete them.
if one_type == 'reaction':
rhea_del_terms = list(set(itertools.chain(*[tools.getAssociatedTermsToRhea(val) for val in dels])))
deled = maker.deleteAnnotation(rhea_del_terms, orig_str)
elif one_type == 'species':
deled = maker.deleteAnnotation(dels, orig_str)
added = maker.addAnnotation(adds, deled, meta_ids[one_id])
ELEMENT_FUNC[one_type](one_id).setAnnotation(added)
return sbml_document
def optimizePrediction(self,
pred_spec,
pred_reac):
"""
Optimize prediction using iteration.
Parameters
----------
pred_spec: list-DataFrame
Result of getSpeciesListRecommendation
with get_df=True
pred_reac: list-DataFrame
Result of getReactionListRecommendation
with get_df=True
Returns
-------
:pandas.DataFrame
Combined species and reaction recommendation table
"""
# filtering out reactions that can be updated
filt_reac = [val for val in pred_reac if val.shape[0]>0]
filt_reac_ids = [val.index.name for val in filt_reac]
spec_formulas = dict()
for one_rec in pred_spec:
formulas = list(set([cn.REF_CHEBI2FORMULA[k] \
for k in one_rec['annotation'] \
if k in cn.REF_CHEBI2FORMULA.keys()]))
spec_formulas[one_rec.index.name] = formulas
anot_iter = it.Iterator(cur_spec_formula=spec_formulas,
reaction_cl=self.reactions,
reactions_to_update=filt_reac_ids)
res_iter = anot_iter.match()
recoms_tobe_added = []
for one_spec in res_iter.keys():
pred_reac_ids = [val.index.name for val in pred_reac]
reacs_using_one_spec = [val for val in pred_reac_ids \
if one_spec in self.reactions.reaction_components[val]]
filt_pred_reac = [val for val in pred_reac if val.index.name in reacs_using_one_spec]
# match score of reactions using that species
# average of the [very first match score from each candidates set]
adj_match_score = np.mean([val['match score'].iloc[0] for val in filt_pred_reac])
cands = res_iter[one_spec]
scores = [adj_match_score for val in cands]
labels = [cn.REF_CHEBI2LABEL[val] for val in cands]
adj_recom = pd.DataFrame({'annotation': cands,
'match score': scores,
'label': labels})
adj_recom.index.name = one_spec
recoms_tobe_added.append(adj_recom)
upd_spec_dfs = recoms_tobe_added + \
[val for val in pred_spec if val.index.name not in res_iter.keys()]
# need to be converted back to namedtuple DataFrame
upd_spec_recom = [self.getRecommendationFromDataFrame(val) for val in upd_spec_dfs]
upd_reac_dfs = self.getReactionListRecommendation(pred_ids=filt_reac_ids,
spec_res=upd_spec_recom,
get_df=True)
s_df = self.getRecomTable(element_type='species',
recommended=upd_spec_dfs)
r_df = self.getRecomTable(element_type='reaction',
recommended=upd_reac_dfs)
return pd.concat([s_df, r_df], ignore_index=True)
def saveToCSV(self, obj,
fpath="recommendation.csv"):
"""
Save a completed dataframe
to csv. Doesn't proceed if obj is None,
which indicates it didn't pass the element filter.
Parameters
----------
obj: pandas.DataFrame
Object that can be saved to csv.
fpath: str
Path of the csv file to be saved.
"""
if isinstance(obj, pd.DataFrame):
obj.to_csv(fpath, index=False)
# print a summary message
for one_type in ELEMENT_TYPES:
saved_elements = list(np.unique(obj[obj['type']==one_type]['id']))
self.printSummary(saved_elements, one_type)
# def saveToSBML(self,
# fpath='model_amas_annotations.xml',
# option='augment'):
# """
# Update and save model;
# How to distinguish species vs. reactions?
# by using self.current_element_type.
# If option is 'augment',
# it'll add candidate annotations to
# existing annotation string.
# If option is 'replace',
# create a new annotation string and
# replace whatever exists.
# Default to 'augment'.
# Call annotation maker;
# Parameters
# ----------
# fpath: str
# Path to save file
# option: str
# Either 'augment' or 'replace'
# """
# model = self.sbml_document.getModel()
# ELEMENT_FUNC = {'species': model.getSpecies,
# 'reaction': model.getReaction}
# # dictionary with empty lists;
# saved_elements = {k:[] for k in ELEMENT_TYPES}
# for one_type in ELEMENT_TYPES:
# type_selection = self.selection[one_type]
# maker = am.AnnotationMaker(one_type)
# sel2save = type_selection
# for one_k in sel2save.keys():
# one_element = ELEMENT_FUNC[one_type](one_k)
# meta_id = one_element.meta_id
# df = sel2save[one_k]
# # cands2save = list(df['annotation'])
# cands2save = []
# for val2save in list(df['annotation']):
# if val2save[:4].lower() == 'rhea':
# cands2save.append(val2save[5:])
# else:
# cands2save.append(val2save)
# if cands2save:
# if option == 'augment':
# orig_annotation = one_element.getAnnotationString()
# annotation_str = maker.addAnnotation(cands2save,
# orig_annotation,
# meta_id)
# elif option == 'replace':
# annotation_str = maker.getAnnotationString(cands2save, meta_id)
# one_element.setAnnotation(annotation_str)
# saved_elements[one_type].append(one_k)
# else:
# continue
# # finally, write the sbml document
# libsbml.writeSBMLToFile(self.sbml_document, fpath)
# # Summary message
# for one_type in ELEMENT_TYPES:
# self.printSummary(saved_elements[one_type], one_type)
def printSummary(self, saved, element_type):
"""
Print out a summary of
saved element IDs and numbers.
Parameters
----------
saved: list-str
List of elements saved.
element_type: str
'species' or 'reaction'
"""
plural_str = {'species': '',
'reaction': '(s)'}
num_saved = len(saved)
if num_saved == 0:
return None
print("Annotation recommended for %d %s%s:\n[%s]\n" %\
(num_saved,
element_type,
plural_str[element_type],
', '.join(saved)))
def getMatchScoreOfCHEBI(self, inp_id, inp_chebi):
"""
Calculate match score of a species (by ID)
with a ChEBI term.
This is to inform user of how well it matches with
a specific ChEBI term.
If the ChEBI term somehow
doesn't exist in the dictionary,
0.0 will be returned.
Parameters
----------
inp_id: str
ID of a species
inp_chebi: str
A ChEBI term.
Returns
-------
res: float
"""
inp_str = self.species.getNameToUse(inp_id)
scores = self.species.getCScores(inp_strs=[inp_str],
mssc='above',
cutoff=0.0)[inp_str]
# searching for the match score
res = next((np.round(v[1], cn.ROUND_DIGITS) \
for v in scores if v[0] == inp_chebi), 0.0)
return res
def getMatchScoreOfRHEA(self, inp_id, inp_rhea):
"""
Calculate match score of a reaction (by ID)
with a Rhea term.
This is to inform user of how well it matches with
a specific Rhea term.
Parameters
----------
inp_id: str
ID of a reaction
inp_rhea: str
A Rhea term.
Returns
-------
res_match_score: float
"""
specs2predict = self.reactions.reaction_components[inp_id]
spec_results = self.getSpeciesListRecommendation(pred_ids=specs2predict,
update=False,
method='cdist',
mssc='top',
cutoff=0.0)
pred_formulas = dict()
for one_spec_res in spec_results:
chebis = [val[0] for val in one_spec_res.candidates]
forms = list(set([cn.REF_CHEBI2FORMULA[k] \
for k in chebis if k in cn.REF_CHEBI2FORMULA.keys()]))
pred_formulas[one_spec_res.id] = forms
scores = self.reactions.getRScores(spec_dict=pred_formulas,
reacs=[inp_id],
mssc='above',
cutoff=0.0)[inp_id]
# searching for the match score
res = next((np.round(v[1], cn.ROUND_DIGITS) \
for v in scores if v[0] == inp_rhea), 0.0)
return res | PypiClean |
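# A minimal end-to-end usage sketch of the Recommender class defined above
# (the file name is hypothetical):
#
#   recom = Recommender(libsbml_fpath="model.xml")
#   table = recom.recommendAnnotation(mssc="top", cutoff=0.5, outtype="table")
#   recom.saveToCSV(table, "recommendation.csv")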
# /FastNLP-1.0.1.tar.gz/FastNLP-1.0.1/fastNLP/modules/torch/decoder/crf.py
__all__ = [
"ConditionalRandomField",
"allowed_transitions"
]
from typing import Union, List, Tuple
import torch
from torch import nn
from ....core.metrics.span_f1_pre_rec_metric import _get_encoding_type_from_tag_vocab, _check_tag_vocab_and_encoding_type
from ....core.vocabulary import Vocabulary
def allowed_transitions(tag_vocab:Union[Vocabulary, dict], encoding_type:str=None, include_start_end:bool=False) -> List[Tuple[int, int]]:
r"""
Given a mapping from ``id`` to ``label``, return the list of all legal transitions ``(from_tag_id, to_tag_id)``.
:param tag_vocab: supports plain tags such as ``"B"`` or ``"M"``, as well as tag-label pairs such as ``"B-NN"`` or
``"M-NN"``, where tag and label must be separated by ``"-"``. If a :class:`dict` is passed, it must look like
``{0:"O", 1:"B-tag1"}``, i.e. **index first, tag second**.
:param encoding_type: one of ``["bio", "bmes", "bmeso", "bioes", None]``. Defaults to ``None``, in which case it is inferred from ``tag_vocab``
:param include_start_end: whether to include transitions at the start and end of a sequence. For example, in ``bio``,
``b``/``o`` may start a sequence but ``i`` may not;
- if ``True`` -- the result contains ``(start_idx, b_idx), (start_idx, o_idx)`` but not ``(start_idx, i_idx)``,
where ``start_idx=len(id2label)`` and ``end_idx=len(id2label)+1``;
- if ``False``, the result contains nothing related to start/end;
:return: a list of tuples, each inner :class:`Tuple` being a legal transition ``(from_tag_id, to_tag_id)``.
"""
if encoding_type is None:
encoding_type = _get_encoding_type_from_tag_vocab(tag_vocab)
else:
encoding_type = encoding_type.lower()
_check_tag_vocab_and_encoding_type(tag_vocab, encoding_type)
pad_token = '<pad>'
unk_token = '<unk>'
if isinstance(tag_vocab, Vocabulary):
id_label_lst = list(tag_vocab.idx2word.items())
pad_token = tag_vocab.padding
unk_token = tag_vocab.unknown
else:
id_label_lst = list(tag_vocab.items())
num_tags = len(tag_vocab)
start_idx = num_tags
end_idx = num_tags + 1
allowed_trans = []
if include_start_end:
id_label_lst += [(start_idx, 'start'), (end_idx, 'end')]
def split_tag_label(from_label):
from_label = from_label.lower()
if from_label in ['start', 'end']:
from_tag = from_label
from_label = ''
else:
from_tag = from_label[:1]
from_label = from_label[2:]
return from_tag, from_label
for from_id, from_label in id_label_lst:
if from_label in [pad_token, unk_token]:
continue
from_tag, from_label = split_tag_label(from_label)
for to_id, to_label in id_label_lst:
if to_label in [pad_token, unk_token]:
continue
to_tag, to_label = split_tag_label(to_label)
if _is_transition_allowed(encoding_type, from_tag, from_label, to_tag, to_label):
allowed_trans.append((from_id, to_id))
return allowed_trans
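# A minimal sketch of calling allowed_transitions with a plain id-to-tag dict
# (the tag inventory is hypothetical):
#
#   tag_map = {0: "B-PER", 1: "I-PER", 2: "O"}
#   trans = allowed_transitions(tag_map, encoding_type="bio", include_start_end=True)
#   # (2, 0) is in trans: O -> B-PER is legal in BIO,
#   # while (2, 1) is not: O -> I-PER is forbidden.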
def _is_transition_allowed(encoding_type, from_tag, from_label, to_tag, to_label):
r"""
:param str encoding_type: one of "BIO", "BMES", "BMESO", "bioes".
:param str from_tag: a positional tag such as "B" or "M", or one of the two special tags start and end
:param str from_label: a label such as "PER" or "LOC"
:param str to_tag: a positional tag such as "B" or "M", or one of the two special tags start and end
:param str to_label: a label such as "PER" or "LOC"
:return: bool, whether the transition is allowed
"""
if to_tag == 'start' or from_tag == 'end':
return False
encoding_type = encoding_type.lower()
if encoding_type == 'bio':
r"""
The first row is to_tag, the first column is from_tag. y: always allowed; -: allowed only when labels match; n: forbidden
+-------+---+---+---+-------+-----+
| | B | I | O | start | end |
+-------+---+---+---+-------+-----+
| B | y | - | y | n | y |
+-------+---+---+---+-------+-----+
| I | y | - | y | n | y |
+-------+---+---+---+-------+-----+
| O | y | n | y | n | y |
+-------+---+---+---+-------+-----+
| start | y | n | y | n | n |
+-------+---+---+---+-------+-----+
| end | n | n | n | n | n |
+-------+---+---+---+-------+-----+
"""
if from_tag == 'start':
return to_tag in ('b', 'o')
elif from_tag in ['b', 'i']:
return any([to_tag in ['end', 'b', 'o'], to_tag == 'i' and from_label == to_label])
elif from_tag == 'o':
return to_tag in ['end', 'b', 'o']
else:
raise ValueError("Unexpect tag {}. Expect only 'B', 'I', 'O'.".format(from_tag))
elif encoding_type == 'bmes':
r"""
The first row is to_tag, the first column is from_tag. y: always allowed; -: allowed only when labels match; n: forbidden
+-------+---+---+---+---+-------+-----+
| | B | M | E | S | start | end |
+-------+---+---+---+---+-------+-----+
| B | n | - | - | n | n | n |
+-------+---+---+---+---+-------+-----+
| M | n | - | - | n | n | n |
+-------+---+---+---+---+-------+-----+
| E | y | n | n | y | n | y |
+-------+---+---+---+---+-------+-----+
| S | y | n | n | y | n | y |
+-------+---+---+---+---+-------+-----+
| start | y | n | n | y | n | n |
+-------+---+---+---+---+-------+-----+
| end | n | n | n | n | n | n |
+-------+---+---+---+---+-------+-----+
"""
if from_tag == 'start':
return to_tag in ['b', 's']
elif from_tag == 'b':
return to_tag in ['m', 'e'] and from_label == to_label
elif from_tag == 'm':
return to_tag in ['m', 'e'] and from_label == to_label
elif from_tag in ['e', 's']:
return to_tag in ['b', 's', 'end']
else:
raise ValueError("Unexpect tag type {}. Expect only 'B', 'M', 'E', 'S'.".format(from_tag))
elif encoding_type == 'bmeso':
if from_tag == 'start':
return to_tag in ['b', 's', 'o']
elif from_tag == 'b':
return to_tag in ['m', 'e'] and from_label == to_label
elif from_tag == 'm':
return to_tag in ['m', 'e'] and from_label == to_label
elif from_tag in ['e', 's', 'o']:
return to_tag in ['b', 's', 'end', 'o']
else:
raise ValueError("Unexpect tag type {}. Expect only 'B', 'M', 'E', 'S', 'O'.".format(from_tag))
elif encoding_type == 'bioes':
if from_tag == 'start':
return to_tag in ['b', 's', 'o']
elif from_tag == 'b':
return to_tag in ['i', 'e'] and from_label == to_label
elif from_tag == 'i':
return to_tag in ['i', 'e'] and from_label == to_label
elif from_tag in ['e', 's', 'o']:
return to_tag in ['b', 's', 'end', 'o']
else:
raise ValueError("Unexpect tag type {}. Expect only 'B', 'I', 'E', 'S', 'O'.".format(from_tag))
else:
raise ValueError("Only support BIO, BMES, BMESO, BIOES encoding type, got {}.".format(encoding_type))
class ConditionalRandomField(nn.Module):
r"""
Conditional Random Field. Provides the :meth:`forward` and :meth:`viterbi_decode` methods, used for **training**
and **inference** respectively.
:param num_tags: number of tags
:param include_start_end_trans: whether to model the scores of each tag appearing at the start or the end of a sequence.
:param allowed_transitions: each inner ``Tuple[from_tag_id(int), to_tag_id(int)]`` is treated as a legal transition and
any pair not included is treated as forbidden; the list can be obtained from the :func:`allowed_transitions` function.
If ``None``, every transition is legal.
"""
def __init__(self, num_tags:int, include_start_end_trans:bool=False, allowed_transitions:List=None):
super(ConditionalRandomField, self).__init__()
self.include_start_end_trans = include_start_end_trans
self.num_tags = num_tags
# each entry in this matrix is the (from_tag_id, to_tag_id) transition score
self.trans_m = nn.Parameter(torch.randn(num_tags, num_tags))
if self.include_start_end_trans:
self.start_scores = nn.Parameter(torch.randn(num_tags))
self.end_scores = nn.Parameter(torch.randn(num_tags))
if allowed_transitions is None:
constrain = torch.zeros(num_tags + 2, num_tags + 2)
else:
constrain = torch.full((num_tags + 2, num_tags + 2), fill_value=-10000.0, dtype=torch.float)
has_start = False
has_end = False
for from_tag_id, to_tag_id in allowed_transitions:
constrain[from_tag_id, to_tag_id] = 0
if from_tag_id==num_tags:
has_start = True
if to_tag_id==num_tags+1:
has_end = True
if not has_start:
constrain[num_tags, :].fill_(0)
if not has_end:
constrain[:, num_tags+1].fill_(0)
self._constrain = nn.Parameter(constrain, requires_grad=False)
def _normalizer_likelihood(self, logits, mask):
r"""Computes the (batch_size,) denominator term for the log-likelihood, which is the
sum of the likelihoods across all possible state sequences.
:param logits:FloatTensor, ``[max_len, batch_size, num_tags]``
:param mask:ByteTensor, ``[max_len, batch_size]``
:return:FloatTensor, ``[batch_size,]``
"""
seq_len, batch_size, n_tags = logits.size()
alpha = logits[0]
if self.include_start_end_trans:
alpha = alpha + self.start_scores.view(1, -1)
flip_mask = mask.eq(False)
for i in range(1, seq_len):
emit_score = logits[i].view(batch_size, 1, n_tags)
trans_score = self.trans_m.view(1, n_tags, n_tags)
tmp = alpha.view(batch_size, n_tags, 1) + emit_score + trans_score
alpha = torch.logsumexp(tmp, 1).masked_fill(flip_mask[i].view(batch_size, 1), 0) + \
alpha.masked_fill(mask[i].eq(True).view(batch_size, 1), 0)
if self.include_start_end_trans:
alpha = alpha + self.end_scores.view(1, -1)
return torch.logsumexp(alpha, 1)
def _gold_score(self, logits, tags, mask):
r"""
Compute the score for the gold path.
:param logits: FloatTensor, ``[max_len, batch_size, num_tags]``
:param tags: LongTensor, ``[max_len, batch_size]``
:param mask: ByteTensor, ``[max_len, batch_size]``
:return: FloatTensor, ``[batch_size,]``
"""
seq_len, batch_size, _ = logits.size()
batch_idx = torch.arange(batch_size, dtype=torch.long, device=logits.device)
seq_idx = torch.arange(seq_len, dtype=torch.long, device=logits.device)
# trans_score [L-1, B]
mask = mask.eq(True)
flip_mask = mask.eq(False)
trans_score = self.trans_m[tags[:seq_len - 1], tags[1:]].masked_fill(flip_mask[1:, :], 0)
# emit_score [L, B]
emit_score = logits[seq_idx.view(-1, 1), batch_idx.view(1, -1), tags].masked_fill(flip_mask, 0)
# score [L-1, B]
score = trans_score + emit_score[:seq_len - 1, :]
score = score.sum(0) + emit_score[-1].masked_fill(flip_mask[-1], 0)
if self.include_start_end_trans:
st_scores = self.start_scores.view(1, -1).repeat(batch_size, 1)[batch_idx, tags[0]]
last_idx = mask.long().sum(0) - 1
ed_scores = self.end_scores.view(1, -1).repeat(batch_size, 1)[batch_idx, tags[last_idx, batch_idx]]
score = score + st_scores + ed_scores
# return [B,]
return score
def forward(self, feats: "torch.FloatTensor", tags: "torch.LongTensor", mask: "torch.ByteTensor") -> "torch.FloatTensor":
r"""
Compute the forward loss of the ``CRF``. Returns a :class:`torch.FloatTensor` of shape ``[batch_size,]``; you may need :func:`mean` to reduce it to a scalar loss.
:param feats: feature matrix of shape ``[batch_size, max_len, num_tags]``
:param tags: tag matrix of shape ``[batch_size, max_len]``
:param mask: shape ``[batch_size, max_len]``; positions with value **0** are treated as padding.
:return: ``[batch_size,]``
"""
feats = feats.transpose(0, 1)
tags = tags.transpose(0, 1).long()
mask = mask.transpose(0, 1).float()
all_path_score = self._normalizer_likelihood(feats, mask)
gold_path_score = self._gold_score(feats, tags, mask)
return all_path_score - gold_path_score
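# Minimal training sketch using only the API documented above (`feats` are
# assumed to come from an upstream encoder; names are illustrative, not part
# of this module):
#   crf = ConditionalRandomField(num_tags=len(tag_vocab))
#   loss = crf(feats, tags, mask).mean()
#   loss.backward()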
def viterbi_decode(self, logits: "torch.FloatTensor", mask: "torch.ByteTensor", unpad=False):
r"""给定一个 **特征矩阵** 以及 **转移分数矩阵** ,计算出最佳的路径以及对应的分数
:param logits: 特征矩阵,形状为 ``[batch_size, max_len, num_tags]``
:param mask: 标签矩阵,形状为 ``[batch_size, max_len]`` ,为 **0** 的位置认为是 padding。如果为 ``None`` ,则认为没有 padding。
:param unpad: 是否将结果删去 padding:
- 为 ``False`` 时,返回的是 ``[batch_size, max_len]`` 的张量
- 为 ``True`` 时,返回的是 :class:`List` [:class:`List` [ :class:`int` ]], 内部的 :class:`List` [:class:`int` ] 为每个
sequence 的 label ,已经除去 pad 部分,即每个 :class:`List` [ :class:`int` ] 的长度是这个 sample 的有效长度。
:return: (paths, scores)。
- ``paths`` -- 解码后的路径, 其值参照 ``unpad`` 参数.
- ``scores`` -- :class:`torch.FloatTensor` ,形状为 ``[batch_size,]`` ,对应每个最优路径的分数。
"""
batch_size, max_len, n_tags = logits.size()
seq_len = mask.long().sum(1)
logits = logits.transpose(0, 1).data # L, B, H
mask = mask.transpose(0, 1).data.eq(True) # L, B
flip_mask = mask.eq(False)
# dp
vpath = logits.new_zeros((max_len, batch_size, n_tags), dtype=torch.long)
vscore = logits[0] # bsz x n_tags
transitions = self._constrain.data.clone()
transitions[:n_tags, :n_tags] += self.trans_m.data
if self.include_start_end_trans:
transitions[n_tags, :n_tags] += self.start_scores.data
transitions[:n_tags, n_tags + 1] += self.end_scores.data
vscore += transitions[n_tags, :n_tags]
trans_score = transitions[:n_tags, :n_tags].view(1, n_tags, n_tags).data
end_trans_score = transitions[:n_tags, n_tags+1].view(1, 1, n_tags).repeat(batch_size, 1, 1) # bsz, 1, n_tags
# handle sequences of length 1
vscore += transitions[:n_tags, n_tags+1].view(1, n_tags).repeat(batch_size, 1) \
.masked_fill(seq_len.ne(1).view(-1, 1), 0)
for i in range(1, max_len):
prev_score = vscore.view(batch_size, n_tags, 1)
cur_score = logits[i].view(batch_size, 1, n_tags) + trans_score
score = prev_score + cur_score.masked_fill(flip_mask[i].view(batch_size, 1, 1), 0) # bsz x n_tag x n_tag
# account for the case where the current position is the last one in its sequence
score += end_trans_score.masked_fill(seq_len.ne(i+1).view(-1, 1, 1), 0)
best_score, best_dst = score.max(1)
vpath[i] = best_dst
# since decoding backtracks from last_tags, keep vscore intact at every position
vscore = best_score.masked_fill(flip_mask[i].view(batch_size, 1), 0) + \
vscore.masked_fill(mask[i].view(batch_size, 1), 0)
# backtrace
batch_idx = torch.arange(batch_size, dtype=torch.long, device=logits.device)
seq_idx = torch.arange(max_len, dtype=torch.long, device=logits.device)
lens = (seq_len - 1)
# idxes [L, B], batched idx from seq_len-1 to 0
idxes = (lens.view(1, -1) - seq_idx.view(-1, 1)) % max_len
ans = logits.new_empty((max_len, batch_size), dtype=torch.long)
ans_score, last_tags = vscore.max(1)
ans[idxes[0], batch_idx] = last_tags
for i in range(max_len - 1):
last_tags = vpath[idxes[i], batch_idx, last_tags]
ans[idxes[i + 1], batch_idx] = last_tags
ans = ans.transpose(0, 1)
if unpad:
paths = []
for idx, max_len in enumerate(lens):
paths.append(ans[idx, :max_len + 1].tolist())
else:
paths = ans
return paths, ans_score | PypiClean |
/Hikka_Pyro-2.0.66-py3-none-any.whl/pyrogram/methods/advanced/save_file.py |
import asyncio
import functools
import inspect
import io
import logging
import math
import os
from hashlib import md5
from pathlib import PurePath
from typing import Union, BinaryIO, Callable
import pyrogram
from pyrogram import StopTransmission
from pyrogram import raw
from pyrogram.session import Session
log = logging.getLogger(__name__)
class SaveFile:
async def save_file(
self: "pyrogram.Client",
path: Union[str, BinaryIO],
file_id: int = None,
file_part: int = 0,
progress: Callable = None,
progress_args: tuple = ()
):
"""Upload a file onto Telegram servers, without actually sending the message to anyone.
Useful whenever an InputFile type is required.
.. note::
This is a utility method intended to be used **only** when working with raw
:obj:`functions <pyrogram.api.functions>` (i.e., a Telegram API method you wish to use which is not
available yet in the Client class as an easy-to-use method).
.. include:: /_includes/usable-by/users-bots.rst
Parameters:
path (``str`` | ``BinaryIO``):
The path of the file you want to upload that exists on your local machine or a binary file-like object
with its attribute ".name" set for in-memory uploads.
file_id (``int``, *optional*):
In case a file part expired, pass the file_id and the file_part to retry uploading that specific chunk.
file_part (``int``, *optional*):
In case a file part expired, pass the file_id and the file_part to retry uploading that specific chunk.
progress (``Callable``, *optional*):
Pass a callback function to view the file transmission progress.
The function must take *(current, total)* as positional arguments (look at Other Parameters below for a
detailed description) and will be called back each time a new file chunk has been successfully
transmitted.
progress_args (``tuple``, *optional*):
Extra custom arguments for the progress callback function.
You can pass anything you need to be available in the progress callback scope; for example, a Message
object or a Client instance in order to edit the message with the updated progress status.
Other Parameters:
current (``int``):
The amount of bytes transmitted so far.
total (``int``):
The total size of the file.
*args (``tuple``, *optional*):
Extra custom arguments as defined in the ``progress_args`` parameter.
You can either keep ``*args`` or add every single extra argument in your function signature.
Returns:
``InputFile``: On success, the uploaded file is returned in form of an InputFile object.
Raises:
RPCError: In case of a Telegram RPC error.
"""
if path is None:
return None
async def worker(session):
while True:
data = await queue.get()
if data is None:
return
try:
await session.invoke(data)
except Exception as e:
log.error(e)
part_size = 512 * 1024
if isinstance(path, (str, PurePath)):
fp = open(path, "rb")
elif isinstance(path, io.IOBase):
fp = path
else:
raise ValueError("Invalid file. Expected a file path as string or a binary (not text) file pointer")
file_name = getattr(fp, "name", "file.jpg")
fp.seek(0, os.SEEK_END)
file_size = fp.tell()
fp.seek(0)
if file_size == 0:
raise ValueError("File size equals to 0 B")
file_size_limit_mib = 4000 if self.me.is_premium else 2000
if file_size > file_size_limit_mib * 1024 * 1024:
raise ValueError(f"Can't upload files bigger than {file_size_limit_mib} MiB")
file_total_parts = int(math.ceil(file_size / part_size))
is_big = file_size > 10 * 1024 * 1024
pool_size = 3 if is_big else 1
workers_count = 4 if is_big else 1
is_missing_part = file_id is not None
file_id = file_id or self.rnd_id()
md5_sum = md5() if not is_big and not is_missing_part else None
pool = [
Session(
self, await self.storage.dc_id(), await self.storage.auth_key(),
await self.storage.test_mode(), is_media=True
) for _ in range(pool_size)
]
workers = [self.loop.create_task(worker(session)) for session in pool for _ in range(workers_count)]
queue = asyncio.Queue(16)
try:
for session in pool:
await session.start()
fp.seek(part_size * file_part)
while True:
chunk = fp.read(part_size)
if not chunk:
if not is_big and not is_missing_part:
md5_sum = "".join([hex(i)[2:].zfill(2) for i in md5_sum.digest()])
break
if is_big:
rpc = raw.functions.upload.SaveBigFilePart(
file_id=file_id,
file_part=file_part,
file_total_parts=file_total_parts,
bytes=chunk
)
else:
rpc = raw.functions.upload.SaveFilePart(
file_id=file_id,
file_part=file_part,
bytes=chunk
)
await queue.put(rpc)
if is_missing_part:
return
if not is_big and not is_missing_part:
md5_sum.update(chunk)
file_part += 1
if progress:
func = functools.partial(
progress,
min(file_part * part_size, file_size),
file_size,
*progress_args
)
if inspect.iscoroutinefunction(progress):
await func()
else:
await self.loop.run_in_executor(self.executor, func)
except StopTransmission:
raise
except Exception as e:
log.error(e, exc_info=True)
else:
if is_big:
return raw.types.InputFileBig(
id=file_id,
parts=file_total_parts,
name=file_name,
)
else:
return raw.types.InputFile(
id=file_id,
parts=file_total_parts,
name=file_name,
md5_checksum=md5_sum
)
finally:
for _ in workers:
await queue.put(None)
await asyncio.gather(*workers)
for session in pool:
await session.stop()
if isinstance(path, (str, PurePath)):
fp.close() | PypiClean |
/FreePyBX-1.0-RC1.tar.gz/FreePyBX-1.0-RC1/freepybx/public/js/dojox/grid/enhanced/plugins/NestedSorting.js | define("dojox/grid/enhanced/plugins/NestedSorting",["dojo/_base/declare","dojo/_base/array","dojo/_base/connect","dojo/_base/lang","dojo/_base/html","dojo/_base/event","dojo/_base/window","dojo/keys","dojo/query","dojo/string","../_Plugin","../../EnhancedGrid"],function(_1,_2,_3,_4,_5,_6,_7,_8,_9,_a,_b,_c){
var _d=_1("dojox.grid.enhanced.plugins.NestedSorting",_b,{name:"nestedSorting",_currMainSort:"none",_currRegionIdx:-1,_a11yText:{"dojoxGridDescending":"▾","dojoxGridAscending":"▴","dojoxGridAscendingTip":"۸","dojoxGridDescendingTip":"۷","dojoxGridUnsortedTip":"x"},constructor:function(){
this._sortDef=[];
this._sortData={};
this._headerNodes={};
this._excludedCoIdx=[];
this.nls=this.grid._nls;
this.grid.setSortInfo=function(){
};
this.grid.setSortIndex=_4.hitch(this,"_setGridSortIndex");
this.grid.getSortIndex=function(){
};
this.grid.getSortProps=_4.hitch(this,"getSortProps");
if(this.grid.sortFields){
this._setGridSortIndex(this.grid.sortFields,null,true);
}
this.connect(this.grid.views,"render","_initSort");
this.initCookieHandler();
this.subscribe("dojox/grid/rearrange/move/"+this.grid.id,_4.hitch(this,"_onColumnDnD"));
},onStartUp:function(){
this.inherited(arguments);
this.connect(this.grid,"onHeaderCellClick","_onHeaderCellClick");
this.connect(this.grid,"onHeaderCellMouseOver","_onHeaderCellMouseOver");
this.connect(this.grid,"onHeaderCellMouseOut","_onHeaderCellMouseOut");
},_onColumnDnD:function(_e,_f){
if(_e!=="col"){
return;
}
var m=_f,obj={},d=this._sortData,p;
var cr=this._getCurrentRegion();
this._blurRegion(cr);
var idx=this._getRegionHeader(cr).getAttribute("idx");
for(p in m){
if(d[p]){
obj[m[p]]=d[p];
delete d[p];
}
if(p===idx){
idx=m[p];
}
}
for(p in obj){
d[p]=obj[p];
}
var c=this._headerNodes[idx];
this._currRegionIdx=_2.indexOf(this._getRegions(),c.firstChild);
this._initSort(false);
},_setGridSortIndex:function(_10,_11,_12){
if(_4.isArray(_10)){
var i,d,_13;
for(i=0;i<_10.length;i++){
d=_10[i];
_13=this.grid.getCellByField(d.attribute);
if(!_13){
console.warn("Invalid sorting option, column ",d.attribute," not found.");
return;
}
if(_13["nosort"]||!this.grid.canSort(_13.index,_13.field)){
console.warn("Invalid sorting option, column ",d.attribute," is unsortable.");
return;
}
}
this.clearSort();
_2.forEach(_10,function(d,i){
_13=this.grid.getCellByField(d.attribute);
this.setSortData(_13.index,"index",i);
this.setSortData(_13.index,"order",d.descending?"desc":"asc");
},this);
}else{
if(!isNaN(_10)){
if(_11===undefined){
return;
}
this.setSortData(_10,"order",_11?"asc":"desc");
}else{
return;
}
}
this._updateSortDef();
if(!_12){
this.grid.sort();
}
},getSortProps:function(){
return this._sortDef.length?this._sortDef:null;
},_initSort:function(_14){
var g=this.grid,n=g.domNode,len=this._sortDef.length;
_5.toggleClass(n,"dojoxGridSorted",!!len);
_5.toggleClass(n,"dojoxGridSingleSorted",len===1);
_5.toggleClass(n,"dojoxGridNestSorted",len>1);
if(len>0){
this._currMainSort=this._sortDef[0].descending?"desc":"asc";
}
var idx,_15=this._excludedCoIdx=[];
this._headerNodes=_9("th",g.viewsHeaderNode).forEach(function(n){
idx=parseInt(n.getAttribute("idx"),10);
if(_5.style(n,"display")==="none"||g.layout.cells[idx]["nosort"]||(g.canSort&&!g.canSort(idx,g.layout.cells[idx]["field"]))){
_15.push(idx);
}
});
this._headerNodes.forEach(this._initHeaderNode,this);
this._initFocus();
if(_14){
this._focusHeader();
}
},_initHeaderNode:function(_16){
_5.toggleClass(_16,"dojoxGridSortNoWrap",true);
var _17=_9(".dojoxGridSortNode",_16)[0];
if(_17){
_5.toggleClass(_17,"dojoxGridSortNoWrap",true);
}
if(_2.indexOf(this._excludedCoIdx,_16.getAttribute("idx"))>=0){
_5.addClass(_16,"dojoxGridNoSort");
return;
}
if(!_9(".dojoxGridSortBtn",_16).length){
this._connects=_2.filter(this._connects,function(_18){
if(_18._sort){
_3.disconnect(_18);
return false;
}
return true;
});
var n=_5.create("a",{className:"dojoxGridSortBtn dojoxGridSortBtnNested",title:_a.substitute(this.nls.sortingState,[this.nls.nestedSort,this.nls.ascending]),innerHTML:"1"},_16.firstChild,"last");
n.onmousedown=_6.stop;
n=_5.create("a",{className:"dojoxGridSortBtn dojoxGridSortBtnSingle",title:_a.substitute(this.nls.sortingState,[this.nls.singleSort,this.nls.ascending])},_16.firstChild,"last");
n.onmousedown=_6.stop;
}else{
var a1=_9(".dojoxGridSortBtnSingle",_16)[0];
var a2=_9(".dojoxGridSortBtnNested",_16)[0];
a1.className="dojoxGridSortBtn dojoxGridSortBtnSingle";
a2.className="dojoxGridSortBtn dojoxGridSortBtnNested";
a2.innerHTML="1";
_5.removeClass(_16,"dojoxGridCellShowIndex");
_5.removeClass(_16.firstChild,"dojoxGridSortNodeSorted");
_5.removeClass(_16.firstChild,"dojoxGridSortNodeAsc");
_5.removeClass(_16.firstChild,"dojoxGridSortNodeDesc");
_5.removeClass(_16.firstChild,"dojoxGridSortNodeMain");
_5.removeClass(_16.firstChild,"dojoxGridSortNodeSub");
}
this._updateHeaderNodeUI(_16);
},_onHeaderCellClick:function(e){
this._focusRegion(e.target);
if(_5.hasClass(e.target,"dojoxGridSortBtn")){
this._onSortBtnClick(e);
_6.stop(e);
this._focusRegion(this._getCurrentRegion());
}
},_onHeaderCellMouseOver:function(e){
if(!e.cell){
return;
}
if(this._sortDef.length>1){
return;
}
if(this._sortData[e.cellIndex]&&this._sortData[e.cellIndex].index===0){
return;
}
var p;
for(p in this._sortData){
if(this._sortData[p]&&this._sortData[p].index===0){
_5.addClass(this._headerNodes[p],"dojoxGridCellShowIndex");
break;
}
}
if(!_5.hasClass(_7.body(),"dijit_a11y")){
return;
}
var i=e.cell.index,_19=e.cellNode;
var _1a=_9(".dojoxGridSortBtnSingle",_19)[0];
var _1b=_9(".dojoxGridSortBtnNested",_19)[0];
var _1c="none";
if(_5.hasClass(this.grid.domNode,"dojoxGridSingleSorted")){
_1c="single";
}else{
if(_5.hasClass(this.grid.domNode,"dojoxGridNestSorted")){
_1c="nested";
}
}
var _1d=_1b.getAttribute("orderIndex");
if(_1d===null||_1d===undefined){
_1b.setAttribute("orderIndex",_1b.innerHTML);
_1d=_1b.innerHTML;
}
if(this.isAsc(i)){
_1b.innerHTML=_1d+this._a11yText.dojoxGridDescending;
}else{
if(this.isDesc(i)){
_1b.innerHTML=_1d+this._a11yText.dojoxGridUnsortedTip;
}else{
_1b.innerHTML=_1d+this._a11yText.dojoxGridAscending;
}
}
if(this._currMainSort==="none"){
_1a.innerHTML=this._a11yText.dojoxGridAscending;
}else{
if(this._currMainSort==="asc"){
_1a.innerHTML=this._a11yText.dojoxGridDescending;
}else{
if(this._currMainSort==="desc"){
_1a.innerHTML=this._a11yText.dojoxGridUnsortedTip;
}
}
}
},_onHeaderCellMouseOut:function(e){
var p;
for(p in this._sortData){
if(this._sortData[p]&&this._sortData[p].index===0){
_5.removeClass(this._headerNodes[p],"dojoxGridCellShowIndex");
break;
}
}
},_onSortBtnClick:function(e){
var _1e=e.cell.index;
if(_5.hasClass(e.target,"dojoxGridSortBtnSingle")){
this._prepareSingleSort(_1e);
}else{
if(_5.hasClass(e.target,"dojoxGridSortBtnNested")){
this._prepareNestedSort(_1e);
}else{
return;
}
}
_6.stop(e);
this._doSort(_1e);
},_doSort:function(_1f){
if(!this._sortData[_1f]||!this._sortData[_1f].order){
this.setSortData(_1f,"order","asc");
}else{
if(this.isAsc(_1f)){
this.setSortData(_1f,"order","desc");
}else{
if(this.isDesc(_1f)){
this.removeSortData(_1f);
}
}
}
this._updateSortDef();
this.grid.sort();
this._initSort(true);
},setSortData:function(_20,_21,_22){
var sd=this._sortData[_20];
if(!sd){
sd=this._sortData[_20]={};
}
sd[_21]=_22;
},removeSortData:function(_23){
var d=this._sortData,i=d[_23].index,p;
delete d[_23];
for(p in d){
if(d[p].index>i){
d[p].index--;
}
}
},_prepareSingleSort:function(_24){
var d=this._sortData,p;
for(p in d){
delete d[p];
}
this.setSortData(_24,"index",0);
this.setSortData(_24,"order",this._currMainSort==="none"?null:this._currMainSort);
if(!this._sortData[_24]||!this._sortData[_24].order){
this._currMainSort="asc";
}else{
if(this.isAsc(_24)){
this._currMainSort="desc";
}else{
if(this.isDesc(_24)){
this._currMainSort="none";
}
}
}
},_prepareNestedSort:function(_25){
var i=this._sortData[_25]?this._sortData[_25].index:null;
if(i===0||!!i){
return;
}
this.setSortData(_25,"index",this._sortDef.length);
},_updateSortDef:function(){
this._sortDef.length=0;
var d=this._sortData,p;
for(p in d){
this._sortDef[d[p].index]={attribute:this.grid.layout.cells[p].field,descending:d[p].order==="desc"};
}
},_updateHeaderNodeUI:function(_26){
var _27=this._getCellByNode(_26);
var _28=_27.index;
var _29=this._sortData[_28];
var _2a=_9(".dojoxGridSortNode",_26)[0];
var _2b=_9(".dojoxGridSortBtnSingle",_26)[0];
var _2c=_9(".dojoxGridSortBtnNested",_26)[0];
_5.toggleClass(_2b,"dojoxGridSortBtnAsc",this._currMainSort==="asc");
_5.toggleClass(_2b,"dojoxGridSortBtnDesc",this._currMainSort==="desc");
if(this._currMainSort==="asc"){
_2b.title=_a.substitute(this.nls.sortingState,[this.nls.singleSort,this.nls.descending]);
}else{
if(this._currMainSort==="desc"){
_2b.title=_a.substitute(this.nls.sortingState,[this.nls.singleSort,this.nls.unsorted]);
}else{
_2b.title=_a.substitute(this.nls.sortingState,[this.nls.singleSort,this.nls.ascending]);
}
}
var _2d=this;
function _2e(){
var _2f="Column "+(_27.index+1)+" "+_27.field;
var _30="none";
var _31="ascending";
if(_29){
_30=_29.order==="asc"?"ascending":"descending";
_31=_29.order==="asc"?"descending":"none";
}
var _32=_2f+" - is sorted by "+_30;
var _33=_2f+" - is nested sorted by "+_30;
var _34=_2f+" - choose to sort by "+_31;
var _35=_2f+" - choose to nested sort by "+_31;
_2b.setAttribute("aria-label",_32);
_2c.setAttribute("aria-label",_33);
var _36=[_2d.connect(_2b,"onmouseover",function(){
_2b.setAttribute("aria-label",_34);
}),_2d.connect(_2b,"onmouseout",function(){
_2b.setAttribute("aria-label",_32);
}),_2d.connect(_2c,"onmouseover",function(){
_2c.setAttribute("aria-label",_35);
}),_2d.connect(_2c,"onmouseout",function(){
_2c.setAttribute("aria-label",_33);
})];
_2.forEach(_36,function(_37){
_37._sort=true;
});
};
_2e();
var _38=_5.hasClass(_7.body(),"dijit_a11y");
if(!_29){
_2c.innerHTML=this._sortDef.length+1;
_2c.title=_a.substitute(this.nls.sortingState,[this.nls.nestedSort,this.nls.ascending]);
if(_38){
_2a.innerHTML=this._a11yText.dojoxGridUnsortedTip;
}
return;
}
if(_29.index||(_29.index===0&&this._sortDef.length>1)){
_2c.innerHTML=_29.index+1;
}
_5.addClass(_2a,"dojoxGridSortNodeSorted");
if(this.isAsc(_28)){
_5.addClass(_2a,"dojoxGridSortNodeAsc");
_2c.title=_a.substitute(this.nls.sortingState,[this.nls.nestedSort,this.nls.descending]);
if(_38){
_2a.innerHTML=this._a11yText.dojoxGridAscendingTip;
}
}else{
if(this.isDesc(_28)){
_5.addClass(_2a,"dojoxGridSortNodeDesc");
_2c.title=_a.substitute(this.nls.sortingState,[this.nls.nestedSort,this.nls.unsorted]);
if(_38){
_2a.innerHTML=this._a11yText.dojoxGridDescendingTip;
}
}
}
_5.addClass(_2a,(_29.index===0?"dojoxGridSortNodeMain":"dojoxGridSortNodeSub"));
},isAsc:function(_39){
return this._sortData[_39].order==="asc";
},isDesc:function(_3a){
return this._sortData[_3a].order==="desc";
},_getCellByNode:function(_3b){
var i;
for(i=0;i<this._headerNodes.length;i++){
if(this._headerNodes[i]===_3b){
return this.grid.layout.cells[i];
}
}
return null;
},clearSort:function(){
this._sortData={};
this._sortDef.length=0;
},initCookieHandler:function(){
if(this.grid.addCookieHandler){
this.grid.addCookieHandler({name:"sortOrder",onLoad:_4.hitch(this,"_loadNestedSortingProps"),onSave:_4.hitch(this,"_saveNestedSortingProps")});
}
},_loadNestedSortingProps:function(_3c,_3d){
this._setGridSortIndex(_3c);
},_saveNestedSortingProps:function(_3e){
return this.getSortProps();
},_initFocus:function(){
var f=this.focus=this.grid.focus;
this._focusRegions=this._getRegions();
if(!this._headerArea){
var _3f=this._headerArea=f.getArea("header");
_3f.onFocus=f.focusHeader=_4.hitch(this,"_focusHeader");
_3f.onBlur=f.blurHeader=f._blurHeader=_4.hitch(this,"_blurHeader");
_3f.onMove=_4.hitch(this,"_onMove");
_3f.onKeyDown=_4.hitch(this,"_onKeyDown");
_3f._regions=[];
_3f.getRegions=null;
this.connect(this.grid,"onBlur","_blurHeader");
}
},_focusHeader:function(e){
if(this._currRegionIdx===-1){
this._onMove(0,1,null);
}else{
this._focusRegion(this._getCurrentRegion());
}
try{
_6.stop(e);
}
catch(e){
}
return true;
},_blurHeader:function(e){
this._blurRegion(this._getCurrentRegion());
return true;
},_onMove:function(_40,_41,e){
var _42=this._currRegionIdx||0,_43=this._focusRegions;
var _44=_43[_42+_41];
if(!_44){
return;
}else{
if(_5.style(_44,"display")==="none"||_5.style(_44,"visibility")==="hidden"){
this._onMove(_40,_41+(_41>0?1:-1),e);
return;
}
}
this._focusRegion(_44);
var _45=this._getRegionView(_44);
_45.scrollboxNode.scrollLeft=_45.headerNode.scrollLeft;
},_onKeyDown:function(e,_46){
if(_46){
switch(e.keyCode){
case _8.ENTER:
case _8.SPACE:
if(_5.hasClass(e.target,"dojoxGridSortBtnSingle")||_5.hasClass(e.target,"dojoxGridSortBtnNested")){
this._onSortBtnClick(e);
}
}
}
},_getRegionView:function(_47){
var _48=_47;
while(_48&&!_5.hasClass(_48,"dojoxGridHeader")){
_48=_48.parentNode;
}
if(_48){
return _2.filter(this.grid.views.views,function(_49){
return _49.headerNode===_48;
})[0]||null;
}
return null;
},_getRegions:function(){
var _4a=[],_4b=this.grid.layout.cells;
this._headerNodes.forEach(function(n,i){
if(_5.style(n,"display")==="none"){
return;
}
if(_4b[i]["isRowSelector"]){
_4a.push(n);
return;
}
_9(".dojoxGridSortNode,.dojoxGridSortBtnNested,.dojoxGridSortBtnSingle",n).forEach(function(_4c){
_4c.setAttribute("tabindex",0);
_4a.push(_4c);
});
},this);
return _4a;
},_focusRegion:function(_4d){
if(!_4d){
return;
}
var _4e=this._getCurrentRegion();
if(_4e&&_4d!==_4e){
this._blurRegion(_4e);
}
var _4f=this._getRegionHeader(_4d);
_5.addClass(_4f,"dojoxGridCellSortFocus");
if(_5.hasClass(_4d,"dojoxGridSortNode")){
_5.addClass(_4d,"dojoxGridSortNodeFocus");
}else{
if(_5.hasClass(_4d,"dojoxGridSortBtn")){
_5.addClass(_4d,"dojoxGridSortBtnFocus");
}
}
_4d.focus();
this.focus.currentArea("header");
this._currRegionIdx=_2.indexOf(this._focusRegions,_4d);
},_blurRegion:function(_50){
if(!_50){
return;
}
var _51=this._getRegionHeader(_50);
_5.removeClass(_51,"dojoxGridCellSortFocus");
if(_5.hasClass(_50,"dojoxGridSortNode")){
_5.removeClass(_50,"dojoxGridSortNodeFocus");
}else{
if(_5.hasClass(_50,"dojoxGridSortBtn")){
_5.removeClass(_50,"dojoxGridSortBtnFocus");
}
}
_50.blur();
},_getCurrentRegion:function(){
return this._focusRegions?this._focusRegions[this._currRegionIdx]:null;
},_getRegionHeader:function(_52){
while(_52&&!_5.hasClass(_52,"dojoxGridCell")){
_52=_52.parentNode;
}
return _52;
},destroy:function(){
this._sortDef=this._sortData=null;
this._headerNodes=this._focusRegions=null;
this.inherited(arguments);
}});
_c.registerPlugin(_d);
return _d;
}); | PypiClean |
/Chameleon-4.1.0.tar.gz/Chameleon-4.1.0/src/chameleon/zpt/template.py | from functools import partial
from hashlib import sha256
from os.path import dirname
from ..astutil import Symbol
from ..compiler import ExpressionEngine
from ..i18n import simple_translate
from ..loader import TemplateLoader
from ..tal import RepeatDict
from ..tales import DEFAULT_MARKER
from ..tales import ExistsExpr
from ..tales import ExpressionParser
from ..tales import ImportExpr
from ..tales import NotExpr
from ..tales import ProxyExpr
from ..tales import PythonExpr
from ..tales import StringExpr
from ..tales import StructureExpr
from ..template import BaseTemplate
from ..template import BaseTemplateFile
from .program import MacroProgram
try:
bytes
except NameError:
bytes = str
class PageTemplate(BaseTemplate):
"""Constructor for the page template language.
Takes a string input as the only positional argument::
template = PageTemplate("<div>Hello, ${name}.</div>")
Configuration (keyword arguments):
``auto_reload``
Enables automatic reload of templates. This is mostly useful
in a development mode since it takes a significant performance
hit.
``default_expression``
Set the default expression type. The default setting is
``python``.
``encoding``
The default text substitution value is a string.
Pass an encoding to allow encoded byte string input
(e.g. UTF-8).
``boolean_attributes``
Attributes included in this set are treated as booleans: if a
true value is provided, the attribute value is the attribute
name, e.g.::
boolean_attributes = {"selected"}
If we insert an attribute with the name "selected" and
provide a true value, the attribute will be rendered::
selected="selected"
If a false attribute is provided (including the empty string),
the attribute is dropped.
The special return value ``default`` drops or inserts the
attribute based on the value element attribute value.
``translate``
Use this option to set a translation function.
Example::
def translate(msgid, domain=None, mapping=None, default=None,
context=None):
...
return translation
Note that if ``target_language`` is provided at render time,
the translation function must support this argument.
``implicit_i18n_translate``
Enables implicit translation for text appearing inside
elements. Default setting is ``False``.
While implicit translation does work for text that includes
expression interpolation, each expression must be simply a
variable name (e.g. ``${foo}``); otherwise, the text will not
be marked for translation.
``implicit_i18n_attributes``
Any attribute contained in this set will be marked for
implicit translation. Each entry must be a lowercase string.
Example::
implicit_i18n_attributes = set(['alt', 'title'])
``on_error_handler``
This is an optional exception handler that is invoked during the
"on-error" fallback block.
``strict``
Enabled by default. If disabled, expressions are only required
to be valid at evaluation time.
This setting exists to provide compatibility with the
reference implementation which compiles expressions at
evaluation time.
``trim_attribute_space``
If set, additional attribute whitespace will be stripped.
``restricted_namespace``
True by default. If set to False, all namespaces except Chameleon's
default namespaces are ignored. This is useful when working with
attribute-based JavaScript template renderers such as Vue.js.
Example:
<div v-bind:id="dynamicId"></div>
<button v-on:click="greet">Greet</button>
<button @click="greet">Greet</button>
``tokenizer``
None by default. If provided, this tokenizer is used instead of the
default (which is selected based on the template mode parameter.)
``value_repr``
This can be used to override the default value representation
function which is used to format values when formatting an
exception output. The function must not raise an exception (it
should be safe to call with any value).
``default_marker``
This default marker is used as the marker object bound to the `default`
name available to any expression. The semantics is such that if an
expression evaluates to the marker object, the default action is used;
for an attribute expression, this is the static attribute text; for an
element this is the static element text. If there is no static text
then the default action is similar to an expression result of `None`.
Output is of type ``str``.
"""
expression_types = {
'python': PythonExpr,
'string': StringExpr,
'not': NotExpr,
'exists': ExistsExpr,
'import': ImportExpr,
'structure': StructureExpr,
}
default_expression = 'python'
translate = staticmethod(simple_translate)
encoding = None
boolean_attributes = set()
mode = "xml"
implicit_i18n_translate = False
implicit_i18n_attributes = set()
trim_attribute_space = False
enable_data_attributes = False
enable_comment_interpolation = True
on_error_handler = None
restricted_namespace = True
tokenizer = None
default_marker = Symbol(DEFAULT_MARKER)
def __init__(self, body, **config):
self.macros = Macros(self)
super().__init__(body, **config)
def __getitem__(self, name):
return self.macros[name]
@property
def builtins(self):
return self._builtins()
@property
def engine(self):
return partial(
ExpressionEngine,
self.expression_parser,
default_marker=self.default_marker,
)
@property
def expression_parser(self):
return ExpressionParser(self.expression_types, self.default_expression)
def parse(self, body):
return MacroProgram(
body, self.mode, self.filename,
escape=True if self.mode == "xml" else False,
default_marker=self.default_marker,
boolean_attributes=self.boolean_attributes,
implicit_i18n_translate=self.implicit_i18n_translate,
implicit_i18n_attributes=self.implicit_i18n_attributes,
trim_attribute_space=self.trim_attribute_space,
enable_data_attributes=self.enable_data_attributes,
enable_comment_interpolation=self.enable_comment_interpolation,
restricted_namespace=self.restricted_namespace,
tokenizer=self.tokenizer
)
def render(self, encoding=None, **_kw):
"""Render template to string.
If provided, the ``encoding`` argument overrides the template
default value.
Additional keyword arguments are passed as template variables.
In addition, some also have a special meaning:
``translate``
This keyword argument will override the default template
translate function.
``target_language``
This will be used as the default argument to the translate
function if no `i18n:target` value is provided.
If not provided, the `translate` function will need to
negotiate a language based on the provided context.
"""
translate = _kw.get('translate')
if translate is None:
translate = self.translate
# This should not be necessary, but we include it for
# backward compatibility.
if translate is None:
translate = type(self).translate
encoding = encoding if encoding is not None else self.encoding
if encoding is not None:
def translate(msgid, txl=translate, encoding=encoding, **kwargs):
if isinstance(msgid, bytes):
msgid = bytes.decode(msgid, encoding)
return txl(msgid, **kwargs)
def decode(inst, encoding=encoding):
return bytes.decode(inst, encoding, 'ignore')
else:
decode = bytes.decode
target_language = _kw.get('target_language')
setdefault = _kw.setdefault
setdefault("__translate", translate)
setdefault("__convert",
partial(translate, target_language=target_language))
setdefault("__decode", decode)
setdefault("target_language", None)
setdefault("__on_error_handler", self.on_error_handler)
# Make sure we have a repeat dictionary
if 'repeat' not in _kw:
_kw['repeat'] = RepeatDict({})
return super().render(**_kw)
def include(self, *args, **kwargs):
self.cook_check()
self._render(*args, **kwargs)
def digest(self, body, names):
hex = super().digest(body, names)
if isinstance(hex, str):
hex = hex.encode('utf-8')
digest = sha256(hex)
digest.update(';'.join(names).encode('utf-8'))
for attr in (
'trim_attribute_space',
'implicit_i18n_translate',
'strict'
):
v = getattr(self, attr)
digest.update(
(";{}={}".format(attr, str(v))).encode('ascii')
)
return digest.hexdigest()[:32]
def _builtins(self):
return {
'template': self,
'macros': self.macros,
'nothing': None,
}
class PageTemplateFile(PageTemplate, BaseTemplateFile):
"""File-based constructor.
Takes a string input as the only positional argument::
template = PageTemplateFile(absolute_path)
Note that the file-based template class comes with the expression
type ``load`` which loads templates relative to the provided
filename.
Below are listed the configuration arguments specific to
file-based templates; see the string-based template class for
general options and documentation:
Configuration (keyword arguments):
``loader_class``
The provided class will be used to create the template loader
object. The default implementation supports relative and
absolute path specs.
The class must accept keyword arguments ``search_path``
(sequence of paths to search for relative a path spec) and
``default_extension`` (if provided, this should be added to
any path spec).
``prepend_relative_search_path``
Inserts the path relative to the provided template file path
into the template search path.
The default setting is ``True``.
``search_path``
If provided, this is used as the search path for the ``load:``
expression. It must be a string or an iterable yielding a
sequence of strings.
"""
expression_types = PageTemplate.expression_types.copy()
expression_types['load'] = partial(
ProxyExpr, '__loader',
ignore_prefix=False
)
prepend_relative_search_path = True
def __init__(self, filename, search_path=None, loader_class=TemplateLoader,
**config):
if search_path is None:
search_path = []
else:
if isinstance(search_path, str):
search_path = [search_path]
else:
search_path = list(search_path)
def post_init():
# If the flag is set (this is the default), prepend the path
# relative to the template file to the search path
if self.prepend_relative_search_path:
path = dirname(self.filename)
search_path.insert(0, path)
loader = loader_class(search_path=search_path, **config)
template_class = type(self)
# Bind relative template loader instance to the same template
# class, providing the same keyword arguments.
self._loader = loader.bind(template_class)
super().__init__(
filename,
post_init_hook=post_init,
**config
)
def _builtins(self):
d = super()._builtins()
d['__loader'] = self._loader
return d
class PageTextTemplate(PageTemplate):
"""Text-based template class.
Takes a non-XML input::
template = PageTextTemplate("Hello, ${name}.")
This is similar to the standard library class ``string.Template``,
but uses the expression engine to substitute variables.
"""
mode = "text"
class PageTextTemplateFile(PageTemplateFile):
"""File-based constructor."""
mode = "text"
def render(self, **vars):
result = super().render(**vars)
return result.encode(self.encoding or 'utf-8')
class Macro:
__slots__ = "include",
def __init__(self, render):
self.include = render
class Macros:
__slots__ = "template",
def __init__(self, template):
self.template = template
def __getitem__(self, name):
name = name.replace('-', '_')
self.template.cook_check()
try:
function = getattr(self.template, "_render_%s" % name)
except AttributeError:
raise KeyError(
"Macro does not exist: '%s'." % name)
return Macro(function)
@property
def names(self):
self.template.cook_check()
result = []
for name in self.template.__dict__:
if name.startswith('_render_'):
result.append(name[8:])
return result | PypiClean |
/Nuitka_fixed-1.1.2-cp310-cp310-win_amd64.whl/nuitka/plugins/standard/AntiBloatPlugin.py | import ast
import re
from nuitka.containers.OrderedDicts import OrderedDict
from nuitka.Errors import NuitkaForbiddenImportEncounter
from nuitka.plugins.PluginBase import NuitkaPluginBase
from nuitka.utils.ModuleNames import ModuleName
from nuitka.utils.Yaml import getYamlPackageConfiguration
class NuitkaPluginAntiBloat(NuitkaPluginBase):
plugin_name = "anti-bloat"
plugin_desc = (
"Patch stupid imports out of widely used library modules source codes."
)
@staticmethod
def isAlwaysEnabled():
return True
def __init__(
self,
noinclude_setuptools_mode,
noinclude_pytest_mode,
noinclude_unittest_mode,
noinclude_ipython_mode,
noinclude_default_mode,
custom_choices,
show_changes,
):
# Many details, due to many repetitive arguments, pylint: disable=too-many-branches
self.show_changes = show_changes
# Default manually to default argument value:
if noinclude_setuptools_mode is None:
noinclude_setuptools_mode = noinclude_default_mode
if noinclude_pytest_mode is None:
noinclude_pytest_mode = noinclude_default_mode
if noinclude_unittest_mode is None:
noinclude_unittest_mode = noinclude_default_mode
if noinclude_ipython_mode is None:
noinclude_ipython_mode = noinclude_default_mode
self.config = getYamlPackageConfiguration()
self.handled_modules = OrderedDict()
# These should be checked, to allow disabling anti-bloat contents.
self.control_tags = OrderedDict()
if noinclude_setuptools_mode != "allow":
self.handled_modules["setuptools"] = noinclude_setuptools_mode
self.handled_modules["setuptools_scm"] = noinclude_setuptools_mode
else:
self.control_tags["use_setuptools"] = True
if noinclude_pytest_mode != "allow":
self.handled_modules["pytest"] = noinclude_pytest_mode
self.handled_modules["nose2"] = noinclude_pytest_mode
self.handled_modules["nose"] = noinclude_pytest_mode
else:
self.control_tags["use_pytest"] = True
if noinclude_unittest_mode != "allow":
self.handled_modules["unittest"] = noinclude_unittest_mode
else:
self.control_tags["use_unittest"] = True
if noinclude_ipython_mode != "allow":
self.handled_modules["IPython"] = noinclude_ipython_mode
else:
self.control_tags["use_ipython"] = True
for custom_choice in custom_choices:
if ":" not in custom_choice:
self.sysexit(
"Error, malformed value '%s' for '--noinclude-custom-mode' used."
% custom_choice
)
module_name, mode = custom_choice.rsplit(":", 1)
if mode not in ("error", "warning", "nofollow", "allow", "bytecode"):
self.sysexit(
"Error, illegal mode given '%s' in '--noinclude-custom-mode=%s'"
% (mode, custom_choice)
)
self.handled_modules[ModuleName(module_name)] = mode
def getCacheContributionValues(self, module_name):
config = self.config.get(module_name, section="anti-bloat")
if config:
yield str(config)
@classmethod
def addPluginCommandLineOptions(cls, group):
group.add_option(
"--show-anti-bloat-changes",
action="store_true",
dest="show_changes",
default=False,
help="""\
Annotate what changes are done by the plugin.""",
)
group.add_option(
"--noinclude-setuptools-mode",
action="store",
dest="noinclude_setuptools_mode",
choices=("error", "warning", "nofollow", "allow"),
default=None,
help="""\
What to do if a 'setuptools' import is encountered. This package can be big with
dependencies, and should definitely be avoided. Also handles 'setuptools_scm'.""",
)
group.add_option(
"--noinclude-pytest-mode",
action="store",
dest="noinclude_pytest_mode",
choices=("error", "warning", "nofollow", "allow"),
default=None,
help="""\
What to do if a 'pytest' import is encountered. This package can be big with
dependencies, and should definitely be avoided. Also handles 'nose' imports.""",
)
group.add_option(
"--noinclude-unittest-mode",
action="store",
dest="noinclude_unittest_mode",
choices=("error", "warning", "nofollow", "allow"),
default=None,
help="""\
What to do if a unittest import is encountered. This package can be big with
dependencies, and should definitely be avoided.""",
)
group.add_option(
"--noinclude-IPython-mode",
action="store",
dest="noinclude_ipython_mode",
choices=("error", "warning", "nofollow", "allow"),
default=None,
help="""\
What to do if a IPython import is encountered. This package can be big with
dependencies, and should definitely be avoided.""",
)
group.add_option(
"--noinclude-default-mode",
action="store",
dest="noinclude_default_mode",
choices=("error", "warning", "nofollow", "allow"),
default="warning",
help="""\
This actually provides the default "warning" value for above options, and
can be used to turn all of these on.""",
)
group.add_option(
"--noinclude-custom-mode",
action="append",
dest="custom_choices",
default=[],
help="""\
What to do if a specific import is encountered. Format is a module name,
which can and should be a top level package, followed by a colon and one
mode of "error", "warning", "nofollow", e.g. PyQt5:error.""",
)
def _onModuleSourceCode(self, module_name, anti_bloat_config, source_code):
# Complex dealing with many cases, pylint: disable=too-many-branches,too-many-locals,too-many-statements
description = anti_bloat_config.get("description", "description not given")
# To allow detection if it did anything.
change_count = 0
context = {}
context_code = anti_bloat_config.get("context", "")
if type(context_code) in (tuple, list):
context_code = "\n".join(context_code)
# We trust the yaml files, pylint: disable=eval-used,exec-used
context_ready = not bool(context_code)
for replace_src, replace_code in anti_bloat_config.get(
"replacements", {}
).items():
# Avoid the eval, if the replace doesn't hit.
if replace_src not in source_code:
continue
if replace_code:
if not context_ready:
try:
exec(context_code, context)
except Exception as e: # pylint: disable=broad-except
self.sysexit(
"Error, cannot execute context code '%s' due to: %s"
% (context_code, e)
)
context_ready = True
try:
replace_dst = eval(replace_code, context)
except Exception as e: # pylint: disable=broad-except
self.sysexit(
"Error, cannot evaluate code '%s' in '%s' due to: %s"
% (replace_code, context_code, e)
)
else:
replace_dst = ""
if type(replace_dst) is not str:
self.sysexit(
"Error, expression needs to generate string, not %s"
% type(replace_dst)
)
old = source_code
source_code = source_code.replace(replace_src, replace_dst)
if old != source_code:
change_count += 1
for replace_src, replace_dst in anti_bloat_config.get(
"replacements_plain", {}
).items():
old = source_code
source_code = source_code.replace(replace_src, replace_dst)
if old != source_code:
change_count += 1
for replace_src, replace_dst in anti_bloat_config.get(
"replacements_re", {}
).items():
old = source_code
source_code = re.sub(replace_src, replace_dst, source_code)
if old != source_code:
change_count += 1
append_code = anti_bloat_config.get("append_result", "")
if type(append_code) in (tuple, list):
append_code = "\n".join(append_code)
if append_code:
if not context_ready:
exec(context_code, context)
context_ready = True
try:
append_result = eval(append_code, context)
except Exception as e: # pylint: disable=broad-except
self.sysexit(
"Error, cannot evaluate code '%s' in '%s' due to: %s"
% (append_code, context_code, e)
)
source_code += "\n" + append_result
change_count += 1
append_plain = anti_bloat_config.get("append_plain", "")
if type(append_plain) in (tuple, list):
append_plain = "\n".join(append_plain)
if append_plain:
source_code += "\n" + append_plain
change_count += 1
if change_count > 0 and self.show_changes:
self.info(
"Handling module '%s' with %d change(s) for: %s."
% (module_name.asString(), change_count, description)
)
module_code = anti_bloat_config.get("module_code", None)
if module_code is not None:
assert not change_count
if self.show_changes:
self.info(
"Handling module '%s' with full replacement : %s."
% (module_name.asString(), description)
)
source_code = module_code
return source_code
def onModuleSourceCode(self, module_name, source_code):
for anti_bloat_config in self.config.get(module_name, section="anti-bloat"):
if self.evaluateCondition(
full_name=module_name,
condition=anti_bloat_config.get("when", "True"),
# Allow disabling config for a module with matching control tags.
control_tags=self.control_tags,
):
source_code = self._onModuleSourceCode(
module_name=module_name,
anti_bloat_config=anti_bloat_config,
source_code=source_code,
)
return source_code
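# Sketch of the configuration shape this method consumes (keys inferred from
# the lookups in _onModuleSourceCode above; the module name and replacement
# strings are illustrative, not from Nuitka's shipped configuration):
#   - module-name: 'some_module'
#     anti-bloat:
#       - description: 'remove docs-only import'
#         replacements_plain:
#           'import sphinx': 'sphinx = None'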
def _onFunctionBodyParsing(
self, module_name, anti_bloat_config, function_name, body
):
context = {}
context_code = anti_bloat_config.get("context", "")
if type(context_code) in (tuple, list):
context_code = "\n".join(context_code)
# We trust the yaml files, pylint: disable=eval-used,exec-used
context_ready = not bool(context_code)
replace_code = anti_bloat_config.get("change_function", {}).get(function_name)
if replace_code == "un-callable":
replace_code = """'raise RuntimeError("Must not call %s.%s")'""" % (
module_name,
function_name,
)
if replace_code is not None:
if not context_ready:
exec(context_code, context)
context_ready = True
try:
replacement = eval(replace_code, context)
except Exception as e: # pylint: disable=broad-except
self.sysexit(
"Error, cannot evaluate code '%s' in '%s' due to: %s"
% (replace_code, context_code, e)
)
# Single node is required, extract the generated module body with
# single expression only statement value or a function body.
replacement = ast.parse(replacement).body[0]
if type(replacement) is ast.Expr:
if type(replacement.value) is ast.Lambda:
body[:] = [ast.Return(replacement.value.body)]
else:
body[:] = [ast.Return(replacement.value)]
elif type(replacement) is ast.Raise:
body[:] = [replacement]
else:
body[:] = replacement.body
if self.show_changes:
self.info(
"Updated module '%s' function '%s'."
% (module_name.asString(), function_name)
)
def onFunctionBodyParsing(self, module_name, function_name, body):
config = self.config.get(module_name, section="anti-bloat")
if config:
for anti_bloat_config in config:
self._onFunctionBodyParsing(
module_name=module_name,
anti_bloat_config=anti_bloat_config,
function_name=function_name,
body=body,
)
def onModuleRecursion(self, module_name, module_filename, module_kind):
for handled_module_name, mode in self.handled_modules.items():
if module_name.hasNamespace(handled_module_name):
# Make sure the compilation aborts.
if mode == "error":
raise NuitkaForbiddenImportEncounter(module_name)
if mode == "warning":
self.warning("Unwanted import of '%s' encountered." % module_name)
def onModuleEncounter(self, module_name, module_filename, module_kind):
for handled_module_name, mode in self.handled_modules.items():
if module_name.hasNamespace(handled_module_name):
# Either issue a warning, or pretend the module doesn't exist for standalone or
# at least will not be included.
if mode == "nofollow":
if self.show_changes:
self.info(
"Forcing import of '%s' to not be followed." % module_name
)
return (
False,
"user requested to not follow '%s' import" % module_name,
)
# Do not provide an opinion about it.
return None
def decideCompilation(self, module_name):
for handled_module_name, mode in self.handled_modules.items():
if mode != "bytecode":
continue
if module_name.hasNamespace(handled_module_name):
return "bytecode" | PypiClean |
/NREL_reVX-0.3.53-py3-none-any.whl/reVX/config/turbine_flicker.py | from reV.config.base_analysis_config import AnalysisConfig
class TurbineFlickerConfig(AnalysisConfig):
"""Config framework for turbine flicker calculation"""
NAME = 'TurbineFlicker'
REQUIREMENTS = ('excl_fpath', 'res_fpath', 'hub_height', 'rotor_diameter')
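# Minimal config sketch satisfying REQUIREMENTS (paths and values are
# illustrative):
#   config = {"excl_fpath": "./exclusions.h5", "res_fpath": "./wtk_wind.h5",
#             "hub_height": 100, "rotor_diameter": 120}
#   tf_config = TurbineFlickerConfig(config)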
def __init__(self, config):
"""
Parameters
----------
config : str | dict
Path to config .json or pre-extracted config input dictionary.
"""
super().__init__(config)
self._default_tm_dset = 'techmap_wtk'
self._default_resolution = 128
self._default_grid_cell_size = 90
self._default_max_flicker_exclusion_range = "10x"
self._default_building_threshold = 0
self._default_flicker_threshold = 30
self._default_hsds_flag = False
@property
def excl_fpath(self):
"""Get the exclusions .h5 file path (required)."""
return self['excl_fpath']
@property
def res_fpath(self):
"""Get the resource .h5 file path (required)."""
return self['res_fpath']
@property
def regulations_fpath(self):
"""Get regulations .csv path"""
return self.get('regulations_fpath', None)
@property
def building_layer(self):
"""Get the building layer name."""
return self.get('building_layer', None)
@property
def hub_height(self):
"""
Get the turbine hub-height for which shadow flicker will be computed.
"""
return self['hub_height']
@property
def rotor_diameter(self):
"""
Get turbine rotor diameter for which shadow flicker will be computed.
"""
return self['rotor_diameter']
@property
def tm_dset(self):
"""Get the techmap dataset name."""
return self.get('tm_dset', self._default_tm_dset)
@property
def resolution(self):
"""Get the supply curve resolution."""
return self.get('resolution', self._default_resolution)
@property
def grid_cell_size(self):
"""Get the length (m) of a side of each grid cell in `excl_fpath`."""
return self.get('grid_cell_size', self._default_grid_cell_size)
@property
def max_flicker_exclusion_range(self):
"""
Get the max distance (m) that flicker exclusions will extend in
any of the cardinal directions.
"""
return self.get('max_flicker_exclusion_range',
self._default_max_flicker_exclusion_range)
@property
def building_threshold(self):
"""
Get the threshold for which buildings are identified in the
building_layer.
"""
return self.get('building_threshold', self._default_building_threshold)
@property
def flicker_threshold(self):
"""
Get the threshold at which shadow flicker will lead to an exclusions,
values are in hours
"""
return self.get('flicker_threshold', self._default_flicker_threshold)
@property
def out_layer(self):
"""
Get the output layer name under which turbine flicker exclusions will
be saved
"""
return self.get('out_layer', None)
@property
def replace(self):
"""Get replace flag"""
return self.get('replace', False)
@property
def hsds(self):
"""Get hsds flag"""
return self.get('hsds', self._default_hsds_flag) | PypiClean |
/ALP4lib-1.0.1.tar.gz/ALP4lib-1.0.1/README.md | # ALP4lib
ALP4lib is a Python module to control Vialux DMDs based on ALP4.X API.
This is not an independent open-source module: it uses the .dll files provided by [Vialux](http://www.vialux.de/en/).
This software is experimental; use it at your own risk.
## What is it?
This module wraps the basic function of the Vialux dlls to control a digitial micro-mirror device with a Vialux board.
Vialux provides dlls and also modules for Matlab and Labview but not for Python.
This code is tested with a device using the 4.3 version of the ALP API, other versions may have issues.
LED control related functions are not implemented.
Please read the ALP API description provided with the [Vialux](http://www.vialux.de/en/) ALP installation.
## Requirements
* Windows 32 or 64,
* Vialux drivers and the ALP4.X dll files available for download on [Vialux website](http://www.vialux.de/en/),
* Compatible with Python 2.7 and 3.X.
## Citing the code
If the code was helpful to your work, please consider citing it:
[](https://zenodo.org/badge/latestdoi/70229567)
## Installation
### Manual installation
Just copy the ALP4.py file into the working directory.
### Automatic installation
To automatically download and copy the module into the Python directory (so it is available from anywhere), run the command:
```shell
pip install ALP4lib
```
or
```shell
easy_install ALP4lib
```
### Installation from source (Github)
To install the latest version from Github, clone the repository, and install the package with the the following command.
```shell script
python setup.py install
```
Instead of the normal installation, if you want to install ALP4lib in [development mode](https://setuptools.readthedocs.io/en/latest/userguide/development_mode.html), use:
```shell script
python setup.py develop
```
## Copy the .dll
The win32 ALPX.dll files should be directly in the working directory and the win64 dll with the same name in a /x64 subfolder.
Alternatively, a different dll directory can be set at the initialization of the DMD handler object.
The dlls have the following names respectively for the 4.1, 4.2 and 4.3 versions of the ALP API: 'alp41.dll', 'alp42.dll' and 'alp4395.dll'.
## A simple example
```python
import numpy as np
from ALP4 import *
import time
# Load the Vialux .dll
DMD = ALP4(version = '4.3')
# Initialize the device
DMD.Initialize()
# Binary amplitude image (0 or 1)
bitDepth = 1
imgBlack = np.zeros([DMD.nSizeY,DMD.nSizeX])
imgWhite = np.ones([DMD.nSizeY,DMD.nSizeX])*(2**8-1)
imgSeq = np.concatenate([imgBlack.ravel(),imgWhite.ravel()])
# Allocate the onboard memory for the image sequence
DMD.SeqAlloc(nbImg = 2, bitDepth = bitDepth)
# Send the image sequence as a 1D list/array/numpy array
DMD.SeqPut(imgData = imgSeq)
# Set image rate to 50 Hz
DMD.SetTiming(pictureTime = 20000)
# Run the sequence in an infinite loop
DMD.Run()
time.sleep(10)
# Stop the sequence display
DMD.Halt()
# Free the sequence from the onboard memory
DMD.FreeSeq()
# De-allocate the device
DMD.Free()
```
| PypiClean |
/OctoBot-Services-1.6.2.tar.gz/OctoBot-Services-1.6.2/octobot_services/interfaces/util/util.py | import threading
import octobot_commons.logging as logging
import octobot_commons.constants as commons_constants
import octobot_trading.api as trading_api
import octobot_services.interfaces as interfaces
def get_exchange_managers(bot_api=None, independent_backtesting=None, trading_exchanges_only=True):
if bot_api is not None:
return _filter_exchange_manager(trading_api.get_exchange_managers_from_exchange_ids(
bot_api.get_exchange_manager_ids()), trading_exchanges_only)
elif independent_backtesting is not None:
try:
import octobot.api as api
return _filter_exchange_manager(
trading_api.get_exchange_managers_from_exchange_ids(
api.get_independent_backtesting_exchange_manager_ids(independent_backtesting)),
trading_exchanges_only)
except ImportError:
logging.get_logger("octobot_services/interfaces/util/util.py").error(
"get_exchange_managers requires OctoBot package installed")
else:
return _filter_exchange_manager(interfaces.AbstractInterface.get_exchange_managers(), trading_exchanges_only)
def _filter_exchange_manager(exchange_managers, trading_exchanges_only):
if trading_exchanges_only:
return trading_api.get_trading_exchanges(exchange_managers)
return exchange_managers
def run_in_bot_main_loop(coroutine, blocking=True, log_exceptions=True,
timeout=commons_constants.DEFAULT_FUTURE_TIMEOUT):
if blocking:
return interfaces.get_bot_api().run_in_main_asyncio_loop(coroutine, log_exceptions=log_exceptions,
timeout=timeout)
else:
threading.Thread(
target=interfaces.get_bot_api().run_in_main_asyncio_loop,
name=f"run_in_bot_main_loop {coroutine.__name__}",
args=(coroutine,),
kwargs={"timeout": timeout}
).start()
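# Minimal usage sketch (the coroutine name is illustrative): schedule work on
# the bot's main loop without blocking the caller:
#   run_in_bot_main_loop(refresh_portfolio(), blocking=False)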
def run_in_bot_async_executor(coroutine):
return interfaces.get_bot_api().run_in_async_executor(coroutine) | PypiClean |
/Newcalls-0.0.1-cp37-cp37m-win_amd64.whl/newcalls/node_modules/util-deprecate/README.md | util-deprecate
==============
### The Node.js `util.deprecate()` function with browser support
In Node.js, this module simply re-exports the `util.deprecate()` function.
In the web browser (i.e. via browserify), a browser-specific implementation
of the `util.deprecate()` function is used.
## API
A `deprecate()` function is the only thing exposed by this module.
``` javascript
var deprecate = require('util-deprecate');

// setup:
exports.foo = deprecate(foo, 'foo() is deprecated, use bar() instead');
// users see:
foo();
// foo() is deprecated, use bar() instead
foo();
foo();
```
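Note that the deprecation message is printed only on the first invocation; the subsequent `foo()` calls run without re-printing it.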
## License
(The MIT License)
Copyright (c) 2014 Nathan Rajlich <[email protected]>
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
| PypiClean |
/Bluebook-0.0.1.tar.gz/Bluebook-0.0.1/pylot/component/views/contact_page.py | from pylot import (Pylot,
utils,
route,
abort,
redirect,
request,
url_for)
from pylot.component import (mailer,
recaptcha)
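# Usage sketch (the view class below is illustrative, not part of this module):
#
#   @contact_page(template_dir="Pylot/ContactPage")
#   class Index(Pylot):
#       pass
#
# The decorator registers a "/contact" GET/POST route with endpoint
# "ContactPage" on the wrapped view.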
def contact_page(**kwargs):
template_dir = kwargs["template_dir"] if "template_dir" \
in kwargs else "Pylot/ContactPage"
template_page = template_dir + "/%s.html"
def wrapper(view):
@view.extends__
@route("contact", methods=["GET", "POST"], endpoint="ContactPage")
def contact(self):
if request.method == "POST":
error_message = None
email = request.form.get("email")
subject = request.form.get("subject")
message = request.form.get("message")
name = request.form.get("name")
contact_email = self.get_config__("CONTACT_PAGE_EMAIL_RECIPIENT")
if recaptcha.verify():
if not email or not subject or not message:
error_message = "All fields are required"
elif not utils.is_valid_email(email):
error_message = "Invalid email address"
if error_message:
self.flash_error__(error_message)
else:
mailer.send_template("contact-us.txt",
to=contact_email,
reply_to=email,
mail_from=email,
mail_subject=subject,
mail_message=message,
mail_name=name
)
self.flash_success__("Message sent successfully! We'll get in touch with you soon.")
else:
self.flash_error__("Invalid security code")
return redirect(url_for("ContactPage"))
else:
self.set_meta__(title="Contact Us")
return self.render(view_template=template_page % "contact")
return view
return wrapper | PypiClean |
/ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/muxbyname.sh |
#!/bin/bash

usage(){
echo "
Written by Brian Bushnell
Last modified June 22, 2016
Description: Multiplexes reads from multiple files after renaming them based on their initial file.
Opposite of demuxbyname.
Usage: muxbyname.sh in=<file,file,file...> out=<output file>
Input files may also be given without an in= prefix, so that you can use wildcards:
muxbyname.sh *.fastq out=muxed.fastq
Standard parameters:
in=<file,file> A list of input files.
in2=<file,file> Read 2 input if reads are in paired files.
out=<file> Primary output, or read 1 output.
out2=<file> Read 2 output if reads are in paired files.
overwrite=f (ow) Set to false to force the program to abort rather than
overwrite an existing file.
showspeed=t (ss) Set to 'f' to suppress display of processing speed.
ziplevel=2 (zl) Set to 1 (lowest) through 9 (max) to change compression
level; lower compression is faster.
Processing parameters:
None yet!
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will
specify 200 megs. The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an out-of-memory
exception occurs. Requires Java 8u92+.
-da Disable assertions.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx400m"
z2="-Xms400m"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
}
calcXmx "$@"
muxbyname() {
local CMD="java $EA $EOOM $z -cp $CP driver.RenameAndMux $@"
echo $CMD >&2
eval $CMD
}
muxbyname "$@" | PypiClean |
/BigJob-0.64.5.tar.gz/BigJob-0.64.5/docs/source/tutorial/index.rst | ########
Tutorial
########
This tutorial will walk you through the installation as well as the most important features of BigJob. Many of these examples illustrate common workflow usages as well as distributed data movement.
**Prerequisites:**
* You are familiar with Linux or UNIX
* You can read and write Python code
* You can use SSH and understand how public and private keys work
* You understand the basic concepts of distributed computing
**You will learn how to:**
* Install BigJob into user space on your own machine or XSEDE resource
* Learn to submit multiple jobs through a 'container job' to either your local machine or a remote resource
* Write a program that runs multiple jobs to form a simple ensemble workflow
* Write programs that demonstrate more complex, frequently used workflows
* Use file transfer capabilities to transfer input files to the location of an executable
**Contents:**
.. toctree::
:numbered:
:maxdepth: 1
part1
part2
part3
part4
table
| PypiClean |
/AmFast-0.5.3-r541.tar.gz/AmFast-0.5.3-r541/amfast/remoting/gae_subscription_manager.py | import pickle
from google.appengine.ext import db
import amfast
from subscription_manager import Subscription, SubscriptionManager
class GaeSubscription(db.Model, Subscription):
"""A client's subscription to a topic, persisted in a Google Datastore."""
@classmethod
def getKeyName(cls, connection_id, client_id, topic):
return ':'.join((connection_id, client_id, topic))
connection_id = db.StringProperty(required=True)
client_id = db.StringProperty(required=True)
topic = db.StringProperty(required=True)
class GaeMessageBody(db.Model):
"""Flex message body persisted in a Google Datastore."""
p_message = db.BlobProperty(required=True)
class GaeMessageMetadata(db.Model):
"""Flex message metadata persisted in a Google Datastore."""
time_to_live = db.FloatProperty(required=True)
timestamp = db.FloatProperty(required=True)
topic = db.StringProperty(required=True)
message_body = db.ReferenceProperty(reference_class=GaeMessageBody, required=True)
class GaeSubscriptionManager(SubscriptionManager):
"""Stores subscriptions in Google DataStore."""
def reset(self):
query = GaeSubscription.all(keys_only=True)
for result in query:
db.delete(result)
query = GaeMessageMetadata.all(keys_only=True)
for result in query:
db.delete(result)
query = GaeMessageBody.all(keys_only=True)
for result in query:
db.delete(result)
def subscribe(self, connection_id, client_id, topic, sub_topic=None, selector=None):
"""Add a subscription to a topic."""
topic = SubscriptionManager.getTopicKey(topic, sub_topic)
key_name = GaeSubscription.getKeyName(connection_id, client_id, topic)
subscription = GaeSubscription(key_name=key_name,
connection_id=connection_id, client_id=client_id, topic=topic)
subscription.put()
def unSubscribe(self, connection_id, client_id, topic, sub_topic=None):
"""Remove a subscription from a topic."""
topic = SubscriptionManager.getTopicKey(topic, sub_topic)
key_name = GaeSubscription.getKeyName(connection_id, client_id, topic)
subscription = GaeSubscription.get_by_key_name(key_name)
db.delete(subscription)
def deleteConnection(self, connection):
"""Delete connection-subscription information."""
query = GaeSubscription.all(keys_only=True)
query.filter('connection_id = ', connection.id)
db.delete(query)
def persistMessage(self, msg):
"""Save message object."""
# Remove connection data,
# so that it is not pickled
tmp_connection = getattr(msg, 'connection', None)
if tmp_connection is not None:
msg.connection = None
message_body = GaeMessageBody(p_message=pickle.dumps(msg))
message_body.put()
message_data = GaeMessageMetadata(timestamp=msg.timestamp,
time_to_live=float(msg.timeToLive), topic=self.getMessageTopicKey(msg),
message_body=message_body)
message_data.put()
# Restore connection attr.
if tmp_connection is not None:
msg.connection = tmp_connection
def iterConnectionSubscriptions(self, connection):
"""Iterate through all Subscriptions that belong to a specific connection."""
query = GaeSubscription.all()
query.filter('connection_id = ', connection.id)
return query
def iterSubscribers(self, topic, sub_topic=None):
"""Iterate through all connection ids subscribed to a topic."""
topic = SubscriptionManager.getTopicKey(topic, sub_topic)
connection_ids = {} # Keep track of unique IDs.
query = GaeSubscription.all()
query.filter('topic = ', topic)
for subscription in query:
if subscription.connection_id in connection_ids:
continue
connection_ids[subscription.connection_id] = True
yield subscription.connection_id
def pollMessages(self, topic, cutoff_time, current_time):
"""Retrieves all qeued messages, and discards expired messages.
arguments:
===========
* topic - string, Topic to find messages for.
* cutoff_time - float, epoch time, only messages published
after this time will be returned.
* current_time - float, epoch time, used to determine if a
message is expired.
"""
query = GaeMessageMetadata.all()
query.filter('topic = ', topic)
query.filter('timestamp > ', cutoff_time)
query.order('timestamp')
for message_data in query:
yield pickle.loads(message_data.message_body.p_message)
def deleteExpiredMessages(self, cutoff_time):
query = GaeMessageMetadata.all(keys_only=True)
query.filter('timestamp < ', cutoff_time)
for result in query:
db.delete(result) | PypiClean |
/CsuPTMD-1.0.12.tar.gz/CsuPTMD-1.0.12/PTMD/maskrcnn_benchmark/engine/trainer.py | import datetime
import logging
import time
import torch
import torch.distributed as dist
from PTMD.maskrcnn_benchmark.utils.comm import get_world_size
from PTMD.maskrcnn_benchmark.utils.metric_logger import MetricLogger
from apex import amp
def reduce_loss_dict(loss_dict):
"""
Reduce the loss dictionary from all processes so that process with rank
0 has the averaged results. Returns a dict with the same fields as
loss_dict, after reduction.
"""
world_size = get_world_size()
if world_size < 2:
return loss_dict
with torch.no_grad():
loss_names = []
all_losses = []
for k in sorted(loss_dict.keys()):
loss_names.append(k)
all_losses.append(loss_dict[k])
all_losses = torch.stack(all_losses, dim=0)
dist.reduce(all_losses, dst=0)
if dist.get_rank() == 0:
# only main process gets accumulated, so only divide by
# world_size in this case
all_losses /= world_size
reduced_losses = {k: v for k, v in zip(loss_names, all_losses)}
return reduced_losses
def do_train(
model,
data_loader,
optimizer,
scheduler,
checkpointer,
device,
checkpoint_period,
arguments,
):
logger = logging.getLogger("maskrcnn_benchmark.trainer")
logger.info("Start training")
meters = MetricLogger(delimiter=" ")
max_iter = len(data_loader)
start_iter = arguments["iteration"]
model.train()
start_training_time = time.time()
end = time.time()
for iteration, (images, targets, _) in enumerate(data_loader, start_iter):
data_time = time.time() - end
iteration = iteration + 1
arguments["iteration"] = iteration
scheduler.step()
images = images.to(device)
targets = [target.to(device) for target in targets]
loss_dict = model(images, targets)
losses = sum(loss for loss in loss_dict.values())
# reduce losses over all GPUs for logging purposes
loss_dict_reduced = reduce_loss_dict(loss_dict)
losses_reduced = sum(loss for loss in loss_dict_reduced.values())
meters.update(loss=losses_reduced, **loss_dict_reduced)
optimizer.zero_grad()
# Note: If mixed precision is not used, this ends up doing nothing
# Otherwise apply loss scaling for mixed-precision recipe
with amp.scale_loss(losses, optimizer) as scaled_losses:
scaled_losses.backward()
optimizer.step()
batch_time = time.time() - end
end = time.time()
meters.update(time=batch_time, data=data_time)
eta_seconds = meters.time.global_avg * (max_iter - iteration)
eta_string = str(datetime.timedelta(seconds=int(eta_seconds)))
if iteration % 20 == 0 or iteration == max_iter:
logger.info(
meters.delimiter.join(
[
"eta: {eta}",
"iter: {iter}",
"{meters}",
"lr: {lr:.6f}",
"max mem: {memory:.0f}",
]
).format(
eta=eta_string,
iter=iteration,
meters=str(meters),
lr=optimizer.param_groups[0]["lr"],
memory=torch.cuda.max_memory_allocated() / 1024.0 / 1024.0,
)
)
if iteration % checkpoint_period == 0:
checkpointer.save("model_{:07d}".format(iteration), **arguments)
if iteration == max_iter:
checkpointer.save("model_final", **arguments)
total_training_time = time.time() - start_training_time
total_time_str = str(datetime.timedelta(seconds=total_training_time))
logger.info(
"Total training time: {} ({:.4f} s / it)".format(
total_time_str, total_training_time / (max_iter)
)
) | PypiClean |
/crosscat-0.1.55-cp27-none-macosx_10_6_intel.whl/crosscat/IPClusterEngine.py | import functools
#
from IPython.parallel import Client
#
import crosscat
import crosscat.LocalEngine as LE
def partialize(func, args_dict, dview):
# why is this push necessary?
dview.push(args_dict, block=True)
helper = functools.partial(func, **args_dict)
return helper
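# partialize() broadcasts the keyword arguments to every engine and returns a
# local functools.partial, so the mapped function only needs the per-chain
# positional arguments (e.g. seeds) at call time.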
class IPClusterEngine(LE.LocalEngine):
"""A simple interface to the Cython-wrapped C++ engine
IPClusterEngine
"""
def __init__(self, config_filename=None, profile=None, seed=None, sshkey=None, packer='json'):
"""Initialize a IPClusterEngine
Do IPython.parallel operations to set up cluster and generate mapper.
"""
super(IPClusterEngine, self).__init__(seed=seed)
rc = Client(config_filename, profile=profile, sshkey=sshkey, packer=packer)
# FIXME: add a warning if environment in direct view is not 'empty'?
        # else, might become dependent on an object created in the
        # environment of a prior run
dview = rc.direct_view()
lview = rc.load_balanced_view()
with dview.sync_imports(local=True):
import crosscat
mapper = lambda f, tuples: self.lview.map(f, *tuples)
        # if you're trying to debug issues, consider clearing it to start fresh
# rc.clear(block=True)
#
self.rc = rc
self.dview = dview
self.lview = lview
self.mapper = mapper
self.do_initialize = None
self.do_analyze = None
return
def get_initialize_arg_tuples(self, M_c, M_r, T, initialization,
row_initialization, n_chains):
args_dict = dict(M_c=M_c, M_r=M_r, T=T, initialization=initialization,
row_initialization=row_initialization)
do_initialize = partialize(crosscat.LocalEngine._do_initialize,
args_dict, self.dview)
seeds = [self.get_next_seed() for seed_idx in range(n_chains)]
arg_tuples = [seeds]
#
self.do_initialize = do_initialize
return arg_tuples
def get_analyze_arg_tuples(self, M_c, T, X_L, X_D, kernel_list=(), n_steps=1, c=(), r=(),
max_iterations=-1, max_time=-1, diagnostic_func_dict=None, every_N=1):
n_chains = len(X_L)
args_dict = dict(M_c=M_c, T=T, kernel_list=kernel_list, n_steps=n_steps,
c=c, r=r, max_iterations=max_iterations, max_time=max_time,
diagnostic_func_dict=diagnostic_func_dict, every_N=every_N)
do_analyze = partialize(crosscat.LocalEngine._do_analyze_with_diagnostic,
args_dict, self.dview)
seeds = [self.get_next_seed() for seed_idx in range(n_chains)]
arg_tuples = [seeds, X_L, X_D]
#
self.do_analyze = do_analyze
return arg_tuples | PypiClean |
/FaceRecogAI-0.1.tar.gz/FaceRecogAI-0.1/README.rst | A brief intro to this package.
This package was made to practice OpenCV (cv2) image processing and to simplify adding facial recognition and blurring to my future projects, which will feature them often.
The package currently contains 3 functions, with more to be added based on different principles, including deep learning, as well as support for video camera feeds.
1. detect_face(img_file): Uses the simple Haar cascade recognition system provided by OpenCV and draws a rectangle around each detected face.
2. block_face(img_file): Uses the same Haar cascade recognition system, but draws a filled rectangle so each detected face is blocked by a white box.
3. blur_face(img_file): Uses the same Haar cascade recognition system and applies a Gaussian blur to the rectangular region around each detected face.
| PypiClean |
/FlaskCms-0.0.4.tar.gz/FlaskCms-0.0.4/flask_cms/static/js/ace/theme-vibrant_ink.js | ace.define("ace/theme/vibrant_ink",["require","exports","module","ace/lib/dom"], function(require, exports, module) {
exports.isDark = true;
exports.cssClass = "ace-vibrant-ink";
exports.cssText = ".ace-vibrant-ink .ace_gutter {\
background: #1a1a1a;\
color: #BEBEBE\
}\
.ace-vibrant-ink .ace_print-margin {\
width: 1px;\
background: #1a1a1a\
}\
.ace-vibrant-ink {\
background-color: #0F0F0F;\
color: #FFFFFF\
}\
.ace-vibrant-ink .ace_cursor {\
color: #FFFFFF\
}\
.ace-vibrant-ink .ace_marker-layer .ace_selection {\
background: #6699CC\
}\
.ace-vibrant-ink.ace_multiselect .ace_selection.ace_start {\
box-shadow: 0 0 3px 0px #0F0F0F;\
border-radius: 2px\
}\
.ace-vibrant-ink .ace_marker-layer .ace_step {\
background: rgb(102, 82, 0)\
}\
.ace-vibrant-ink .ace_marker-layer .ace_bracket {\
margin: -1px 0 0 -1px;\
border: 1px solid #404040\
}\
.ace-vibrant-ink .ace_marker-layer .ace_active-line {\
background: #333333\
}\
.ace-vibrant-ink .ace_gutter-active-line {\
background-color: #333333\
}\
.ace-vibrant-ink .ace_marker-layer .ace_selected-word {\
border: 1px solid #6699CC\
}\
.ace-vibrant-ink .ace_invisible {\
color: #404040\
}\
.ace-vibrant-ink .ace_keyword,\
.ace-vibrant-ink .ace_meta {\
color: #FF6600\
}\
.ace-vibrant-ink .ace_constant,\
.ace-vibrant-ink .ace_constant.ace_character,\
.ace-vibrant-ink .ace_constant.ace_character.ace_escape,\
.ace-vibrant-ink .ace_constant.ace_other {\
color: #339999\
}\
.ace-vibrant-ink .ace_constant.ace_numeric {\
color: #99CC99\
}\
.ace-vibrant-ink .ace_invalid,\
.ace-vibrant-ink .ace_invalid.ace_deprecated {\
color: #CCFF33;\
background-color: #000000\
}\
.ace-vibrant-ink .ace_fold {\
background-color: #FFCC00;\
border-color: #FFFFFF\
}\
.ace-vibrant-ink .ace_entity.ace_name.ace_function,\
.ace-vibrant-ink .ace_support.ace_function,\
.ace-vibrant-ink .ace_variable {\
color: #FFCC00\
}\
.ace-vibrant-ink .ace_variable.ace_parameter {\
font-style: italic\
}\
.ace-vibrant-ink .ace_string {\
color: #66FF00\
}\
.ace-vibrant-ink .ace_string.ace_regexp {\
color: #44B4CC\
}\
.ace-vibrant-ink .ace_comment {\
color: #9933CC\
}\
.ace-vibrant-ink .ace_entity.ace_other.ace_attribute-name {\
font-style: italic;\
color: #99CC99\
}\
.ace-vibrant-ink .ace_indent-guide {\
background: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAACCAYAAACZgbYnAAAAEklEQVQImWNgYGBgYNDTc/oPAALPAZ7hxlbYAAAAAElFTkSuQmCC) right repeat-y\
}";
var dom = require("../lib/dom");
dom.importCssString(exports.cssText, exports.cssClass);
}); | PypiClean |
/DLTA-AI-1.1.tar.gz/DLTA-AI-1.1/DLTA_AI_app/mmdetection/configs/common/mstrain_3x_coco.py | _base_ = '../_base_/default_runtime.py'
# dataset settings
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
# In mstrain 3x config, img_scale=[(1333, 640), (1333, 800)],
# multiscale_mode='range'
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True),
dict(
type='Resize',
img_scale=[(1333, 640), (1333, 800)],
multiscale_mode='range',
keep_ratio=True),
dict(type='RandomFlip', flip_ratio=0.5),
dict(type='Normalize', **img_norm_cfg),
dict(type='Pad', size_divisor=32),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(1333, 800),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(type='Normalize', **img_norm_cfg),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img']),
])
]
# Use RepeatDataset to speed up training
data = dict(
samples_per_gpu=2,
workers_per_gpu=2,
train=dict(
type='RepeatDataset',
times=3,
dataset=dict(
type=dataset_type,
ann_file=data_root + 'annotations/instances_train2017.json',
img_prefix=data_root + 'train2017/',
pipeline=train_pipeline)),
val=dict(
type=dataset_type,
ann_file=data_root + 'annotations/instances_val2017.json',
img_prefix=data_root + 'val2017/',
pipeline=test_pipeline),
test=dict(
type=dataset_type,
ann_file=data_root + 'annotations/instances_val2017.json',
img_prefix=data_root + 'val2017/',
pipeline=test_pipeline))
evaluation = dict(interval=1, metric='bbox')
# optimizer
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
# learning policy
# Experiments show that using step=[9, 11] has higher performance
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=0.001,
step=[9, 11])
runner = dict(type='EpochBasedRunner', max_epochs=12) | PypiClean |
/LibRecommender-limited-0.6.6.6.tar.gz/LibRecommender-limited-0.6.6.6/libreco/algorithms/fm.py | from itertools import islice
import os
import numpy as np
import pandas as pd
import tensorflow.compat.v1 as tf
from tensorflow.keras.initializers import (
truncated_normal as tf_truncated_normal
)
from .base import Base, TfMixin
from ..data.data_generator import DataGenFeat
from ..evaluation.evaluate import EvalMixin
from ..utils.tf_ops import (
reg_config,
dropout_config,
lr_decay_config,
multi_sparse_combine_embedding
)
from ..utils.sampling import NegativeSampling
from ..utils.misc import count_params
from ..feature import (
get_predict_indices_and_values,
get_recommend_indices_and_values,
features_from_dict,
add_item_features
)
tf.disable_v2_behavior()
class FM(Base, TfMixin, EvalMixin):
"""
Note this implementation is actually a mixture of FM and NFM,
since it uses one dense layer in the final output
"""
user_variables = ["linear_user_feat", "pairwise_user_feat"]
item_variables = ["linear_item_feat", "pairwise_item_feat"]
sparse_variables = ["linear_sparse_feat", "pairwise_sparse_feat"]
dense_variables = ["linear_dense_feat", "pairwise_dense_feat"]
def __init__(
self,
task,
data_info=None,
embed_size=16,
n_epochs=20,
lr=0.01,
lr_decay=False,
reg=None,
batch_size=256,
num_neg=1,
use_bn=True,
dropout_rate=None,
batch_sampling=False,
multi_sparse_combiner="sqrtn",
seed=42,
lower_upper_bound=None,
tf_sess_config=None
):
Base.__init__(self, task, data_info, lower_upper_bound)
TfMixin.__init__(self, tf_sess_config)
EvalMixin.__init__(self, task, data_info)
self.task = task
self.data_info = data_info
self.embed_size = embed_size
self.n_epochs = n_epochs
self.lr = lr
self.lr_decay = lr_decay
self.reg = reg_config(reg)
self.batch_size = batch_size
self.num_neg = num_neg
self.use_bn = use_bn
self.dropout_rate = dropout_config(dropout_rate)
self.batch_sampling = batch_sampling
self.n_users = data_info.n_users
self.n_items = data_info.n_items
self.seed = seed
self.user_consumed = data_info.user_consumed
self.sparse = self._decide_sparse_indices(data_info)
self.dense = self._decide_dense_values(data_info)
if self.sparse:
self.sparse_feature_size = self._sparse_feat_size(data_info)
self.sparse_field_size = self._sparse_field_size(data_info)
self.multi_sparse_combiner = self._check_multi_sparse(
data_info, multi_sparse_combiner)
self.true_sparse_field_size = self._true_sparse_field_size(
data_info, self.sparse_field_size, self.multi_sparse_combiner)
if self.dense:
self.dense_field_size = self._dense_field_size(data_info)
self.all_args = locals()
def _build_model(self):
self.graph_built = True
tf.set_random_seed(self.seed)
self.labels = tf.placeholder(tf.float32, shape=[None])
self.is_training = tf.placeholder_with_default(False, shape=[])
self.linear_embed, self.pairwise_embed = [], []
self._build_user_item()
if self.sparse:
self._build_sparse()
if self.dense:
self._build_dense()
linear_embed = tf.concat(self.linear_embed, axis=1)
pairwise_embed = tf.concat(self.pairwise_embed, axis=1)
# linear_term = tf.reduce_sum(linear_embed, axis=1,
# keepdims=True)
# B * 1
linear_term = tf.layers.dense(linear_embed, units=1, activation=None)
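        # FM pairwise interactions are computed with the O(k*n) identity:
        #   sum_{i<j} <v_i, v_j> x_i x_j
        #       = 0.5 * ((sum_i v_i x_i)^2 - sum_i (v_i x_i)^2)
        # evaluated element-wise over the K embedding dimensions below.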
# B * K
pairwise_term = 0.5 * tf.subtract(
tf.square(tf.reduce_sum(pairwise_embed, axis=1)),
tf.reduce_sum(tf.square(pairwise_embed), axis=1)
)
# For original FM, just add K dim together:
# pairwise_term = 0.5 * tf.reduce_sum(pairwise_term, axis=1)
if self.use_bn:
pairwise_term = tf.layers.batch_normalization(
pairwise_term, training=self.is_training)
pairwise_term = tf.layers.dense(inputs=pairwise_term,
units=1,
activation=tf.nn.elu)
self.output = tf.squeeze(tf.add(linear_term, pairwise_term))
count_params()
def _build_user_item(self):
self.user_indices = tf.placeholder(tf.int32, shape=[None])
self.item_indices = tf.placeholder(tf.int32, shape=[None])
linear_user_feat = tf.get_variable(
name="linear_user_feat",
shape=[self.n_users + 1, 1],
initializer=tf_truncated_normal(0.0, 0.03),
regularizer=self.reg)
linear_item_feat = tf.get_variable(
name="linear_item_feat",
shape=[self.n_items + 1, 1],
initializer=tf_truncated_normal(0.0, 0.03),
regularizer=self.reg)
pairwise_user_feat = tf.get_variable(
name="pairwise_user_feat",
shape=[self.n_users + 1, self.embed_size],
initializer=tf_truncated_normal(0.0, 0.03),
regularizer=self.reg)
pairwise_item_feat = tf.get_variable(
name="pairwise_item_feat",
shape=[self.n_items + 1, self.embed_size],
initializer=tf_truncated_normal(0.0, 0.03),
regularizer=self.reg)
# print(linear_embed.get_shape().as_list())
linear_user_embed = tf.nn.embedding_lookup(linear_user_feat,
self.user_indices)
linear_item_embed = tf.nn.embedding_lookup(linear_item_feat,
self.item_indices)
self.linear_embed.extend([linear_user_embed, linear_item_embed])
pairwise_user_embed = tf.expand_dims(
tf.nn.embedding_lookup(pairwise_user_feat, self.user_indices),
axis=1)
pairwise_item_embed = tf.expand_dims(
tf.nn.embedding_lookup(pairwise_item_feat, self.item_indices),
axis=1
)
self.pairwise_embed.extend([pairwise_user_embed, pairwise_item_embed])
def _build_sparse(self):
self.sparse_indices = tf.placeholder(
tf.int32, shape=[None, self.sparse_field_size])
linear_sparse_feat = tf.get_variable(
name="linear_sparse_feat",
shape=[self.sparse_feature_size],
initializer=tf_truncated_normal(0.0, 0.03),
regularizer=self.reg)
pairwise_sparse_feat = tf.get_variable(
name="pairwise_sparse_feat",
shape=[self.sparse_feature_size, self.embed_size],
initializer=tf_truncated_normal(0.0, 0.03),
regularizer=self.reg)
if (self.data_info.multi_sparse_combine_info
and self.multi_sparse_combiner in ("sum", "mean", "sqrtn")):
linear_sparse_embed = multi_sparse_combine_embedding(
self.data_info, linear_sparse_feat, self.sparse_indices,
self.multi_sparse_combiner, 1)
pairwise_sparse_embed = multi_sparse_combine_embedding(
self.data_info, pairwise_sparse_feat, self.sparse_indices,
self.multi_sparse_combiner, self.embed_size)
else:
linear_sparse_embed = tf.nn.embedding_lookup( # B * F1
linear_sparse_feat, self.sparse_indices)
pairwise_sparse_embed = tf.nn.embedding_lookup( # B * F1 * K
pairwise_sparse_feat, self.sparse_indices)
self.linear_embed.append(linear_sparse_embed)
self.pairwise_embed.append(pairwise_sparse_embed)
def _build_dense(self):
self.dense_values = tf.placeholder(
tf.float32, shape=[None, self.dense_field_size])
dense_values_reshape = tf.reshape(
self.dense_values, [-1, self.dense_field_size, 1])
batch_size = tf.shape(self.dense_values)[0]
linear_dense_feat = tf.get_variable(
name="linear_dense_feat",
shape=[self.dense_field_size],
initializer=tf_truncated_normal(0.0, 0.03),
regularizer=self.reg)
pairwise_dense_feat = tf.get_variable(
name="pairwise_dense_feat",
shape=[self.dense_field_size, self.embed_size],
initializer=tf_truncated_normal(0.0, 0.03),
regularizer=self.reg)
# B * F2
linear_dense_embed = tf.tile(linear_dense_feat, [batch_size])
linear_dense_embed = tf.reshape(
linear_dense_embed, [-1, self.dense_field_size])
linear_dense_embed = tf.multiply(
linear_dense_embed, self.dense_values)
pairwise_dense_embed = tf.expand_dims(pairwise_dense_feat, axis=0)
# B * F2 * K
pairwise_dense_embed = tf.tile(
pairwise_dense_embed, [batch_size, 1, 1])
pairwise_dense_embed = tf.multiply(
pairwise_dense_embed, dense_values_reshape)
self.linear_embed.append(linear_dense_embed)
self.pairwise_embed.append(pairwise_dense_embed)
def _build_train_ops(self, **kwargs):
if self.task == "rating":
self.loss = tf.losses.mean_squared_error(labels=self.labels,
predictions=self.output)
elif self.task == "ranking":
self.loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(labels=self.labels,
logits=self.output)
)
if self.reg is not None:
reg_keys = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
total_loss = self.loss + tf.add_n(reg_keys)
else:
total_loss = self.loss
if self.lr_decay:
n_batches = int(self.data_info.data_size / self.batch_size)
self.lr, global_steps = lr_decay_config(self.lr, n_batches,
**kwargs)
else:
global_steps = None
optimizer = tf.train.AdamOptimizer(self.lr)
optimizer_op = optimizer.minimize(total_loss, global_step=global_steps)
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
self.training_op = tf.group([optimizer_op, update_ops])
self.sess.run(tf.global_variables_initializer())
def fit(self, train_data, verbose=1, shuffle=True,
eval_data=None, metrics=None, **kwargs):
self.show_start_time()
if not self.graph_built:
self._build_model()
self._build_train_ops(**kwargs)
if self.task == "ranking" and self.batch_sampling:
self._check_has_sampled(train_data, verbose)
data_generator = NegativeSampling(train_data,
self.data_info,
self.num_neg,
self.sparse,
self.dense,
batch_sampling=True)
else:
data_generator = DataGenFeat(train_data,
self.sparse,
self.dense)
self.train_feat(data_generator, verbose, shuffle, eval_data, metrics,
**kwargs)
self.assign_oov()
def predict(self, user, item, feats=None, cold_start="average",
inner_id=False):
user, item = self.convert_id(user, item, inner_id)
unknown_num, unknown_index, user, item = self._check_unknown(user, item)
(
user_indices,
item_indices,
sparse_indices,
dense_values
) = get_predict_indices_and_values(
self.data_info, user, item, self.n_items, self.sparse, self.dense)
if feats is not None:
assert isinstance(feats, (dict, pd.Series)), (
"feats must be dict or pandas.Series.")
assert len(user_indices) == 1, "only support single user for feats"
sparse_indices, dense_values = features_from_dict(
self.data_info, sparse_indices, dense_values, feats, "predict")
feed_dict = self._get_feed_dict(user_indices, item_indices,
sparse_indices, dense_values,
None, False)
preds = self.sess.run(self.output, feed_dict)
if self.task == "rating":
preds = np.clip(preds, self.lower_bound, self.upper_bound)
elif self.task == "ranking":
preds = 1 / (1 + np.exp(-preds))
if unknown_num > 0 and cold_start == "popular":
preds[unknown_index] = self.default_prediction
return preds
def recommend_user(self, user, n_rec, user_feats=None, item_data=None,
cold_start="average", inner_id=False):
user_id = self._check_unknown_user(user, inner_id)
if user_id is None:
if cold_start == "average":
user_id = self.n_users
elif cold_start == "popular":
return self.data_info.popular_items[:n_rec]
else:
raise ValueError(user)
(
user_indices,
item_indices,
sparse_indices,
dense_values
) = get_recommend_indices_and_values(
self.data_info, user_id, self.n_items, self.sparse, self.dense)
if user_feats is not None:
assert isinstance(user_feats, (dict, pd.Series)), (
"feats must be dict or pandas.Series.")
sparse_indices, dense_values = features_from_dict(
self.data_info, sparse_indices, dense_values, user_feats,
"recommend")
if item_data is not None:
assert isinstance(item_data, pd.DataFrame), (
"item_data must be pandas DataFrame")
assert "item" in item_data.columns, (
"item_data must contain 'item' column")
sparse_indices, dense_values = add_item_features(
self.data_info, sparse_indices, dense_values, item_data)
feed_dict = self._get_feed_dict(user_indices, item_indices,
sparse_indices, dense_values,
None, False)
recos = self.sess.run(self.output, feed_dict)
if self.task == "ranking":
recos = 1 / (1 + np.exp(-recos))
consumed = set(self.user_consumed[user_id])
count = n_rec + len(consumed)
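        # np.argpartition returns the indices of the `count` highest scores in
        # arbitrary order; they are sorted by score below, and already-consumed
        # items are filtered out afterwards.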
ids = np.argpartition(recos, -count)[-count:]
rank = sorted(zip(ids, recos[ids]), key=lambda x: -x[1])
recs_and_scores = islice(
(rec if inner_id else (self.data_info.id2item[rec[0]], rec[1])
for rec in rank if rec[0] not in consumed),
n_rec
)
return list(recs_and_scores)
def save(self, path, model_name, manual=True, inference_only=False):
if not os.path.isdir(path):
print(f"file folder {path} doesn't exists, creating a new one...")
os.makedirs(path)
self.save_params(path)
if manual:
self.save_variables(path, model_name, inference_only)
else:
self.save_tf_model(path, model_name)
@classmethod
def load(cls, path, model_name, data_info, manual=True):
if manual:
return cls.load_variables(path, model_name, data_info)
else:
return cls.load_tf_model(path, model_name, data_info) | PypiClean |
/Nuitka_fixed-1.1.2-cp310-cp310-win_amd64.whl/nuitka/build/inline_copy/lib/scons-4.4.0/SCons/Node/Alias.py | import collections
import SCons.Errors
import SCons.Node
import SCons.Util
from SCons.Util import hash_signature
class AliasNameSpace(collections.UserDict):
def Alias(self, name, **kw):
if isinstance(name, SCons.Node.Alias.Alias):
return name
try:
a = self[name]
except KeyError:
a = SCons.Node.Alias.Alias(name, **kw)
self[name] = a
return a
def lookup(self, name, **kw):
try:
return self[name]
except KeyError:
return None
class AliasNodeInfo(SCons.Node.NodeInfoBase):
__slots__ = ('csig',)
current_version_id = 2
field_list = ['csig']
def str_to_node(self, s):
return default_ans.Alias(s)
def __getstate__(self):
"""
Return all fields that shall be pickled. Walk the slots in the class
hierarchy and add those to the state dictionary. If a '__dict__' slot is
available, copy all entries to the dictionary. Also include the version
id, which is fixed for all instances of a class.
"""
state = getattr(self, '__dict__', {}).copy()
for obj in type(self).mro():
for name in getattr(obj,'__slots__',()):
if hasattr(self, name):
state[name] = getattr(self, name)
state['_version_id'] = self.current_version_id
try:
del state['__weakref__']
except KeyError:
pass
return state
def __setstate__(self, state):
"""
Restore the attributes from a pickled state.
"""
# TODO check or discard version
del state['_version_id']
for key, value in state.items():
if key not in ('__weakref__',):
setattr(self, key, value)
class AliasBuildInfo(SCons.Node.BuildInfoBase):
__slots__ = ()
current_version_id = 2
class Alias(SCons.Node.Node):
NodeInfo = AliasNodeInfo
BuildInfo = AliasBuildInfo
def __init__(self, name):
super().__init__()
self.name = name
self.changed_since_last_build = 1
self.store_info = 0
def str_for_display(self):
return '"' + self.__str__() + '"'
def __str__(self):
return self.name
def make_ready(self):
self.get_csig()
really_build = SCons.Node.Node.build
is_up_to_date = SCons.Node.Node.children_are_up_to_date
def is_under(self, dir):
# Make Alias nodes get built regardless of
# what directory scons was run from. Alias nodes
# are outside the filesystem:
return 1
def get_contents(self):
"""The contents of an alias is the concatenation
of the content signatures of all its sources."""
childsigs = [n.get_csig() for n in self.children()]
return ''.join(childsigs)
def sconsign(self):
"""An Alias is not recorded in .sconsign files"""
pass
#
#
#
def build(self):
"""A "builder" for aliases."""
pass
def convert(self):
try: del self.builder
except AttributeError: pass
self.reset_executor()
self.build = self.really_build
def get_csig(self):
"""
Generate a node's content signature, the digested signature
of its content.
node - the node
cache - alternate node to use for the signature cache
returns - the content signature
"""
try:
return self.ninfo.csig
except AttributeError:
pass
contents = self.get_contents()
csig = hash_signature(contents)
self.get_ninfo().csig = csig
return csig
default_ans = AliasNameSpace()
SCons.Node.arg2nodes_lookups.append(default_ans.lookup)
# Local Variables:
# tab-width:4
# indent-tabs-mode:nil
# End:
# vim: set expandtab tabstop=4 shiftwidth=4: | PypiClean |
/IsaBelaMora-0.0.0.tar.gz/IsaBelaMora-0.0.0/README.txt | ScriptSupreme is cutting-edge software designed to revolutionize the way screenwriters, playwrights, and novelists bring their stories to life. With its powerful suite of features, ScriptSupreme offers a seamless and intuitive writing experience, allowing you to focus on your creativity and storytelling.
The software includes a robust set of tools for organizing your ideas, structuring your plot, and developing your characters. With its user-friendly interface, you can easily navigate through your work, create outlines, and visualize your story arc.
ScriptSupreme also features an advanced AI-powered tool that can suggest improvements to your writing based on style, tone, and grammar. With this feature, you can quickly identify areas where your writing can be improved and refine your prose to perfection.
Whether you're a seasoned professional or just starting out, ScriptSupreme is the perfect tool to take your writing to the next level. So why wait? Download ScriptSupreme today and start creating the stories you've always dreamed of. | PypiClean |
/CADET-Process-0.7.3.tar.gz/CADET-Process-0.7.3/CADETProcess/optimization/individual.py | import hashlib
from addict import Dict
import numpy as np
from CADETProcess import CADETProcessError
from CADETProcess.dataStructure import StructMeta
from CADETProcess.dataStructure import Float, Vector
def hash_array(array):
"""Compute a hash value for an array of floats using the sha256 hash function.
Parameters
----------
array : numpy.ndarray
An array of floats.
Returns
-------
str
A hash value as a string of hexadecimal characters.
Examples
--------
>>> import numpy as np
>>> hash_array(np.array([1, 2.0]))
'3dfc9d56e04dcd01590f48b1b57c9ed9fecb1e94e11d3c3f13cf0fd97b7a9f0f'
"""
array = np.asarray(array)
return hashlib.sha256(array.tobytes()).hexdigest()
class Individual(metaclass=StructMeta):
"""Set of variables evaluated during Optimization.
Attributes
----------
x : list
Variable values.
f : list
Objective values.
g : list
Nonlinear constraint values.
m : list
Meta score values.
cv : list
        Nonlinear constraint violations.
cv_tol : float
Tolerance for constraints violation.
See Also
--------
CADETProcess.optimization.Population
"""
x = Vector()
x_untransformed = Vector()
f = Vector()
g = Vector()
m = Vector()
cv = Vector()
cv_tol = Float()
def __init__(
self,
x,
f=None,
g=None,
m=None,
cv=None,
cv_tol=0,
x_untransformed=None,
independent_variable_names=None,
objective_labels=None,
contraint_labels=None,
meta_score_labels=None,
variable_names=None):
self.x = x
self.f = f
self.g = g
self.m = m
if cv is None:
cv = g
self.cv = cv
if cv_tol is None:
cv_tol = self.n_g*[0]
self.cv_tol = cv_tol
if x_untransformed is None:
x_untransformed = x
variable_names = independent_variable_names
self.x_untransformed = x_untransformed
if isinstance(variable_names, np.ndarray):
variable_names = [s.decode() for s in variable_names]
self.variable_names = variable_names
if isinstance(independent_variable_names, np.ndarray):
independent_variable_names = [s.decode() for s in independent_variable_names]
self.independent_variable_names = independent_variable_names
if isinstance(objective_labels, np.ndarray):
objective_labels = [s.decode() for s in objective_labels]
self.objective_labels = objective_labels
if isinstance(contraint_labels, np.ndarray):
contraint_labels = [s.decode() for s in contraint_labels]
self.contraint_labels = contraint_labels
if isinstance(meta_score_labels, np.ndarray):
meta_score_labels = [s.decode() for s in meta_score_labels]
self.meta_score_labels = meta_score_labels
self.id = hash_array(self.x)
@property
def is_evaluated(self):
"""bool: Return True if individual has been evaluated. False otherwise."""
if self.f is None:
return False
else:
return True
@property
def is_feasible(self):
"""bool: Return False if any constraint is not met. True otherwise."""
if self.cv is not None and np.any(np.array(self.cv) > self.cv_tol):
return False
else:
return True
@property
def n_x(self):
return len(self.x)
@property
def n_f(self):
if self.f is None:
return 0
return len(self.f)
@property
def n_g(self):
if self.g is None:
return 0
else:
return len(self.g)
@property
def n_m(self):
if self.m is None:
return 0
else:
return len(self.m)
@property
def dimensions(self):
"""tuple: Individual dimensions (n_x, n_f, n_g)"""
return (self.n_x, self.n_f, self.n_g, self.n_m)
def dominates(self, other):
"""Determine if individual dominates other.
Parameters
----------
other : Individual
Other individual
Returns
-------
dominates : bool
True if objectives of "self" are not strictly worse than the
corresponding objectives of "other" and at least one objective is
strictly better. False otherwise
"""
if not self.is_evaluated:
raise CADETProcessError("Individual needs to be evaluated first.")
if not other.is_evaluated:
raise CADETProcessError("Other individual needs to be evaluated first.")
if self.is_feasible and not other.is_feasible:
return True
if not self.is_feasible and not other.is_feasible:
if np.any(self.cv < other.cv):
return True
if self.m is not None:
self_values = self.m
other_values = other.m
else:
self_values = self.f
other_values = other.f
if np.any(self_values > other_values):
return False
if np.any(self_values < other_values):
return True
return False
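    # Example (hypothetical values): for two evaluated, feasible individuals
    # with objectives ind_a.f = [1.0, 2.0] and ind_b.f = [2.0, 2.0],
    # ind_a.dominates(ind_b) is True: ind_a is no worse in any objective and
    # strictly better in at least one (objectives are minimized).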
def is_similar(self, other, tol=1e-1):
"""Determine if individual is similar to other.
Parameters
----------
other : Individual
Other individual
tol : float, optional
Relative tolerance parameter.
To reduce number of entries, a rather high rtol is chosen.
Returns
-------
is_similar : bool
True if individuals are close to each other. False otherwise
"""
if tol is None:
return False
similar_x = self.is_similar_x(other, tol)
similar_f = self.is_similar_f(other, tol)
if self.g is not None:
similar_g = self.is_similar_g(other, tol)
else:
similar_g = True
if self.m is not None:
similar_m = self.is_similar_m(other, tol)
else:
similar_m = True
return similar_x and similar_f and similar_g and similar_m
def is_similar_x(self, other, tol=1e-1):
"""Determine if individual is similar to other based on parameter values.
Parameters
----------
other : Individual
Other individual
tol : float
Relative tolerance parameter.
To reduce number of entries, a rather high rtol is chosen.
Returns
-------
is_similar : bool
True if parameters are close to each other. False otherwise
"""
similar_x = np.allclose(self.x, other.x, rtol=tol)
return similar_x
def is_similar_f(self, other, tol=1e-1):
"""Determine if individual is similar to other based on objective values.
Parameters
----------
other : Individual
Other individual
tol : float
Relative tolerance parameter.
To reduce number of entries, a rather high rtol is chosen.
Returns
-------
is_similar : bool
            True if objective values are close to each other. False otherwise
"""
similar_f = np.allclose(self.f, other.f, rtol=tol)
return similar_f
def is_similar_g(self, other, tol=1e-1):
"""Determine if individual is similar to other based on constraint values.
Parameters
----------
other : Individual
Other individual
tol : float
Relative tolerance parameter.
To reduce number of entries, a rather high rtol is chosen.
Returns
-------
is_similar : bool
            True if constraint values are close to each other. False otherwise
"""
similar_g = np.allclose(self.g, other.g, rtol=tol)
return similar_g
def is_similar_m(self, other, tol=1e-1):
"""Determine if individual is similar to other based on meta score values.
Parameters
----------
other : Individual
Other individual
tol : float
Relative tolerance parameter.
To reduce number of entries, a rather high rtol is chosen.
Returns
-------
is_similar : bool
            True if meta score values are close to each other. False otherwise
"""
similar_m = np.allclose(self.m, other.m, rtol=tol)
return similar_m
def __str__(self):
return str(list(self.x))
def __repr__(self):
if self.g is None:
return f'{self.__class__.__name__}({self.x}, {self.f})'
else:
return f'{self.__class__.__name__}({self.x}, {self.f}, {self.g})'
def to_dict(self):
"""Convert individual to a dictionary.
Returns
-------
dict: A dictionary representation of the individual's attributes.
"""
data = Dict()
data.x = self.x
data.f = self.f
if self.g is not None:
data.g = self.g
if self.cv is not None:
data.cv = self.cv
if self.m is not None:
data.m = self.m
data.x_untransformed = self.x_untransformed
data.variable_names = self.variable_names
data.independent_variable_names = self.independent_variable_names
if self.objective_labels is not None:
data.objective_labels = self.objective_labels
if self.contraint_labels is not None:
data.contraint_labels = self.contraint_labels
if self.meta_score_labels is not None:
data.meta_score_labels = self.meta_score_labels
return data
@classmethod
def from_dict(cls, data):
"""Create Individual from dictionary representation of its attributes.
Parameters
----------
data : dict
A dictionary representation of the individual's attributes.
Returns
-------
individual
Individual idual created from the dictionary.
"""
return cls(**data) | PypiClean |
/CleverSheep-0.7.6.tar.gz/CleverSheep-0.7.6/README.txt | ===================================
A collection of re-usable packages.
===================================
Introduction
============
This is the top-level package for various other general purpose packages.
It exists in order keep the other packages tidily hidden within a single
name space.
It is not recommended CleverSheep be used for new projects.
For more details see https://gitlab.com/LCaraccio/clever-sheep
Installation
============
Run './setup.py build' and './setup.py install'
Dependencies
============
Mostly you only need Python, but...
1. The curses library is used and on some Linux distributions 'pycurses' is not
installed by default.
2. If you are using tornado or twisted as your event loop manager they will need
to be installed
3. six.
| PypiClean |
/BigJob2-0.54.post73.tar.gz/BigJob2-0.54.post73/pilot/filemanagement/irods_adaptor.py | import urlparse
import datetime
import errno
import sys
import os
import stat
import logging
import traceback
import time
import re
import shutil
import pdb
import glob
import pexpect
# This is for local debugging!
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../.."))
import saga
from pilot.api import State
from bigjob import logger
class iRodsFileAdaptor(object):
""" BigData File Management for Pilot Data
Supports pilot data on top of iRods.
Assumption: working iRods installation
irods://localhost/${OSG_DATA}/?vo=<vo>&resource-group=<resource-group>
"""
def __init__(self, resource_url, security_context=None, pilot_data_description=None):
self.resource_url = saga.Url(resource_url)
query_string = self.resource_url.query
self.localpath = self.resource_url.path
self.vo = re.search("(?<=vo=)(.*)([&\b]{1})", query_string).group(1)
self.resource_group = re.search("(?<=resource-group=)(.*)[&\b$]?", query_string).group(1)
logger.debug("VO: %s, Resource Group: %s"%(self.vo, self.resource_group))
self.is_local = self.__is_local()
def __is_local(self):
# test whether path contains environment variable
match = re.search("\$\{(.*)\}", self.localpath)
if match:
env_var = match.group(1)
logger.debug("Found: " + env_var + " in URL.")
logger.debug("Env list: " + str(os.environ))
if os.environ.has_key(env_var):
self.localpath = re.sub(r'\$\{.*\}', os.environ[env_var], self.localpath)
#self.localpath = os.environ[env_var]
logger.debug("Expanding URL Path to: " + self.localpath)
return True
logger.debug("No expansion in: " + self.localpath)
return False
def get_security_context(self):
""" Returns security context that needs to be available on the distributed
node in order to access this Pilot Data """
return None
def initialize_pilotdata(self):
pass
def get_pilotdata_size(self):
# unlimited size
return None
def delete_pilotdata(self):
self.__state=State.Done
def get_state(self):
return self.__state
def create_du(self, du_id):
logger.debug("create iRods collection: " + du_id)
if self.is_local:
command = "mkdir %s"%(os.path.join(self.localpath, du_id))
else:
command = "imkdir %s"%(du_id)
self.__run_command(command)
def put_du(self, du):
start = time.time()
logger.debug("Copy DU to iRod")
du_items = du.list()
for i in du_items.keys():
try:
local_filename=du_items[i]["local"]
remote_path = os.path.join(str(du.id), os.path.basename(local_filename))
logger.debug("copy %s to %s"%(local_filename, remote_path))
self._put_file(local_filename, remote_path)
except:
logger.debug("Could not copy: " + str(i))
logger.debug("Finished Put DU in: " + str(time.time()-start) + " sec.")
def get_du(self, du, target_url):
#du_id = "du-7370d7b5-ed0b-11e1-95df-705681b3df0f"
start = time.time()
du_id = du.id
logger.debug("Get DU: " + str(du_id))
if self.is_local:
command = "cp -r %s %s"%(os.path.join(self.localpath, du_id), target_url)
source_path = os.path.join(self.localpath, du_id, "*")
target_path = target_url
logger.debug("Target and source host are localhost. Processing: %s" %(source_path))
expanded_path = glob.glob(source_path)
logger.debug("Expanded path: " + str(expanded_path))
for path in expanded_path:
                if os.path.isdir(path):
                    logger.debug("Source path %s is directory"%path)
                    files = os.listdir(path)
                    for i in files:
                        try:
                            # link each file in the source directory into the target
                            os.symlink(os.path.join(path, i), os.path.join(target_path, i))
                            os.chmod(os.path.join(target_path, i), 0777)
                        except:
                            self.__print_traceback()
else:
try:
os.symlink(path, os.path.join(target_path, os.path.basename(path)))
os.chmod(os.path.join(target_path, os.path.basename(path)), 0777)
except:
self.__print_traceback()
else:
command = "iget -f -r %s %s"%(du_id, target_url)
logger.debug(command)
self.__run_command(command)
full_path = os.path.join(target_url, du_id)
#logger.debug("Path: " + str(full_path) + " Exists: " + str(os.path.exists(full_path)))
#while os.path.exists(full_path)==False:
# time.sleep(1)
for i in os.listdir(full_path):
try:
logger.debug("chmod " + str(i))
os.chmod(os.path.join(full_path, i), 0777)
logger.debug("move " + str(i))
shutil.move(os.path.join(full_path, i), target_url)
except:
self.__print_traceback()
shutil.rmtree(full_path, ignore_errors=True)
#time.sleep(2)
#if target_url==".":
# target_url = os.getcwd()
#command = "mv %s/* %s"%(os.path.join(target_url, du_id), target_url)
#self.__run_command(command)
logger.debug("Finished Get DU " + du.id + " in: " + str(time.time()-start) + " sec.")
def copy_du(self, du, pd_new):
remote_url = pd_new.resource_url + "/" + str(du.id)
local_url = self.resource_url + "/" + str(du.id)
self.copy_du_to_url(du, local_url, remote_url)
def remove_du(self, du):
if self.is_local:
command = "rm -rf %s"%(os.path.join(self.localpath, du.id))
else:
command = "irm %s"%du.id
self.__run_command(command)
###########################################################################
# Pure File Management APIs
def _put_file(self, source, target):
logger.debug("Put file: %s to %s"%(source, target))
start = time.time()
if self.is_local:
command = "cp -r %s %s"%(source, target)
else:
command = "iput -f -R %s %s %s"%(self.resource_group, source, target)
self.__run_command(command)
put_time = time.time() - start
number_replica = 0
if self.is_local==False:
#pdb.set_trace()
home_directory= self.__run_command("ipwd")[0].strip()
full_filename = os.path.join(home_directory, target)
command = "irepl-osg -f %s -G %s"%(full_filename, self.resource_group)
output = self.__run_command(command)
for i in output:
if i.find("copied") > 0 or i.find("replica")>0:
number_replica = number_replica + 1
rep_time = time.time() - start - put_time
logger.info("Upload;Replication;Total;File Size;Backend;Number Replica;Timestamp: %f;%f;%f;%d;%s;%d;%s"%(put_time, rep_time, time.time()-start, os.path.getsize(source), self.resource_group, number_replica, datetime.datetime.today().isoformat()))
def transfer(self, source_url, target_url):
pass
def create_remote_directory(self, target_url):
return True
###########################################################################
def __run_command(self, command):
logger.debug(command)
child = pexpect.spawn(command, timeout=None)
output = child.readlines()
logger.debug("Run %s Output: %s"%(command, str(output)))
child.close()
return output
def __print_traceback(self):
exc_type, exc_value, exc_traceback = sys.exc_info()
print "*** print_tb:"
traceback.print_tb(exc_traceback, limit=1, file=sys.stderr)
print "*** print_exception:"
traceback.print_exception(exc_type, exc_value, exc_traceback,
limit=2, file=sys.stderr)
def test_irods():
irods = iRodsFileAdaptor("irods://gw68/${OSG_DATA}/osg/home/luckow/?vo=osg&resource-group=osgGridFtpGroup")
irods.initialize_pilotdata()
irods.create_du("du-7370d7b5-ed0b-11e1-95df-705681b3df0f")
irods._put_file("test.txt", "du-7370d7b5-ed0b-11e1-95df-705681b3df0f/test.txt")
irods.get_du("du-7370d7b5-ed0b-11e1-95df-705681b3df0f", "export")
if __name__ == "__main__":
test_irods() | PypiClean |
/CoreTracker-1.5.3.tar.gz/CoreTracker-1.5.3/coretracker/settings/settings.py | import parameters
import Bio.SubsMat.MatrixInfo as MatrixInfo
AVAILABLE_MAT = MatrixInfo.available_matrices + ['identity']
class Settings():
"""Contains global settings for the current run of CoReTracker"""
def __init__(self):
pass
def set(self, **kwargs):
"""Set settings values from a dict"""
EXCLUDE_AA = kwargs.get('EXCLUDE_AA', parameters.EXCLUDE_AA)
# list of aa to exclude
self.EXCLUDE_AA_FROM = kwargs.get(
'EXCLUDE_AA_FROM', parameters.EXCLUDE_AA_FROM)
# list of accepted aa
self.AA_LETTERS = "".join(
[aa for aa in "ACDEFGHIKLMNPQRSTVWY" if aa not in EXCLUDE_AA])
self.AA_MAJORITY_THRESH = kwargs.get(
# Set min frequency per column to use an aa as the most predominant
'AA_MAJORITY_THRESH', parameters.AA_MAJORITY_THRESH)
# whether or not analysis should be restricted to suspected species
self.LIMIT_TO_SUSPECTED_SPECIES = False
try:
self.LIMIT_TO_SUSPECTED_SPECIES = kwargs.get(
'LIMIT_TO_SUSPECTED_SPECIES', parameters.LIMIT_TO_SUSPECTED_SPECIES)
except:
pass
# old method of finding suspected species by count
self.FREQUENCY_THRESHOLD = kwargs.get(
'FREQUENCY_THRESHOLD', parameters.FREQUENCY_THRESHOLD)
# minimum aa substitution to consider reassignment as validation
# this is use to filter out false positive
self.COUNT_THRESHOLD = kwargs.get(
'COUNT_THRESHOLD', parameters.COUNT_THRESHOLD)
# minimum codon presence in reassigned position to make prediction
self.CODON_COUNT_THRESHOLD = kwargs.get(
'CODON_COUNT_THRESHOLD', parameters.CODON_COUNT_THRESHOLD)
# default genetic code to use
self.GENETIC_CODE = kwargs.get(
'GENETIC_CODE', parameters.GENETIC_CODE)
# whether or not codon in mixte position should be shown in the report
self.SHOW_MIXTE_CODONS = kwargs.get(
'SHOW_MIXTE_CODONS', parameters.SHOW_MIXTE_CODONS)
# whether or not filtered position should be shown
self.SHOW_GLOBAL_CODON_DATA = kwargs.get(
'SHOW_GLOBAL_CODON_DATA', parameters.SHOW_GLOBAL_CODON_DATA)
# Add a suffix to each leaf if colors is not available
self.ADD_LABEL_TO_LEAF = kwargs.get(
'ADD_LABEL_TO_LEAF', parameters.ADD_LABEL_TO_LEAF)
# Add number inside the piechart
self.ADD_NUMBER_PIE = kwargs.get(
'ADD_NUMBER_PIE', parameters.ADD_NUMBER_PIE)
# whether or not consensus should be used for the likelihood
self.USE_CONSENSUS_FOR_LIKELIHOOD = kwargs.get(
'USE_CONSENSUS_FOR_LIKELIHOOD', parameters.USE_CONSENSUS_FOR_LIKELIHOOD)
# save filtered alignment
self.SAVE_ALIGN = kwargs.get('SAVE_ALIGN', parameters.SAVE_ALIGN)
# do not output results for no predicted reassignment
self.SKIP_EMPTY = kwargs.get('SKIP_EMPTY', parameters.SKIP_EMPTY)
# if global alignment should be used to find the suspected list
self.USE_GLOBAL = kwargs.get(
'USE_GLOBAL', parameters.USE_GLOBAL)
# hidden parameter for debug purpose
self.MODEL_TYPE = kwargs.get(
'MODEL_TYPE', parameters.MODEL_TYPE)
# hmm loop
self.HMMLOOP = kwargs.get(
'HMMLOOP', parameters.HMMLOOP)
# choose algorithm for computing the suspected species
self.MODE = kwargs.get('MODE', parameters.MODE)
# choose matrix to compute substitution, default is blosum62
self.MATRIX = kwargs.get('MATRIX', parameters.MATRIX)
if self.MATRIX not in AVAILABLE_MAT:
self.MATRIX = "blosum62"
try:
self.SUBMAT = getattr(MatrixInfo, self.MATRIX)
except AttributeError:
self.SUBMAT = getattr(MatrixInfo, "blosum62")
# alpha to use, default is 0.05
self.CONF = kwargs.get('CONF', parameters.CONF) or 0.05
self.STARTDIST = kwargs.get('STARTDIST', parameters.STARTDIST)
self.SHOW_ALL = kwargs.get('SHOW_ALL', parameters.SHOW_ALL)
self.FORCED_CHI2 = kwargs.get('FORCED_CHI2', parameters.FORCED_CHI2)
# output format. Should be pdf for the moment
self.IMAGE_FORMAT = "pdf"
# The following are the binaries setting for HMMER package
self.hmmbuild = kwargs.get('HMMBUILD', 'hmmbuild')
# self.eslalimask = kwargs.get('eslalimask', 'esl-alimask')
# self.eslalimanip = kwargs.get('eslalimanip', 'esl-alimanip')
# these may be used in a future version for better filtering;
# not needed right now
self.hmmalign = kwargs.get('HMMALIGN', 'hmmalign')
# default values for supplemental data
self.VALIDATION = True
self.COMPUTE_POS = False
self.SCALE = 1.0
self.PROTFORMAT = kwargs.get('PROTFORMAT', parameters.PROTFORMAT)
self.DNAFORMAT = kwargs.get('DNAFORMAT', parameters.DNAFORMAT)
self.OUTDIR = ""
def get_external_binaries(self):
ext = [self.hmmbuild, self.hmmalign]
return ext
def fill(self, params):
if isinstance(params, dict):
self.set(**params)
else:
self.__dict__.update(params.__dict__)
def update_params(self, **kwargs):
for k, v in kwargs.items():
if k == 'MATRIX' and v in AVAILABLE_MAT:
    self.__dict__[k] = v
    if v != "identity":
        # keep SUBMAT in sync, matching set(): MATRIX holds the name,
        # SUBMAT holds the actual substitution matrix
        self.__dict__['SUBMAT'] = getattr(MatrixInfo, v)
else:
self.__dict__[k] = v | PypiClean |
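# A minimal usage sketch (not part of the original file; it assumes the bundled
# `parameters` module defines a default for every setting read in set()):
#
#     settings = Settings()
#     settings.set(GENETIC_CODE=4, MATRIX="blosum62", CONF=0.05)
#     clone = Settings()
#     clone.fill(settings)          # copy from a dict or another Settings
#     clone.update_params(MATRIX="pam250")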
# /AsyncDex-1.1.tar.gz/AsyncDex-1.1/asyncdex/models/chapter_list.py
import asyncio
from collections import defaultdict
from datetime import datetime
from functools import partial
from typing import Any, Callable, Dict, Iterable, List, Optional, TYPE_CHECKING, Tuple, Union
from natsort import natsort_keygen
from .abc import ModelList
from .aggregate import MangaAggregate, VolumeAggregate
from .chapter import Chapter
from .group import Group
from .pager import Pager
from .user import User
from ..constants import routes
from ..enum import DuplicateResolutionAlgorithm
from ..list_orders import MangaFeedListOrder
from ..utils import InclusionExclusionPair, Interval, return_date_string
if TYPE_CHECKING:
from .manga import Manga
def _chapter_attr_map(items: List[Chapter], attr: str):
return {getattr(item, attr): item for item in items}
def _get_smallest_creation_time(items: List[Chapter]):
return sorted(((item.created_at, item) for item in items), key=lambda i: i[0])[0]
def _check_values(set1: set, set2: set) -> int:
"""This checks how many of the values present inside of the first set are in the second set."""
match = 0
if set1.issuperset(set2):
match += 1
for item in set2:
if set1.issuperset({item}):
match += 1
return match
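# For example, _check_values({1, 2, 3}, {2, 3}) == 3: one point per element of
# the second set found in the first, plus one because all of them are contained.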
def _resolve_duplicates(
chapter_list: List[Chapter],
algo: List[DuplicateResolutionAlgorithm],
specific_groups: Optional[Iterable[Group]],
specific_users: Optional[Iterable[User]],
):
"""Actually does the duplicate resolving."""
last_chapter: Optional[set] = None
specific_groups = specific_groups or []
specific_users = specific_users or []
chapter_dict: Dict[Optional[str], List[Chapter]] = defaultdict(list)
final = ChapterList(chapter_list[0].manga)
for item in chapter_list:
chapter_dict[item.number].append(item)
final.extend(chapter_dict.pop(None, []))
# We grouped up the chapters by number for logical sorting.
if DuplicateResolutionAlgorithm.VIEWS_ASC in algo or DuplicateResolutionAlgorithm.VIEWS_DESC in algo:
raise NotImplementedError("MangaDex API does not return views yet, sorry!")
duplicate_last_prios = _check_values(
{
DuplicateResolutionAlgorithm.VIEWS_ASC,
DuplicateResolutionAlgorithm.VIEWS_DESC,
DuplicateResolutionAlgorithm.CREATION_DATE_ASC,
DuplicateResolutionAlgorithm.CREATION_DATE_DESC,
},
set(algo),
)
if duplicate_last_prios > 1:
raise ValueError("The lowest-priority operations cannot be combined.")
elif not duplicate_last_prios:
algo.append(DuplicateResolutionAlgorithm.CREATION_DATE_ASC)
for chapter_num, items in chapter_dict.items():
# Now the sorting begins.
matches_found = items
if last_chapter is not None and DuplicateResolutionAlgorithm.PREVIOUS_GROUP in algo:
# Determine the priority. Obviously, if the last chapter was made by one group, that's easy. But if
# multiple groups contributed towards a chapter, we would want to match chapters made by only one of the
# multiple groups. Use a priority system to determine which chapters move on.
set_group = defaultdict(list)
for item in matches_found:
set_match_prio = _check_values(set(item.groups), last_chapter)
if set_match_prio:
set_group[set_match_prio].append(item)
if set_group:
matches_found = sorted(set_group.items(), key=lambda i: i[0])[0][1]
if len(matches_found) > 1 and DuplicateResolutionAlgorithm.SPECIFIC_GROUP in algo:
# Either the previous-group strategy did not apply, or more than one chapter had the same group priority.
# Now we try if the "Specific Group" strategy was chosen.
matches = list(filter(lambda chapter: _check_values(set(chapter.groups), set(specific_groups)), matches_found))
if len(matches) > 0:
matches_found = matches
else:
final.append(matches_found[0])
continue
if len(matches_found) > 1 and DuplicateResolutionAlgorithm.SPECIFIC_USER in algo:
matches = list(filter(lambda chapter: chapter.user in specific_users, matches_found))
if len(matches) > 0:
matches_found = matches
else:
final.append(matches_found[0])
continue
if len(matches_found) > 1:
if DuplicateResolutionAlgorithm.CREATION_DATE_ASC in algo:
final.append(sorted(matches_found, key=lambda chapter: chapter.created_at)[0])
elif DuplicateResolutionAlgorithm.CREATION_DATE_DESC in algo:
final.append(sorted(matches_found, key=lambda chapter: chapter.created_at, reverse=True)[0])
elif DuplicateResolutionAlgorithm.VIEWS_ASC in algo:
# final.append(sorted(matches_found, key=lambda chapter: chapter.created_at)[0])
raise NotImplementedError("DuplicateResolutionAlgorithm.VIEWS_ASC not implemented.")
elif DuplicateResolutionAlgorithm.VIEWS_DESC in algo:
# final.append(sorted(matches_found, key=lambda chapter: chapter.created_at)[0])
raise NotImplementedError("DuplicateResolutionAlgorithm.VIEWS_DESC not implemented.")
else:
final.append(matches_found[0])
return final
class ChapterList(ModelList[Chapter]):
"""An object representing a list of chapters from a manga.
.. versionadded:: 0.3
:param entries: Pre-fill the ChapterList with the given entries.
:type entries: Iterable[Chapter]
"""
manga: "Manga"
"""The :class:`.Manga` that this chapter list belongs to."""
def __init__(self, manga: "Manga", *, entries: Optional[Iterable[Chapter]] = None):
super().__init__(entries or [])
self.manga = manga
async def get(
self,
*,
languages: Optional[List[str]] = None,
created_after: Optional[datetime] = None,
updated_after: Optional[datetime] = None,
published_after: Optional[datetime] = None,
order: Optional[MangaFeedListOrder] = None,
limit: Optional[int] = None,
):
"""Gets the list of chapters.
.. versionchanged:: 0.5
* Parameter ``locales`` was renamed to ``languages``
.. deprecated:: 0.5
Parameter ``locales``
.. versionchanged:: 1.0
Parameter ``locales`` was removed.
:param languages: The languages to filter by.
:type languages: List[str]
:param created_after: Get chapters created after this date.
.. note::
The datetime object needs to be in UTC time. It does not matter if the datetime is naive or timezone
aware.
:type created_after: datetime
:param updated_after: Get chapters updated after this date.
.. note::
The datetime object needs to be in UTC time. It does not matter if the datetime is naive or timezone
aware.
:type updated_after: datetime
:param published_after: Get chapters published after this date.
.. note::
The datetime object needs to be in UTC time. It does not matter if the datetime is naive or timezone
aware.
:type published_after: datetime
:param order: The order to sort the chapters.
.. versionadded:: 0.5
:type order: MangaFeedListOrder
.. versionadded:: 0.5
:param limit: Only return up to this many chapters.
:type limit: int
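
Example (a minimal sketch; ``manga`` is a hypothetical :class:`.Manga` obtained
from a configured client):

.. code-block:: python

    await manga.chapters.get(languages=["en"], limit=100)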
"""
params = {}
if languages:
params["translatedLanguage"] = languages
if created_after:
params["createdAtSince"] = return_date_string(created_after)
if updated_after:
params["updatedAtSince"] = return_date_string(updated_after)
if published_after:
params["publishAtSince"] = return_date_string(published_after)
self.manga.client._add_order(params, order)
async for item in Pager(
routes["manga_chapters"].format(id=self.manga.id),
Chapter,
self.manga.client,
params=params,
limit_size=500,
limit=limit,
):
item.manga = self.manga
if item in self:
self[self.index(item)] = item
else:
self.append(item)
async def get_new(
self,
*,
languages: Optional[List[str]] = None,
created_after: Optional[datetime] = None,
updated_after: Optional[datetime] = None,
published_after: Optional[datetime] = None,
order: Optional[MangaFeedListOrder] = None,
limit: Optional[int] = None,
) -> "ChapterList":
"""A method that gets chapters but returns a new ChapterList.
.. versionadded:: 0.5
:param languages: The languages to filter by.
:type languages: List[str]
:param created_after: Get chapters created after this date.
.. note::
The datetime object needs to be in UTC time. It does not matter if the datetime is naive or timezone
aware.
:type created_after: datetime
:param updated_after: Get chapters updated after this date.
.. note::
The datetime object needs to be in UTC time. It does not matter if the datetime is naive or timezone
aware.
:type updated_after: datetime
:param published_after: Get chapters published after this date.
.. note::
The datetime object needs to be in UTC time. It does not matter if the datetime is naive or timezone
aware.
:type published_after: datetime
:param order: The order to sort the chapters.
:type order: MangaFeedListOrder
:param limit: Only return up to this many chapters.
:type limit: int
:return: A new chapter list.
:rtype: ChapterList
"""
cl = type(self)(self.manga)
await cl.get(
languages=languages,
created_after=created_after,
updated_after=updated_after,
published_after=published_after,
order=order,
limit=limit,
)
return cl
def filter(
self,
*,
languages: Optional[List[str]] = None,
creation_time: Optional[Interval[datetime]] = None,
update_time: Optional[Interval[datetime]] = None,
publish_time: Optional[Interval[datetime]] = None,
views: Optional[Interval[int]] = None,
has_number: Optional[bool] = None,
chapter_number_range: Optional[Interval[float]] = None,
chapter_numbers: Optional[InclusionExclusionPair[Optional[float]]] = None,
remove_duplicates: bool = False,
duplicate_strategy: Optional[List[DuplicateResolutionAlgorithm]] = None,
duplicate_strategy_groups: Optional[List[Group]] = None,
duplicate_strategy_users: Optional[List[User]] = None,
groups: Optional[InclusionExclusionPair[Group]] = None,
users: Optional[InclusionExclusionPair[User]] = None,
read: Optional[bool] = None,
volumes: Optional[InclusionExclusionPair[int]] = None,
) -> "ChapterList":
"""Filter the chapter list and return a new :class:`.ChapterList`. Calling this method without specifying any
additional filtering mechanisms will return a shallow copy of the list.
The order of the filter will be as follows:
#. Filter the datetimes first
#. Filter by the intervals
#. Filter by the inclusion and exclusion pairs
#. Filter duplicates
.. versionchanged:: 0.5
Parameter ``locales`` was renamed to ``languages``
.. deprecated:: 0.5
Parameter ``locales``
.. versionchanged:: 1.0
Parameter ``locales`` was removed.
:param languages: The languages that should be present in the chapters.
:type languages: List[str]
:param creation_time: An :class:`.Interval` representing the bounds of the chapter's creation time.
:attr:`.Interval.min` will select all chapters created **after** the given time, and :attr:`.Interval.max`
will select all chapters created **before** the given time.
.. note::
The datetime objects need to be non-timezone-aware datetimes in UTC time. A datetime in any
timezone can be converted to a naive UTC datetime by:
.. code-block:: python
from datetime import timezone
# dt is the datetime object.
utc_naive = dt.astimezone(timezone.utc).replace(tzinfo=None)
Example intervals:
.. code-block:: python
from asyncdex import Interval
min_interval = Interval(min=datetime.datetime(2021, 1, 1))
max_interval = Interval(max=datetime.datetime(2021, 1, 1))
both = Interval(datetime.datetime(2021, 1, 1), datetime.datetime(2021, 5, 1))
:type creation_time: Interval[datetime]
:param update_time: An :class:`.Interval` representing the bounds of the chapter's update time.
:attr:`.Interval.min` will select all chapters updated **after** the given time, and :attr:`.Interval.max`
will select all chapters updated **before** the given time.
.. note::
The datetime objects need to be non-timezone-aware datetimes in UTC time. A datetime in any
timezone can be converted to a naive UTC datetime by:
.. code-block:: python
from datetime import timezone
# dt is the datetime object.
utc_naive = dt.astimezone(timezone.utc).replace(tzinfo=None)
Example intervals:
.. code-block:: python
from asyncdex import Interval
min_interval = Interval(min=datetime.datetime(2021, 1, 1))
max_interval = Interval(max=datetime.datetime(2021, 1, 1))
both = Interval(datetime.datetime(2021, 1, 1), datetime.datetime(2021, 5, 1))
:type update_time: Interval[datetime]
:param publish_time: An :class:`.Interval` representing the bounds of the chapter's publish time.
:attr:`.Interval.min` will select all chapters published **after** the given time, and :attr:`.Interval.max`
will select all chapters published **before** the given time.
.. note::
The datetime objects need to be non-timezone-aware datetimes in UTC time. A datetime in any
timezone can be converted to a naive UTC datetime by:
.. code-block:: python
from datetime import timezone
# dt is the datetime object.
utc_naive = dt.astimezone(timezone.utc).replace(tzinfo=None)
Example intervals:
.. code-block:: python
from asyncdex import Interval

min_interval = Interval(min=datetime.datetime(2021, 1, 1))
max_interval = Interval(max=datetime.datetime(2021, 1, 1))
both = Interval(datetime.datetime(2021, 1, 1), datetime.datetime(2021, 5, 1))
:type publish_time: Interval[datetime]
:param views: An :class:`.Interval` of the views that a manga can have.
.. warning::
The MangaDex API does not return views yet, so specifying something for this parameter will result in
:class:`.NotImplementedError` being raised.
Example intervals:
.. code-block:: python
from asyncdex import Interval
min_interval = Interval(min=100)
max_interval = Interval(max=25000)
both = Interval(100, 25000)
:type views: Interval[int]
:param has_number: Only select chapters with valid numbers.
:type has_number: bool
:param chapter_number_range: An :class:`.Interval` of the number of the chapter.
.. note::
Chapters without a number will be given a provisional number of 0 when sorted.
Example intervals:
.. code-block:: python
from asyncdex import Interval
min_interval = Interval(min=2)
max_interval = Interval(max=20.5)
both = Interval(2, 20.5)
:type chapter_number_range: Interval[float]
:param chapter_numbers: An :class:`.InclusionExclusionPair` denoting the chapter numbers that are either
included or excluded.
.. note::
Chapters without a number will be given a provisional number of 0 when sorted.
Example inclusion/exclusion pairs:
.. code-block:: python
from asyncdex import InclusionExclusionPair
include = InclusionExclusionPair(include=[5, 6])
exclude = InclusionExclusionPair(exclude=[7, 8, 9.5])
:type chapter_numbers: InclusionExclusionPair[float]
:param remove_duplicates: Whether or not to remove duplicate chapters, i.e. chapters with the same chapter number.
.. note::
This will not take languages into consideration. Make sure to specify a locale in the ``languages``
parameter if you want duplicates filtered for a specific locale.
:type remove_duplicates: bool
:param duplicate_strategy: The list of strategies used to resolve duplicates. See the values in
:class:`.DuplicateResolutionAlgorithm` to find the possible algorithms. By default, the strategy of
choosing the previous group and the strategy of choosing the first chapter chronologically when there is
no previous group will be used.
.. note::
If an adequate strategy is not found for dealing with certain chapters, the fallback mechanism of
selecting the chapter that was created first will be used.
:type duplicate_strategy: List[DuplicateResolutionAlgorithm]
:param duplicate_strategy_groups: The groups to use for :attr:`.DuplicateResolutionAlgorithm.SPECIFIC_GROUP`.
.. note::
If the group is not present in all the chapters for a specific number, an alternate resolution
algorithm will be used. Use the ``include_groups`` param if you only want chapters from that group.
:type duplicate_strategy_groups: List[Group]
:param duplicate_strategy_users: The users to use for :attr:`.DuplicateResolutionAlgorithm.SPECIFIC_USER`.
.. note::
If the user is not present in all the chapters for a specific number, an alternate resolution
algorithm will be used. Use the ``include_users`` param if you only want chapters from that user.
:type duplicate_strategy_users: List[User]
:param users: An :class:`.InclusionExclusionPair` denoting the users to include/exclude from the listing.
:type users: InclusionExclusionPair[User]
:param groups: An :class:`.InclusionExclusionPair` denoting the groups to include/exclude from the listing.
:type groups: InclusionExclusionPair[Group]
:param read: Whether or not the manga is read.
.. versionadded:: 0.5
:type read: bool
:param volumes: An :class:`.InclusionExclusionPair` denoting the volumes to include/exclude from the listing.
.. versionadded:: 0.5
:type volumes: InclusionExclusionPair[int]
:return: A filtered :class:`.ChapterList`.
.. note::
The filtered list is not cached in :attr:`.Manga.chapters`.
:rtype: ChapterList
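
Example (a minimal sketch; ``manga`` is a hypothetical :class:`.Manga` whose
chapters were already fetched):

.. code-block:: python

    from asyncdex import Interval

    english_recent = manga.chapters.filter(
        languages=["en"],
        chapter_number_range=Interval(1, 50),
        remove_duplicates=True,
    )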
"""
base: Iterable[Chapter] = self.copy()
options = (
languages,
creation_time,
update_time,
publish_time,
views,
has_number,
chapter_number_range,
chapter_numbers,
duplicate_strategy,
duplicate_strategy_groups,
duplicate_strategy_users,
groups,
users,
read,
volumes,
)
if options.count(None) == len(options) and not remove_duplicates:
return ChapterList(self.manga, entries=self.copy())
duplicate_strategy = duplicate_strategy or [
DuplicateResolutionAlgorithm.PREVIOUS_GROUP,
DuplicateResolutionAlgorithm.CREATION_DATE_ASC,
]
if languages:
base = filter(lambda chapter: chapter.language in languages, base)
if has_number:
base = filter(lambda chapter: chapter.number is not None, base)
if creation_time:
base = filter(lambda chapter: chapter.created_at in creation_time, base)
if update_time:
    # updated_at / publish_at are assumed attribute names, mirroring the
    # updatedAtSince / publishAtSince parameters used in get()
    base = filter(lambda chapter: chapter.updated_at in update_time, base)
if publish_time:
    base = filter(lambda chapter: chapter.publish_at in publish_time, base)
if views:
raise NotImplementedError("Views not implemented in the MangaDex API.")
# base = filter(lambda chapter: chapter.views in views, base)
if chapter_number_range:
base = filter(lambda chapter: chapter.number in chapter_number_range, base)
if chapter_numbers:
base = filter(lambda chapter: chapter_numbers.matches_include_exclude_pair(chapter.number), base)
if groups:
base = filter(
lambda chapter: _check_values(set(chapter.groups), set(groups.include))
and not _check_values(set(chapter.groups), set(groups.exclude)),
base,
)
if users:
base = filter(lambda chapter: users.matches_include_exclude_pair(chapter.user), base)
if read is not None:
base = filter(lambda chapter: chapter.read == read, base)
if volumes:
base = filter(lambda chapter: volumes.matches_include_exclude_pair(chapter.volume), base)
final = list(base)
if remove_duplicates:
final = _resolve_duplicates(final, duplicate_strategy, duplicate_strategy_groups, duplicate_strategy_users)
return type(self)(self.manga, entries=final)
def __repr__(self) -> str:
"""Provide a string representation of the object.
:return: The string representation
:rtype: str
"""
return f"{type(self).__name__}{super().__repr__()}"
def sort(self, *, key: Optional[Callable[[Chapter], Any]] = None, reverse: bool = False):
"""Sort the ChapterList. This uses a natural sorting algorithm to sort the chapters.
:param key: An optional key if you want to override the sorting key used by the class.
:type key: Callable[[Chapter], Any]
:param reverse: Whether or not to reverse the list.
:type reverse: bool
"""
super().sort(key=key or natsort_keygen(key=lambda chapter: chapter.name), reverse=reverse)
async def download_all(
self,
*,
skip_bad: bool = True,
folder_format: str = "{manga}/{chapter_num}{separator}{title}",
file_format: str = "{num}",
as_bytes_list: bool = False,
overwrite: bool = True,
retries: int = 3,
use_data_saver: bool = False,
ssl_only: bool = False,
) -> Dict[Chapter, Optional[List[str]]]:
"""Download all chapters in the list.
.. versionadded:: 0.4
:param skip_bad: Whether or not to skip bad chapters. Defaults to True.
:type skip_bad: bool
:param folder_format: The format of the folder to create for the chapter. The folder may already exist.
The default format is ``{manga}/{chapter_num}{separator}{title}``.
.. note::
Specify ``.`` if you want to save the pages in the current folder.
Available variables:
* ``{manga}``: The name of the manga. If the chapter's manga object does not contain a title object,
it will be fetched.
* ``{chapter_num}``: The number of the chapter, if it exists.
* ``{separator}``: A separator if both the chapter's number and title exists.
* ``{title}``: The title of the chapter, if it exists.
:type folder_format: str
:param file_format: The format of the individual image file names. The default format is ``{num}``.
.. note::
The file extension is applied automatically from the real file name. There is no need to include it.
Available variables:
* ``{num}``: The numbering of the image files starting from 1. This respects the order the images are in
inside of :attr:`.page_names`.
* ``{num0}``: The same as ``{num}`` but starting from 0.
* ``{name}``: The actual filename of the image from :attr:`.page_names`, without the file extension.
:type file_format: str
:param as_bytes_list: Whether or not to return the pages as a list of raw bytes. Setting this parameter to
``True`` will ignore the value of the ``folder_format`` parameter.
:type as_bytes_list: bool
:param overwrite: Whether or not to override existing files with the same name as the page. Defaults to
``True``.
:type overwrite: bool
:param retries: How many times to retry a chapter if a MD@H node does not let us download the pages.
Defaults to ``3``.
:type retries: int
:param use_data_saver: Whether or not to use the data saver pages or the normal pages. Defaults to ``False``.
:type use_data_saver: bool
:param ssl_only: Whether or not the given URL has port ``443``. Useful if your firewall blocks outbound
connections to ports that are not port ``443``. Defaults to ``False``.
.. note::
This will lower the pool of available clients and can cause higher download times.
:type ssl_only: bool
:raises: :class:`aiohttp.ClientResponseError` if there is an error after all retries are exhausted.
:return: A dictionary mapping consisting of :class:`.Chapter` objects as keys and the data from that chapter's
:meth:`.download_chapter` method. If ``skip_bad`` is True, chapters with exceptions will have ``None``
instead of a list of bytes.
:rtype: Dict[Chapter, Optional[List[str]]]
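
Example (a minimal sketch; ``manga`` is a hypothetical :class:`.Manga` whose
chapters were already fetched):

.. code-block:: python

    results = await manga.chapters.download_all(use_data_saver=True)
    failed = [chapter for chapter, pages in results.items() if pages is None]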
"""
tasks = [
asyncio.create_task(
item.download_chapter(
folder_format=folder_format,
file_format=file_format,
as_bytes_list=as_bytes_list,
overwrite=overwrite,
retries=retries,
use_data_saver=use_data_saver,
ssl_only=ssl_only,
)
)
for item in self
]
data = await asyncio.gather(*tasks, return_exceptions=skip_bad)
return_mapping = {}
for num, item in enumerate(data):
if isinstance(item, BaseException):
item = None
return_mapping[self[num]] = item
return return_mapping
def group_by_volumes(self) -> Dict[Optional[str], "ChapterList"]:
"""Creates a dictionary mapping volume numbers to chapters.
.. versionadded:: 0.5
:return: A dictionary where the keys are volume numbers and the values are a list of :class:`.Chapter` objects.
:rtype: Dict[Optional[str], ChapterList]
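
Example (hypothetical):

.. code-block:: python

    by_volume = manga.chapters.group_by_volumes()
    volume_one = by_volume.get("1", [])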
"""
dd = defaultdict(partial(ChapterList, self.manga))
for item in self:
dd[item.volume].append(item)
return dict(dd)
def group_by_numbers(self) -> Dict[Optional[str], "ChapterList"]:
"""Creates a dictionary mapping chapter numbers to chapters.
.. versionadded:: 0.5
:return: A dictionary where the keys are chapter numbers and the values are a list of :class:`.Chapter` objects.
:rtype: Dict[Optional[str], ChapterList]
"""
dd = defaultdict(partial(ChapterList, self.manga))
for item in self:
dd[item.number].append(item)
return dict(dd)
def group_by_volume_and_chapters(self) -> Dict[Tuple[Optional[str], Optional[str]], "ChapterList"]:
"""Creates a dictionary mapping volume numbers and chapter numbers to chapters.
.. versionadded:: 0.5
:return: A dictionary where the keys are a tuple of volume and chapter numbers and the values are a list of
:class:`.Chapter` objects.
:rtype: Dict[Tuple[Optional[str], Optional[str]], ChapterList]
"""
dd = defaultdict(partial(ChapterList, self.manga))
for item in self:
dd[(item.volume, item.number)].append(item)
return dict(dd)
def calculate_aggregate(self) -> MangaAggregate:
"""Calculates an aggregate of the chapters contained.
.. versionadded:: 0.5
:return: The aggregate of the chapters.
:rtype: MangaAggregate
"""
ma = MangaAggregate()
for (volume_number, chapter_number), chapter in self.group_by_volume_and_chapters().items():
ma.setdefault(volume_number, VolumeAggregate()).setdefault(chapter_number, 0)
ma[volume_number][chapter_number] += 1
return ma
def languages(self) -> List[str]:
"""Get the list of languages that exist in the chapter list.
.. versionadded:: 0.5
:return: A list of languages.
:rtype: List[str]
"""
return list({item.language for item in self})
def _update_read_data(self, data: Dict[str, Union[str, List[str]]]):
id_mapping = self.id_map()
for id in data["data"]:
if id in id_mapping:
id_mapping[id].read = True
async def get_read(self):
"""Gets the list of chapters which are read. Chapters whose IDs are found in this list will be set as read.
|auth|
.. versionadded:: 0.5
"""
self.manga.client.raise_exception_if_not_authenticated("GET", routes["manga_read"])
r = await self.manga.client.request("GET", routes["manga_read"].format(id=self.manga.id))
self.manga._check_404(r)
json = await r.json()
r.close()
self._update_read_data(json)
async def fetch_all(self):
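"""Fetch full data for every chapter in this list using the client's batch chapter request."""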
await self.manga.client.batch_chapters(*self) | PypiClean |
# /DLTA-AI-1.1.tar.gz/DLTA-AI-1.1/DLTA_AI_app/mmdetection/mmdet/models/plugins/pixel_decoder.py
import torch
import torch.nn as nn
import torch.nn.functional as F
from mmcv.cnn import PLUGIN_LAYERS, Conv2d, ConvModule, caffe2_xavier_init
from mmcv.cnn.bricks.transformer import (build_positional_encoding,
build_transformer_layer_sequence)
from mmcv.runner import BaseModule, ModuleList
@PLUGIN_LAYERS.register_module()
class PixelDecoder(BaseModule):
"""Pixel decoder with a structure like fpn.
Args:
in_channels (list[int] | tuple[int]): Number of channels in the
input feature maps.
feat_channels (int): Number channels for feature.
out_channels (int): Number channels for output.
norm_cfg (:obj:`mmcv.ConfigDict` | dict): Config for normalization.
Defaults to dict(type='GN', num_groups=32).
act_cfg (:obj:`mmcv.ConfigDict` | dict): Config for activation.
Defaults to dict(type='ReLU').
encoder (:obj:`mmcv.ConfigDict` | dict): Config for transorformer
encoder.Defaults to None.
positional_encoding (:obj:`mmcv.ConfigDict` | dict): Config for
transformer encoder position encoding. Defaults to
dict(type='SinePositionalEncoding', num_feats=128,
normalize=True).
init_cfg (:obj:`mmcv.ConfigDict` | dict): Initialization config dict.
Default: None
"""
def __init__(self,
in_channels,
feat_channels,
out_channels,
norm_cfg=dict(type='GN', num_groups=32),
act_cfg=dict(type='ReLU'),
init_cfg=None):
super().__init__(init_cfg=init_cfg)
self.in_channels = in_channels
self.num_inputs = len(in_channels)
self.lateral_convs = ModuleList()
self.output_convs = ModuleList()
self.use_bias = norm_cfg is None
for i in range(0, self.num_inputs - 1):
lateral_conv = ConvModule(
in_channels[i],
feat_channels,
kernel_size=1,
bias=self.use_bias,
norm_cfg=norm_cfg,
act_cfg=None)
output_conv = ConvModule(
feat_channels,
feat_channels,
kernel_size=3,
stride=1,
padding=1,
bias=self.use_bias,
norm_cfg=norm_cfg,
act_cfg=act_cfg)
self.lateral_convs.append(lateral_conv)
self.output_convs.append(output_conv)
self.last_feat_conv = ConvModule(
in_channels[-1],
feat_channels,
kernel_size=3,
padding=1,
stride=1,
bias=self.use_bias,
norm_cfg=norm_cfg,
act_cfg=act_cfg)
self.mask_feature = Conv2d(
feat_channels, out_channels, kernel_size=3, stride=1, padding=1)
def init_weights(self):
"""Initialize weights."""
for i in range(0, self.num_inputs - 2):
caffe2_xavier_init(self.lateral_convs[i].conv, bias=0)
caffe2_xavier_init(self.output_convs[i].conv, bias=0)
caffe2_xavier_init(self.mask_feature, bias=0)
caffe2_xavier_init(self.last_feat_conv, bias=0)
def forward(self, feats, img_metas):
"""
Args:
feats (list[Tensor]): Feature maps of each level. Each has
shape of (batch_size, c, h, w).
img_metas (list[dict]): List of image information. Pass in
for creating more accurate padding mask. Not used here.
Returns:
tuple: a tuple containing the following:
- mask_feature (Tensor): Shape (batch_size, c, h, w).
- memory (Tensor): Output of last stage of backbone.\
Shape (batch_size, c, h, w).
"""
y = self.last_feat_conv(feats[-1])
for i in range(self.num_inputs - 2, -1, -1):
x = feats[i]
cur_feat = self.lateral_convs[i](x)
y = cur_feat + \
F.interpolate(y, size=cur_feat.shape[-2:], mode='nearest')
y = self.output_convs[i](y)
mask_feature = self.mask_feature(y)
memory = feats[-1]
return mask_feature, memory
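# A minimal shape-level sketch (not part of the original file; the channel
# counts are illustrative of a ResNet-style backbone):
#
#     decoder = PixelDecoder(
#         in_channels=[256, 512, 1024, 2048],
#         feat_channels=256,
#         out_channels=256)
#     decoder.init_weights()
#     feats = [torch.rand(2, c, 64 // 2 ** i, 64 // 2 ** i)
#              for i, c in enumerate([256, 512, 1024, 2048])]
#     mask_feature, memory = decoder(feats, img_metas=[])
#     # mask_feature: (2, 256, 64, 64); memory is feats[-1]: (2, 2048, 8, 8)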
@PLUGIN_LAYERS.register_module()
class TransformerEncoderPixelDecoder(PixelDecoder):
"""Pixel decoder with transormer encoder inside.
Args:
in_channels (list[int] | tuple[int]): Number of channels in the
input feature maps.
feat_channels (int): Number of channels for features.
out_channels (int): Number of channels for output.
norm_cfg (:obj:`mmcv.ConfigDict` | dict): Config for normalization.
Defaults to dict(type='GN', num_groups=32).
act_cfg (:obj:`mmcv.ConfigDict` | dict): Config for activation.
Defaults to dict(type='ReLU').
encoder (:obj:`mmcv.ConfigDict` | dict): Config for transformer
    encoder. Defaults to None.
positional_encoding (:obj:`mmcv.ConfigDict` | dict): Config for
transformer encoder position encoding. Defaults to
dict(type='SinePositionalEncoding', num_feats=128,
normalize=True).
init_cfg (:obj:`mmcv.ConfigDict` | dict): Initialization config dict.
Default: None
"""
def __init__(self,
in_channels,
feat_channels,
out_channels,
norm_cfg=dict(type='GN', num_groups=32),
act_cfg=dict(type='ReLU'),
encoder=None,
positional_encoding=dict(
type='SinePositionalEncoding',
num_feats=128,
normalize=True),
init_cfg=None):
super(TransformerEncoderPixelDecoder, self).__init__(
in_channels,
feat_channels,
out_channels,
norm_cfg,
act_cfg,
init_cfg=init_cfg)
self.last_feat_conv = None
self.encoder = build_transformer_layer_sequence(encoder)
self.encoder_embed_dims = self.encoder.embed_dims
assert self.encoder_embed_dims == feat_channels, 'embed_dims({}) of ' \
    'transformer encoder must equal feat_channels({})'.format(
        self.encoder_embed_dims, feat_channels)
self.positional_encoding = build_positional_encoding(
positional_encoding)
self.encoder_in_proj = Conv2d(
in_channels[-1], feat_channels, kernel_size=1)
self.encoder_out_proj = ConvModule(
feat_channels,
feat_channels,
kernel_size=3,
stride=1,
padding=1,
bias=self.use_bias,
norm_cfg=norm_cfg,
act_cfg=act_cfg)
def init_weights(self):
"""Initialize weights."""
for i in range(0, self.num_inputs - 2):
caffe2_xavier_init(self.lateral_convs[i].conv, bias=0)
caffe2_xavier_init(self.output_convs[i].conv, bias=0)
caffe2_xavier_init(self.mask_feature, bias=0)
caffe2_xavier_init(self.encoder_in_proj, bias=0)
caffe2_xavier_init(self.encoder_out_proj.conv, bias=0)
for p in self.encoder.parameters():
if p.dim() > 1:
nn.init.xavier_uniform_(p)
def forward(self, feats, img_metas):
"""
Args:
feats (list[Tensor]): Feature maps of each level. Each has
shape of (batch_size, c, h, w).
img_metas (list[dict]): List of image information. Pass in
for creating more accurate padding mask.
Returns:
tuple: a tuple containing the following:
- mask_feature (Tensor): shape (batch_size, c, h, w).
- memory (Tensor): shape (batch_size, c, h, w).
"""
feat_last = feats[-1]
bs, c, h, w = feat_last.shape
input_img_h, input_img_w = img_metas[0]['batch_input_shape']
padding_mask = feat_last.new_ones((bs, input_img_h, input_img_w),
dtype=torch.float32)
for i in range(bs):
img_h, img_w, _ = img_metas[i]['img_shape']
padding_mask[i, :img_h, :img_w] = 0
padding_mask = F.interpolate(
padding_mask.unsqueeze(1),
size=feat_last.shape[-2:],
mode='nearest').to(torch.bool).squeeze(1)
pos_embed = self.positional_encoding(padding_mask)
feat_last = self.encoder_in_proj(feat_last)
# (batch_size, c, h, w) -> (num_queries, batch_size, c)
feat_last = feat_last.flatten(2).permute(2, 0, 1)
pos_embed = pos_embed.flatten(2).permute(2, 0, 1)
# (batch_size, h, w) -> (batch_size, h*w)
padding_mask = padding_mask.flatten(1)
memory = self.encoder(
query=feat_last,
key=None,
value=None,
query_pos=pos_embed,
query_key_padding_mask=padding_mask)
# (num_queries, batch_size, c) -> (batch_size, c, h, w)
memory = memory.permute(1, 2, 0).view(bs, self.encoder_embed_dims, h,
w)
y = self.encoder_out_proj(memory)
for i in range(self.num_inputs - 2, -1, -1):
x = feats[i]
cur_feat = self.lateral_convs[i](x)
y = cur_feat + \
F.interpolate(y, size=cur_feat.shape[-2:], mode='nearest')
y = self.output_convs[i](y)
mask_feature = self.mask_feature(y)
return mask_feature, memory | PypiClean |
# /BPMN_RPA-7.1.2.tar.gz/BPMN_RPA-7.1.2/BPMN_RPA/Scripts/FTP.py
import ftplib
import pickle
# The BPMN-RPA FTP module is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# The BPMN-RPA FTP module is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
#
# The BPMN-RPA FTP module is based on the Python ftplib module, which is licensed under the MIT license:
# Copyright (c) 2007 Giampaolo Rodola
class FTP:
def __init__(self, host, user, password):
"""
Create a new FTP connection.
:param host: The host name or IP address of the FTP server.
:param user: The user name to connect with.
:param password: The password to connect with.
"""
self.user = user
self.password = password
self.ftp = None
self.host = host
self.__connect__()
def __connect__(self):
"""
Internal function to connect to the FTP server.
"""
self.ftp = ftplib.FTP()
# open FTP connection
self.ftp.connect(self.host, 21)
# login
self.ftp.login(self.user, self.password)
def __is_picklable__(self, obj) -> bool:
    """
    Internal function to determine if the object is picklable.
:param obj: The object to check.
:return: True or False
"""
try:
pickle.dumps(obj)
return True
except Exception:
return False
def __getstate__(self):
"""
Internal function for serialization
"""
state = self.__dict__.copy()
for key, val in state.items():
if not self.__is_picklable__(val):
state[key] = str(val)
return state
def __setstate__(self, state):
"""
Internal function for deserialization
:param state: The state to set to the 'self' object of the class
"""
self.__dict__.update(state)
self.__connect__()
def close_ftp(self):
"""
Close FTP connection.
"""
self.ftp.close()
def download_ftp_file(self, remote_file, local_file):
"""
Download a file from the FTP server.
:param remote_file: The full path to the file to download.
:param local_file: The full path to the file to create.
"""
# download file
with open(local_file, 'wb') as f:
self.ftp.retrbinary('RETR ' + remote_file, f.write)
def upload_ftp_file(self, local_file, remote_file):
"""
Upload a file to the FTP server.
:param local_file: The full path to the file to upload.
:param remote_file: The full path to the file to create.
"""
# upload file (close the local handle when done)
with open(local_file, 'rb') as f:
    self.ftp.storbinary("STOR " + remote_file, f)
def list_ftp_files(self, path):
"""
List the files in a folder on the FTP server.
:param path: The full path to the folder to list.
:return: A list of files in the folder.
"""
# list files in directory
self.ftp.cwd(path)
return self.ftp.nlst()
def list_ftp_files_with_wildcard(self, path, wildcard):
"""
List the files in a folder whose names start with a given prefix.
:param path: The full path to the folder to list.
:param wildcard: The prefix that returned file names must start with.
:return: A list of files in the folder.
"""
files = []
self.ftp.cwd(path)
for file in self.ftp.nlst():
if file.startswith(wildcard):
files.append(file)
return files
def copy_ftp_file(self, source, destination):
"""
Copy a file from the FTP server to a local destination.
:param source: The full path to the remote file to copy.
:param destination: The full local path to write the copy to.
"""
# download file
with open(destination, 'wb') as f:
self.ftp.retrbinary('RETR ' + source, f.write)
def move_ftp_file(self, source, destination):
"""
Move a file off the FTP server: download it to a local destination, then delete the remote original.
:param source: The full path to the remote file to move.
:param destination: The full local path to move the file to.
"""
# download file
with open(destination, 'wb') as f:
self.ftp.retrbinary('RETR ' + source, f.write)
# delete file
self.ftp.delete(source)
def delete_ftp_file(self, path):
"""
Delete a file from the FTP server.
:param path: The full path to the file to delete.
"""
# delete file
self.ftp.delete(path)
def delete_ftp_folder(self, path):
"""
Delete a folder on the FTP server.
:param path: The full path to the folder to delete.
"""
# delete folder (RMD requires the folder to be empty on most servers)
self.ftp.rmd(path)
def create_ftp_folder(self, path):
"""
Create a folder on the FTP server.
:param path: The full path to the folder to create.
"""
# create folder
self.ftp.mkd(path)
def rename_ftp_file(self, source, destination):
"""
Rename a file on the FTP server.
:param source: The full path to the file to rename.
:param destination: The full path to the destination.
"""
# rename file
self.ftp.rename(source, destination)
def rename_ftp_folder(self, source, destination):
"""
Rename a folder on the FTP server.
:param source: The full path to the folder to rename.
:param destination: The full path to the destination.
"""
# rename folder
self.ftp.rename(source, destination)
def get_ftp_file_size(self, path):
"""
Get the size of a file on the FTP server.
:param path: The full path to the file.
:return: The size of the file.
"""
# get file size
return self.ftp.size(path)
def get_ftp_folder_size(self, path):
"""
Get the size of a folder on the FTP server.
:param path: The full path to the folder.
:return: The size of the folder.
"""
# get folder size
size = 0
self.ftp.cwd(path)
for file in self.ftp.nlst():
size += self.ftp.size(file)
return size
def get_ftp_file_timestamp(self, path):
"""
Get the timestamp of a file on the FTP server.
:param path: The full path to the file.
:return: The timestamp of the file.
"""
# get file timestamp
return self.ftp.sendcmd("MDTM " + path)
def get_ftp_folder_timestamp(self, path):
"""
Get the timestamp of a folder on the FTP server.
:param path: The full path to the folder.
:return: The timestamp of the folder.
"""
# get folder timestamp
return self.ftp.sendcmd("MDTM " + path)
def get_ftp_file_permissions(self, path):
"""
Get the permissions of a file on the FTP server.
:param path: The full path to the file.
:return: The permissions of the file.
"""
# get file permissions
return self.ftp.sendcmd("SITE CHMOD " + path)
def get_ftp_folder_permissions(self, path):
"""
Get the permissions of a folder on the FTP server.
:param path: The full path to the folder.
:return: The permissions of the folder.
"""
# get folder permissions
return self.ftp.sendcmd("SITE CHMOD " + path)
def set_ftp_file_permissions(self, path, permissions):
"""
Set the permissions of a file on the FTP server.
:param path: The full path to the file.
:param permissions: The permissions to set.
"""
# set file permissions
self.ftp.sendcmd("SITE CHMOD " + permissions + " " + path)
def set_ftp_folder_permissions(self, path, permissions):
"""
Set the permissions of a folder on the FTP server.
:param path: The full path to the folder.
:param permissions: The permissions to set.
"""
# set folder permissions
self.ftp.sendcmd("SITE CHMOD " + permissions + " " + path)
def get_ftp_file_owner(self, path):
"""
Get the owner of a file on the FTP server.
:param path: The full path to the file.
:return: The owner of the file.
"""
# get file owner
return self.ftp.sendcmd("SITE CHOWN " + path)
def get_ftp_folder_owner(self, path):
"""
Get the owner of a folder on the FTP server.
:param path: The full path to the folder.
:return: The owner of the folder.
"""
# get folder owner
return self.ftp.sendcmd("SITE CHOWN " + path)
def set_ftp_file_owner(self, path, owner):
"""
Set the owner of a file on the FTP server.
:param path: The full path to the file.
:param owner: The owner to set.
"""
# set file owner
self.ftp.sendcmd("SITE CHOWN " + owner + " " + path)
def set_ftp_folder_owner(self, path, owner):
"""
Set the owner of a folder on the FTP server.
:param path: The full path to the folder.
:param owner: The owner to set.
"""
# set folder owner
self.ftp.sendcmd("SITE CHOWN " + owner + " " + path)
def get_ftp_file_group(self, path):
"""
Get the group of a file on the FTP server.
:param path: The full path to the file.
:return: The group of the file.
"""
# get file group
return self.ftp.sendcmd("SITE CHGRP " + path)
def get_ftp_folder_group(self, path):
"""
Get the group of a folder on the FTP server.
:param path: The full path to the folder.
:return: The group of the folder.
"""
# get folder group
return self.ftp.sendcmd("SITE CHGRP " + path)
def set_ftp_file_group(self, path, group):
"""
Set the group of a file on the FTP server.
:param path: The full path to the file.
:param group: The group to set.
"""
# set file group
self.ftp.sendcmd("SITE CHGRP " + group + " " + path)
def set_ftp_folder_group(self, path, group):
"""
Set the group of a folder on the FTP server.
:param path: The full path to the folder.
:param group: The group to set.
"""
# set folder group
self.ftp.sendcmd("SITE CHGRP " + group + " " + path)
def get_ftp_file_md5(self, path):
"""
Get the MD5 hash of a file on the FTP server.
:param path: The full path to the file.
:return: The MD5 hash of the file.
"""
# get file md5
return self.ftp.sendcmd("SITE MD5 " + path)
def get_ftp_folder_md5(self, path):
"""
Get the MD5 hash of a folder on the FTP server.
:param path: The full path to the folder.
:return: The MD5 hash of the folder.
"""
# get folder md5
return self.ftp.sendcmd("SITE MD5 " + path)
def change_working_directory(self, path):
"""
Change the working directory on the FTP server.
:param path: The full path to the folder.
"""
# change working directory
self.ftp.cwd(path)
def get_working_directory(self):
"""
Get the working directory on the FTP server.
:return: The working directory.
"""
# get working directory
return self.ftp.pwd() | PypiClean |
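# A minimal usage sketch (not part of the original file; host and credentials
# are placeholders):
#
#     ftp = FTP("ftp.example.com", "user", "password")
#     ftp.upload_ftp_file("report.pdf", "/reports/report.pdf")
#     print(ftp.list_ftp_files("/reports"))
#     ftp.download_ftp_file("/reports/report.pdf", "copy.pdf")
#     ftp.close_ftp()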
# /IsoCon-0.3.3.tar.gz/IsoCon-0.3.3/modules/correction_module.py
import signal
from multiprocessing import Pool
import multiprocessing as mp
import sys
import math
import copy
import random
from collections import Counter
from modules.functions import create_multialignment_matrix, create_position_frequency_matrix
def correct_strings(partition_alignments, seq_to_acc, ccs_dict, step, nr_cores = 1, verbose = False):
S_prime = {}
S_prime_quality = {}
partition_unique_seq_to_acc = {}
for m, partition in partition_alignments.items():
partition_unique_seq_to_acc[m] = {}
partition_unique_seq_to_acc[m][m] = seq_to_acc[m]
for s in partition:
if s in seq_to_acc:
s_accessions = seq_to_acc[s]
partition_unique_seq_to_acc[m][s] = s_accessions
if ccs_dict:
partitioned_ccs_dict = {}
for m, partition in partition_alignments.items():
partitioned_ccs_dict[m] = {}
for s in partition:
if s in seq_to_acc:
s_accessions = seq_to_acc[s]
for s_acc in s_accessions:
partitioned_ccs_dict[m][s_acc] = ccs_dict[s_acc]
else:
partitioned_ccs_dict = {}
for m, partition in partition_alignments.items():
partitioned_ccs_dict[m] = {}
if nr_cores == 1:
for m, partition in sorted(partition_alignments.items()):
S_prime_partition, S_prime_quality_vectors = correct_to_consensus_helper( ((m, partition, partition_unique_seq_to_acc[m], step, verbose, partitioned_ccs_dict[m]), {}) )
for acc, s in S_prime_partition.items():
assert acc not in S_prime
S_prime[acc] = s
for acc, qual_vector in S_prime_quality_vectors.items():
S_prime_quality[acc] = qual_vector
else:
####### parallelize statistical tests #########
# pool = Pool(processes=mp.cpu_count())
# ignore SIGINT while the pool forks so workers inherit the ignore handler,
# then restore it so the parent can still catch KeyboardInterrupt
original_sigint_handler = signal.signal(signal.SIGINT, signal.SIG_IGN)
pool = Pool(processes=nr_cores)
signal.signal(signal.SIGINT, original_sigint_handler)
try:
res = pool.map_async(correct_to_consensus_helper, [ ( (m, partition, partition_unique_seq_to_acc[m], step, verbose, partitioned_ccs_dict[m]), {}) for m, partition in partition_alignments.items() if len(partition) > 1 ] )
S_prime_partition_dicts =res.get(999999999) # Without the timeout this blocking call ignores all signals.
except KeyboardInterrupt:
print("Caught KeyboardInterrupt, terminating workers")
pool.terminate()
sys.exit()
else:
# print("Normal termination")
pool.close()
pool.join()
for S_prime_partition, S_prime_quality_vectors in S_prime_partition_dicts:
for acc, s in S_prime_partition.items():
assert acc not in S_prime
S_prime[acc] = s
for acc, qual_vector in S_prime_quality_vectors.items():
S_prime_quality[acc] = qual_vector
return S_prime, S_prime_quality
def correct_to_consensus_helper(arguments):
args, kwargs = arguments
if args[5]:
print("Correction with ccs probabilities")
# correct_to_consensus_ccs_qual takes (m, partition, seq_to_acc, step, ccs_dict),
# so the verbose flag at args[4] is skipped
return correct_to_consensus_ccs_qual(*args[:4], args[5], **kwargs)
else:
return correct_to_consensus(*args[:-1], **kwargs), {}
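# Note: each work item handed to Pool.map_async in correct_strings is packaged
# as ((m, partition, seq_to_acc_map, step, verbose, ccs_dict_partition), {}),
# i.e. a (positional-args, keyword-args) pair that correct_to_consensus_helper
# unpacks above.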
def annotate_with_quality_values(alignment_matrix, seq_to_acc, ccs_dict):
alignment_matrix_of_qualities = {}
alignment_matrix_of_max_qualities = {}
for s in alignment_matrix:
s_accessions = seq_to_acc[s]
all_quals = [ ccs_dict[s_acc].qual for s_acc in s_accessions ]
sum_quals_vector = [sum(t) for t in zip(*all_quals)]
max_quals_vector = [max(t) for t in zip(*all_quals)]
# sum all probabilities from all reads equal to s here using all accessions in s to acc s_to_acc
list_sum_quals = []
list_max_quals = []
current_pos_in_s = 0
for j in range(len(alignment_matrix[s])):
# print(current_pos_in_s, len(sum_quals_vector) )
current_quality = sum_quals_vector[current_pos_in_s]
current_max_quality = max_quals_vector[current_pos_in_s]
list_sum_quals.append(current_quality)
list_max_quals.append(current_max_quality)
char_at_pos = alignment_matrix[s][j]
if char_at_pos != "-":
if current_pos_in_s < len(sum_quals_vector) - 1:
current_pos_in_s += 1
alignment_matrix_of_qualities[s] = list_sum_quals
alignment_matrix_of_max_qualities[s] = list_max_quals
PFM_qualities = []
PFM_max_qualities = []
for j in range(len(alignment_matrix[s])): # for each column
PFM_qualities.append({"A": 0, "C": 0, "G": 0, "T": 0, "-": 0})
PFM_max_qualities.append({"A": 0, "C": 0, "G": 0, "T": 0, "-": 0})
for s in alignment_matrix_of_qualities:
nucl = alignment_matrix[s][j]
sum_quality_at_position = alignment_matrix_of_qualities[s][j]
PFM_qualities[j][nucl] += sum_quality_at_position
max_quality_at_position = alignment_matrix_of_max_qualities[s][j]
PFM_max_qualities[j][nucl] += max_quality_at_position
# get all the differences to majority here
list_of_majority_nucleotides = []
for j in range(len(PFM_qualities)):
max_v_j = max(PFM_qualities[j], key = lambda x: PFM_qualities[j][x] )
majority_count = PFM_qualities[j][max_v_j]
max_v_j_set = set([v for v in PFM_qualities[j] if PFM_qualities[j][v] == majority_count ])
all_major = "".join(max_v_j_set)
list_of_majority_nucleotides.append(all_major)
assert len(list_of_majority_nucleotides) == len(PFM_qualities)
global_all_difference_qualities = []
for s in alignment_matrix:
s_aligned_vector = alignment_matrix[s]
for j in range(len(s_aligned_vector)):
if s_aligned_vector[j] not in list_of_majority_nucleotides[j] and len(list_of_majority_nucleotides[j]) == 1:
global_all_difference_qualities.append(alignment_matrix_of_qualities[s][j])
global_all_difference_qualities.sort()
if len(global_all_difference_qualities) > 0:
global_correction_threshold = global_all_difference_qualities[ int( math.ceil( len(global_all_difference_qualities)/2.0) ) - 1 ]
else:
global_correction_threshold = -1
print("nothing to correct!")
print("GLOBAL QUAL THRESH:", global_correction_threshold)
return alignment_matrix_of_qualities, PFM_qualities, PFM_max_qualities, global_correction_threshold
def correct_to_consensus_ccs_qual(m, partition, seq_to_acc, step, ccs_dict):
S_prime_partition = {}
S_prime_quality_vector = {}
N_t = sum([container_tuple[3] for s, container_tuple in partition.items()]) # total number of sequences in partition
if len(partition) > 1:
# all strings has not converged
alignment_matrix = create_multialignment_matrix(m, partition)
PFM = create_position_frequency_matrix(alignment_matrix, partition)
for s_before in partition:
s_after = "".join([n for n in alignment_matrix[s_before] if n != "-"])
assert s_before == s_after
# print(len(partition), N_t)
alignment_matrix_of_qualities, PFM_qualities, PFM_max_qualities, global_correction_threshold = annotate_with_quality_values(alignment_matrix, seq_to_acc, ccs_dict)
assert len(alignment_matrix_of_qualities) == len(alignment_matrix)
if global_correction_threshold < 0:
return S_prime_partition, S_prime_quality_vector
majority_vector = []
for j in range(len(PFM_qualities)):
max_v_j = max(PFM_qualities[j], key = lambda x: PFM_qualities[j][x] )
majority_count = PFM_qualities[j][max_v_j]
max_v_j_set = set([v for v in PFM_qualities[j] if PFM_qualities[j][v] == majority_count ])
all_major = "".join(max_v_j_set)
majority_vector.append( all_major )
assert len(majority_vector) == len(PFM_qualities)
############################
############################
for s in sorted(partition):
if partition[s][3] > 1: # at least 2 identical sequences --> it's a nearest_neighbor of the partition, has converged, and should not be corrected
print("not correcting converged sequence!")
continue
s_alignment_in_matrix = alignment_matrix[s]
# s_min = 0
for i, p in enumerate(s_alignment_in_matrix):
if p != "-":
s_min = i
break
for i, p in enumerate(s_alignment_in_matrix[::-1]):
if p != "-":
s_max = len(s_alignment_in_matrix) - i
break
print("S", s_min, s_max)
s_quals = alignment_matrix_of_qualities[s]
# ALL POSITIONS with sum of probabilities lower than the largest probability
minority_positions = [ (j,majority_vector[j], s_alignment_in_matrix[j]) for j in range(len(majority_vector)) if s_alignment_in_matrix[j] not in majority_vector[j] and s_min <= j <= s_max ]
minority_positions_correctable = [ j for j in range(len(majority_vector)) if (len(majority_vector[j]) == 1 and majority_vector[j] != s_alignment_in_matrix[j] ) ]
minority_positions_correctable = [ (j, PFM_qualities[j][ s_alignment_in_matrix[j] ]) for j in minority_positions_correctable ]
nr_pos_to_correct = int(math.ceil( len(minority_positions_correctable) * 0.5)) # (step/ float(step +1)) ))
# print("positions to correct:", nr_pos_to_correct)
if nr_pos_to_correct == 0:
print("Edit distance to nearest_neighbor:", partition[s][0], "is nearest_neighbor:", s ==m, "Minority positions:", minority_positions)
continue
if len(minority_positions_correctable) == 0:
print("no unambiguous majority positions")
continue
minority_positions_correctable.sort(key=lambda x: x[1])
print(len(minority_positions_correctable) ,minority_positions_correctable)
_, quality_threshold_to_correct = minority_positions_correctable[ nr_pos_to_correct - 1 ]
minority_positions_to_correct = [ (j, qual_j) for j, qual_j in minority_positions_correctable if qual_j <= quality_threshold_to_correct ]
print(quality_threshold_to_correct, len(minority_positions_to_correct))
# minority_positions_to_correct = [ (j, qual_j) for j, qual_j in minority_positions_correctable if qual_j <= global_correction_threshold ]
# print("actual:", len(minority_positions_to_correct))
# minority_positions_to_correct = sorted(minority_positions_correctable, key=lambda x: x[1])[:nr_pos_to_correct] # sorted list with the smallest probabilities first
# print(minority_positions_to_correct)
s_new = alignment_matrix[s]
s_qual_new = alignment_matrix_of_qualities[s]
for j, qual_j in minority_positions_to_correct:
highest_prob_character_at_j = majority_vector[j]
assert len(majority_vector[j]) == 1
s_new[j] = highest_prob_character_at_j
s_qual_new[j] = PFM_max_qualities[j][highest_prob_character_at_j]
s_modified = "".join([nucl for nucl in s_new if nucl != "-" ])
s_qual_modified = [s_qual_new[j] for j in range(len(s_new)) if s_new[j] != "-" ]
# only unique strings can change in this step
accessions_of_s = seq_to_acc[s]
for acc in accessions_of_s:
S_prime_partition[acc] = s_modified
S_prime_quality_vector[acc] = s_qual_modified
else:
print("Partition converged: Partition size(unique strings):{0}, partition support: {1}.".format(len(partition), N_t))
return S_prime_partition, S_prime_quality_vector
def correct_to_consensus(m, partition, seq_to_acc, step, verbose):
S_prime_partition = {}
N_t = sum([container_tuple[3] for s, container_tuple in partition.items()]) # total number of sequences in partition
# if N_t == 2:
# print("Partition has size", N_t, "no meaningful correction can be done")
# for s, container_tuple in partition.items():
# print(seq_to_acc[s])
if len(partition) > 1 and N_t > 2:
# all strings has not converged
alignment_matrix = create_multialignment_matrix(m, partition)
PFM = create_position_frequency_matrix(alignment_matrix, partition)
for s_before in partition:
s_after = "".join([n for n in alignment_matrix[s_before] if n != "-"])
assert s_before == s_after
# consensus_alignment = [ max(PFM[j], key=lambda k: PFM[j][k]) for j in range(len(PFM))]
# print("nearest_neighbor errors:", math.ceil(min([ partition[s][0] for s in partition if partition[s][3] > 1 or s !=m ]) / 2.0) )
# frozen_positions = get_frozen_positions(alignment_matrix[m])
## TEST LOG ERROR TYPES #######
c_del = 0
c_ins = 0
c_subs = 0
majority_vector = []
temp_majority_string = ""
for j in range(len(PFM)):
max_v_j = max(PFM[j], key = lambda x: PFM[j][x] )
if max_v_j != "-":
temp_majority_string += max_v_j
majority_count = PFM[j][max_v_j]
max_v_j_set = set([v for v in PFM[j] if PFM[j][v] == majority_count ])
all_major = "".join(max_v_j_set)
majority_vector.append( all_major )
if len(max_v_j_set) > 1:
continue # DO not count errors where majority position is ambigous since we don't know the true majority here
for v in PFM[j]:
if max_v_j == "-":
if v != max_v_j:
c_ins += PFM[j][v]
else:
if v != max_v_j:
if v == "-":
c_del += PFM[j][v]
else:
c_subs += PFM[j][v]
if verbose:
print("Partition error types:", c_del, c_ins, c_subs, "depth:", N_t )
assert len(majority_vector) == len(PFM)
############################
for s in sorted(partition):
if partition[s][3] > 1: # at least 2 identical sequences --> it's a nearest_neighbor of the partition, has converged, and should not be corrected
continue
nr_pos_to_correct2 = int(math.ceil(partition[s][0] / 2.0)) #decide how many errors we should correct here
s_alignment_in_matrix = alignment_matrix[s]
minority_positions = [ (j,majority_vector[j], s_alignment_in_matrix[j]) for j in range(len(majority_vector)) if s_alignment_in_matrix[j] not in majority_vector[j] ]
nr_pos_to_correct = int(math.ceil( len([ 1 for j in range(len(majority_vector)) if (len(majority_vector[j]) == 1 and majority_vector[j] != s_alignment_in_matrix[j] ) ]) * 0.5)) # (step/ float(step +1)) ))
# print("positions to correct:", nr_pos_to_correct)
if verbose:
if nr_pos_to_correct == 0:
print("Edit distance to nearest_neighbor:", partition[s][0], "is nearest_neighbor:", s ==m, "Minority positions:", minority_positions)
if nr_pos_to_correct2 > 0 and nr_pos_to_correct == 0:
print("Edit distance to nearest_neighbor: {0}, {1} minority positions, correcting no position. Length partition (unique): {2}, total seqs: {3}".format(partition[s][0], len(minority_positions), len(partition), N_t))
# for s in partition:
# print(s)
# print([ (j,majority_vector[j], s_alignment_in_matrix[j], PFM[j]) for j in range(len(majority_vector)) if majority_vector[j] != s_alignment_in_matrix[j] ])
if nr_pos_to_correct == 0:
continue
# find the position probabilities of the alignment of s in PFM
###################### ORIGINAL CORRECTION ######################################
# pos_freqs_for_s = []
# for j in range(len(PFM)):
# pos_freqs_for_s.append( (j, PFM[j][s_alignment_in_matrix[j]]) )
# pos_freqs_for_s.sort(key=lambda x: x[1]) # sort with respect to smallest frequencies
# pos, highest_freq_of_error_to_correct = pos_freqs_for_s[ nr_pos_to_correct - 1 ]
# end_position_in_list = nr_pos_to_correct
# pp = nr_pos_to_correct
# while pos_freqs_for_s[pp][1] == highest_freq_of_error_to_correct:
# end_position_in_list += 1
# pp += 1
# J = [j for j, freq in random.sample(pos_freqs_for_s[:end_position_in_list], nr_pos_to_correct)]
############################################################
############################################################
########### TEST WEIGHTING EACH MINORITY POSITION BY ITS OBSERVED FREQUENCY THROUGHOUT THE ALIGNMENTS TO THE nearest_neighbor ################
pos_freqs_for_s_mod = []
for j in range(len(PFM)):
majority_variant = majority_vector[j]
v_j = s_alignment_in_matrix[j]
if v_j == majority_variant or len(majority_variant) > 1: # no correction of majority position or ambiguous majority!
continue
else:
if majority_variant == "-":
pos_freqs_for_s_mod.append( (j, PFM[j][v_j] / float(max(c_ins, 1) ) ))
elif v_j == "-":
pos_freqs_for_s_mod.append( (j, PFM[j][v_j] / float(max(c_del, 1) ) ))
else:
pos_freqs_for_s_mod.append( (j, PFM[j][v_j] / float(max(c_subs, 1) ) ))
# max_v_j = max(PFM[j], key = lambda x: PFM[j][x] )
# if max_v_j == v_j:
# pos_freqs_for_s_mod.append( (j, PFM[j][v_j] / float(1)) )
# elif max_v_j == "-":
# pos_freqs_for_s_mod.append( (j, PFM[j][v_j] / float(max(c_ins, 1) ) ))
# elif v_j == "-":
# pos_freqs_for_s_mod.append( (j, PFM[j][v_j] / float(max(c_del, 1) ) ))
# else:
# pos_freqs_for_s_mod.append( (j, PFM[j][v_j] / float(max(c_subs, 1) ) ))
if len(pos_freqs_for_s_mod) == 0:
continue
pos_freqs_for_s_mod.sort(key=lambda x: x[1]) # sort with respect to smallest frequencies
if len(pos_freqs_for_s_mod) < nr_pos_to_correct:
end_position_in_list = len(pos_freqs_for_s_mod)
else:
pos, highest_freq_of_error_to_correct = pos_freqs_for_s_mod[ nr_pos_to_correct - 1 ]
end_position_in_list = nr_pos_to_correct
for pp in range(nr_pos_to_correct, len(pos_freqs_for_s_mod)):
# print(pos_freqs_for_s_mod[pp][1], highest_freq_of_error_to_correct)
if pos_freqs_for_s_mod[pp][1] > highest_freq_of_error_to_correct:
break
else:
end_position_in_list += 1
# J = [j for j, freq in random.sample(pos_freqs_for_s_mod[:end_position_in_list], nr_pos_to_correct)]
J_temp = [j for j, freq in pos_freqs_for_s_mod[:end_position_in_list]]
# print("end pos:", end_position_in_list)
#############################################
s_new = alignment_matrix[s]
for j in J_temp:
# if j in frozen_positions:
# # print("tried to correct in frozen positions")
# continue
old_nucl = s_new[j]
highest_prob_character_at_j = majority_vector[j]
assert len(majority_vector[j]) == 1
# highest_prob_character_at_j = max(PFM[j], key=lambda k: PFM[j][k])
if highest_prob_character_at_j == old_nucl: # choose the other highest one if tied (should happen only when the partition consists of two sequences)
print("Highest count nucl was about to be corrected.")
# pmf_j_minus_variant = copy.deepcopy(PFM[j])
# del pmf_j_minus_variant[old_nucl]
# highest_prob_character_at_j = max(pmf_j_minus_variant, key=lambda k: pmf_j_minus_variant[k])
# print("correcting", s_new[j], "to", highest_prob_character_at_j )
s_new[j] = highest_prob_character_at_j
s_modified = "".join([nucl for nucl in s_new if nucl != "-" ])
# only unique strings can change in this step
accessions_of_s = seq_to_acc[s]
for acc in accessions_of_s:
S_prime_partition[acc] = s_modified
elif len(partition) > 1:
ed = 0
for s in partition:
if partition[s][0] > ed:
ed = partition[s][0]
if verbose:
print("Partition could not be corrected: Partition size(unique strings):{0}, partition support: {1}, edit distance:{2}.".format(len(partition), N_t, ed))
else:
if verbose:
print("Partition converged: Partition size(unique strings):{0}, partition support: {1}.".format(len(partition), N_t))
return S_prime_partition
# def get_frozen_positions(m_in_alignment_matrix):
# """
# positions in the multialignment matrix where there are indels longer than 2 bp w.r.t. the nearest_neighbor; these regions are prone to alignment errors and
# we wait to correct these in another partition where they are hopefully split into several true partitions.
# """
# frozen_pos = set()
# ins_count = 0
# ins_pos = set()
# for j in range(len(m_in_alignment_matrix)):
# if m_in_alignment_matrix[j] == "-":
# ins_count += 1
# ins_pos.add(j)
# else:
# if ins_count > 4: # always padded, so an indel of length 2 will be of length 4 in the alignment matrix
# for k in ins_pos:
# frozen_pos.add(k)
# ins_count = 0
# ins_pos = set()
# print("frozen:", len(frozen_pos), "tot:", len(m_in_alignment_matrix) )
# return frozen_pos | PypiClean |
/MeleeUploader-1.22.2.tar.gz/MeleeUploader-1.22.2/README.md | # Melee-YouTube-Uploader
A YouTube Uploader for my Melee recordings
A modified version of FRC-YouTube-Uploader for Video Game Tournaments.
**IMPORTANT NOTE**
This application **DOES NOT/CANNOT** support enabling monetization due to YouTube's API restrictions. Thus, by default, videos are uploaded as unlisted so you can set monetization settings before making them public. Optionally, you can update monetization settings as videos are uploaded without breaking anything.
## To Do
- Automate creation of thumbnails
- Slippi Integration (the plan is to allow pulling characters from the Slippi stream; additional ideas are appreciated).
- Maybe other stuff
## Contributing
PRs are appreciated and will be reviewed quickly; the only code quality standard I have is PEP8, except for line length. If you have trouble understanding my code, just ask me.
## Questions and Support
I am always open to helping set up the program or fix any technical issues you may have. Please send me a DM on Twitter with your issue and I'll get you sorted out.
## Current Feature Set:
- Upload videos
- Manual or automatic file selection
- Queue and dequeue Videos to upload (Queue items can be modified)
- Add relevant YouTube tags
- Make and add to playlists
- Load old uploads from history
- Save a queue to be uploaded later
- Melee, Ultimate, Smash 64, Rivals of Aether, Splatoon<sup>\*</sup> and Custom Character Lists
- Hook into Scoreboard Assistant, Stream Control, Streameta, and OBS
- Automatic Updates (sometimes)
<sup>\*</sup>There are no characters in the list, but this does set Splatoon-specific tags which are useful
## How to Setup - Video Version: https://youtu.be/zrcf4t_qk5A
1. Install [Python 3.7+](https://www.python.org/downloads/) for your OS with the PATH added and make sure there are no other versions of Python 3.
2. Install the program by typing `pip3 install -I meleeuploader` into Command Prompt/Terminal. If you want untested features you can download the repo and install with `pip3 install -I /path/to/repo`
3. Start the program by running `meleeuploader` for the Melee character list or `smashuploader` for the Ultimate character list in Command Prompt/Terminal.
4. Select the YouTube profile you would like to use for the first authentication request and then any Google account for the second authentication request. If the second one fails, close your web browser and then run the command to open the program again.
5. Add in the necessary info in the Event Values and Match Values tabs
6. Hit submit when the match finishes.
7. Update forms with the next match's info.
8. Repeat steps 5-7 and enjoy not having to deal with YouTube's front end 🎉.
### Create Your Own Credentials
In the future I will not be including YouTube API credentials with this project. So here is a guide to create your own credentials.
1. Open the [Google Developer Console](https://console.developers.google.com/)
2. Hit the `Select Project` button near the top and create a new project.
3. Once the project is created, select the project.
4. Hit the `Enable APIs and Services` button and enable the YouTube Data API V3 and the Google Sheets API.
5. Once the APIs are enabled it will tell you to create credentials and there will be a button to press.
6. Google will ask you to setup an Oauth consent screen. Set the consent screen to internal and then only add an application name. Hit save to exit that page and then click `Credentials` on the left tab.
7. Hit `Create Credentials` -> `OAuth client ID`, select other from the options given, and then type any name you want. Hit save.
8. Select the name of your new credentials in the `OAuth 2.0 Client IDs` section. Then select `Download JSON` at the top.
9. Once you have downloaded your credentials, rename the file to `client_secrets.json` (if you don't see the `.json` extension when renaming, just use `client_secrets`) and put it in `C:\Users\[Your Username]\` or, if you are on macOS or Unix, whatever `echo ~` returns in terminal. macOS users can also just do `open ~` to open a Finder window at that directory.
10. If you already created YouTube Account credentials for the program, open the program and select `Settings -> Remove YouTube Credentials`
### Additional Setup Options
#### Windows
If you want to launch the application easily, you can find the exe by hitting the Windows key and typing `meleeuploader`; if that doesn't show the option to run the command, you can probably find the exe at `C:\Users\[Your Username]\AppData\Local\Programs\Python\Python37\Scripts\`. Pinning the exe to the taskbar allows quick access to the program.
If you would like to have no console window on your screen, you will need to find out where your pythonw.exe file is (it should be in the same place your python.exe) and create a shortcut to it. Then open the shortcut properties window and edit the target to include `-m meleeuploader` for Melee, `-m meleeuploader ult` for Ultimate, `-m meleeuploader 64` for 64, `-m meleeuploader rivals` for Rivals, or `-m meleeuploader splatoon` for Splatoon at the end. This can then be pinned to your taskbar for easy access. This method does not allow authenticating yourself so you will have to fall back to CMD/Terminal for that part.
#### Mac and Unix
`meleeuploader &` if you want to hide the terminal window. There are probably ways to launch the program quicker, but I don't use macOS/Linux for uploading.
## How to use each field
### Required
`Event Name`, `Title Format`, `Video Privacy`, `File`, `Match Type`, and `Player Tags` are the only required fields for uploading any file.
#### File
File is able to be used as either a file or directory input. Because of how the input selector is set up, you will need to either type out the directory address or select a file within the directory you wish to use and then delete the filename from the field. If you select a directory, the field will not be cleared after submission.
When using the directory feature, it will find the newest file in the directory you give it, so make sure no other files are written to this folder other than the recordings. This is best used for uploading or queueing videos during an event.
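Conceptually, the directory mode reduces to "pick the most recently modified file in the folder"; a minimal sketch of that idea (illustrative only, not the uploader's actual code — `newest_file` is a hypothetical name):

```python
import os

def newest_file(directory):
    """Return the path of the most recently modified file in a directory."""
    paths = (os.path.join(directory, name) for name in os.listdir(directory))
    files = [p for p in paths if os.path.isfile(p)]
    return max(files, key=os.path.getmtime)
```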
#### Title Format
All title format options work even when no characters are entered, and the available options can be expanded on request.
### Optional
#### Match Type Prefix and Suffix
These are fairly self explanatory, you can add a bit of text before and after the `Match Type`. When submitting the video the `Prefix` is kept while the `Suffix` is cleared.
#### Sponsor Tag
This field will be added to the player tag like so `{sponsor} | {player}` resulting in names like `TSM | Leffen`.
#### Characters
Melee characters are currently ordered by tier list placing, according to the 2021 PGStats tier list.
Ultimate characters are currently ordered by the default character select screen without echo stacking.
If you don't add any characters for either player, both players will not have characters in the title.
Characters that are selected will be in the order they are shown in the list, not the order selected (an unfortunate limitation of the GUI framework).
You can swap the character list using the menu bar or load the preferred character list by using `meleeuploader` for Melee, `smashuploader` for Ultimate, `s64uploader` for Smash 64, `rivalsuploader` for Rivals, and `splatoonuploader` for Splatoon.
There is also the option to load your own character list, instructions can be found at the bottom.
#### YouTube PlaylistID
The URL of the playlist after creation can be put here; the program will trim it to just the part it needs. The URL should look like `https://www.youtube.com/playlist?list=PLSCJwgNAP2cXdlHlwbZr38JDHuuc8vx_s`; if the address contains a string starting with `PL`, it should work.
If you want to generate a playlist, you can also do that by typing in a playlist name. Make sure it doesn't contain "PL" anywhere in the name, otherwise it will fail.
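For intuition, the trimming described above amounts to pulling the `list=` value out of whatever you paste; a rough sketch under that assumption (`extract_playlist_id` is a hypothetical name, not the uploader's actual function):

```python
def extract_playlist_id(text):
    """Return the 'PL...' playlist ID from a pasted URL, or the text as-is."""
    if "list=" in text:
        text = text.split("list=", 1)[1]
    return text.split("&", 1)[0]

# extract_playlist_id("https://www.youtube.com/playlist?list=PLSCJwgNAP2cXdlHlwbZr38JDHuuc8vx_s")
# -> "PLSCJwgNAP2cXdlHlwbZr38JDHuuc8vx_s"
```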
#### Bracket Link
Adds a direct link to the bracket at the top of the description. Any URL will work here, just make sure to include `https://` so YouTube users can click on the link in the description.
#### Tags
If you want to add additional tags, for a specific event or your channel, add them here. Separate the tags with commas and don't worry about leading or trailing spaces.
The program also adds multiple tags for the selected game and the players, so don't add any tags related to those in this field.
#### Description
Additional text can be added to the description here, it will go between the bracket link and the credit text. If you would like to put the bracket link in a different spot, don't input anything in `Bracket Link` and instead insert your format in `Description`.
#### Submit
The submit button does a lot: it adds the submission to the queue, clears fields in Match Values that aren't generally needed for consecutive matches, and prevents you from adding submissions that don't meet the minimum criteria.
## How to use advanced features - Video Version: https://youtu.be/mw0sP7YJVfE
### History - Fixing an upload that didn't work
History was built so I could fix uploads that exceeded the title limit on YouTube (100 characters). This actually happens very rarely now because I've employed a number of tricks to minify the title as much as possible.
By loading the history window from the menubar, you can double-click any row in the list to reload the form with the values you inputted for that submission. Every submission creates a new entry, but the history window is only updated on load; you will need to close and reopen it to see new entries.
### Queue - Saving uploads for later
Queue was built so I could upload VODs after an event because the venue didn't have the bandwidth to support streaming and uploading simultaneously.
Queue refers to the list of upcoming uploads in the status tab. By selecting `Toggle Uploads` you can toggle the uploading function, but continue to add entries to the queue. Once you have finished adding all the VODs you want to upload, selecting `Save Queue` will write the entire queue to your disk to be loaded later on. Finally, using `Load Queue` will load the entire queue file and start uploading immediately.
You can also load the queue on startup by running `<uploader command> -q`. This will instantly load whatever is in the queue and start uploading. If you don't want to start uploading then you should use the usual method.
Items in the queue can also be modified by double clicking the queue item in the queue list and then changing one of the cells in the right column of the window that appears.
### Scoreboard Assistant Websocket - Never retype anything
SA Websocket was built so I could avoid retyping information that I put into Scoreboard Assistant.
To enable the websocket open the `Settings` menu tab and select the `Enable Websocket` option. Just make sure that SA is open before you start the websocket.
The program will pull from the `Player 1`, `Player 2`, and `Match` fields. The `Match` field will be parsed to find which of the match types defined in the program are a substring, then it will split the input at the substring and update `Match Prefix` and `Match Suffix` with whatever is left over. For example, `Doubles - Winners R1` as the input would result in `Doubles -` and `R1` being the prefix and suffix respectively.
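The parsing described above boils down to a substring split; a minimal sketch, assuming a hypothetical `MATCH_TYPES` list (the program's real list may differ):

```python
MATCH_TYPES = ["Winners", "Losers", "Grand Finals"]  # illustrative subset

def split_match(text):
    """Split a scoreboard 'Match' field into (prefix, match type, suffix)."""
    for mtype in MATCH_TYPES:
        if mtype in text:
            prefix, suffix = text.split(mtype, 1)
            return prefix.strip(), mtype, suffix.strip()
    return "", None, text

# split_match("Doubles - Winners R1") -> ("Doubles -", "Winners", "R1")
```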
There is also support for character selection if you use stock icons from this [link](https://drive.google.com/file/d/1L8M-4FUDcQo-2cuh1Ak_VabJSQlWz8B_/view?usp=sharing).
### StreamControl and Streameta Integration
This integration is similar to the SA websocket but is done by polling a file or an HTTP endpoint respectively. The SC integration is flexible; just map your `streamcontrol.json` to the fields in the uploader.
### OBS Websocket - Never submit manually
I've set it up so that once recording has been stopped, you can have the application either submit the information that is currently inputted or stop updating the form if you are using an SA/SC/Streameta hook. This, combined with `SA Websocket` or `SC/Streameta Integration`, makes it effortless to quickly queue sets without touching the program.
In addition to enabling the settings you will need to update OBS with the websocket plugin found here: https://github.com/obsproject/obs-websocket
### Custom Character List
You can add your own custom character lists by putting comma-separated names in a file called `.smash_custom_list.txt` in the smashuploader config folder, which is in your root directory.
If you don't know where your root directory is, open the program and select `Characters -> Custom` and the program will make a blank text file where it is supposed to be.
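For example, a hypothetical `.smash_custom_list.txt` for a fighting game not listed above might contain:

```
Ryu, Ken, Chun-Li, Cammy, Guile
```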
| PypiClean |
/Kiwiii-server-0.7.0.tar.gz/Kiwiii-server-0.7.0/kiwiii/worker.py |
from concurrent import futures as cf
import threading
from tornado import gen, process
from tornado.queues import Queue
PROCESSES = process.cpu_count()
class Worker(object):
"""General-purpose multiprocess worker
Args:
args: iterable task array
func: task processor
"""
def __init__(self, args, func):
self.args = args
self.func = func
self.interruption_requested = False
self.status = 0 # initialized here so WorkerQueue.status() can read it before completion; run() sets -1 on finish
self._queue = Queue(PROCESSES * 2)
@gen.coroutine
def run(self):
self.on_start()
with cf.ThreadPoolExecutor(PROCESSES) as tp:
for p in range(PROCESSES):
tp.submit(self._consumer) # pass the callable itself; the original submitted self._consumer(), i.e. an already-started coroutine Future, which a worker thread cannot call
with cf.ProcessPoolExecutor(PROCESSES) as pp:
for i, a in enumerate(self.args):
yield self._queue.put(pp.submit(self.func, a))
if self.interruption_requested:
yield self._queue.join()
self.on_interrupted()
return
yield self._queue.join()
self.status = -1 # mark the run as finished
self.on_finish()
@gen.coroutine
def _consumer(self):
while True:
f = yield self._queue.get()
res = yield f
with threading.Lock():
self.on_task_done(res)
self._queue.task_done()
def on_task_done(self, res):
pass
def on_interrupted(self):
pass
def on_finish(self):
pass
def interrupt(self):
print("Interruption requested...")
self.interruption_requested = True
class WorkerQueue(object):
def __init__(self):
self.queue = Queue()
self.current_worker_id = None
self.current_worker = None
self.queued_ids = []
self.aborted_ids = []
self._dispatcher()
def put(self, id_, worker):
""" Put a job to the queue """
self.queued_ids.append(id_)
self.queue.put_nowait((id_, worker))
def abort(self, id_):
if id_ in self.queued_ids:
self.aborted_ids.append(id_)
elif id_ == self.current_worker_id:
self.current_worker.interrupt()
def status(self, id_):
if id_ in self.queued_ids:
return "Queued"
elif id_ == self.current_worker_id:
return self.current_worker.status
else:
return "Completed"
@gen.coroutine
def _dispatcher(self):
while True:
id_, worker = yield self.queue.get()
self.queued_ids.remove(id_)
if id_ in self.aborted_ids:
self.aborted_ids.remove(id_)
continue
self.current_worker_id = id_
self.current_worker = worker
yield self.current_worker.run()
self.current_worker_id = None
self.current_worker = None | PypiClean |
/125softNLP-0.0.1-py3-none-any.whl/pysoftNLP/kashgari/processors/base_processor.py |
# author: BrikerMan
# contact: [email protected]
# blog: https://eliyar.biz
# file: base_processor.py
# time: 2019-05-21 11:27
import collections
import logging
import operator
from typing import List, Optional, Union, Dict, Any
import numpy as np
from tensorflow.python.keras.preprocessing.sequence import pad_sequences
from pysoftNLP.kashgari import utils
class BaseProcessor(object):
"""
Corpus Pre Processor class
"""
def __init__(self, **kwargs):
self.token2idx: Dict[str, int] = kwargs.get('token2idx', {})
self.idx2token: Dict[int, str] = dict([(v, k) for (k, v) in self.token2idx.items()])
self.token2count: Dict = {}
self.label2idx: Dict[str, int] = kwargs.get('label2idx', {})
self.idx2label: Dict[int, str] = dict([(v, k) for (k, v) in self.label2idx.items()])
self.token_pad: str = kwargs.get('token_pad', '<PAD>')
self.token_unk: str = kwargs.get('token_unk', '<UNK>')
self.token_bos: str = kwargs.get('token_bos', '<BOS>')
self.token_eos: str = kwargs.get('token_eos', '<EOS>')
self.dataset_info: Dict[str, Any] = kwargs.get('dataset_info', {})
self.add_bos_eos: bool = kwargs.get('add_bos_eos', False)
self.sequence_length = kwargs.get('sequence_length', None)
self.min_count = kwargs.get('min_count', 3)
def info(self):
return {
'class_name': self.__class__.__name__,
'config': {
'label2idx': self.label2idx,
'token2idx': self.token2idx,
'token_pad': self.token_pad,
'token_unk': self.token_unk,
'token_bos': self.token_bos,
'token_eos': self.token_eos,
'dataset_info': self.dataset_info,
'add_bos_eos': self.add_bos_eos,
'sequence_length': self.sequence_length
},
'module': self.__class__.__module__,
}
def analyze_corpus(self,
corpus: Union[List[List[str]]],
labels: Union[List[List[str]], List[str]],
force: bool = False):
rec_len = sorted([len(seq) for seq in corpus])[int(0.95 * len(corpus))]
self.dataset_info['RECOMMEND_LEN'] = rec_len
if len(self.token2idx) == 0 or force:
self._build_token_dict(corpus, self.min_count)
if len(self.label2idx) == 0 or force:
self._build_label_dict(labels)
def _build_token_dict(self, corpus: List[List[str]], min_count: int = 3):
"""
Build token index dictionary using corpus
Args:
corpus: List of tokenized sentences, like ``[['I', 'love', 'tf'], ...]``
min_count:
"""
token2idx = {
self.token_pad: 0,
self.token_unk: 1,
self.token_bos: 2,
self.token_eos: 3
}
token2count = {}
for sentence in corpus:
for token in sentence:
count = token2count.get(token, 0)
token2count[token] = count + 1
self.token2count = token2count
# sort tokens by frequency in descending order
sorted_token2count = sorted(token2count.items(),
key=operator.itemgetter(1),
reverse=True)
token2count = collections.OrderedDict(sorted_token2count)
for token, token_count in token2count.items():
if token not in token2idx and token_count >= min_count:
token2idx[token] = len(token2idx)
self.token2idx = token2idx
self.idx2token = dict([(value, key)
for key, value in self.token2idx.items()])
logging.debug(f"build token2idx dict finished, contains {len(self.token2idx)} tokens.")
self.dataset_info['token_count'] = len(self.token2idx)
def _build_label_dict(self, corpus: Union[List[List[str]], List[str]]):
raise NotImplementedError
def process_x_dataset(self,
data: List[List[str]],
max_len: Optional[int] = None,
subset: Optional[List[int]] = None) -> np.ndarray:
if max_len is None:
max_len = self.sequence_length
if subset is not None:
target = utils.get_list_subset(data, subset)
else:
target = data
numerized_samples = self.numerize_token_sequences(target)
return pad_sequences(numerized_samples, max_len, padding='post', truncating='post')
def process_y_dataset(self,
data: Union[List[List[str]], List[str]],
max_len: Optional[int],
subset: Optional[List[int]] = None) -> np.ndarray:
raise NotImplementedError
def numerize_token_sequences(self,
sequences: List[List[str]]):
raise NotImplementedError
def numerize_label_sequences(self,
sequences: List[List[str]]) -> List[List[int]]:
raise NotImplementedError
def reverse_numerize_label_sequences(self, sequence, **kwargs):
raise NotImplementedError
def __repr__(self):
return f"<{self.__class__}>"
def __str__(self):
return self.__repr__()
if __name__ == "__main__":
print("Hello world") | PypiClean |
/DjangoDjangoAppCenter-0.0.11-py3-none-any.whl/AppCenter/simpleui/static/admin/simpleui-x/elementui/umd/locale/sk.js | (function (global, factory) {
if (typeof define === "function" && define.amd) {
define('element/locale/sk', ['module', 'exports'], factory);
} else if (typeof exports !== "undefined") {
factory(module, exports);
} else {
var mod = {
exports: {}
};
factory(mod, mod.exports);
global.ELEMENT.lang = global.ELEMENT.lang || {};
global.ELEMENT.lang.sk = mod.exports;
}
})(this, function (module, exports) {
'use strict';
exports.__esModule = true;
exports.default = {
el: {
colorpicker: {
confirm: 'OK',
clear: 'Zmazať'
},
datepicker: {
now: 'Teraz',
today: 'Dnes',
cancel: 'Zrušiť',
clear: 'Zmazať',
confirm: 'OK',
selectDate: 'Vybrať dátum',
selectTime: 'Vybrať čas',
startDate: 'Dátum začiatku',
startTime: 'Čas začiatku',
endDate: 'Dátum konca',
endTime: 'Čas konca',
prevYear: 'Predošlý rok',
nextYear: 'Ďalší rok',
prevMonth: 'Predošlý mesiac',
nextMonth: 'Ďalší mesiac',
day: 'Deň',
week: 'Týždeň',
month: 'Mesiac',
year: 'Rok',
month1: 'Január',
month2: 'Február',
month3: 'Marec',
month4: 'Apríl',
month5: 'Máj',
month6: 'Jún',
month7: 'Júl',
month8: 'August',
month9: 'September',
month10: 'Október',
month11: 'November',
month12: 'December',
weeks: {
sun: 'Ne',
mon: 'Po',
tue: 'Ut',
wed: 'St',
thu: 'Št',
fri: 'Pi',
sat: 'So'
},
months: {
jan: 'Jan',
feb: 'Feb',
mar: 'Mar',
apr: 'Apr',
may: 'Máj',
jun: 'Jún',
jul: 'Júl',
aug: 'Aug',
sep: 'Sep',
oct: 'Okt',
nov: 'Nov',
dec: 'Dec'
}
},
select: {
loading: 'Načítavanie',
noMatch: 'Žiadna zhoda',
noData: 'Žiadne dáta',
placeholder: 'Vybrať'
},
cascader: {
noMatch: 'Žiadna zhoda',
loading: 'Načítavanie',
placeholder: 'Vybrať',
noData: 'Žiadne dáta'
},
pagination: {
goto: 'Choď na',
pagesize: 'na stranu',
total: 'Všetko {total}',
pageClassifier: ''
},
messagebox: {
title: 'Správa',
confirm: 'OK',
cancel: 'Zrušiť',
error: 'Neplatný vstup'
},
upload: {
deleteTip: 'pre odstránenie stisni klávesu Delete',
delete: 'Vymazať',
preview: 'Prehliadať',
continue: 'Pokračovať'
},
table: {
emptyText: 'Žiadne dáta',
confirmFilter: 'Potvrdiť',
resetFilter: 'Zresetovať',
clearFilter: 'Všetko',
sumText: 'Spolu'
},
tree: {
emptyText: 'Žiadne dáta'
},
transfer: {
noMatch: 'Žiadna zhoda',
noData: 'Žiadne dáta',
titles: ['Zoznam 1', 'Zoznam 2'],
filterPlaceholder: 'Filtrovať podľa',
noCheckedFormat: '{total} položiek',
hasCheckedFormat: '{checked}/{total} označených'
},
image: {
error: 'FAILED' // to be translated
},
pageHeader: {
title: 'Back' // to be translated
}
}
};
module.exports = exports['default'];
}); | PypiClean |
/Lantz-0.3.zip/Lantz-0.3/lantz/drivers/andor/neo.py | import ctypes as ct
import numpy as np
from lantz import Feat, Action, Q_
from lantz.foreign import RetStr, RetTuple
from .andor import Andor
class Neo(Andor):
"""Neo Andor CMOS Camera
"""
def initialize(self):
super().initialize()
self.flush()
self.fan_speed = 1
self.width, self.height = self.sensor_size
self.length = self.width * self.height
self.clock_rate = 100
self.pixel_encoding = 32
self.imagesizebytes = self.getint("ImageSizeBytes")
self.userbuffer = ct.create_string_buffer(b' ' * self.imagesizebytes) # bytes initializer required on Python 3
@Feat(None, values={32: 'Mono32', 64: 'Mono64'})
def pixel_encoding(self, value):
"""Pixel encoding.
"""
self.setenumstring("PixelEncoding", value)
@Feat()
def sensor_size(self):
width = self.getint("SensorWidth")
height = self.getint("SensorHeight")
return width, height
@Feat(None, values={100: '100 MHz', 200: '200 MHz', 280: '280 MHz'})
def clock_rate(self, value):
"""Pixel clock rate
"""
self.setenumstring("PixelReadoutRate", value)
@Feat(None)
def fan_speed(self, value=1):
"""Fan speed.
Renamed from the original fan_peed typo so that self.fan_speed = 1 in initialize() resolves to this Feat.
"""
self.setenumerated("FanSpeed", value)
@Feat()
def sensor_temp(self):
"""Sensor temperature.
"""
return self.getfloat("SensorTemperature")
@Feat()
def exposure_time(self):
"""Get exposure time.
"""
return self.getfloat("ExposureTime")
@exposure_time.setter
def exposure_time(self, exposure):
self.setfloat("ExposureTime", exposure)
@Feat(None)
def roi(self, width_height_top_left):
"""Set region of interest
"""
width, height, top, left = width_height_top_left
self.setint("AOIWidth", width)
self.setint("AOILeft", left)
self.setint("AOIHeight", height)
self.setint("AOITop", top)
@Action()
def take_image(self):
"""Image acquisition.
"""
self.queuebuffer(self.userbuffer, self.imagesizebytes)
self.command("AcquisitionStart")
self.waitbuffer(*RetStr(1))
self.command("AcquisitionStop")
self.flush()
image = np.frombuffer(self.userbuffer, dtype=np.uint32, count=self.length) # np.fromstring is deprecated for binary data
image.shape = (self.height, self.width)
return image
@Action()
def take_image_buffered(self, numbuff, numframes):
"""Image acquisition with a circular buffer.
Renamed from take_image, which shadowed the single-shot method above.
"""
imagesizebytes = self.getint("ImageSizeBytes")
userbuffer = []
for i in range(numbuff):
userbuffer.append(ct.create_string_buffer(b' ' * imagesizebytes))
self.queuebuffer(userbuffer[i], imagesizebytes) # queue each buffer individually; the original passed the whole list
self.command("AcquisitionStart")
for i in range(numbuff):
self.waitbuffer(*RetStr(1))
self.queuebuffer(userbuffer[i], imagesizebytes)
self.command("AcquisitionStop")
self.flush()
image = np.frombuffer(userbuffer[0], dtype=np.uint32, count=self.length) # np.fromstring is deprecated for binary data
image.shape = (self.height, self.width)
return image | PypiClean |
/Diofant-0.14.0a2.tar.gz/Diofant-0.14.0a2/diofant/printing/str.py | from __future__ import annotations
import typing
import mpmath.libmp as mlib
from mpmath.libmp import prec_to_dps
from ..core import Integer, Mul, Pow, Rational, S, oo
from ..core.mul import _keep_coeff
from ..sets import Reals
from ..utilities import default_sort_key
from .defaults import DefaultPrinting
from .precedence import PRECEDENCE, precedence
from .printer import Printer
class StrPrinter(Printer):
"""Str printer."""
printmethod = '_diofantstr'
_default_settings: dict[str, typing.Any] = {
'order': None,
'full_prec': 'auto',
}
_relationals: dict[str, str] = {}
def parenthesize(self, item, level):
if precedence(item) <= level:
return '(%s)' % self._print(item)
else:
return self._print(item)
def stringify(self, args, sep, level=0):
return sep.join([self.parenthesize(item, level) for item in args])
def emptyPrinter(self, expr):
if isinstance(expr, str):
return expr
elif hasattr(expr, '__str__') and not issubclass(expr.__class__,
DefaultPrinting):
return str(expr)
else:
return repr(expr)
def _print_Add(self, expr, order=None):
if self.order == 'none':
terms = list(expr.args)
else:
terms = expr.as_ordered_terms(order=order or self.order)
PREC = precedence(expr)
l = []
for term in terms:
t = self._print(term)
if t.startswith('-'):
sign = '-'
t = t[1:]
else:
sign = '+'
if precedence(term) < PREC:
l.extend([sign, f'({t})'])
else:
l.extend([sign, t])
sign = l.pop(0)
if sign == '+':
sign = ''
return sign + ' '.join(l)
def _print_BooleanTrue(self, expr):
return 'true'
def _print_BooleanFalse(self, expr):
return 'false'
def _print_Not(self, expr):
return '~%s' % self.parenthesize(expr.args[0], PRECEDENCE['Not'])
def _print_And(self, expr):
return self.stringify(expr.args, ' & ', PRECEDENCE['BitwiseAnd'])
def _print_Or(self, expr):
return self.stringify(expr.args, ' | ', PRECEDENCE['BitwiseOr'])
def _print_Basic(self, expr):
l = [self._print(o) for o in expr.args]
return expr.__class__.__name__ + '(%s)' % ', '.join(l)
def _print_BlockMatrix(self, B):
return self._print(B.blocks)
def _print_Catalan(self, expr):
return 'Catalan'
def _print_ComplexInfinity(self, expr):
return 'zoo'
def _print_Derivative(self, expr):
return 'Derivative(%s)' % ', '.join(map(self._print, expr.args))
def _print_dict(self, d):
keys = sorted(d, key=default_sort_key)
items = []
for key in keys:
item = f'{self._print(key)}: {self._print(d[key])}'
items.append(item)
return '{%s}' % ', '.join(items)
def _print_Dict(self, expr):
return self._print_dict(expr)
def _print_RandomDomain(self, d):
return 'Domain: ' + self._print(d.as_boolean())
def _print_Dummy(self, expr):
return '_' + expr.name
def _print_EulerGamma(self, expr):
return 'EulerGamma'
def _print_Exp1(self, expr):
return 'E'
def _print_ExprCondPair(self, expr):
return f'({expr.expr}, {expr.cond})'
def _print_FiniteSet(self, s):
s = sorted(s, key=default_sort_key)
if len(s) > 10:
printset = s[:3] + ['...'] + s[-3:]
else:
printset = s
return '{' + ', '.join(self._print(el) for el in printset) + '}'
def _print_Function(self, expr):
return expr.func.__name__ + '(%s)' % self.stringify(expr.args, ', ')
def _print_GeometryEntity(self, expr):
# GeometryEntity is special -- it's base is tuple
return str(expr)
def _print_GoldenRatio(self, expr):
return 'GoldenRatio'
def _print_ImaginaryUnit(self, expr):
return 'I'
def _print_Infinity(self, expr):
return 'oo'
def _print_Integral(self, expr):
def _xab_tostr(xab):
if len(xab) == 1:
return self._print(xab[0])
else:
return self._print((xab[0],) + tuple(xab[1:]))
L = ', '.join([_xab_tostr(l) for l in expr.limits])
return f'Integral({self._print(expr.function)}, {L})'
def _print_Interval(self, i):
if i.left_open:
left = '('
else:
left = '['
if i.right_open:
right = ')'
else:
right = ']'
return f'{left}{self._print(i.start)}, {self._print(i.end)}{right}'
def _print_Inverse(self, I):
return '%s^-1' % self.parenthesize(I.arg, PRECEDENCE['Pow'])
def _print_Lambda(self, obj):
args, expr = obj.args
if len(args) == 1:
return f'Lambda({args.args[0]}, {expr})'
else:
arg_string = ', '.join(self._print(arg) for arg in args)
return f'Lambda(({arg_string}), {expr})'
def _print_LatticeOp(self, expr):
args = sorted(expr.args, key=default_sort_key)
return expr.func.__name__ + '(%s)' % ', '.join(self._print(arg) for arg in args)
def _print_Limit(self, expr):
e, z, z0, dir = expr.args
if dir == -1:
return f'Limit({e}, {z}, {z0})'
elif dir == Reals:
return f'Limit({e}, {z}, {z0}, dir=Reals)'
else:
return f'Limit({e}, {z}, {z0}, dir={dir})'
def _print_list(self, expr):
return '[%s]' % self.stringify(expr, ', ')
def _print_MatrixBase(self, expr):
return expr._format_str(self)
def _print_MatrixElement(self, expr):
return self._print(expr.parent) + f'[{expr.i}, {expr.j}]'
def _print_MatrixSlice(self, expr):
def strslice(x):
x = list(x)
if x[2] == 1:
del x[2]
if x[1] == x[0] + 1:
del x[1]
if x[0] == 0:
x[0] = ''
return ':'.join(map(self._print, x))
return (self._print(expr.parent) + '[' +
strslice(expr.rowslice) + ', ' +
strslice(expr.colslice) + ']')
def _print_Mul(self, expr):
prec = precedence(expr)
c, e = expr.as_coeff_Mul()
if c < 0:
expr = _keep_coeff(-c, e)
sign = '-'
else:
sign = ''
a = [] # items in the numerator
b = [] # items that are in the denominator (if any)
if self.order != 'none':
args = expr.as_ordered_factors()
else:
# use make_args in case expr was something like -x -> x
args = Mul.make_args(expr)
multiple_ones = len([x for x in args if x == 1]) > 1
# Gather args for numerator/denominator
for item in args:
if item.is_commutative and item.is_Pow and item.exp.is_Rational and item.exp.is_negative:
if item.exp != -1:
b.append(Pow(item.base, -item.exp, evaluate=False))
else:
b.append(Pow(item.base, -item.exp))
elif item.is_Rational and item is not oo:
if item.numerator != 1 or multiple_ones:
a.append(Rational(item.numerator))
if item.denominator != 1:
b.append(Rational(item.denominator))
else:
a.append(item)
a = a or [Integer(1)]
a_str = [self.parenthesize(x, prec) for x in a]
b_str = [self.parenthesize(x, prec) for x in b]
if len(b) == 0:
return sign + '*'.join(a_str)
elif len(b) == 1:
return sign + '*'.join(a_str) + '/' + b_str[0]
else:
return sign + '*'.join(a_str) + '/(%s)' % '*'.join(b_str)
def _print_MatMul(self, expr):
return '*'.join([self.parenthesize(arg, precedence(expr))
for arg in expr.args])
def _print_HadamardProduct(self, expr):
return '.*'.join([self.parenthesize(arg, precedence(expr))
for arg in expr.args])
def _print_MatAdd(self, expr):
return ' + '.join([self.parenthesize(arg, precedence(expr))
for arg in expr.args])
def _print_NaN(self, expr):
return 'nan'
def _print_NegativeInfinity(self, expr):
return '-oo'
def _print_Order(self, expr):
if all(p == 0 for p in expr.point) or not expr.variables:
if len(expr.variables) <= 1:
return 'O(%s)' % self._print(expr.expr)
else:
return 'O(%s)' % self.stringify((expr.expr,) + expr.variables, ', ', 0)
else:
return 'O(%s)' % self.stringify(expr.args, ', ', 0)
def _print_Cycle(self, expr):
"""We want it to print as Cycle in doctests for which a repr is required.
With __repr__ defined in Cycle, interactive output gives Cycle form but
during doctests, the dict's __repr__ form is used. Defining this _print
function solves that problem.
>>> Cycle(1, 2) # will print as a dict without this method
Cycle(1, 2)
"""
return expr.__repr__()
def _print_Permutation(self, expr):
from ..combinatorics import Cycle, Permutation
if Permutation.print_cyclic:
if not expr.size:
return 'Permutation()'
# before taking Cycle notation, see if the last element is
# a singleton and move it to the head of the string
s = Cycle(expr)(expr.size - 1).__repr__()[len('Cycle'):]
last = s.rfind('(')
if not last == 0 and ',' not in s[last:]:
s = s[last:] + s[:last]
return f'Permutation{s}'
else:
s = expr.support()
if not s:
if expr.size < 5:
return 'Permutation(%s)' % str(expr.array_form)
return f'Permutation([], size={expr.size})'
trim = str(expr.array_form[:s[-1] + 1]) + f', size={expr.size}'
use = full = str(expr.array_form)
if len(trim) < len(full):
use = trim
return f'Permutation({use})'
def _print_TensorIndex(self, expr):
return expr._print()
def _print_TensorHead(self, expr):
return expr._print()
def _print_Tensor(self, expr):
return expr._print()
def _print_TensMul(self, expr):
return expr._print()
def _print_TensAdd(self, expr):
return expr._print()
def _print_PermutationGroup(self, expr):
p = [' %s' % str(a) for a in expr.args]
return 'PermutationGroup([\n%s])' % ',\n'.join(p)
def _print_Pi(self, expr):
return 'pi'
def _print_PolyElement(self, poly):
return poly._str(self, PRECEDENCE, '%s**%d', '*')
def _print_FracElement(self, frac):
if frac.denominator == 1:
return self._print(frac.numerator)
else:
numer = self.parenthesize(frac.numerator, PRECEDENCE['Add'])
denom = self.parenthesize(frac.denominator, PRECEDENCE['Atom']-1)
return numer + '/' + denom
def _print_Poly(self, expr):
terms, gens = [], expr.gens
for monom, coeff in expr.terms():
s_monom = []
for i, exp in enumerate(monom):
if exp > 0:
if exp == 1:
s_monom.append(self._print(gens[i]))
else:
s_monom.append(self.parenthesize(gens[i],
PRECEDENCE['Atom'] - 1) + f'**{exp:d}')
s_monom = '*'.join(s_monom)
if coeff.is_Add:
if s_monom:
s_coeff = '(' + self._print(coeff) + ')'
else:
s_coeff = self._print(coeff)
else:
if s_monom:
if coeff == 1:
terms.extend(['+', s_monom])
continue
if coeff == -1:
terms.extend(['-', s_monom])
continue
s_coeff = self._print(coeff)
if not s_monom:
s_term = s_coeff
else:
s_term = s_coeff + '*' + s_monom
if s_term.startswith('-'):
terms.extend(['-', s_term[1:]])
else:
terms.extend(['+', s_term])
if not terms:
terms.extend(['+', '0'])
modifier = terms.pop(0)
if modifier == '-':
terms[0] = '-' + terms[0]
format = expr.__class__.__name__ + '(%s, %s'
from ..polys.polyerrors import PolynomialError
try:
format += ', modulus=%s' % expr.get_modulus()
except PolynomialError:
format += f", domain='{expr.domain}'"
format += ')'
return format % (' '.join(terms), ', '.join(self._print(s) for s in expr.gens))
def _print_ProductSet(self, p):
return ' x '.join(self._print(set) for set in p.sets)
def _print_AlgebraicElement(self, expr):
return self._print(expr.parent.to_expr(expr))
def _print_ModularInteger(self, expr):
return f'{expr.rep}'
def _print_GaloisFieldElement(self, expr):
from ..domains import ZZ_python
return 'GF(%s, %s)(%s)' % (expr.parent.characteristic,
expr.mod.set_domain(ZZ_python).all_coeffs(),
int(expr))
def _print_Pow(self, expr, rational=False):
PREC = precedence(expr)
if not expr.exp.is_Float and expr.exp == Rational(1, 2) and not rational:
return 'sqrt(%s)' % self._print(expr.base)
if expr.is_commutative and not expr.exp.is_Float:
if -expr.exp == Rational(1, 2) and not rational:
return '1/sqrt(%s)' % self._print(expr.base)
if expr.exp == -1:
return '1/%s' % self.parenthesize(expr.base, PREC)
e = self.parenthesize(expr.exp, PREC)
if (self.printmethod == '_diofantrepr' and
expr.exp.is_Rational and not expr.exp.is_Integer):
# The parenthesized exp should be '(Rational(a, b))' so strip
# parens, but just check to be sure.
return f'{self.parenthesize(expr.base, PREC)}**{e[1:-1]}'
return f'{self.parenthesize(expr.base, PREC)}**{e}'
def _print_Mod(self, expr):
PREC = precedence(expr)
a, b = expr.args
return f'{self.parenthesize(a, PREC)}%{self.parenthesize(b, PREC)}'
def _print_MatPow(self, expr):
PREC = precedence(expr)
return '%s**%s' % (self.parenthesize(expr.base, PREC),
self.parenthesize(expr.exp, PREC))
def _print_ImmutableDenseNDimArray(self, expr):
return str(expr)
_print_ImmutableSparseNDimArray = _print_ImmutableDenseNDimArray
_print_MutableDenseNDimArray = _print_ImmutableDenseNDimArray
_print_MutableSparseNDimArray = _print_ImmutableDenseNDimArray
def _print_Integer(self, expr):
return str(expr.numerator)
def _print_Rational(self, expr):
if expr.denominator == 1:
return str(expr.numerator)
else:
return f'{expr.numerator}/{expr.denominator}'
def _print_Float(self, expr):
prec = expr._prec
if prec < 5:
dps = 0
else:
dps = prec_to_dps(expr._prec)
if self._settings['full_prec'] is True:
strip = False
elif self._settings['full_prec'] is False:
strip = True
elif self._settings['full_prec'] == 'auto':
strip = self._print_level > 1
else:
raise NotImplementedError
rv = mlib.to_str(expr._mpf_, dps, strip_zeros=strip)
if rv.startswith('-.0'):
rv = '-0.' + rv[3:]
elif rv.startswith('.0'):
rv = '0.' + rv[2:]
elif rv.startswith('+'):
rv = rv[1:]
return rv
def _print_Relational(self, expr):
charmap = {
'==': 'Eq',
'!=': 'Ne',
}
if expr.rel_op in charmap:
return f'{charmap[expr.rel_op]}({expr.lhs}, {expr.rhs})'
return '%s %s %s' % (self.parenthesize(expr.lhs, precedence(expr)),
self._relationals.get(expr.rel_op) or expr.rel_op,
self.parenthesize(expr.rhs, precedence(expr)))
def _print_RootOf(self, expr):
p = self._print_Add(expr.expr, order='lex')
if expr.free_symbols:
return f'RootOf({p}, {expr.poly.gen}, {expr.index:d})'
else:
return f'RootOf({p}, {expr.index:d})'
def _print_RootSum(self, expr):
args = [self._print_Add(expr.expr, order='lex')]
if expr.fun is not S.IdentityFunction:
args.append(self._print(expr.fun))
return 'RootSum(%s)' % ', '.join(args)
def _print_GroebnerBasis(self, basis):
cls = basis.__class__.__name__
exprs = [self._print_Add(arg, order=basis.order)
for arg in basis.exprs]
exprs = '[%s]' % ', '.join(exprs)
gens = [self._print(gen) for gen in basis.gens]
domain = "domain='%s'" % self._print(basis.domain)
order = "order='%s'" % self._print(basis.order)
args = [exprs] + gens + [domain, order]
return '%s(%s)' % (cls, ', '.join(args))
def _print_frozenset(self, s):
items = sorted(s, key=default_sort_key)
args = ', '.join(self._print(item) for item in items)
if args:
args = '{%s}' % args
return f'{type(s).__name__}({args})'
def _print_set(self, expr):
if expr == set():
return 'set()'
return '{%s}' % self.stringify(sorted(expr, key=default_sort_key), ', ')
def _print_Sum(self, expr):
def _xab_tostr(xab):
return self._print((xab[0],) + tuple(xab[1:]))
L = ', '.join([_xab_tostr(l) for l in expr.limits])
return f'Sum({self._print(expr.function)}, {L})'
def _print_Symbol(self, expr):
return expr.name
_print_BaseSymbol = _print_Symbol
_print_MatrixSymbol = _print_Symbol
_print_RandomSymbol = _print_Symbol
def _print_Identity(self, expr):
return 'I'
def _print_ZeroMatrix(self, expr):
return '0'
def _print_str(self, expr):
return expr
def _print_tuple(self, expr):
if len(expr) == 1:
return '(%s,)' % self._print(expr[0])
else:
return '(%s)' % self.stringify(expr, ', ')
def _print_Tuple(self, expr):
return self._print_tuple(expr)
def _print_Monomial(self, expr):
if expr.gens:
return '*'.join(['%s**%s' % (gen, exp)
for gen, exp in zip(expr.gens, expr)])
else:
return self._print_tuple(expr)
def _print_Transpose(self, T):
return '%s.T' % self.parenthesize(T.arg, PRECEDENCE['Pow'])
def _print_Union(self, expr):
return ' U '.join(self._print(set) for set in expr.args)
def _print_Complement(self, expr):
return r' \ '.join(self._print(set) for set in expr.args)
def _print_Wild(self, expr):
return expr.name + '_'
def _print_WildFunction(self, expr):
return expr.name + '_'
def _print_Zero(self, expr):
return '0'
def _print_BaseScalarField(self, field):
return field._coord_sys._names[field._index]
def _print_BaseVectorField(self, field):
return f'e_{field._coord_sys._names[field._index]}'
def _print_Differential(self, diff):
field = diff._form_field
if hasattr(field, '_coord_sys'):
return f'd{field._coord_sys._names[field._index]}'
else:
return 'd(%s)' % self._print(field)
def _print_Tr(self, expr):
# TODO : Handle indices
return '%s(%s)' % ('Tr', self._print(expr.args[0]))
def _print_Domain(self, expr):
return expr.rep
def sstr(expr, **settings):
"""Returns the expression as a string.
For large expressions where speed is a concern, use the setting
order='none'.
Examples
========
>>> sstr(Eq(a + b, 0))
'Eq(a + b, 0)'
"""
p = StrPrinter(settings)
s = p.doprint(expr)
return s
class StrReprPrinter(StrPrinter):
"""(internal) -- see sstrrepr"""
def _print_str(self, expr):
return repr(expr)
def sstrrepr(expr, **settings):
"""Return expr in mixed str/repr form.
i.e. strings are returned in repr form with quotes, and everything else
is returned in str form.
This function could be useful for hooking into sys.displayhook
"""
p = StrReprPrinter(settings)
s = p.doprint(expr)
return s | PypiClean |
/DigitalRover-0.0.1.tar.gz/DigitalRover-0.0.1/src/experimental/wayback/wayback.py |
# https://opensourceseo.org/find-list-old-urls-domain-using-wayback-cdx-api/
# https://web.archive.org/web/20090420095939/https://twitter.com/realDonaldTrump
# https://web.archive.org/cdx/search/cdx?url=twitter.com/realDonaldTrump
# https://archive.org/web/researcher/cdx_file_format.php
# https://support.archive-it.org/hc/en-us/articles/115001790023-Access-Archive-It-s-Wayback-index-with-the-CDX-C-API
# Format Of Files
# ---------------
# Canonized URL
# Date
# Original URL
# Mime Type of Original Document
# Response Code
# Old Style Checksum
# Length
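# A returned line looks roughly like this (hypothetical values; one record
# per snapshot, fields in the order listed above):
# com,twitter)/realdonaldtrump 20090420095939 http://twitter.com:80/realDonaldTrump text/html 200 ABCDEFGHIJKLMNOPQRSTUVWXYZ234567 12345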
# Example Command
# curl "https://web.archive.org/cdx/search/cdx?url=twitter.com/realDonaldTrump" > realDonaldTrump.url
# Generate Commands
# dolt sql -q "select concat('curl \"https://web.archive.org/cdx/search/cdx?url=twitter.com/', twitter_handle, '\" > ', twitter_handle, '.url') as commands from total_tweets;" -r csv > urls.csv
# 20090420095939
# For Unmodified Pages - https://web.archive.org/web/20090420095939id_/http://twitter.com:80/realDonaldTrump
# May Want Modified Pages For Media
# For Tweet 1346928882595885058
# https://web.archive.org/web/20210106212653/https://video.twimg.com/ext_tw_video/1346928794456776705/pu/vid/1280x720/xJsJiUTRa-ggqL1D.mp4
import os
import pandas as pd
import requests
list_folder: str = "working/archive-me"
temp_files: list = os.listdir(list_folder)
files: list = []
# Remove Non-URL Files
for file in temp_files:
if ".url" in file:
files.append(file)
if not os.path.exists('working/wayback'):
os.makedirs('working/wayback')
for file in files:
download_folder_stripped: str = str(file).rstrip(".url")
download_folder: str = os.path.join('working/wayback', download_folder_stripped)
if not os.path.exists(download_folder):
print(f"Creating: {download_folder}")
os.makedirs(download_folder)
print(f"Downloading Archives For {download_folder_stripped}")
contents: pd.DataFrame = pd.read_csv(filepath_or_buffer=os.path.join(list_folder, file),
header=None,
delimiter=" ",
usecols=[0, 1, 2, 3, 4, 5, 6],
names=["canonized_url",
"date",
"original_url",
"mime_type",
"response_code",
"old_checksum",
"length"])
# https://web.archive.org/web/20090420095939id_/http://twitter.com:80/realDonaldTrump
contents["raw_url"] = "https://web.archive.org/web/" + contents["date"].astype(str) + "id_/" + contents["original_url"].astype(str)
contents["archive_url"] = "https://web.archive.org/web/" + contents["date"].astype(str) + "/" + contents["original_url"].astype(str)
print(contents)
contents.to_csv(path_or_buf=os.path.join(list_folder, download_folder_stripped + ".csv"))
contents.drop_duplicates(inplace=True, subset=["old_checksum"])
# Headers
headers: dict = {
'User-Agent': 'Chrome/90', # python-requests/2.25.1
'Accept-Encoding': 'gzip, deflate',
'Accept': '*/*',
'Connection': 'keep-alive'
}
total: int = len(contents.index)
count: int = 1
for row in contents.itertuples():
# TODO: Means To Skip Already Downloaded Files (Doesn't Handle Multiple Different Files On Same Date)
# if os.path.exists(os.path.join(download_folder, str(row.date) + "-archive.html")):
# print(f"Page {count}/{total} - Skipping {row.date} From {download_folder_stripped}")
# count += 1
# continue
try:
if not os.path.exists(os.path.join(download_folder, str(row.date) + "-raw.html")):
# https://twitter.com/AlexisEvelyn42/status/1350405163048169475
r = requests.get(row.raw_url, allow_redirects=True, headers=headers)
r_page = open(mode="w", file=os.path.join(download_folder, str(row.date) + "-raw.html"))
r_page.writelines(r.text)
r_page.close()
print(f"Raw Page {count}/{total} - Saved {row.date} From {download_folder_stripped}")
else:
print(f"Raw Page {count}/{total} - Skipping {row.date} From {download_folder_stripped}")
except Exception: # a bare except would also swallow KeyboardInterrupt
print(f"Failed To Download Raw Page!!! Logging!!! Page: {download_folder_stripped}/{row.date}")
failed_log = open(mode="a", file='working/wayback/failed-download.txt')
failed_log.writelines(f"Raw - {download_folder_stripped}/{row.date}\n")
failed_log.close()
# exit(1)
try:
if not os.path.exists(os.path.join(download_folder, str(row.date) + "-archive.html")):
a = requests.get(row.archive_url, allow_redirects=True, headers=headers)
a_page = open(mode="w", file=os.path.join(download_folder, str(row.date) + "-archive.html"))
a_page.writelines(a.text)
a_page.close()
print(f"Archive Page {count}/{total} - Saved {row.date} From {download_folder_stripped}")
else:
print(f"Archive Page {count}/{total} - Skipping {row.date} From {download_folder_stripped}")
except Exception: # a bare except would also swallow KeyboardInterrupt
print(f"Failed Connection!!! Page: {download_folder_stripped}/{row.date}")
failed_log = open(mode="a", file='working/wayback/failed-download.txt')
failed_log.writelines(f"Archive - {download_folder_stripped}/{row.date}\n")
failed_log.close()
# exit(1)
# print(f"Page {count}/{total} - Saved {row.date} From {download_folder_stripped}")
count += 1 | PypiClean |
/observations-0.1.4.tar.gz/observations-0.1.4/observations/yelp17.py | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import json
import numpy as np
import os
def yelp17(path, categories=None):
"""Load Yelp reviews from the Yelp Dataset Challenge in 2017. It
contains ~4.1 million reviews, ~1 million users, and ~144,000
businesses from cities in the UK, Germany, Canada, and the US. We
only load the review's text and its rating.
Args:
path: str.
Path to directory which stores file. Filename is
`yelp_dataset_challenge_round9/`.
categories: str or list of str, optional.
Business categories to include reviews from. It is case-sensitive,
e.g., "Restaurants". Default is to include all categories.
Returns:
Tuple of list x_train and np.ndarray y_train. Each pair of
elements corresponds to the review text and its rating.
"""
if isinstance(categories, str):
categories = [categories]
path = os.path.expanduser(path)
path = os.path.join(path, 'yelp_dataset_challenge_round9')
if not os.path.exists(path):
url = 'https://www.yelp.com/dataset_challenge'
raise IOError("Files not found. Downloading requires explicit permission. "
"See {}".format(e, url))
if categories is not None:
business = {}
with open(os.path.join(path, 'yelp_academic_dataset_business.json')) as f:
for line in f:
data = json.loads(line)
business[data['business_id']] = data['categories']
x_train = []
y_train = []
with open(os.path.join(path, 'yelp_academic_dataset_review.json')) as f:
for line in f:
data = json.loads(line)
if categories is None:
x_train.append(data['text'])
y_train.append(data['stars'])
else:
business_categories = business.get(data['business_id'])
if business_categories and \
any(cat in categories for cat in business_categories):
x_train.append(data['text'])
y_train.append(data['stars'])
y_train = np.array(y_train, dtype=int) # np.int was removed from NumPy; use the builtin int
return x_train, y_train | PypiClean |
/MagicBus-4.1.2.tar.gz/MagicBus-4.1.2/magicbus/plugins/signalhandler.py |
import os
import signal as _signal
import sys
from magicbus.compat import basestring
class SignalHandler(object):
"""Register bus channels (and listeners) for system signals.
You can modify what signals your application listens for, and what it does
when it receives signals, by modifying :attr:`SignalHandler.handlers`,
a dict of {signal name: callback} pairs. The default set is::
handlers = {'SIGTERM': self.handle_SIGTERM,
'SIGHUP': self.handle_SIGHUP,
'SIGUSR1': self.bus.graceful,
}
The :func:`SignalHandler.handle_SIGHUP` method calls execv if the process
is daemonized, but exits if the process is attached to a TTY. This is
because Unix window managers tend to send SIGHUP to terminal windows
when the user closes them.
Feel free to add signals which are not available on every platform. The
:class:`SignalHandler` will ignore errors raised from attempting to
register handlers for unknown signals.
"""
handlers = {}
"""A map from signal names (e.g. 'SIGTERM') to handlers."""
signals = {}
"""A map from signal numbers to names."""
for k, v in vars(_signal).items():
if k.startswith('SIG') and not k.startswith('SIG_'):
signals[v] = k
del k, v
def __init__(self, bus):
self.bus = bus
# Set default handlers
self.handlers = {'SIGTERM': self.handle_SIGTERM,
'SIGHUP': self.handle_SIGHUP,
'SIGUSR1': self.bus.graceful,
}
if sys.platform[:4] == 'java':
del self.handlers['SIGUSR1']
self.handlers['SIGUSR2'] = self.bus.graceful
self.bus.log('SIGUSR1 cannot be set on the JVM platform. '
'Using SIGUSR2 instead.')
self.handlers['SIGINT'] = self._jython_SIGINT_handler
self._previous_handlers = {}
def _jython_SIGINT_handler(self, signum=None, frame=None):
# See http://bugs.jython.org/issue1313
self.bus.log('Keyboard Interrupt: shutting down bus')
self.bus.transition('EXITED')
def subscribe(self):
self.bus.subscribe('ENTER', self.subscribe_handlers)
def subscribe_handlers(self):
"""Subscribe self.handlers to signals."""
for sig, func in self.handlers.items():
try:
self.set_handler(sig, func)
except ValueError:
pass
# Only run after Daemonizer.ENTER (65)
subscribe_handlers.priority = 70
def unsubscribe(self):
"""Unsubscribe self.handlers from signals."""
for signum, handler in self._previous_handlers.items():
signame = self.signals[signum]
if handler is None:
self.bus.log('Restoring %s handler to SIG_DFL.' % signame)
handler = _signal.SIG_DFL
else:
self.bus.log('Restoring %s handler %r.' % (signame, handler))
try:
our_handler = _signal.signal(signum, handler)
if our_handler is None:
self.bus.log('Restored old %s handler %r, but our '
'handler was not registered.' %
(signame, handler), level=30)
except ValueError:
self.bus.log('Unable to restore %s handler %r.' %
(signame, handler), level=40, traceback=True)
def set_handler(self, signal, listener=None):
"""Subscribe a handler for the given signal (number or name).
If the optional 'listener' argument is provided, it will be
subscribed as a listener for the given signal's channel.
If the given signal name or number is not available on the current
platform, ValueError is raised.
"""
if isinstance(signal, basestring):
signum = getattr(_signal, signal, None)
if signum is None:
raise ValueError('No such signal: %r' % signal)
signame = signal
else:
try:
signame = self.signals[signal]
except KeyError:
raise ValueError('No such signal: %r' % signal)
signum = signal
prev = _signal.signal(signum, self._handle_signal)
self._previous_handlers[signum] = prev
if listener is not None:
self.bus.log('Listening for %s.' % signame)
self.bus.subscribe(signame, listener)
def _handle_signal(self, signum=None, frame=None):
"""Python signal handler (self.set_handler subscribes it for you)."""
signame = self.signals[signum]
self.bus.log('Caught signal %s.' % signame)
self.bus.publish(signame)
def handle_SIGTERM(self):
"""Transition to the EXITED state."""
self.bus.log('SIGTERM caught. Exiting.')
self.bus.transition('EXITED')
def handle_SIGHUP(self):
"""Restart if daemonized, else exit."""
if os.isatty(sys.stdin.fileno()):
# not daemonized (may be foreground or background)
self.bus.log('SIGHUP caught but not daemonized. Exiting.')
self.bus.transition('EXITED')
else:
self.bus.log('SIGHUP caught while daemonized. Restarting.')
self.bus.restart() | PypiClean |
/Files.com-1.0.1051-py3-none-any.whl/files_sdk/models/api_key.py | import builtins
import datetime
from files_sdk.api import Api
from files_sdk.list_obj import ListObj
from files_sdk.exceptions import InvalidParameterError, MissingParameterError, NotImplementedError
class ApiKey:
default_attributes = {
'id': None, # int64 - API Key ID
'descriptive_label': None, # string - Unique label that describes this API key. Useful for external systems where you may have API keys from multiple accounts and want a human-readable label for each key.
'description': None, # string - User-supplied description of API key.
'created_at': None, # date-time - Time which API Key was created
'expires_at': None, # date-time - API Key expiration date
'key': None, # string - API Key actual key string
'last_use_at': None, # date-time - API Key last used - note this value is only updated once per 3 hour period, so the 'actual' time of last use may be up to 3 hours later than this timestamp.
'name': None, # string - Internal name for the API Key. For your use.
'path': None, # string - Folder path restriction for this api key. This must be slash-delimited, but it must neither start nor end with a slash. Maximum of 5000 characters.
'permission_set': None, # string - Permissions for this API Key. Keys with the `desktop_app` permission set only have the ability to do the functions provided in our Desktop App (File and Share Link operations). Additional permission sets may become available in the future, such as for a Site Admin to give a key with no administrator privileges. If you have ideas for permission sets, please let us know.
'platform': None, # string - If this API key represents a Desktop app, what platform was it created on?
'url': None, # string - URL for API host.
'user_id': None, # int64 - User ID for the owner of this API Key. May be blank for Site-wide API Keys.
}
def __init__(self, attributes=None, options=None):
if not isinstance(attributes, dict):
attributes = {}
if not isinstance(options, dict):
options = {}
self.set_attributes(attributes)
self.options = options
def set_attributes(self, attributes):
for (attribute, default_value) in ApiKey.default_attributes.items():
setattr(self, attribute, attributes.get(attribute, default_value))
def get_attributes(self):
return {k: getattr(self, k, None) for k in ApiKey.default_attributes if getattr(self, k, None) is not None}
# Parameters:
# name - string - Internal name for the API Key. For your use.
# description - string - User-supplied description of API key.
# expires_at - string - API Key expiration date
# permission_set - string - Permissions for this API Key. Keys with the `desktop_app` permission set only have the ability to do the functions provided in our Desktop App (File and Share Link operations). Additional permission sets may become available in the future, such as for a Site Admin to give a key with no administrator privileges. If you have ideas for permission sets, please let us know.
def update(self, params = None):
if not isinstance(params, dict):
params = {}
if hasattr(self, "id") and self.id:
params['id'] = self.id
else:
raise MissingParameterError("Current object doesn't have a id")
if "id" not in params:
raise MissingParameterError("Parameter missing: id")
if "id" in params and not isinstance(params["id"], int):
raise InvalidParameterError("Bad parameter: id must be an int")
if "name" in params and not isinstance(params["name"], str):
raise InvalidParameterError("Bad parameter: name must be an str")
if "description" in params and not isinstance(params["description"], str):
raise InvalidParameterError("Bad parameter: description must be an str")
if "expires_at" in params and not isinstance(params["expires_at"], str):
raise InvalidParameterError("Bad parameter: expires_at must be an str")
if "permission_set" in params and not isinstance(params["permission_set"], str):
raise InvalidParameterError("Bad parameter: permission_set must be an str")
response, _options = Api.send_request("PATCH", "/api_keys/{id}".format(id=params['id']), params, self.options)
return response.data
def delete(self, params = None):
if not isinstance(params, dict):
params = {}
if hasattr(self, "id") and self.id:
params['id'] = self.id
else:
raise MissingParameterError("Current object doesn't have a id")
if "id" not in params:
raise MissingParameterError("Parameter missing: id")
if "id" in params and not isinstance(params["id"], int):
raise InvalidParameterError("Bad parameter: id must be an int")
response, _options = Api.send_request("DELETE", "/api_keys/{id}".format(id=params['id']), params, self.options)
return response.data
    def destroy(self, params = None):
        return self.delete(params)
def save(self):
if hasattr(self, "id") and self.id:
self.update(self.get_attributes())
else:
new_obj = create(self.get_attributes(), self.options)
self.set_attributes(new_obj.get_attributes())
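# Illustrative note (not part of the files_sdk source): ApiKey mixes instance
# methods (update/delete/save above) with module-level functions of the same
# names defined below; save() dispatches to the module-level create() when
# self.id is unset and to the instance update() when it is set.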
# Parameters:
# user_id - int64 - User ID. Provide a value of `0` to operate the current session's user.
# cursor - string - Used for pagination. When a list request has more records available, cursors are provided in the response headers `X-Files-Cursor-Next` and `X-Files-Cursor-Prev`. Send one of those cursor value here to resume an existing list from the next available record. Note: many of our SDKs have iterator methods that will automatically handle cursor-based pagination.
# per_page - int64 - Number of records to show per page. (Max: 10,000, 1,000 or less is recommended).
# sort_by - object - If set, sort records by the specified field in either `asc` or `desc` direction (e.g. `sort_by[expires_at]=desc`). Valid fields are `expires_at`.
# filter - object - If set, return records where the specified field is equal to the supplied value. Valid fields are `expires_at`.
# filter_gt - object - If set, return records where the specified field is greater than the supplied value. Valid fields are `expires_at`.
# filter_gteq - object - If set, return records where the specified field is greater than or equal the supplied value. Valid fields are `expires_at`.
# filter_lt - object - If set, return records where the specified field is less than the supplied value. Valid fields are `expires_at`.
# filter_lteq - object - If set, return records where the specified field is less than or equal the supplied value. Valid fields are `expires_at`.
def list(params = None, options = None):
if not isinstance(params, dict):
params = {}
if not isinstance(options, dict):
options = {}
if "user_id" in params and not isinstance(params["user_id"], int):
raise InvalidParameterError("Bad parameter: user_id must be an int")
if "cursor" in params and not isinstance(params["cursor"], str):
raise InvalidParameterError("Bad parameter: cursor must be an str")
if "per_page" in params and not isinstance(params["per_page"], int):
raise InvalidParameterError("Bad parameter: per_page must be an int")
if "sort_by" in params and not isinstance(params["sort_by"], dict):
raise InvalidParameterError("Bad parameter: sort_by must be an dict")
if "filter" in params and not isinstance(params["filter"], dict):
raise InvalidParameterError("Bad parameter: filter must be an dict")
if "filter_gt" in params and not isinstance(params["filter_gt"], dict):
raise InvalidParameterError("Bad parameter: filter_gt must be an dict")
if "filter_gteq" in params and not isinstance(params["filter_gteq"], dict):
raise InvalidParameterError("Bad parameter: filter_gteq must be an dict")
if "filter_lt" in params and not isinstance(params["filter_lt"], dict):
raise InvalidParameterError("Bad parameter: filter_lt must be an dict")
if "filter_lteq" in params and not isinstance(params["filter_lteq"], dict):
raise InvalidParameterError("Bad parameter: filter_lteq must be an dict")
return ListObj(ApiKey,"GET", "/api_keys", params, options)
def all(params = None, options = None):
    return list(params, options)
def find_current(params = None, options = None):
if not isinstance(params, dict):
params = {}
if not isinstance(options, dict):
options = {}
response, options = Api.send_request("GET", "/api_key", params, options)
return ApiKey(response.data, options)
# Parameters:
# id (required) - int64 - Api Key ID.
def find(id, params = None, options = None):
if not isinstance(params, dict):
params = {}
if not isinstance(options, dict):
options = {}
params["id"] = id
if "id" in params and not isinstance(params["id"], int):
raise InvalidParameterError("Bad parameter: id must be an int")
if "id" not in params:
raise MissingParameterError("Parameter missing: id")
response, options = Api.send_request("GET", "/api_keys/{id}".format(id=params['id']), params, options)
return ApiKey(response.data, options)
def get(id, params = None, options = None):
    return find(id, params, options)
# Parameters:
# user_id - int64 - User ID. Provide a value of `0` to operate the current session's user.
# name - string - Internal name for the API Key. For your use.
# description - string - User-supplied description of API key.
# expires_at - string - API Key expiration date
# permission_set - string - Permissions for this API Key. Keys with the `desktop_app` permission set only have the ability to do the functions provided in our Desktop App (File and Share Link operations). Additional permission sets may become available in the future, such as for a Site Admin to give a key with no administrator privileges. If you have ideas for permission sets, please let us know.
# path - string - Folder path restriction for this api key.
def create(params = None, options = None):
if not isinstance(params, dict):
params = {}
if not isinstance(options, dict):
options = {}
if "user_id" in params and not isinstance(params["user_id"], int):
raise InvalidParameterError("Bad parameter: user_id must be an int")
if "name" in params and not isinstance(params["name"], str):
raise InvalidParameterError("Bad parameter: name must be an str")
if "description" in params and not isinstance(params["description"], str):
raise InvalidParameterError("Bad parameter: description must be an str")
if "expires_at" in params and not isinstance(params["expires_at"], str):
raise InvalidParameterError("Bad parameter: expires_at must be an str")
if "permission_set" in params and not isinstance(params["permission_set"], str):
raise InvalidParameterError("Bad parameter: permission_set must be an str")
if "path" in params and not isinstance(params["path"], str):
raise InvalidParameterError("Bad parameter: path must be an str")
response, options = Api.send_request("POST", "/api_keys", params, options)
return ApiKey(response.data, options)
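# A minimal usage sketch (an assumption for illustration, not part of the
# SDK): create a key, rename it, then delete it; the parameter values below
# are hypothetical.
def _example_api_key_lifecycle():  # hypothetical helper, not in files_sdk
    key = create({"name": "ci-key", "description": "example key"})
    update(key.id, {"name": "ci-key-renamed"})
    delete(key.id)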
# Parameters:
# expires_at - string - API Key expiration date
# name - string - Internal name for the API Key. For your use.
# permission_set - string - Permissions for this API Key. Keys with the `desktop_app` permission set only have the ability to do the functions provided in our Desktop App (File and Share Link operations). Additional permission sets may become available in the future, such as for a Site Admin to give a key with no administrator privileges. If you have ideas for permission sets, please let us know.
def update_current(params = None, options = None):
if not isinstance(params, dict):
params = {}
if not isinstance(options, dict):
options = {}
if "expires_at" in params and not isinstance(params["expires_at"], str):
raise InvalidParameterError("Bad parameter: expires_at must be an str")
if "name" in params and not isinstance(params["name"], str):
raise InvalidParameterError("Bad parameter: name must be an str")
if "permission_set" in params and not isinstance(params["permission_set"], str):
raise InvalidParameterError("Bad parameter: permission_set must be an str")
response, options = Api.send_request("PATCH", "/api_key", params, options)
return ApiKey(response.data, options)
# Parameters:
# name - string - Internal name for the API Key. For your use.
# description - string - User-supplied description of API key.
# expires_at - string - API Key expiration date
# permission_set - string - Permissions for this API Key. Keys with the `desktop_app` permission set only have the ability to do the functions provided in our Desktop App (File and Share Link operations). Additional permission sets may become available in the future, such as for a Site Admin to give a key with no administrator privileges. If you have ideas for permission sets, please let us know.
def update(id, params = None, options = None):
if not isinstance(params, dict):
params = {}
if not isinstance(options, dict):
options = {}
params["id"] = id
if "id" in params and not isinstance(params["id"], int):
raise InvalidParameterError("Bad parameter: id must be an int")
if "name" in params and not isinstance(params["name"], str):
raise InvalidParameterError("Bad parameter: name must be an str")
if "description" in params and not isinstance(params["description"], str):
raise InvalidParameterError("Bad parameter: description must be an str")
if "expires_at" in params and not isinstance(params["expires_at"], str):
raise InvalidParameterError("Bad parameter: expires_at must be an str")
if "permission_set" in params and not isinstance(params["permission_set"], str):
raise InvalidParameterError("Bad parameter: permission_set must be an str")
if "id" not in params:
raise MissingParameterError("Parameter missing: id")
response, options = Api.send_request("PATCH", "/api_keys/{id}".format(id=params['id']), params, options)
return ApiKey(response.data, options)
def delete_current(params = None, options = None):
if not isinstance(params, dict):
params = {}
if not isinstance(options, dict):
options = {}
response, _options = Api.send_request("DELETE", "/api_key", params, options)
return response.data
def delete(id, params = None, options = None):
if not isinstance(params, dict):
params = {}
if not isinstance(options, dict):
options = {}
params["id"] = id
if "id" in params and not isinstance(params["id"], int):
raise InvalidParameterError("Bad parameter: id must be an int")
if "id" not in params:
raise MissingParameterError("Parameter missing: id")
response, _options = Api.send_request("DELETE", "/api_keys/{id}".format(id=params['id']), params, options)
return response.data
def destroy(id, params = None, options = None):
    return delete(id, params, options)
def new(*args, **kwargs):
return ApiKey(*args, **kwargs) | PypiClean |
/Mopidy_Advanced_Scrobbler-2.1.0-py3-none-any.whl/mopidy_advanced_scrobbler/_commands/db/sync_corrections.py | from __future__ import annotations
import enum
from typing import TYPE_CHECKING, Tuple
from urllib.parse import urlparse
import questionary
from prompt_toolkit.completion import FuzzyWordCompleter
from rich.table import Table, box
from mopidy_advanced_scrobbler._commands import AbortCommand, Counter
from mopidy_advanced_scrobbler._commands.db import (
attach_external_db,
connect_internal_db,
verify_external_db,
)
from mopidy_advanced_scrobbler._commands.output import stderr, stdout
if TYPE_CHECKING:
from argparse import Namespace
from mopidy_advanced_scrobbler.db import Connection
load_both_differ_query = """
SELECT main.corrections.track_uri,
main.corrections.title as main_title, main.corrections.artist as main_artist, main.corrections.album as main_album,
ext.corrections.title as ext_title, ext.corrections.artist as ext_artist, ext.corrections.album as ext_album FROM main.corrections
INNER JOIN ext.corrections ON ext.corrections.track_uri = main.corrections.track_uri
WHERE main_title != ext_title OR main_artist != ext_artist OR main_album != ext_album
"""
update_correction_query = """
UPDATE {0}.corrections SET title = ?, artist = ?, album = ? WHERE track_uri = ?
"""
load_only_in_main_query = """
SELECT
main.corrections.track_uri,
main.corrections.title,
main.corrections.artist,
main.corrections.album
FROM main.corrections
LEFT JOIN ext.corrections ON ext.corrections.track_uri = main.corrections.track_uri
WHERE ext.corrections.track_uri IS NULL
"""
load_only_in_ext_query = """
SELECT
ext.corrections.track_uri,
ext.corrections.title,
ext.corrections.artist,
ext.corrections.album
FROM ext.corrections
LEFT JOIN main.corrections ON main.corrections.track_uri = ext.corrections.track_uri
WHERE main.corrections.track_uri IS NULL
"""
class DataMismatchChoiceEnum(enum.Enum):
CHOOSE_INTERNAL = enum.auto()
CHOOSE_EXTERNAL = enum.auto()
MANUAL_EDIT = enum.auto()
SKIP_TRACK = enum.auto()
class DataMismatchCounterEnum(enum.Enum):
UPDATE = enum.auto()
SKIP = enum.auto()
choice_choose_internal = questionary.Choice(
title="Use Internal Values",
value=DataMismatchChoiceEnum.CHOOSE_INTERNAL,
)
choice_choose_external = questionary.Choice(
title="Use External Values",
value=DataMismatchChoiceEnum.CHOOSE_EXTERNAL,
)
choice_manual_edit = questionary.Choice(
title="Manually Edit Values",
value=DataMismatchChoiceEnum.MANUAL_EDIT,
)
choice_skip = questionary.Choice(
title="Skip Track",
value=DataMismatchChoiceEnum.SKIP_TRACK,
)
def make_differ_table(result) -> Table:
table = Table(title=result["track_uri"], show_footer=False, box=box.HEAVY_HEAD, show_lines=True)
table.add_column(header="", style="bold")
table.add_column(header="Internal")
table.add_column(header="External")
table.add_row("Title", result["main_title"], result["ext_title"])
table.add_row("Artist", result["main_artist"], result["ext_artist"])
table.add_row("Album", result["main_album"], result["ext_album"])
return table
def run(args: Namespace, config):
verify_external_db(args.external_db, config)
db = connect_internal_db(config)
attach_external_db(db, args.external_db)
differ_counter: Counter[DataMismatchCounterEnum] = Counter()
try:
sync_both_differ(db, differ_counter, include_filesystem_uris=args.include_filesystem_uris)
finally:
updated_count = differ_counter.get(DataMismatchCounterEnum.UPDATE)
if updated_count > 0:
stdout.print(f"Updated {updated_count} entries!", style="success")
else:
stdout.print("No entries were updated.", style="notice")
skipped_count = differ_counter.get(DataMismatchCounterEnum.SKIP)
if skipped_count > 0:
stdout.print(f"Skipped {skipped_count} entries.", style="notice")
sync_only_in_main(db, include_filesystem_uris=args.include_filesystem_uris)
sync_only_in_ext(db, include_filesystem_uris=args.include_filesystem_uris)
stdout.print("Success!", style="success")
return 0
def sync_both_differ(
db: Connection,
counter: Counter[DataMismatchCounterEnum],
*,
include_filesystem_uris: bool = False,
):
stdout.print(
"Synchronising differences between existing entries in both databases.", style="notice"
)
results = db.execute(load_both_differ_query)
for result in results:
track_uri = result["track_uri"]
parsed_track_uri = urlparse(track_uri)
if not parsed_track_uri.scheme:
stderr.print(
f"Invalid track URI found in internal corrections database: {track_uri}",
style="warning",
)
continue
elif parsed_track_uri.scheme in ("file", "local"):
if not include_filesystem_uris:
continue
table = make_differ_table(result)
stdout.print(table)
response = questionary.select(
message="",
            choices=(choice_choose_internal, choice_choose_external, choice_manual_edit, choice_skip),
).ask()
if response is None:
raise AbortCommand
if response == DataMismatchChoiceEnum.CHOOSE_INTERNAL:
update_ext_from_main(db, result)
elif response == DataMismatchChoiceEnum.CHOOSE_EXTERNAL:
update_main_from_ext(db, result)
elif response == DataMismatchChoiceEnum.MANUAL_EDIT:
while True:
title, artist, album = edit_values(result)
if questionary.confirm("Empty album correct?", auto_enter=False).ask():
break
update_both(db, track_uri, title, artist, album)
elif response == DataMismatchChoiceEnum.SKIP_TRACK:
counter.incr(DataMismatchCounterEnum.SKIP)
continue
else:
stderr.print("Unrecognised option.", style="error")
raise AbortCommand
counter.incr(DataMismatchCounterEnum.UPDATE)
def update_ext_from_main(db: Connection, result):
db.execute(
update_correction_query.format("ext"),
(result["main_title"], result["main_artist"], result["main_album"], result["track_uri"]),
)
def update_main_from_ext(db: Connection, result):
db.execute(
update_correction_query.format("main"),
(result["ext_title"], result["ext_artist"], result["ext_album"], result["track_uri"]),
)
def update_both(db: Connection, track_uri: str, title: str, artist: str, album: str):
with db:
db.execute("BEGIN")
db.execute(
update_correction_query.format("main"),
(title, artist, album, track_uri),
)
db.execute(
update_correction_query.format("ext"),
(title, artist, album, track_uri),
)
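# Illustrative note (not part of the original module): update_both runs both
# UPDATE statements inside the sqlite3 connection context manager, so they
# commit or roll back together. A hypothetical call (values are assumptions):
#
#     update_both(db, "spotify:track:xyz", "Title", "Artist", "Album")
#
# If the second execute() raises, the first UPDATE is rolled back as well.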
def edit_values(result) -> Tuple[str, str, str]:
title = ""
while title == "":
if result["main_title"] == result["ext_title"]:
default = result["main_title"]
else:
default = ""
title = questionary.text(
"Title:",
default=default,
completer=FuzzyWordCompleter([result["main_title"], result["ext_title"]]),
).ask()
if title is None:
raise AbortCommand
title = title.strip()
artist = ""
while artist == "":
if result["main_artist"] == result["ext_artist"]:
default = result["main_artist"]
else:
default = ""
artist = questionary.text(
"Artist:",
default=default,
completer=FuzzyWordCompleter([result["main_artist"], result["ext_artist"]]),
).ask()
if artist is None:
raise AbortCommand
artist = artist.strip()
while True:
if result["main_album"] == result["ext_album"]:
default = result["main_album"]
else:
default = ""
album = questionary.text(
"Album:",
default=default,
completer=FuzzyWordCompleter([result["main_album"], result["ext_album"]]),
).ask()
if album is None:
raise AbortCommand
album = album.strip()
if album != "":
break
if questionary.confirm("Are you sure?", auto_enter=False).ask():
break
return title, artist, album
def sync_only_in_main(db: Connection, *, include_filesystem_uris: bool = False) -> int:
stdout.print("Copying entries only in internal database to external database.", style="notice")
insert_query = "INSERT INTO ext.corrections (track_uri, title, artist, album)"
query = f"{insert_query} {load_only_in_main_query}"
if not include_filesystem_uris:
query += (
" AND main.corrections.track_uri NOT LIKE 'file:%'"
" AND main.corrections.track_uri NOT LIKE 'local:%'"
)
cursor = db.execute(query)
entry_count = cursor.rowcount
if entry_count > 0:
stdout.print(f"Copied {entry_count} entries!", style="success")
else:
stdout.print("No entries were copied.", style="notice")
return entry_count
def sync_only_in_ext(db: Connection, *, include_filesystem_uris: bool = False) -> int:
stdout.print("Copying entries only in external database to internal database.", style="notice")
insert_query = "INSERT INTO main.corrections (track_uri, title, artist, album)"
query = f"{insert_query} {load_only_in_ext_query}"
if not include_filesystem_uris:
query += (
" AND ext.corrections.track_uri NOT LIKE 'file:%'"
" AND ext.corrections.track_uri NOT LIKE 'local:%'"
)
cursor = db.execute(query)
entry_count = cursor.rowcount
if entry_count > 0:
stdout.print(f"Copied {entry_count} entries!", style="success")
else:
stdout.print("No entries were copied.", style="notice")
return entry_count
__all__ = ("run",) | PypiClean |
/DynaMIT-1.1.5.tar.gz/DynaMIT-1.1.5/dynamit/rnaForesterSearcher.py |
from __future__ import print_function
from __future__ import division
from builtins import str
import os, math, re
import subprocess
from subprocess import CalledProcessError
import dynamit.motifSearcher
import dynamit.utils
class RNAforesterSearcher(dynamit.motifSearcher.MotifSearcher):
"""Class implementing a RNAforester RNA secondary
structure motif search component, running this tool
on the provided input sequences and providing its
processed motifs and instances as results.
"""
def __init__(self):
"""Initialize all class attributes with their default values.
"""
super(self.__class__, self).__init__()
self.searcherName = "RNAforester"
self.path = ""
self.params = ""
self.structureFilename = ""
def setConfiguration(self, path, params):
"""Loads the searcher parameters specified in the configuration file.
Args:
path: path of the RNAforester executable file.
params: parameters to be passed to RNAforester
along with the sequences filename.
Returns:
Returns 0 if everything went fine, 1 and an error message otherwise.
"""
self.path = path
self.params = params
# if params contain a semi-colon, then the first part is the
# actual parameters, the second one is a filename containing
# secondary structure predictions (RNAfold format) to be used
# instead of letting RNAforester predict them.
if self.params.find(";") >= 0:
inf = self.params.split(";")
self.params = inf[0]
self.structureFilename = inf[1]
return 0
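    # Illustrative note (not part of the original source): per the convention
    # above, a hypothetical config value such as "-l 0.7;structures.txt"
    # splits into params="-l 0.7" and structureFilename="structures.txt",
    # where the latter holds RNAfold-format secondary-structure predictions.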
def runSearch(self, sequencesFilename):
"""Performs motif search by executing the RNAforester
secondary structure motif search tool and processing
its results to provide motifs and related instances.
Args:
sequencesFilename: input sequences filename for this run.
Returns:
Returns a list of strings representing identified motif matches
if everything went fine (details on results filenames, etc.,
are printed to the console); returns 1 and an error message otherwise.
"""
# convert sequences file to single line sequences FASTA
singleLineFilename = os.path.splitext(sequencesFilename)[0] + \
"_singleLine.fasta"
if (dynamit.utils.makeSingleLineFASTA(sequencesFilename,
singleLineFilename) != 0):
print("[ERROR] Unable to create single line sequences file.")
return 1
# compose the complete command-line for launching RNAforester.
completePath = os.path.join(self.path, "RNAforester") + \
" -p -m " + self.params
# if a filename containing secondary structure predictions
# (RNAfold format) was specified in the parameters, then
# use this file as input instead of letting RNAforester predict them.
if self.structureFilename != "":
completePath = os.path.join(self.path, "RNAforester") + \
" -m " + self.params + " " + self.structureFilename
# prepare sequences dictionary to be later passed
# to processRNAforesterResults
sequences = dict([(seqRecord.description, str(seqRecord.seq)) \
for seqRecord in dynamit.utils.getSequencesRecords(singleLineFilename)])
        inSeqsHandle = None
        outResultsHandle = None
        try:
            # open input and output files for RNAforester.
            inSeqsHandle = open(singleLineFilename)
# if a filename containing secondary structure predictions
# (RNAfold format) was specified in the parameters, then
# use this file as input instead of letting RNAforester predict them.
if self.structureFilename != "":
inSeqsHandle.close()
inSeqsHandle = open(self.structureFilename)
outResultsHandle = open(singleLineFilename + ".RNAforester", "wb")
# launch RNAforester and wait for its execution to
# complete (store its stderr for use if an error happens).
callOutput = subprocess.check_output(completePath, shell=True,
stderr=subprocess.STDOUT,
stdin=inSeqsHandle)
# write results to the output file and close it.
outResultsHandle.write(callOutput)
outResultsHandle.close()
# check if RNAforester results exist
if os.path.isfile(singleLineFilename + ".RNAforester"):
# extract results
print(" [RNAforesterSearcher] Search completed.")
self.searchResults = self._processRNAforesterResults(sequences,
singleLineFilename + ".RNAforester")
else:
print("[ERROR] Could not find RNAforester results file.")
return 1
# remove cluster.dot and test.out files (generated by RNAforester)
# but not needed in this context.
if os.path.isfile("cluster.dot"):
os.remove("cluster.dot")
if os.path.isfile("test.out"):
os.remove("test.out")
print(" [RNAforesterSearcher] Execution completed.")
return self.searchResults
except CalledProcessError as e:
# inform about the error that happened, and abort searcher execution.
print("[ERROR] RNAforester execution terminated with an error:" + e.output)
return 1
finally:
# close input and output files.
            if inSeqsHandle:
                inSeqsHandle.close()
if outResultsHandle:
outResultsHandle.close()
def _processRNAforesterResults(self, sequences, resultsFilename):
""" Process results contained in RNAforester output files to
produce a table for subsequent DynaMIT phases.
Args:
sequences: a dictionary of sequences (id is key, sequence is value).
resultsFilename: the RNAforester results filename.
Returns:
Returns a list of strings, one per motif match, containing
motif sequence, sequence id, match position, etc.
"""
print(" [RNAforesterSearcher] Processing results: <", \
os.path.basename(resultsFilename), ">")
try:
with open(resultsFilename) as f:
lines = f.readlines()
processedResults = []
isInResults = False
isInCluster = False
resultsStart = 0
motifConsensus = ""
motifScore = 0
motifStructConsensus = ""
i = 0
while i < len(lines):
line = lines[i].rstrip('\n')
if isInResults:
# we have collected the motif consensus and are
# now processing its instances
if (line == '\n') or line.startswith(" "):
# we are done mapping instances, so return the results
isInResults = False
isInCluster = False
else:
                        # map the current instance to its position on the
                        # sequence, then add it to the results
info = re.split(r'\s+', line)
# get the match full sequence ID.
fullSeqID = dynamit.utils.getFullSequenceID(sequences, info[0], 0)
fullSeqID = info[0] if fullSeqID == -1 else fullSeqID
# remove gaps to find the motif instance position and, if
# original sequences contained "T"s replace Us in RNAforester.
noGapsSeq = info[1].replace("-", "").replace("U", "T") \
if sequences[fullSeqID].find("T") >= 0 \
else info[1].replace("-", "")
motifStart = sequences[fullSeqID].find(noGapsSeq) + 1
# if the motif instance was not found on the sequence, we
# likely selected the wrong full sequence ID, so try again
# systematically on all sequence until found.
if motifStart == 0:
for seqId, seq in list(sequences.items()):
if seqId.startswith(info[0]):
motifStart = seq.find(noGapsSeq) + 1
if motifStart > 0:
fullSeqID = seqId
break
# add the instance to the searcher results.
processedResults.append(motifConsensus + \
"\tstructure\t" + self.searcherName + \
"\t" + fullSeqID + "\t" + str(motifStart) + \
"\t" + str(motifStart + len(info[1])) + \
"\t" + motifScore + "\t" + motifStructConsensus)
if line.startswith("RNA Structure Cluster Nr"):
# we have reached the portion of the file containing motifs instances,
# so skip additional lines, store the motif score and the starting
# lines of motifs instances and go on, we will go back to instances
# after collecting the motif consensus
motifScore = lines[i+1].rstrip('\n').split(':')[1].lstrip(' ')
motifScore = str(math.log(float(motifScore), 2))
i += 4
resultsStart = i
isInCluster = True
continue
if line.startswith("Consensus sequence/structure:") and isInCluster:
# we have reached the portion of the file containing the
                    # motif consensus, so get its structure/sequence representation.
i += 11
motifConsensus = lines[i].rstrip('\n').lstrip(' ').replace("U", "T")
motifStructConsensus = lines[i+1].rstrip('\n').lstrip(' ')
# now that we have the consensus, move back to motif
# instances to record these
i = resultsStart
isInResults = True
continue
i += 1
return processedResults
except (IOError, IndexError, KeyError, RuntimeError, ValueError) as e:
print(" [RNAforesterSearcher] Unexpected error: %s" % str(e))
return 1 | PypiClean |
/EVE-SRP-0.12.11.tar.gz/EVE-SRP-0.12.11/src/evesrp/views/divisions.py | from __future__ import absolute_import
from flask import url_for, render_template, redirect, abort, flash, request,\
Blueprint, current_app
from flask_babel import gettext, lazy_gettext
from flask_login import login_required, fresh_login_required, current_user
from flask_wtf import Form
import six
from six.moves import map
from sqlalchemy.orm.exc import NoResultFound, MultipleResultsFound
from wtforms.fields import StringField, SubmitField, HiddenField, SelectField,\
Label
from wtforms.validators import InputRequired, AnyOf, NumberRange
from ..models import db
from ..auth import PermissionType
from ..auth.models import Division, Permission, Entity
from ..util import jsonify, varies
blueprint = Blueprint('divisions', __name__)
@blueprint.route('/')
@fresh_login_required
def permissions():
"""Show a page listing all divisions.
"""
if current_user.admin:
return render_template('divisions.html',
divisions=Division.query.all())
if current_user.has_permission(PermissionType.admin):
admin_permissions = current_user.permissions.filter_by(
permission=PermissionType.admin).values(Permission.division_id)
admin_permissions = map(lambda x: x[0], admin_permissions)
divisions = db.session.query(Division).\
filter(Division.id.in_(admin_permissions))
return render_template('divisions.html', divisions=divisions)
return render_template('permissions.html')
return abort(403)
class AddDivisionForm(Form):
# TRANS: On a form for creating a new division, this is a label for the
# TRANS: name of the division.
name = StringField(lazy_gettext(u'Division Name'),
validators=[InputRequired()])
# TRANS: On a form for creating a new division, this is a button for
# TRANS: creating a new division (by submitting the form).
submit = SubmitField(lazy_gettext(u'Create Division'))
@blueprint.route('/add/', methods=['GET', 'POST'])
@fresh_login_required
def add_division():
"""Present a form for adding a division and also process that form.
    Only accessible to administrators.
"""
if not current_user.admin:
return abort(403)
form = AddDivisionForm()
if form.validate_on_submit():
division = Division(form.name.data)
db.session.add(division)
db.session.commit()
return redirect(url_for('.get_division_details',
division_id=division.id))
return render_template('form.html', form=form,
# TRANS: The title for a page for creating new divisions.
title=gettext(u'Create Division'))
class ChangeEntity(Form):
form_id = HiddenField(default='entity')
id_ = HiddenField()
name = StringField()
permission = HiddenField(validators=[AnyOf(list(PermissionType.values()))])
action = HiddenField(validators=[AnyOf(('add', 'delete'))])
#: List of tuples enumerating attributes that can be transformed/linked.
#: Mainly used as the choices argument to :py:class:`~.SelectField`
transformer_choices = [
('', u''),
# TRANS: Label for fields showing the name of a pilot.
('pilot', lazy_gettext(u'Pilot')),
# TRANS: Label for the corporation a pilot is in.
('corporation', lazy_gettext(u'Corporation')),
# TRANS: Label for the alliance a pilot is in.
('alliance', lazy_gettext(u'Alliance')),
# TRANS: Label for the solar system a loss occured in.
('system', lazy_gettext(u'Solar System')),
# TRANS: Label for the constellation a loss occured in.
('constellation', lazy_gettext(u'Constellation')),
# TRANS: Label for the region a loss occured in.
('region', lazy_gettext(u'Region')),
# TRANS: Label for the type of ship that was lost.
('ship_type', lazy_gettext(u'Ship')),
# TRANS: Label for the status a request is in (ex: Unevaluated, Approved)
('status', lazy_gettext(u'Request Status')),
]
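# Illustrative note (not part of the original source): the module-level
# function transformer_choices() defined below shadows this list once the
# module finishes loading; ChangeTransformer's 'attribute' field captured the
# list at class-definition time, so both uses still resolve correctly.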
class ChangeTransformer(Form):
form_id = HiddenField(default='transformer')
# TRANS: The a label for a selection field for selecting which attribute
# TRANS: to transform. See the translation for 'Attribute Transformer'.
attribute = SelectField(lazy_gettext(u'Attribute'),
choices=transformer_choices)
# TRANS: The label for a selection field for selecting the transformer for
# TRANS: an attribute. See the translation for 'Attribute Transformer'.
transformer = SelectField(lazy_gettext(u'Transformer'), choices=[])
def transformer_choices(attr):
"""Conveniece function for generating a list of transformer option tuples.
:param attr str: the name of the attribute to make a list for.
:return: A list of tuples suitable for the choices argument of\
    :py:class:`SelectField`
:rtype: list
"""
default_transformers = [
('none', 'None'),
]
choices = default_transformers
if attr in current_app.url_transformers:
for transformer in current_app.url_transformers[attr]:
choices.append((transformer, transformer))
return choices
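# A sketch of the expected return value (an assumption for illustration):
# transformer_choices('pilot') might yield
#     [('none', 'None'), ('zKillboard', 'zKillboard')]
# i.e. the default 'none' entry followed by one tuple per transformer
# registered for that attribute on the current app.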
@blueprint.route('/<int:division_id>/', methods=['GET'])
@fresh_login_required
@varies('Accept', 'X-Requested-With')
def get_division_details(division_id=None, division=None):
"""Generate a page showing the details of a division.
Shows which groups and individuals have been granted permissions to each
division.
    Only accessible to administrators.
:param int division_id: The ID number of the division
"""
if division is None:
division = Division.query.get_or_404(division_id)
if not current_user.admin and not \
current_user.has_permission(PermissionType.admin, division):
abort(403)
if request.is_json or request.is_xhr:
return jsonify(division._json(True))
return render_template(
'division_detail.html',
division=division,
entity_form=ChangeEntity(formdata=None),
transformer_form=ChangeTransformer(formdata=None),
)
def _modify_division_entity(division):
"""Handle POST requests for adding/removing entities form a Division."""
form = ChangeEntity()
if form.validate():
entity = None
if form.id_.data != '':
current_app.logger.debug("Looking up entity by ID: {}".format(
form.id_.data))
entity = Entity.query.get(form.id_.data)
if entity is None:
# TRANS: This is an error message when there's a problem
# TRANS: granting a permission to a user or group
# TRANS: (collectively called 'entities'). The '#' is not
                # TRANS: special, but the '%(id_num)d' will be replaced with
# TRANS: the ID number that was attempted to be added.
flash(gettext(u"No entity with ID #%(id_num)d.",
id_num=form.id_.data),
category=u'error')
else:
current_app.logger.debug(u"Looking up entity by name: '{}'"\
.format(form.name.data))
try:
entity = Entity.query.filter_by(
name=form.name.data).one()
except NoResultFound:
# TRANS: Error message when a user or group with a given name
# TRANS: cannot be found.
flash(gettext(u"No entities with the name '%(name)s' found.",
name=form.name.data),
category=u'error')
except MultipleResultsFound:
# TRANS: Error message when multiple users and/or groups are
# TRANS: found with a given name.
flash(gettext(
u"Multiple entities with the name '%(name)s' found.",
name=form.name.data),
category=u'error')
else:
current_app.logger.debug("entity lookup success")
if entity is None:
return get_division_details(division=division), 404, None
# The entity has been found, create the query for the requested
# Permission.
permission_type = PermissionType.from_string(
form.permission.data)
permission_query = Permission.query.filter_by(
division=division,
entity=entity,
permission=permission_type)
# The response for both add and delete actions depends on whether the
# Permission is found, so look it up first.
try:
permission = permission_query.one()
except NoResultFound:
if form.action.data == 'add':
db.session.add(
Permission(division, permission_type, entity))
# TRANS: Message show when granting a permission to a user or
# TRANS: group.
flash(gettext(u"%(name)s is now a %(role)s.",
name=entity,
role=permission_type.description.lower()),
category=u"info")
elif form.action.data == 'delete':
# TRANS: Message shown when trying to remove a permission from
# TRANS: a user, but that user didn't have that permission
# TRANS: already.
flash(gettext(u"%(name)s is not a %(role)s.",
name=entity,
role=permission_type.description.lower()),
category=u"warning")
else:
if form.action.data == 'delete':
permission_query.delete()
# TRANS: Confirmation message shown when revoking a permission
# TRANS: from a user or group.
flash(gettext(u"%(name)s is no longer a %(role)s.",
name=entity,
role=permission_type.description.lower()),
category=u"info")
elif form.action.data == 'add':
flash(gettext(u"%(name)s is now a %(role)s.",
name=entity,
role=permission_type.description.lower()),
category=u"info")
db.session.commit()
else:
for field_name, errors in six.iteritems(form.errors):
errors = u", ".join(errors)
# TRANS: Error message that is shown when one or more fields of a
            # TRANS: form fail validation.
flash(gettext(u"Errors for %(field_name)s: %(error)s.",
field_name=field_name, error=errors), u'error')
current_app.logger.info("Malformed entity permission POST: {}".format(
form.errors))
return get_division_details(division=division)
def _modify_division_transformer(division):
"""Handle POST requests for changing the Transformers for a Division."""
form = ChangeTransformer()
# Set the form's choices
attr = form.attribute.data
form.transformer.choices = transformer_choices(attr)
# Check form and go from there
if form.validate():
name = form.transformer.data
if name == 'none':
division.transformers[attr] = None
else:
# Get the specific map of transformers for the attribute
attr_transformers = current_app.url_transformers[attr]
# Get the new transformer
division.transformers[attr] = attr_transformers[name]
# Explicitly add the TransformerRef to the session
db.session.add(division.division_transformers[attr])
db.session.commit()
# TRANS: Confirmation message shown when a transformer for an
# TRANS: attribute has been set.
flash(gettext(u"'%(attribute)s' set to '%(transformer)s'.",
attribute=attr, transformer=name), u'message')
else:
for field_name, errors in six.iteritems(form.errors):
errors = u", ".join(errors)
# TRANS: Generic error message shown for the fields in a form.
flash(gettext(u"Errors for %(field_name)s: %(error)s.",
field_name=field_name, error=errors), u'error')
current_app.logger.info("Malformed division transformer POST: {}".
format(form.errors))
return get_division_details(division=division)
@blueprint.route('/<int:division_id>/', methods=['POST'])
@fresh_login_required
def modify_division(division_id):
"""Dispatches modification requests to the specialized view function for
that operation.
"""
division = Division.query.get_or_404(division_id)
if not current_user.admin and not \
current_user.has_permission(PermissionType.admin, division):
abort(403)
form_id = request.form.get('form_id')
if form_id == 'entity':
return _modify_division_entity(division)
elif form_id == 'transformer':
return _modify_division_transformer(division)
else:
current_app.logger.warn("Invalid division modification POST: {}"
.format(request.form))
abort(400)
@blueprint.route('/<int:division_id>/transformers/')
@blueprint.route('/<int:division_id>/transformers/<attribute>/')
@login_required
def list_transformers(division_id, attribute=None):
"""API method to get a list of transformers for a division.
:param division_id int: the ID of the division to look up
:param attribute str: a specific attribute to look up. Optional.
:return: JSON
"""
division = Division.query.get_or_404(division_id)
if not current_user.admin and not \
current_user.has_permission(PermissionType.admin, division):
abort(403)
if attribute is None:
attrs = six.iterkeys(current_app.url_transformers)
else:
attrs = (attribute,)
choices = {}
for attr in attrs:
raw_choices = transformer_choices(attr)
current = division.transformers.get(attr, None)
if current is not None:
choices[attr] = \
[(c[0], c[1], c[1] == current.name) for c in raw_choices]
else:
choices[attr] = \
[(c[0], c[1], False) for c in raw_choices]
return jsonify(choices) | PypiClean |
/LDtoolsets-0.0.14.tar.gz/LDtoolsets-0.0.14/nbs/00_Genodata.ipynb | ```
# default_exp genodata
#hide
%load_ext autoreload
%autoreload 2
```
# Genodata module
> read and extract genodata
```
#export
import numpy as np
import pandas as pd
import dask.array as da
from bgen_reader import open_bgen
from pandas_plink import read_plink
from pandas_plink.bed_reader import lib, ffi
try:
from pybgen.parallel import ParallelPyBGEN as PyBGEN
except:
print('Can not import ParallelPyBGEN. import PyBGEN instead')
from pybgen import PyBGEN
#export
from math import floor
from pathlib import Path
from typing import Optional, Union
from tqdm import tqdm
from numpy import ascontiguousarray, empty, float32, float64, nan_to_num, uint8, uint64, arange, full
from xarray import DataArray
from pandas import DataFrame, array
# export
def read_bgen(file, sample_file=None,pybgen=True):
'''the function to read genotype data'''
if pybgen:
bg = PyBGEN(file,probs_only=True)
bim = []
for i,t in enumerate(bg.iter_variant_info()):
bim.append([int(t.chrom),t.name,0.0,t.pos,t.a1,t.a2,i])
bim = pd.DataFrame(bim,columns=['chrom','snp','cm','pos','a0','a1','i'])
bim.snp = 'chr'+bim[['chrom','pos','a0','a1']].astype(str).agg(':'.join, axis=1)
else:
bg = open_bgen(file,verbose=False)
snp,aa0,aa1 = [],[],[]
for c,p,alleles in zip(bg.chromosomes,bg.positions,bg.allele_ids):
a0,a1 = alleles.split(',')
aa0.append(a0)
aa1.append(a1)
            snp.append(':'.join(['chr'+str(int(c)),str(p),a0,a1])) # e.g. '05' is cast to int then back to str to drop the leading zero
bim = pd.DataFrame({'chrom':bg.chromosomes.astype(int),'snp':snp,'pos':bg.positions,'a0':aa0,'a1':aa1})
if sample_file is None:
fam = None
else:
fam = pd.read_csv(sample_file, header=0, delim_whitespace=True, quotechar='"',skiprows=1)
fam.columns = ['fid','iid','missing','sex'] #Fix me
fam = fam
return bim,fam,bg
#export
def read_bim(fn):
header = ["chrom", "snp", "cm","pos","a0", "a1"]
df = pd.read_csv(fn,delim_whitespace=True,header=None,names=header,compression=None,engine="c",iterator=False)
df["i"] = range(df.shape[0])
return df
# export
def bgen2dask(bgen,index,step=500):
'''The function to covert bgen to dask array'''
genos = []
n = len(index)
for i in range(0,n,step):
onecode_geno = bgen.read(index[i:min(n,i+step)]) #samples x variants
geno = onecode_geno.argmax(axis=2).astype(np.int8)
genos.append(da.from_array(geno))
return(da.concatenate(genos,axis=1).T)
# export
def pybgen_region(bgen,region,step=100):
genos,geno=[],[]
i = 1
for _,v in bgen.iter_variants_in_region('0'+str(region[0]) if region[0]<10 else str(region[0]),region[1],region[2]):
if i % step == 0:
genos.append(da.from_array(geno))
geno = []
geno.append(v.argmax(axis=1).astype(np.int8))
i += 1
genos.append(da.from_array(geno))
return(da.concatenate(genos,axis=0))
# export
def extract_bed(geno,idx,row=True,step=500,region=None): #row = True by variants, row = False by samples
if isinstance(geno,da.core.Array):
if row:
geno = geno[idx,:]
else:
geno = geno[:,idx]
elif isinstance(geno,PyBGEN):
geno = pybgen_region(geno,region,step)
else:
if row:
#must be numric index
if type(list(idx)[0]) is bool:
pd_idx = pd.Series(idx)
idx = list(pd_idx[pd_idx].index)
geno = bgen2dask(geno,idx,step)
else:
geno = geno.read() # read all variants
geno = geno[:,idx]
return geno
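# Illustrative note (not part of the original cell): for a dask-backed bed
# matrix of shape (variants, samples), row=True keeps rows (variants) and
# row=False keeps columns (samples); for an open_bgen handle a boolean mask
# is first converted to positional indices before bgen2dask, and for a PyBGEN
# handle the region tuple drives pybgen_region instead of idx.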
# export
class Genodata:
def __init__(self,geno_path,sample_path=None):
self.bim,self.fam,self.bed = self.read_geno(geno_path,sample_path)
def __repr__(self):
return "bim:% s \n fam:% s \n bed:%s" % (self.bim, self.fam, self.bed)
def read_geno(self,geno_file,sample_file):
if geno_file.endswith('.bed'):
bim,fam,bed = read_plink(geno_file[:-4], verbose=False)
bim.snp = 'chr'+bim[['chrom','pos','a0','a1']].astype(str).agg(':'.join, axis=1)
elif geno_file.endswith('.bgen'):
if sample_file is None:
sample_file = geno_file.replace('.bgen', '.sample')
bim,fam,bed = read_bgen(geno_file,sample_file)
else:
            raise ValueError('Please provide the genotype files in PLINK binary format or BGEN format')
bim.chrom = bim.chrom.astype(int)
bim.pos = bim.pos.astype(int)
return bim,fam,bed
def geno_in_stat(self,stat,notin=False):
'''The function to find an overlap region between geno data with sumstat'''
variants = stat.SNP
self.extractbyvariants(variants,notin)
def geno_in_unr(self,unr):
'''The function to find an overlap samples between geno data with unr'''
samples = unr.IID
self.extractbysamples(samples)
def extractbyregion(self,region):
bim = self.bim
idx = (bim.chrom == region[0]) & (bim.pos >= region[1]) & (bim.pos <= region[2])
print('this region',region,'has',sum(idx),'SNPs in Genodata')
if sum(idx) == 0:
raise ValueError('The extraction is empty')
#update bim,bed
self.extractbyidx(idx,row=True,region=region)
def extractbyvariants(self,variants,notin=False): #variants is list or pd.Series
idx = self.bim.snp.isin(variants)
if notin:
idx = idx == False
if sum(idx) == 0:
raise ValueError('The extraction is empty')
#update bim,bed
self.extractbyidx(idx,row=True)
def extractbysamples(self,samples,notin=False): #samples is list or pd.Series
samples = pd.Series(samples,dtype=str)
idx = self.fam.iid.astype(str).isin(samples)
if notin:
idx = idx == False
if sum(idx) == 0:
raise ValueError('The extraction is empty')
#update fam,bed
self.extractbyidx(idx,row=False)
def extractbyidx(self,idx,row=True,region=None):
        '''Get a subset of genodata by index.
        If the index is numeric, the genodata is reordered to follow the order of the index.
        If row=True, extract by variants; otherwise, extract by samples.'''
idx = list(idx)
self.idx = idx
if row:
#update bim
if type(idx[0]) is bool:
self.bim = self.bim[idx]
else:
self.bim = self.bim.iloc[idx]
else:
#update fam
if type(idx[0]) is bool:
self.fam = self.fam[idx]
else:
self.fam = self.fam.iloc[idx]
self.bed = extract_bed(self.bed,idx,row,region=region)
def export_plink(self, bed: Union[str, Path], bim: Optional[Union[str, Path]] = None, fam: Optional[Union[str, Path]] = None,row: str = "variant",verbose: bool = True):
bed = Path(bed)
if bim is None:
bim = bed.with_suffix(".bim")
if fam is None:
fam = bed.with_suffix(".fam")
bim = Path(bim)
fam = Path(fam)
write_bed(bed, self.bed, row, verbose)
_echo("Writing FAM... ", end="", disable=not verbose)
write_fam(fam, self.fam)
_echo("done.", disable=not verbose)
_echo("Writing BIM... ", end="", disable=not verbose)
write_bim(bim, self.bim)
_echo("done.", disable=not verbose)
#export
def write_plink(
G,
bed: Union[str, Path],
bim: Optional[Union[str, Path]] = None,
fam: Optional[Union[str, Path]] = None,
row: str = "variant",
verbose: bool = True,
):
"""
Write PLINK 1 binary files into a data array.
A PLINK 1 binary file set consists of three files:
- BED: containing the genotype.
- BIM: containing variant information.
- FAM: containing sample information.
The user must provide the genotype (dosage) via a :class:`xarray.DataArray` matrix
with data type :const:`numpy.float32` or :const:`numpy.float64`. That matrix must
have two named dimensions: **sample** and **variant**. The only allowed values for
the genotype are: :const:`0`, :const:`1`, :const:`2`, and :data:`math.nan`.
Parameters
----------
G
Genotype with bim, bed, and fam.
bed
Path to a BED file.
bim
Path to a BIM file.It defaults to :const:`None`, in which case it will try to be
inferred.
fam
Path to a FAM file. It defaults to :const:`None`, in which case it will try to
be inferred.
major
It can be either :const:`"sample"` or :const:`"variant"` (recommended and
default). Specify the matrix layout on the BED file.
verbose
:const:`True` for progress information; :const:`False` otherwise.
"""
if G.bed.ndim != 2:
raise ValueError("G has to be bidimensional")
bed = Path(bed)
if bim is None:
bim = bed.with_suffix(".bim")
if fam is None:
fam = bed.with_suffix(".fam")
bim = Path(bim)
fam = Path(fam)
write_bed(bed, G.bed, row, verbose)
_echo("Writing FAM... ", end="", disable=not verbose)
write_fam(fam, G.fam)
_echo("done.", disable=not verbose)
_echo("Writing BIM... ", end="", disable=not verbose)
write_bim(bim, G.bim)
_echo("done.", disable=not verbose)
def _echo(msg: str, end: str = "\n", disable: bool = False):
if not disable:
print(msg, end=end, flush=True)
def write_fam(filepath: Path, df):
cols = ["fid", "iid", "father","mother","gender","trait"]
df = df[cols]
df.to_csv(
filepath,
index=False,
sep="\t",
header=False,
encoding="ascii",
line_terminator="\n",
)
def write_bim(filepath: Path, df):
cols = ["chrom","snp","cm","pos","a0","a1"]
df = df[cols]
df.to_csv(
filepath,
index=False,
sep="\t",
header=False,
encoding="ascii",
line_terminator="\n",
)
#export
def write_bed(filepath: Path, G, row='variant', verbose=True):
"""
Write BED file.
It assumes that ``X`` is a variant-by-sample matrix.
"""
if not isinstance(G,da.core.Array):
G = da.asanyarray(G)
if row != "variant":
G = G.T
row_code = 1 if row == "variant" else 0
e = lib.write_bed_header(str(filepath).encode(), row_code)
if e != 0:
raise RuntimeError(f"Failure while writing BED file {filepath}.")
nrows = G.shape[0]
ncols = G.shape[1]
row_chunk = max(1, floor((1024 * 1024 * 256) / ncols))
row_chunk = min(row_chunk, nrows)
G = G.rechunk((row_chunk, ncols))
row_start = 0
for chunk in tqdm(G.chunks[0], "Writing BED", disable=not verbose):
data = G[row_start : row_start + chunk, :].compute()
if data.dtype not in [float32, float64]:
msg = "Unsupported data type. "
msg += "Please, provide a dosage matrix in either "
msg += "float32 or float64 format."
raise ValueError(msg)
_write_bed_chunk(filepath, data)
row_start += chunk
def _write_bed_chunk(filepath: Path, X):
base_type = uint8
base_size = base_type().nbytes
base_repr = "uint8_t"
nan_to_num(X, False, 3.0)
G = ascontiguousarray(X, base_type)
assert G.flags.aligned
strides = empty(2, uint64)
strides[:] = G.strides
strides //= base_size
e = lib.write_bed_chunk(
str(filepath).encode(),
G.shape[1],
G.shape[0],
ffi.cast(f"{base_repr} *", G.ctypes.data),
ffi.cast("uint64_t *", strides.ctypes.data),
)
if e != 0:
raise RuntimeError(f"Failure while writing BED file {filepath}.")
write_plink(geno,'test.bed')
from pandas_plink import read_plink1_bin
geno1 = Genodata('test.bed')
geno1
```
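A minimal end-to-end sketch (not part of the original notebook); the file names below are assumptions for illustration. It reads a PLINK fileset with `Genodata`, restricts it to a region and a sample subset, then writes the result back out with `write_plink`.
```
geno = Genodata('example.bed')                   # assumes example.bed/.bim/.fam exist
geno.extractbyregion([5, 272741, 1213528])       # [chromosome, start, end]
geno.extractbysamples(list(geno.fam.iid[:100]))  # keep the first 100 samples
write_plink(geno, 'example_subset.bed')          # writes .bed plus matching .bim/.fam
```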
# Test
```
geno_path ='/home/dmc2245/UKBiobank/data/exome_files/project_VCF/072721_run/plink/ukb23156_c1.merged.filtered.bed'
region = [5,272741,1213528-900000]
geno_path = 'MWE_region_extraction/ukb23156_c5.merged.filtered.5_272741_1213528.bed'
sumstats_path = 'MWE_region_extraction/090321_UKBB_Hearing_aid_f3393_expandedwhite_6436cases_96601ctrl_PC1_2_f3393.regenie.snp_stats'
pheno_path = None
unr_path = 'MWE_region_extraction/UKB_genotypedatadownloaded083019.090221_sample_variant_qc_final_callrate90.filtered.extracted.white_europeans.filtered.092821_ldprun_unrelated.filtered.prune.txt'
imp_geno_path = 'MWE_region_extraction/ukb_imp_chr5_v3_05_272856_1213643.bgen'
imp_sumstats_path = 'MWE_region_extraction/100521_UKBB_Hearing_aid_f3393_expandedwhite_15601cases_237318ctrl_500k_PC1_PC2_f3393.regenie.snp_stats'
imp_ref = 'hg19'
output_sumstats = 'test.snp_stats'
output_LD = 'test_corr.csv'
#main(region,geno_path,sumstats_path,pheno_path,unr_path,imp_geno_path,imp_sumstats_path,imp_ref,output_sumstats,output_LD)
from pandas_plink import Chunk
Chunk(512,512)
exome_geno.extractbyvariants(exome_geno.bim.snp[:50])
exome_geno.extractbysamples(exome_geno.fam.iid[:60])
from LDtools.sumstat import *
region = [5, 272741, 1213528]
imput_sumstats = Sumstat('/home/dmc2245/UKBiobank/results/REGENIE_results/results_imputed_data/2021_10_07_f3393_500K/100521_UKBB_Hearing_aid_f3393_expandedwhite_15601cases_237318ctrl_500k_PC1_PC2_f3393.regenie.snp_stats.gz')
imput_sumstats.extractbyregion(region)
imput_sumstats
bgen = PyBGEN(geno_file)
sample_file = geno_file.replace('.bgen', '.sample')
if not os.path.isfile(sample_file):
if not os.path.isfile(${bgen_sample_path:r}):
raise ValueError(f"Cannot find the matching sample file ``{sample_file}`` for ``{geno_file}``.\nYou can specify path to sample file for all BGEN files using ``--bgen-sample-path``.")
else:
sample_file = ${bgen_sample_path:r}
bgen_fam = pd.read_csv(sample_file, header=0, delim_whitespace=True, quotechar='"',skiprows=1)
bgen_fam.columns = ['fid','iid','missing','sex']
geno = [bgen,bgen_fam]
#imp_geno_path = '/mnt/mfs/statgen/archive/UKBiobank_Yale_transfer/ukb39554_imputeddataset/ukb_imp_chr5_v3.bgen'
bgen_sample_path = '/home/dmc2245/UKBiobank_Yale_transfer/ukb39554_imputeddataset/ukb32285_imputedindiv.sample'
imput_geno = Genodata(imp_geno_path,bgen_sample_path)
imput_geno.extractbyregion(region)
imput_geno.extractbyvariants(list(imput_geno.bim.snp[10:20]))
imput_geno.extractbysamples(list(imput_geno.fam.iid[50:100]))
imput_geno
region
from pybgen import PyBGEN
bgen = PyBGEN(imp_geno_path,probs_only=True)
pybgen_region(bgen,region)
for t,g in bgen.iter_variants_in_region('0'+str(region[0]) if region[0]<10 else str(region[0]),region[1],region[2]):
print(t)
import pandas as pd
tmp = bgen.iter_variants()
genos = []
for i,v in zip(range(bgen.nb_variants),bgen):
geno = []
if i % 100000 ==0:
geno.append(v.argmax(axis=1).astype(np.int8))
print(i,j)
genos = []
n = len(index)
for i in range(0,n,step):
onecode_geno = bgen.read(index[i:min(n,i+step)]) #samples x variants
geno = onecode_geno.argmax(axis=2).astype(np.int8)
genos.append(da.from_array(geno))
1002 %10000
tmp
a = tmp.next()
import numpy as np
a[1]
aa = a[1].argmax(axis=1).astype(np.int8)
pd.Series(aa).value_counts()
tmp = []
for i,t in enumerate(bgen.iter_variant_info()):
tmp.append([int(t.chrom),t.name,0.0,t.pos,t.a1,t.a2,i])
tmp = pd.DataFrame(tmp,columns=['chrom','snp','cm','pos','a0','a1','i'])
tmp.snp = 'chr'+tmp[['chrom','pos','a0','a1']].astype(str).agg(':'.join, axis=1)
tmp
list(bgen.iter_variant_info())[0]
idx = imput_geno.idx
if type(list(idx)[0]) is bool:
pd_idx = pd.Series(idx)
idx = list(pd_idx[pd_idx].index)
len(idx)
idx[1:10]
imp_geno_path = 'MWE_region_extraction/ukb_imp_chr5_v3_05_272856_1213643.bgen'
bgen = open_bgen(imp_geno_path)
bgen.read(1)
imp_geno_path = '/mnt/mfs/statgen/archive/UKBiobank_Yale_transfer/ukb39554_imputeddataset/ukb_imp_chr5_v3.bgen'
bgen = open_bgen(imp_geno_path)
bgen.read(1)
imput_geno.bed
bgen
imput_geno.geno_in_stat(imput_sumstats.ss)
read_bgen('/mnt/mfs/statgen/archive/UKBiobank_Yale_transfer/ukb39554_imputeddataset/ukb_imp_chr5_v3.bgen')
```
| PypiClean |
/EZFF-1.0.0.tar.gz/EZFF-1.0.0/examples/lj-gulp-serial/run.py | import ezff
from ezff.interfaces import gulp, vasp
import numpy as np
# DEFINE GROUND TRUTHS
gt_bulk_modulus = 1.1236 #GPa
gt_structure = vasp.read_atomic_structure('ground_truths/POSCAR')
def my_error_function(variable_values, template):
myrank = ezff.get_pool_rank()
# Configure GULP job.
path = str(myrank)
md_job = gulp.job(path=path)
md_job.structure = gt_structure
md_job.forcefield = ezff.generate_forcefield(template, variable_values, FFtype='SW')
md_job.options['pbc'] = True
md_job.options['relax_atoms'] = True
md_job.options['relax_cell'] = True
# Submit job and read output
md_job.run()
# Calculate errors in lattice constant and elastic modulus
moduli = md_job.read_elastic_moduli() # 6X6 elastic modulus tensor inside a list of length 1 (for a single input structure)
    md_bulk_modulus = np.mean([moduli[0][i,i] for i in range(3)]) # Bulk modulus approximated as the average of the first three diagonal elements of moduli[0]
bulk_modulus_error = np.linalg.norm(md_bulk_modulus - gt_bulk_modulus) # Error is the L2 norm of deviation from the ground truth value
md_structure = md_job.read_structure() # Read relaxed structure after optimization
error_abc, error_ang = ezff.error_lattice_constant(MD=md_structure, GT=gt_structure) # error_abc = error in lattice constants, error_ang = error in lattice angles
latt_error = np.linalg.norm(error_abc[0]) # error in 'a' lattice constant
return [latt_error, bulk_modulus_error]
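    # Note (added for clarity, not in the original example): EZFF treats each entry
    # of the returned list as a separate objective, so ``num_errors`` in the FFParam
    # constructor below must equal the length of this error vector.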
if __name__ == '__main__':
obj = ezff.FFParam(error_function = my_error_function, num_errors = 2)
obj.read_variable_bounds('variable_bounds')
obj.read_forcefield_template('template')
obj.set_algorithm('randomsearch_so', population_size = 32)
obj.parameterize(num_epochs = 5)
obj.set_algorithm('nsga2_mo_platypus', population_size = 32)
obj.parameterize(num_epochs = 15) | PypiClean |
/Kallithea-0.7.0.tar.gz/Kallithea-0.7.0/kallithea/lib/vcs/ssh/base.py | import datetime
import logging
import sys
from kallithea.lib.auth import AuthUser, HasPermissionAnyMiddleware
from kallithea.lib.utils2 import set_hook_environment
from kallithea.model import db, meta
log = logging.getLogger(__name__)
class BaseSshHandler(object):
# Protocol for setting properties:
# Set by sub class:
# vcs_type: 'hg' or 'git'
# Set by make() / __init__():
# repo_name: requested repo name - only validated by serve()
# Set by serve() - must not be accessed before:
# db_repo: repository db object
# authuser: user that has been authenticated - like request.authuser ... which isn't used here
# allow_push: false for read-only access to the repo
# Set defaults, in case .exit should be called early
vcs_type = None
repo_name = None
@staticmethod
def make(ssh_command):
"""Factory function. Given a command as invoked over SSH (and preserved
in SSH_ORIGINAL_COMMAND when run as authorized_keys command), return a
handler if the command looks ok, else return None.
"""
raise NotImplementedError
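    # Illustrative sketch only (not part of Kallithea): a Mercurial subclass would
    # typically recognize a command of the form
    #     hg -R <repo_name> serve --stdio
    # and return cls(repo_name), or None for anything that does not match.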
def __init__(self, repo_name):
self.repo_name = repo_name.rstrip('/')
def serve(self, user_id, key_id, client_ip):
"""Verify basic sanity of the repository, and that the user is
valid and has access - then serve the native VCS protocol for
repository access."""
dbuser = db.User.get(user_id)
if dbuser is None:
self.exit('User %r not found' % user_id)
self.authuser = AuthUser.make(dbuser=dbuser, ip_addr=client_ip)
log.info('Authorized user %s from SSH %s trusting user id %s and key id %s for %r', dbuser, client_ip, user_id, key_id, self.repo_name)
        if self.authuser is None: # the SSH key authenticated the connection, but the user still failed authorization (e.g. disabled account)
self.exit('User %s from %s cannot be authorized' % (dbuser.username, client_ip))
ssh_key = db.UserSshKeys.get(key_id)
if ssh_key is None:
self.exit('SSH key %r not found' % key_id)
ssh_key.last_seen = datetime.datetime.now()
meta.Session().commit()
if HasPermissionAnyMiddleware('repository.write',
'repository.admin')(self.authuser, self.repo_name):
self.allow_push = True
elif HasPermissionAnyMiddleware('repository.read')(self.authuser, self.repo_name):
self.allow_push = False
else:
self.exit('Access to %r denied' % self.repo_name)
self.db_repo = db.Repository.get_by_repo_name(self.repo_name)
if self.db_repo is None:
self.exit("Repository '%s' not found" % self.repo_name)
assert self.db_repo.repo_name == self.repo_name
# Set global hook environment up for 'push' actions.
# For push actions, the action in global hook environment is used in
# process_pushed_raw_ids (which it is called as Git post-receive hook,
# or Mercurial 'changegroup' hook).
# For pull actions, the actual hook in log_pull_action (called directly
# on Git, or through the 'outgoing' Mercurial hook) is hardcoded to
# ignore the environment action and always use 'pull'.
set_hook_environment(self.authuser.username, client_ip, self.repo_name, self.vcs_type, 'push')
self._serve()
def _serve(self):
"""Serve the native protocol for repository access."""
raise NotImplementedError
def exit(self, error):
log.info('abort serving %s %s: %s', self.vcs_type, self.repo_name, error)
sys.stderr.write('abort: %s\n' % error)
sys.exit(1) | PypiClean |
/ANYstructure-4.10.tar.gz/ANYstructure-4.10/any_files/load_factor_window.py |
import tkinter as tk
import os
from _tkinter import TclError
class CreateLoadFactorWindow:
'''
self._load_factors_dict = {'dnva':[1.3,1.2,0.7], 'dnvb':[1,1,1.3], 'tanktest':[1,1,0]} # DNV loads factors
'''
def __init__(self,master, app=None):
super(CreateLoadFactorWindow, self).__init__()
self._frame = master
self._frame.wm_title("Load factor modifications here.")
self._frame.geometry('800x800')
self._frame.grab_set()
self._app = app
if __name__ == '__main__':
self._load_factors_dict = {'dnva': [1.3, 1.2, 0.7], 'dnvb': [1, 1, 1.3],
'tanktest': [1, 1, 0]} # DNV loads factors
else:
self._load_factors_dict = app._load_factors_dict
self.new_conda_lff1 = tk.DoubleVar()
self.new_conda_lff2 = tk.DoubleVar()
self.new_conda_lff3 = tk.DoubleVar()
self.new_condb_lff1 = tk.DoubleVar()
self.new_condb_lff2 = tk.DoubleVar()
self.new_condb_lff3 = tk.DoubleVar()
self.new_condtt_lff1 = tk.DoubleVar()
self.new_condtt_lff2 = tk.DoubleVar()
self.new_condtt_lff3 = tk.DoubleVar()
self.new_change_default = tk.BooleanVar()
self.new_change_existing = tk.BooleanVar()
self.new_conda_lff1.set(self._load_factors_dict['dnva'][0])
self.new_conda_lff2.set(self._load_factors_dict['dnva'][1])
self.new_conda_lff3.set(self._load_factors_dict['dnva'][2])
self.new_condb_lff1.set(self._load_factors_dict['dnvb'][0])
self.new_condb_lff2.set(self._load_factors_dict['dnvb'][1])
self.new_condb_lff3.set(self._load_factors_dict['dnvb'][2])
self.new_condtt_lff1.set(self._load_factors_dict['tanktest'][0])
self.new_condtt_lff2.set(self._load_factors_dict['tanktest'][1])
self.new_condtt_lff3.set(self._load_factors_dict['tanktest'][2])
ent_w = 20
        tk.Label(self._frame, text='Static and dynamic load factors are specified here', font='Verdana 15 bold')\
.grid(row = 1, column = 1, sticky = tk.W)
        tk.Label(self._frame, text='Note that DNV is used as a reference, '
                                   'but the load factors can be any other rule set, such as ISO.', font='Verdana 8 bold')\
.grid(row = 2, column = 1, sticky = tk.W)
tk.Label(self._frame, text=' ', font='Verdana 8 bold')\
.grid(row = 3, column = 1, sticky = tk.W)
tk.Label(self._frame, text='Condition a) - Static load factor "unknown loads"', font='Verdana 8 bold')\
.grid(row = 4, column = 1, sticky = tk.W)
tk.Label(self._frame, text='Condition a) - Static load factor well defined loads', font='Verdana 8 bold')\
.grid(row = 5, column = 1, sticky = tk.W)
tk.Label(self._frame, text='Condition a) - Dynamic load factor', font='Verdana 8 bold')\
.grid(row = 6, column = 1, sticky = tk.W)
self.ent_conda_lf1 = tk.Entry(self._frame, textvariable=self.new_conda_lff1, width=ent_w)
self.ent_conda_lf2 = tk.Entry(self._frame, textvariable=self.new_conda_lff2, width=ent_w)
self.ent_conda_lf3 = tk.Entry(self._frame, textvariable=self.new_conda_lff3, width=ent_w)
self.ent_conda_lf1.grid(row=4, column=2)
self.ent_conda_lf2.grid(row=5, column=2)
self.ent_conda_lf3.grid(row=6, column=2)
tk.Label(self._frame, text=' ', font='Verdana 8 bold')\
.grid(row = 7, column = 1, sticky = tk.W)
tk.Label(self._frame, text='Condition b) - Static load factor "unknown loads"', font='Verdana 8 bold')\
.grid(row = 8, column = 1, sticky = tk.W)
tk.Label(self._frame, text='Condition b) - Static load factor well defined loads', font='Verdana 8 bold')\
.grid(row = 9, column = 1, sticky = tk.W)
tk.Label(self._frame, text='Condition b) - Dynamic load factor', font='Verdana 8 bold')\
.grid(row = 10, column = 1, sticky = tk.W)
self.ent_condb_lf1 = tk.Entry(self._frame, textvariable=self.new_condb_lff1, width=ent_w)
self.ent_condb_lf2 = tk.Entry(self._frame, textvariable=self.new_condb_lff2, width=ent_w)
self.ent_condb_lf3 = tk.Entry(self._frame, textvariable=self.new_condb_lff3, width=ent_w)
self.ent_condb_lf1.grid(row=8, column=2)
self.ent_condb_lf2.grid(row=9, column=2)
self.ent_condb_lf3.grid(row=10, column=2)
tk.Label(self._frame, text=' ', font='Verdana 8 bold')\
.grid(row = 11, column = 1, sticky = tk.W)
tk.Label(self._frame, text='Tank test) - Static load factor "unknown loads"', font='Verdana 8 bold')\
.grid(row = 12, column = 1, sticky = tk.W)
tk.Label(self._frame, text='Tank test) - Static load factor well defined loads', font='Verdana 8 bold')\
.grid(row = 13, column = 1, sticky = tk.W)
tk.Label(self._frame, text='Tank test) - Dynamic load factor', font='Verdana 8 bold')\
.grid(row = 14, column = 1, sticky = tk.W)
self.ent_condtt_lf1 = tk.Entry(self._frame, textvariable=self.new_condtt_lff1, width=ent_w)
self.ent_condtt_lf2 = tk.Entry(self._frame, textvariable=self.new_condtt_lff2, width=ent_w)
self.ent_condtt_lf3 = tk.Entry(self._frame, textvariable=self.new_condtt_lff3, width=ent_w)
self.ent_condtt_lf1.grid(row=12, column=2)
self.ent_condtt_lf2.grid(row=13, column=2)
self.ent_condtt_lf3.grid(row=14, column=2)
tk.Label(self._frame, text=' ', font='Verdana 8 bold')\
.grid(row = 15, column = 1, sticky = tk.W)
# tk.Label(self._frame, text='Change all current load factors', font='Verdana 8 bold')\
# .grid(row = 16, column = 1, sticky = tk.W)
# tk.Checkbutton(self._frame, variable=self.new_change_existing)\
# .grid(row=17, column=1, sticky = tk.W)
# tk.Label(self._frame, text='Change default load factors', font='Verdana 8 bold')\
# .grid(row = 18, column = 1, sticky = tk.W)
# tk.Checkbutton(self._frame, variable=self.new_change_default)\
# .grid(row=19, column=1, sticky=tk.W)
#
tk.Label(self._frame, text=' ', font='Verdana 8 bold')\
.grid(row = 16, column = 1, sticky = tk.W)
destroy_and_return = tk.Button(self._frame, text='Return specified load factors and change existing',
command=self.return_load_factors, bg='green', font='Verdana 12', fg='yellow')
destroy_and_return.grid(row = 17, column = 1)
tk.Label(self._frame, text=' ', font='Verdana 8 bold')\
.grid(row = 18, column = 1)
try:
img_file_name = 'img_dnv_load_combinations.gif'
if os.path.isfile('images/' + img_file_name):
file_path ='images/' + img_file_name
else:
file_path = app._root_dir + '/images/' + img_file_name
photo_transverse = tk.PhotoImage(file=file_path)
label_trans = tk.Label(self._frame, image=photo_transverse)
label_trans.image = photo_transverse # keep a reference!
label_trans.grid(row = 19, column = 1, columnspan = 2)
except TclError:
pass
def return_load_factors(self):
'''
self._load_factors_dict = {'dnva':[1.3,1.2,0.7], 'dnvb':[1,1,1.3], 'tanktest':[1,1,0]} # DNV loads factors
:return:
'''
self._load_factors_dict['dnva'] = [self.new_conda_lff1.get(), self.new_conda_lff2.get(),
self.new_conda_lff3.get()]
self._load_factors_dict['dnvb'] = [self.new_condb_lff1.get(), self.new_condb_lff2.get(),
self.new_condb_lff3.get()]
self._load_factors_dict['tanktest'] = [self.new_condtt_lff1.get(), self.new_condtt_lff2.get(),
self.new_condtt_lff3.get()]
if __name__ == '__main__':
self._frame.destroy()
print({'returned lf dict': self._load_factors_dict,
'change exisiting': self.new_change_existing.get(),
'change default': self.new_change_default.get()})
return
self._app.on_close_load_factor_window({'returned lf dict': self._load_factors_dict,
'change exisiting': self.new_change_existing.get(),
'change default': self.new_change_default.get()})
self._frame.destroy()
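    # Note (added for clarity, not in the original source): when embedded in the main
    # application, the parent is expected to implement
    # ``on_close_load_factor_window(results)`` and receives the dictionary built above
    # with the keys 'returned lf dict', 'change exisiting' and 'change default'.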
if __name__ == '__main__':
root = tk.Tk()
my_app = CreateLoadFactorWindow(root,app=None)
root.mainloop() | PypiClean |
/Joule-0.9.41.tar.gz/Joule-0.9.41/joule/controllers/follower_controller.py | import aiodns
import aiohttp
from aiohttp import web
from sqlalchemy.orm import Session
from joule.models import Follower
from joule.api import TcpNode
async def index(request: web.Request):
db: Session = request.app["db"]
followers = db.query(Follower)
return web.json_response([f.to_json() for f in followers])
# Note: this is covered with e2e tests
async def add(request: web.Request): # pragma: no cover
"""
Called by a Joule node to allow *this* node to control it
Parameters:
key(str): the API key to use
base_uri(str): (optional) specify the URL of the follower
port(int): specify the port Joule is using on the follower
"""
db: Session = request.app["db"]
cafile = request.app["cafile"]
if request.content_type != 'application/json':
return web.Response(text='content-type must be application/json', status=400)
body = await request.json()
try:
key = body["key"]
port = body["port"]
scheme = body["scheme"]
base_uri = body["base_uri"]
if "name_is_host" in body:
location = scheme+"://" + body["name"] + ":" + str(port) + base_uri
else:
location = scheme+"://" + request.remote + ":" + str(port) + base_uri
try:
node = TcpNode("new_follower", location, key, cafile)
info = await node.info()
await node.close()
follower_name = info.name
except aiohttp.ClientError:
return web.Response(text="no route to node", status=400)
except ValueError as e:
return web.Response(text=str(e), status=400)
follower = Follower(key=key, location=location,
name=follower_name)
# update the follower if this is a repeat
db.query(Follower).filter(Follower.location == follower.location).delete()
db.add(follower)
db.commit()
except KeyError as e:
return web.Response(text="Invalid request, missing [%s]" % str(e), status=400)
except ValueError as e:
return web.Response(text="Invalid request, bad argument format", status=400)
return web.json_response(data={'name': request.app["name"]})
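# Illustrative request against ``add`` (field values are made up, not from the source):
#   POST <this node>/follower
#   {"key": "<follower API key>", "port": 8088, "scheme": "https", "base_uri": "/joule"}
# A successful call registers the follower and returns {"name": "<this node's name>"}.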
async def delete(request: web.Request):
"""
Remove the specified node from the list of followers
"""
db: Session = request.app["db"]
try:
name = request.query["name"]
        db.query(Follower).filter(Follower.name == name).delete()  # filter on the column; ``name == name`` compared the local variable with itself and deleted every row
db.commit()
except KeyError:
return web.Response(text="specify name to remove", status=400)
return web.Response(text="OK") | PypiClean |
/NovalIDE-1.1.8-py3-none-any.whl/noval/util/objutils.py |
import logging
import traceback
import sys
import os
import types
import noval.util.utillang as utillang
import noval.util.datetimeparser as datetimeparser
from types import *
import noval.util.utils as utils
FUNCTION_HAS_ATTR = '_hasAttr'
FUNCTION_GET_ATTR = '_getAttr'
FUNCTION_SET_ATTR = '_setAttr'
FUNCTION_DEL_ATTR = '_delAttr'
try:
ClassType
DictionaryType
long
unicode
InstanceType
SliceType
TypeType
XRangeType
basestring
except:
class _C:
def _m(self): pass
ClassType = type(_C)
DictionaryType = dict
long = int
unicode = str
_x = _C()
InstanceType = type(_x)
SliceType = slice
TypeType = type
XRangeType = range
basestring = str
def hasRawAttr(obj, name):
    if obj is None:
        return False
    if name != FUNCTION_HAS_ATTR and hasattr(obj, FUNCTION_HAS_ATTR):
        return obj._hasAttr(name)
    return name in obj.__dict__  # dict.has_key() was removed in Python 3
def getRawAttr(obj, name):
if name != FUNCTION_GET_ATTR and hasattr(obj, FUNCTION_GET_ATTR):
return obj._getAttr(name)
return obj.__dict__.get(name)
def setRawAttr(obj, name, value):
if name != FUNCTION_SET_ATTR and hasattr(obj, FUNCTION_SET_ATTR):
obj._setAttr(name, value)
else:
obj.__dict__[name] = value
def delRawAttr(obj, name):
if name != FUNCTION_DEL_ATTR and hasattr(obj, FUNCTION_DEL_ATTR):
obj._delAttr(name)
else:
del obj.__dict__[name]
def getStaticAttr(obj, attr):
    if (isinstance(obj, type)):  # ``types.TypeType`` is Python 2 only; ``type`` works everywhere
        classDesc = obj
    else:
        classDesc = obj.__class__
    if (hasattr(classDesc, attr)):
        return getattr(classDesc, attr)
    return None
def setStaticAttr(obj, attr, value):
    if (isinstance(obj, type)):  # ``types.TypeType`` is Python 2 only; ``type`` works everywhere
        classDesc = obj
    else:
        classDesc = obj.__class__
    setattr(classDesc, attr, value)
def hasAttrFast(obj, name):
if hasRawAttr(obj, name):
return True
if hasattr(obj, '_complexType'):
complexType=obj._complexType
element=complexType.findElement(name)
if element:
return True
if hasattr(obj, name):
return True
return False
def moduleForName(moduleName):
module = None
pathList = moduleName.split('.')
if (len(moduleName) > 0):
module = __import__(moduleName)
for name in pathList[1:]:
if (name in module.__dict__):
module = module.__dict__[name]
else:
module = None
break
return module
def typeForName(typeName):
i = typeName.rfind('.')
if (i >= 0):
module = moduleForName(typeName[:i])
if (module != None):
name = typeName[i+1:]
if (name in module.__dict__):
return module.__dict__[name]
    else:
        import builtins  # ``__builtin__`` (Python 2) was never imported here
        if typeName in builtins.__dict__:
            return builtins.__dict__[typeName]
return None
def functionForName(functionName):
ftype = typeForName(functionName)
if (isinstance(ftype, (types.FunctionType, types.MethodType, types.BuiltinFunctionType, types.BuiltinMethodType))):
return ftype
return None
def classForName(className):
ctype = typeForName(className)
    if (isinstance(ctype, (ClassType, TypeType))):  # the ``types`` attributes are Python 2 only; use the compatibility aliases defined above
return ctype
return None
def newInstance(className, objargs=None):
"dynamically create an object based on the className and return it."
if not isinstance(objargs, list):
objargs = [objargs]
if className == "None":
return None
elif className == "bool":
if ((len(objargs) < 1) or (objargs[0].lower() == "false") or (not objargs[0])):
return False
return True
if className == "str" or className == "unicode": # don't strip: blanks are significant
if len(objargs) > 0:
try:
if utils.is_py2():
return utillang.unescape(objargs[0]).encode()
else:
return utillang.unescape(objargs[0])
except:
return "?"
else:
return ""
if className == "date":
return datetimeparser.parse(objargs[0], asdate=True)
if className == "datetime":
return datetimeparser.parse(objargs[0])
if className == "time":
return datetimeparser.parse(objargs[0], astime=True)
classtype = classForName(className)
if (classtype == None):
raise Exception("Could not find class %s" % className)
if (len(objargs) > 0):
return classtype(*objargs)
else:
return classtype()
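# Illustrative calls (added for clarity, not in the original module):
#   newInstance("bool", ["False"])                    -> False
#   newInstance("list", [])                           -> []
#   newInstance("datetime", ["2008-01-02 03:04:05"])  -> datetime.datetime(2008, 1, 2, 3, 4, 5)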
def getClassProperty(classType, propertyName):
return getattr(classType, propertyName)
def toDiffableRepr(value, maxLevel=None):
if (value == None):
return "None"
if (maxLevel == None):
maxLevel = 8
maxLevel -= 1
if (maxLevel < 0):
return typeToString(value, PRINT_OBJ_DIFFABLE)
## if ((exclude != None) and not isinstance(value, (basestring, int))):
## for v in exclude:
## if (v is value):
## return "<recursive reference>"
## exclude.append(value)
## elif (isinstance(value, ObjectType) and hasattr(value, "__dict__")):
## if (exclude == None):
## exclude = []
## s = "%s(%s)" % (type(value), toDiffableString(value.__dict__, exclude))
if (not isinstance(value, (bool, ClassType, complex, dict, DictionaryType,
float, int, list, long, str, tuple,
unicode, BuiltinFunctionType, BuiltinMethodType,
CodeType, FrameType, FunctionType, GeneratorType, InstanceType,
LambdaType, MethodType, ModuleType, SliceType, TracebackType,
TypeType, XRangeType))):
if (hasattr(value, "_toDiffableString")):
s = value._toDiffableString(maxLevel)
elif (hasattr(value, "__str__")):
s = str(value)
elif (hasattr(value, "__dict__")):
s = "%s(%s)" % (type(value), toDiffableString(value.__dict__, maxLevel))
else:
s = str(type(value))
ix2 = s.find(" object at 0x")
if (ix2 > 0):
ix = s.rfind(".")
if (ix > 0):
s = "<class %s>" %s[ix+1:ix2]
elif (isinstance(value, bool)):
if (value):
return "True"
else:
return "False"
elif (isinstance(value, (tuple, list))):
items = []
for v in value:
if (isinstance(v, basestring)):
if (v.find("'") >= 0):
items.append('"%s"' % v)
else:
items.append("'%s'" % v)
else:
items.append(toDiffableString(v, maxLevel))
s = "[" + ", ".join(items) + "]"
elif (isinstance(value, dict)):
items = []
        for key, val in value.items():  # iteritems() is Python 2 only
            if (isinstance(val, unicode)):  # ``unicode`` is aliased to ``str`` on Python 3 above
items.append("'%s': u'%s'" % (key, toDiffableString(val, maxLevel)))
elif (isinstance(val, basestring)):
items.append("'%s': '%s'" % (key, toDiffableString(val, maxLevel)))
else:
items.append("'%s': %s" % (key, toDiffableString(val, maxLevel)))
s = "{" + ", ".join(items) + "}"
else:
s = str(value)
return s
def toDiffableString(value, maxLevel=None):
## if (value == None):
## return "None"
## if ((exclude != None) and not isinstance(value, (basestring, int))):
## for v in exclude:
## if (v is value):
## return "<recursive reference>"
## exclude.append(value)
s = toDiffableRepr(value, maxLevel)
ds = ""
i = s.find(" at 0x")
start = 0
while (i >= 0):
j = s.find(">", i)
if (j < i):
break
ds += s[start:i]
start = j
i = s.find(" at 0x", start)
ds = ds + s[start:]
i = ds.find("\\src\\")
if (i < 0):
i = ds.find("/src/")
else:
ds = ds.replace("\\", "/")
if (i > 0):
i += 4
if (ds[i:i+5] == "\\php\\"):
i += 4
elif (ds[i:i+8] == "\\python\\"):
i += 7
ds = "filepath: ..." + ds[i:]
return ds
def toString(value, options=0):
if ((options & PRINT_OBJ_DIFFABLE) > 0):
return toDiffableString(value)
elif (not isinstance(value, basestring)):
return str(value)
return value
def typeToString(obj, options=0):
    # the Python 2 ``types`` constants (BooleanType, IntType, ...) no longer exist;
    # use the builtin types and the compatibility aliases defined at module level
    if (isinstance(obj, bool)):
        return "bool"
    elif (isinstance(obj, unicode)):
        if ((options & PRINT_OBJ_DIFFABLE) > 0):
            return "string"
        return "unicode"
    elif (isinstance(obj, basestring)):
        return "string"
    elif (isinstance(obj, int)):
        return "int"
    elif (isinstance(obj, long)):
        if ((options & PRINT_OBJ_DIFFABLE) > 0):
            return "int"
        return "long"
    elif (isinstance(obj, float)):
        return "float"
    elif (type(obj) == list):
        return "list"
    elif (isinstance(obj, dict)):
        return "dict"
    elif (isinstance(obj, tuple)):
        return "tuple"
elif (isinstance(obj, InstanceType)):
## ds = str(type(obj))
ds = "<class %s.%s> " % (obj.__module__, obj.__class__.__name__)
else:
ds = str(type(obj))
if (options == 0):
        # assumption: NovalIDE's fork renamed the pre-fork ``activegrid.util.logger``
        # module to ``noval.util.logger``; adjust if the helper lives elsewhere
        import noval.util.logger
        options = noval.util.logger.testMode(0, PRINT_OBJ_DIFFABLE)
if ((options & PRINT_OBJ_DIFFABLE) > 0):
if (ds.startswith("<class ")):
ix = ds.rfind(".")
if (ix < 0):
ix = 8
ds = "<class %s>" % ds[ix+1:-2]
return ds
def nameToString(name, options=0):
if (name.startswith("_v_")):
return name[3:]
if ((options & PRINT_OBJ_DIFFABLE) > 0):
ix = name.find("__")
if ((ix > 1) and name.startswith("_")):
name = name[ix:]
return toDiffableString(name)
return name
PRINT_OBJ_GETATTR = 1
PRINT_OBJ_HIDE_INTERNAL = 2
PRINT_OBJ_COMPACT = 4
PRINT_OBJ_NONONE = 8
PRINT_OBJ_DIFFABLE = 16
PRINT_OBJ_HIDE_EXCLUDED = 32
PRINT_OBJ_INTERNAL = 512
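# Illustrative usage of printObject (added for clarity, not in the original module):
#   import sys
#   printObject(sys.stdout, {"a": 1, "b": [True, None]}, name="demo",
#               flags=PRINT_OBJ_HIDE_INTERNAL | PRINT_OBJ_NONONE)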
def printObject(out, object, name=None, indent=0, flags=0, exclude=None, remove=None, maxIndent=30):
    if (name == None):
        name = ""
##    elif (name.endswith("_") and not name.endswith("__")):
##        name = name[:-1]
    if ((remove != None) and (name in remove)):  # the original called an undefined ``asDict`` helper; plain ``in`` works for both lists and dicts
        return False
    if ((maxIndent != None) and (indent > maxIndent)):
        print(" "*indent, "%s: %s" % (name, toString(str(object), flags)), end=' ', file=out)
        if ((flags & PRINT_OBJ_INTERNAL) == 0):
            print(file=out)
        return True
    finalNewLine = False
    printed = True
    if ((flags & (PRINT_OBJ_COMPACT | PRINT_OBJ_HIDE_EXCLUDED)) > 0):
        if ((exclude != None) and ((object in exclude) or (name in exclude))):
            return
    if ((flags & PRINT_OBJ_COMPACT) > 0):
        indent = 0
    if ((flags & PRINT_OBJ_INTERNAL) == 0):
        finalNewLine = True
        flags |= PRINT_OBJ_INTERNAL
    if (object is None):
        if (flags & PRINT_OBJ_NONONE) == 0:
            print(" "*indent, name, " = None", end=' ', file=out)
        else:
            finalNewLine = False
            printed = False
    elif (name.startswith("_") and ((flags & PRINT_OBJ_HIDE_INTERNAL) > 0) and not name.startswith("_v_")):
        finalNewLine = False
        printed = False
    elif (isinstance(object, (list, tuple))):
        if ((exclude != None) and object in exclude):
            print(" "*indent, name, " : ", typeToString(object, flags), " of length = ", len(object), " (already printed)", end=' ', file=out)
        elif ((exclude != None) and name in exclude):
            print(" "*indent, name, " : ", typeToString(object, flags), " of length = ", len(object), " (excluded)", end=' ', file=out)
        else:
            if ((exclude != None) and (len(object) > 0)): exclude.append(object)
            print(" "*indent, name, " : ", typeToString(object, flags), " of length = %d" % len(object), end=' ', file=out)
            for i, o in enumerate(object):
                print(file=out)
                printObject(out, o, name="[%d]" % i, indent=indent+2, flags=flags, exclude=exclude, remove=remove, maxIndent=maxIndent)
    elif (isinstance(object, dict)):
        if ((exclude != None) and object in exclude):
            print(" "*indent, name, " : ", typeToString(object, flags), " (already printed)", end=' ', file=out)
        else:
            if ((exclude != None) and (len(object) > 0)): exclude.append(object)
            if (len(name) > 0):
                print(" "*indent, name, end=' ', file=out)
                if ((flags & PRINT_OBJ_COMPACT) == 0):
                    print(file=out)
                indent += 2
            print(" "*indent, "{", end=' ', file=out)
            if ((flags & PRINT_OBJ_COMPACT) == 0):
                print(file=out)
            keys = sorted(object.keys())  # dict views cannot be sorted in place in Python 3
            for key in keys:
                if (key != None):
                    n = key
                    if (not (isinstance(n, basestring))):
                        n = str(n)
                    else:
                        n = nameToString(n, flags)
                    if ((not n.startswith("_") or ((flags & PRINT_OBJ_HIDE_INTERNAL) == 0))):
                        if printObject(out, object[key], name=n, indent=indent+2, flags=(flags | PRINT_OBJ_INTERNAL), exclude=exclude, remove=remove, maxIndent=maxIndent):
                            if ((flags & PRINT_OBJ_COMPACT) == 0):
                                print(file=out)
                            else:
                                print(",", end=' ', file=out)
            print(" "*indent, "}", end=' ', file=out)
    elif (hasattr(object, "__dict__")):
        if (name.startswith("_")):  ## and ((flags & PRINT_OBJ_HIDE_INTERNAL) > 0)):
            print(" "*indent, name, " : ", typeToString(object, flags), end=' ', file=out)
        elif ((exclude != None) and ((object in exclude) or (object.__dict__ in exclude))):
            print(" "*indent, name, " : ", typeToString(object, flags), " (already printed)", end=' ', file=out)
        else:
            if (exclude != None): exclude.append(object)
            print(" "*indent, name, " : ", typeToString(object, flags), end=' ', file=out)
            if ((flags & PRINT_OBJ_GETATTR) == 0):
                if ((flags & PRINT_OBJ_COMPACT) == 0):
                    print(file=out)
                printObject(out, object.__dict__, indent=indent, flags=flags, exclude=exclude, remove=remove, maxIndent=maxIndent)
            else:
                if ((flags & PRINT_OBJ_COMPACT) == 0):
                    print(file=out)
##                indent += 2
                print(" "*indent, "{", end=' ', file=out)
                keys = sorted(object.__dict__.keys())
                printed = True
                for key in keys:
                    if ((exclude != None) and (key in exclude)):
                        continue
                    if (printed and ((flags & PRINT_OBJ_COMPACT) == 0)):
                        print(file=out)
                    n = nameToString(key, flags)
                    printed = printObject(out, getattr(object, n), name=n, indent=indent+2, flags=flags, exclude=exclude, remove=remove, maxIndent=maxIndent)
                if ((flags & PRINT_OBJ_COMPACT) == 0):
                    print(file=out)
                print(" "*indent, "}", end=' ', file=out)
    elif (indent < 0):
        print(object, end=' ', file=out)
    elif isinstance(object, basestring):
        if ((exclude != None) and name in exclude):
            print(" "*indent, name, " : ", typeToString(object, flags), " of length = ", len(object), " (excluded)", end=' ', file=out)
        elif (len(object) > 100):
            object = toString(object, flags)
            print(" "*indent, name, ":", typeToString(object, flags), "[%d] = %s...%s" % (len(object), object[:50], object[-50:]), end=' ', file=out)
        else:
            print(" "*indent, name, ":", typeToString(object, flags), "=", toString(object, flags), end=' ', file=out)
##    elif (isinstance(object, float)):
##        val = str(object)
##        if (len(val) > 17):
##            val = val[:17]
##        print >> out, " "*indent, name, ":", type(object), "=", val,
    else:
        print(" "*indent, name, ":", typeToString(object, flags), "=", toString(object, flags), end=' ', file=out)
    if (finalNewLine):
        print(file=out)
    return printed | PypiClean |
/DeepNN-2020.5.18.11.40.23-py3-none-any.whl/inn/models/FM_2.py | import pandas as pd
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.layers.normalization import BatchNormalization
from tensorflow.python.keras.layers import Input, Embedding, Dense, Flatten, Concatenate, Dot, Reshape, Add, Subtract
from tensorflow.keras.optimizers import Adam
from tensorflow.python.keras.regularizers import l2
from tensorflow.python.keras.callbacks import EarlyStopping, ModelCheckpoint
class KerasFM(object):
def __init__(self, k_latent=2, kernel_l2=0.1):
        self.k_latent = k_latent  # TODO: embedding dim (empirical default)
self.kernel_l2 = kernel_l2
def build_model(self, f_sizes):
"""
:param f_size: sparse feature nunique
:return:
"""
dim_input = len(f_sizes) # +1
        input_x = [Input(shape=(1,)) for i in range(dim_input)]  # one scalar input per sparse feature column
biases = [self.get_embed(x, size, 1) for (x, size) in zip(input_x, f_sizes)]
factors = [self.get_embed(x, size) for (x, size) in zip(input_x, f_sizes)]
        s = Add()(factors)  # s = sum over all feature embeddings v_j
        diffs = [Subtract()([s, x]) for x in factors]  # s - v_i = sum of v_j for j != i
        dots = [Dot(axes=1)([d, x]) for d, x in zip(diffs, factors)]  # <v_i, s - v_i>
x = Concatenate()(biases + dots)
x = BatchNormalization()(x)
output = Dense(1, activation='relu', kernel_regularizer=l2(self.kernel_l2))(x)
model = Model(inputs=input_x, outputs=[output])
model.compile(optimizer=Adam(clipnorm=0.5), loss='mean_squared_error') # TODO: radam
output_f = factors + biases
model_features = Model(inputs=input_x, outputs=output_f)
return model, model_features
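    # Note on the pairwise term above (added for clarity, not in the original source):
    # with s = sum_j v_j, the classic FM identity gives
    #   sum_{i<j} <v_i, v_j> = 0.5 * sum_i <v_i, s - v_i>,
    # so each Dot() contributes <v_i, s - v_i>; the trailing Dense layer can absorb
    # the 0.5 factor, which is why it is not applied explicitly.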
def get_embed(self, x_input, x_size, embedding_l2=0.0002):
if x_size > 0: # category
embed = Embedding(x_size, self.k_latent, input_length=1, embeddings_regularizer=l2(embedding_l2))(x_input)
embed = Flatten()(embed)
else:
embed = Dense(self.k_latent, kernel_regularizer=l2(embedding_l2))(x_input)
return embed
def get_factors_biases(self, Xs, feature_names, model_features):
"""We can now retrieve the factors and the biases"""
X_pred = model_features.predict(Xs, 2 ** 10)
n = len(X_pred) // 2
factors = X_pred[:n]
biases = X_pred[-n:]
df = pd.DataFrame()
for f, X_p in zip(feature_names, factors):
for i in range(self.k_latent):
df[f'{f}_fm_factor_{i}'] = X_p[:, i]
for f, X_p in zip(feature_names, biases):
df[f'{f}_fm_bias'] = X_p[:, 0]
return df
if __name__ == '__main__':
from sklearn.datasets import make_classification
from sklearn.preprocessing import LabelEncoder
X, y = make_classification(n_samples=1000, n_features=7, random_state=42)
Xs = []
f_sizes = []
for i in range(X.shape[1]):
X[:, i] = LabelEncoder().fit_transform(X[:, i])
f_sizes.append(int(X[:, i].max() + 1))
Xs.append(X[:, i])
print(f_sizes)
fm = KerasFM()
model, model_features = fm.build_model(f_sizes)
earlystopper = EarlyStopping(patience=0, verbose=1)
model.fit(Xs, y.ravel(), epochs=10,
batch_size=2 ** 7,
verbose=1,
shuffle=True,
sample_weight=None,
callbacks=[earlystopper],
)
df_ = fm.get_factors_biases(Xs, range(len(Xs)), model_features)
print(df_)
print(df_.shape) # n_features * 3
print(model.predict(Xs)) | PypiClean |
/DendroPy_calver-2023.330.2-py3-none-any.whl/dendropy/legacy/ncbi.py |
##############################################################################
## DendroPy Phylogenetic Computing Library.
##
## Copyright 2010-2015 Jeet Sukumaran and Mark T. Holder.
## All rights reserved.
##
## See "LICENSE.rst" for terms and conditions of usage.
##
## If you use this work or any portion thereof in published work,
## please cite it as:
##
## Sukumaran, J. and M. T. Holder. 2010. DendroPy: a Python library
## for phylogenetic computing. Bioinformatics 26: 1569-1571.
##
##############################################################################
"""
Wrappers for interacting with NCBI databases.
*** DEPRECATED: use dendropy.interop.genbank.GenBankDna,
dendropy.interop.genbank.GenBankRna, or
dendropy.interop.genbank.GenBankProtein instead ***
"""
import warnings
from dendropy.utility import messaging
from dendropy.utility import textprocessing
from dendropy.utility import urlio
_LOG = messaging.get_logger(__name__)
import sys
import dendropy
import re
from dendropy.utility.error import DataParseError
GB_FASTA_DEFLINE_PATTERN = re.compile(r'^gi\|(\d+)\|(\w+)\|([\w\d]+)\.(\d+)\|(.*)$')  # escape the dot between accession and version
def parse_accession_number_and_gi_from_gb(gb_str):
accession = re.search(r"ACCESSION\s+(\S+)$", gb_str, flags=re.MULTILINE)
if accession is None:
raise ValueError("Failed to parse accession number")
accession = accession.groups()[0].strip()
gi = re.search(r'^VERSION\s+\S+\s+GI:([0-9]+)$', gb_str, flags=re.MULTILINE)
if gi is None:
raise ValueError("Failed to parse GI number")
gi = gi.groups()[0].strip()
return accession, gi
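# Example (illustrative, values made up): a GenBank flat file containing
#   ACCESSION   EU105975
#   VERSION     EU105975.1  GI:158931046
# yields ('EU105975', '158931046').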
def parse_ncbi_curation_info_from_defline(gb_defline):
m = GB_FASTA_DEFLINE_PATTERN.match(gb_defline)
if m is not None:
return m.groups()[0], m.groups()[2], m.groups()[2] + '.' + m.groups()[3]
else:
return None
def compose_taxon_label_from_gb_defline(gb_defline,
num_desc_components=3,
separator='_',
gbnum_in_front=True,
exclude_gbnum=False):
"""
If ``gb_defline`` matches a GenBank FASTA-format defline structure, then this returns a
label:
<GB-ACCESSION-ID><SEPARATOR><DESC_COMPONENT(1)><SEPARATOR><DESC_COMPONENT(2)>...<DESC_COMPONENT(n)>
So, for example, given the following FASTA label:
gi|158931046|gb|EU105975.1| Homo sapiens Ache non-coding region T1584 genomic sequence
the corresponding taxon 3-component (default) label will be:
EU105975_Homo_sapiens_Ache
If ``gb_defline`` does *not* match a GenBank FASTA-format defline structure, then the string
is returned unchanged.
"""
m = GB_FASTA_DEFLINE_PATTERN.match(gb_defline)
if m is not None:
groups = m.groups()
desc_parts = [s.strip() for s in groups[-1].split() if s]
if exclude_gbnum:
label_parts = desc_parts[:num_desc_components]
elif gbnum_in_front:
label_parts = [groups[2]] + desc_parts[:num_desc_components]
else:
label_parts = desc_parts[:num_desc_components] + [groups[2]]
return separator.join(label_parts)
else:
return gb_defline
def relabel_taxa_from_defline(taxon_set,
num_desc_components=3,
separator='_',
gbnum_in_front=True,
exclude_gbnum=False):
"""
Examines the labels of each |Taxon| object in ``taxon_set``, and if
conforming to a GenBank pattern, translates the labels to a standard
format:
<GB-ACCESSION-ID><SEPARATOR><DESC_COMPONENT(1)><SEPARATOR><DESC_COMPONENT(2)>...<DESC_COMPONENT(n)>
So, for example, given the following FASTA label:
gi|158931046|gb|EU105975.1| Homo sapiens Ache non-coding region T1584 genomic sequence
the corresponding taxon 3-component (default) label will be:
EU105975_Homo_sapiens_Ache
"""
for taxon in taxon_set:
taxon.label = compose_taxon_label_from_gb_defline(
gb_defline=taxon.label,
num_desc_components=num_desc_components,
separator=separator,
gbnum_in_front=gbnum_in_front,
exclude_gbnum=exclude_gbnum)
return taxon_set
class Entrez(object):
"""
Wraps up all interactions with Entrez.
Example usage::
>>> from dendropy.interop import ncbi
>>> e = ncbi.Entrez(generate_labels=True,
... label_id_in_front=False,
... sort_taxa_by_label=True)
>>> d1 = e.fetch_nucleotide_accessions(['EU105474', 'EU105476'])
>>> d2 = e.fetch_nucleotide_accession_range(105474, 106045, prefix="EU")
"""
BASE_URL = "http://eutils.ncbi.nlm.nih.gov/entrez/eutils"
DATABASES = [
'pubmed',
'protein',
'nucleotide',
'nuccore',
'nucgss',
'nucest',
'structure',
'genome',
'biosystems',
'books',
'cancerchromosomes',
'cdd',
'gap',
'dbvar',
'domains',
'epigenomics',
'gene',
'genomeprj',
'gensat',
'geo',
'gds',
'homologene',
'journals',
'mesh',
'ncbisearch',
'nlmcatalog',
'omia',
'omim',
'pepdome',
'pmc',
'popset',
'probe',
'proteinclusters',
'pcassay',
'pccompound',
'pcsubstance',
'seqannot',
'snp',
'sra',
'taxonomy',
'toolkit',
'toolkitall',
'unigene',
'unists',
'linkoutpubmed',
'linkoutseq',
'linkoutother',
]
class AccessionFetchError(Exception):
def __init__(self, accession_ids):
Exception.__init__(self, "Failed to retrieve accessions: %s" % (", ".join([str(s) for s in accession_ids])))
def __init__(self,
generate_labels=False,
label_num_desc_components=3,
label_separator='_',
label_id_in_front=True,
exclude_gbnum_from_label=False,
sort_taxa_by_label=False):
"""
*** DEPRECATED: use dendropy.interop.genbank.GenBankDna,
dendropy.interop.genbank.GenBankRna, or
dendropy.interop.genbank.GenBankProtein instead ***
Instantiates a broker that queries NCBI and returns data. If
``generate_labels`` is |True|, then appropriate labels for sequences
will be automatically composed for each sequence based on the GenBank
FASTA defline. ``label_num_desc_components`` specifies the number of
components from the defline to use. ``label_separator`` specifies the
string used to separate the different label components.
``label_id_in_front`` specifies whether the GenBank accession number
should form the beginning (|True|) or tail (|False|) end of the
label. ``sort_taxa_by_label`` specifies whether the sequences should be
sorted by their final label values.
"""
warnings.warn("This class (and parent module) has been deprecated. Use the classes provided in 'dendropy.interop.genbak' instead", DeprecationWarning)
self.generate_labels = generate_labels
self.label_num_desc_components = label_num_desc_components
self.label_separator = label_separator
self.label_id_in_front = label_id_in_front
self.exclude_gbnum_from_label = exclude_gbnum_from_label
self.sort_taxa_by_label = sort_taxa_by_label
def fetch(self, db, ids, rettype):
"""
        Raw fetch. Returns the string (response body) returned by the
        query; callers treat the result as text.
"""
if textprocessing.is_str_type(ids):
id_list = ids
else:
id_list = ",".join([str(i) for i in set(ids)])
params = {'db': db,
'id': id_list,
'rettype': rettype,
'retmode': 'text'}
query_url = Entrez.BASE_URL + "/efetch.fcgi?" + urlio.urlencode(params)
return urlio.read_url(query_url)
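    # Illustrative call (not from the original source):
    #   e = Entrez()
    #   fasta_text = e.fetch(db='nucleotide', ids=['EU105474', 'EU105476'], rettype='fasta')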
def fetch_gbrecs_as_plaintext_dict(self,
db,
ids,
key_field="accession"):
db_name = "nucleotide"
gb_recs_str = self.fetch(db=db, ids=ids, rettype="gb")
gb_recs_str_list = re.split(r"^//$", gb_recs_str, flags=re.MULTILINE)
gb_recs_str_list = [gb_rec for gb_rec in gb_recs_str_list
if gb_rec.replace("\n", "")]
gb_recs_dict = {}
for gb_str in gb_recs_str_list:
if not gb_str:
continue
try:
accession, gi = parse_accession_number_and_gi_from_gb(gb_str)
except TypeError:
print("---")
print(gb_str)
print("---")
raise
if key_field == "accession":
gb_recs_dict[accession] = gb_str
elif key_field == "gi":
gb_recs_dict[gi] = gb_str
else:
raise NotImplementedError("Key field '%s' is not supported" % key_field)
return gb_recs_dict
def fetch_nucleotide_accessions(self,
ids,
prefix=None,
verify=True,
matrix_type=dendropy.DnaCharacterMatrix,
**kwargs):
"""
Returns a DnaCharacterMatrix object (or some other type, if specified
by ``matrix_type`` argument) populated with sequences from the Entrez
nucleotide database with accession numbers given by ``ids`` (a list of
accession numbers). If ``prefix`` is given, it is pre-pended to all values
given in the id list. Any other keyword arguments given are passed to
the constructor of |DnaCharacterMatrix|.
**Note that the order of records is *not* the same as the order of ids!!!**
"""
if prefix is not None:
id_list = ["%s%s" % (prefix,i) for i in ids]
else:
id_list = [str(i) for i in ids]
results_str = self.fetch(db='nucleotide', ids=id_list, rettype='fasta')
try:
d = matrix_type.get_from_string(results_str, 'fasta', **kwargs)
except DataParseError:
sys.stderr.write("---\nNCBI Entrez Query returned:\n%s\n---\n" % results_str)
raise
for taxon in d.taxon_set:
taxon.ncbi_defline = taxon.label
taxon.ncbi_gi, taxon.ncbi_accession, taxon.ncbi_version = parse_ncbi_curation_info_from_defline(taxon.ncbi_defline)
if verify:
found_ids = set([t.ncbi_accession for t in d.taxon_set])
missing_ids = set(id_list).difference(found_ids)
found_ids = set([t.ncbi_gi for t in d.taxon_set])
missing_ids = set(missing_ids).difference(found_ids)
if len(missing_ids) > 0:
raise Entrez.AccessionFetchError(missing_ids)
if self.generate_labels:
relabel_taxa_from_defline(d.taxon_set,
num_desc_components=self.label_num_desc_components,
separator=self.label_separator,
gbnum_in_front=self.label_id_in_front,
exclude_gbnum=self.exclude_gbnum_from_label)
if self.sort_taxa_by_label:
d.taxon_set.sort(key=lambda x: x.label)
return d
def fetch_nucleotide_accession_range(self,
first,
last,
prefix=None,
verify=True,
matrix_type=dendropy.DnaCharacterMatrix,
**kwargs):
"""
Returns a DnaCharacterMatrix object (or some other type, if specified
by the ``matrix_type`` argument) populated with sequences from the
        Entrez nucleotide database with accession numbers from ``first`` up to
        and *including* ``last``. If ``prefix`` is given, then it is
        pre-pended to the ids. Any other keyword arguments given are passed to
        the constructor of |DnaCharacterMatrix|.
"""
ids = range(first, last+1)
return self.fetch_nucleotide_accessions(ids=ids, prefix=prefix, verify=verify, matrix_type=matrix_type, **kwargs) | PypiClean |
/DJModels-0.0.6-py3-none-any.whl/djmodels/db/backends/mysql/base.py | import re
from djmodels.core.exceptions import ImproperlyConfigured
from djmodels.db import utils
from djmodels.db.backends import utils as backend_utils
from djmodels.db.backends.base.base import BaseDatabaseWrapper
from djmodels.utils.functional import cached_property
try:
import MySQLdb as Database
except ImportError as err:
raise ImproperlyConfigured(
'Error loading MySQLdb module.\n'
'Did you install mysqlclient?'
) from err
from MySQLdb.constants import CLIENT, FIELD_TYPE # isort:skip
from MySQLdb.converters import conversions # isort:skip
# Some of these import MySQLdb, so import them after checking if it's installed.
from .client import DatabaseClient # isort:skip
from .creation import DatabaseCreation # isort:skip
from .features import DatabaseFeatures # isort:skip
from .introspection import DatabaseIntrospection # isort:skip
from .operations import DatabaseOperations # isort:skip
from .schema import DatabaseSchemaEditor # isort:skip
from .validation import DatabaseValidation # isort:skip
version = Database.version_info
if version < (1, 3, 7):
raise ImproperlyConfigured('mysqlclient 1.3.7 or newer is required; you have %s.' % Database.__version__)
# MySQLdb returns TIME columns as timedelta -- they are more like timedelta in
# terms of actual behavior as they are signed and include days -- and Django
# expects time.
django_conversions = {
**conversions,
**{FIELD_TYPE.TIME: backend_utils.typecast_time},
}
# This should match the numerical portion of the version numbers (we can treat
# versions like 5.0.24 and 5.0.24a as the same).
server_version_re = re.compile(r'(\d{1,2})\.(\d{1,2})\.(\d{1,2})')
class CursorWrapper:
"""
A thin wrapper around MySQLdb's normal cursor class that catches particular
exception instances and reraises them with the correct types.
Implemented as a wrapper, rather than a subclass, so that it isn't stuck
to the particular underlying representation returned by Connection.cursor().
"""
codes_for_integrityerror = (
1048, # Column cannot be null
1690, # BIGINT UNSIGNED value is out of range
)
def __init__(self, cursor):
self.cursor = cursor
def execute(self, query, args=None):
try:
# args is None means no string interpolation
return self.cursor.execute(query, args)
except Database.OperationalError as e:
# Map some error codes to IntegrityError, since they seem to be
# misclassified and Django would prefer the more logical place.
if e.args[0] in self.codes_for_integrityerror:
raise utils.IntegrityError(*tuple(e.args))
raise
def executemany(self, query, args):
try:
return self.cursor.executemany(query, args)
except Database.OperationalError as e:
# Map some error codes to IntegrityError, since they seem to be
# misclassified and Django would prefer the more logical place.
if e.args[0] in self.codes_for_integrityerror:
raise utils.IntegrityError(*tuple(e.args))
raise
def __getattr__(self, attr):
return getattr(self.cursor, attr)
def __iter__(self):
return iter(self.cursor)
class DatabaseWrapper(BaseDatabaseWrapper):
vendor = 'mysql'
display_name = 'MySQL'
# This dictionary maps Field objects to their associated MySQL column
# types, as strings. Column-type strings can contain format strings; they'll
# be interpolated against the values of Field.__dict__ before being output.
# If a column type is set to None, it won't be included in the output.
data_types = {
'AutoField': 'integer AUTO_INCREMENT',
'BigAutoField': 'bigint AUTO_INCREMENT',
'BinaryField': 'longblob',
'BooleanField': 'bool',
'CharField': 'varchar(%(max_length)s)',
'DateField': 'date',
'DateTimeField': 'datetime(6)',
'DecimalField': 'numeric(%(max_digits)s, %(decimal_places)s)',
'DurationField': 'bigint',
'FileField': 'varchar(%(max_length)s)',
'FilePathField': 'varchar(%(max_length)s)',
'FloatField': 'double precision',
'IntegerField': 'integer',
'BigIntegerField': 'bigint',
'IPAddressField': 'char(15)',
'GenericIPAddressField': 'char(39)',
'NullBooleanField': 'bool',
'OneToOneField': 'integer',
'PositiveIntegerField': 'integer UNSIGNED',
'PositiveSmallIntegerField': 'smallint UNSIGNED',
'SlugField': 'varchar(%(max_length)s)',
'SmallIntegerField': 'smallint',
'TextField': 'longtext',
'TimeField': 'time(6)',
'UUIDField': 'char(32)',
}
# For these columns, MySQL doesn't:
# - accept default values and implicitly treats these columns as nullable
# - support a database index
_limited_data_types = (
'tinyblob', 'blob', 'mediumblob', 'longblob', 'tinytext', 'text',
'mediumtext', 'longtext', 'json',
)
operators = {
'exact': '= %s',
'iexact': 'LIKE %s',
'contains': 'LIKE BINARY %s',
'icontains': 'LIKE %s',
'gt': '> %s',
'gte': '>= %s',
'lt': '< %s',
'lte': '<= %s',
'startswith': 'LIKE BINARY %s',
'endswith': 'LIKE BINARY %s',
'istartswith': 'LIKE %s',
'iendswith': 'LIKE %s',
}
# The patterns below are used to generate SQL pattern lookup clauses when
# the right-hand side of the lookup isn't a raw string (it might be an expression
# or the result of a bilateral transformation).
# In those cases, special characters for LIKE operators (e.g. \, *, _) should be
# escaped on database side.
#
# Note: we use str.format() here for readability as '%' is used as a wildcard for
# the LIKE operator.
pattern_esc = r"REPLACE(REPLACE(REPLACE({}, '\\', '\\\\'), '%%', '\%%'), '_', '\_')"
pattern_ops = {
'contains': "LIKE BINARY CONCAT('%%', {}, '%%')",
'icontains': "LIKE CONCAT('%%', {}, '%%')",
'startswith': "LIKE BINARY CONCAT({}, '%%')",
'istartswith': "LIKE CONCAT({}, '%%')",
'endswith': "LIKE BINARY CONCAT('%%', {})",
'iendswith': "LIKE CONCAT('%%', {})",
}
isolation_levels = {
'read uncommitted',
'read committed',
'repeatable read',
'serializable',
}
Database = Database
SchemaEditorClass = DatabaseSchemaEditor
# Classes instantiated in __init__().
client_class = DatabaseClient
creation_class = DatabaseCreation
features_class = DatabaseFeatures
introspection_class = DatabaseIntrospection
ops_class = DatabaseOperations
validation_class = DatabaseValidation
def get_connection_params(self):
kwargs = {
'conv': django_conversions,
'charset': 'utf8',
}
settings_dict = self.settings_dict
if settings_dict['USER']:
kwargs['user'] = settings_dict['USER']
if settings_dict['NAME']:
kwargs['db'] = settings_dict['NAME']
if settings_dict['PASSWORD']:
kwargs['passwd'] = settings_dict['PASSWORD']
if settings_dict['HOST'].startswith('/'):
kwargs['unix_socket'] = settings_dict['HOST']
elif settings_dict['HOST']:
kwargs['host'] = settings_dict['HOST']
if settings_dict['PORT']:
kwargs['port'] = int(settings_dict['PORT'])
# We need the number of potentially affected rows after an
# "UPDATE", not the number of changed rows.
kwargs['client_flag'] = CLIENT.FOUND_ROWS
# Validate the transaction isolation level, if specified.
options = settings_dict['OPTIONS'].copy()
isolation_level = options.pop('isolation_level', 'read committed')
if isolation_level:
isolation_level = isolation_level.lower()
if isolation_level not in self.isolation_levels:
raise ImproperlyConfigured(
"Invalid transaction isolation level '%s' specified.\n"
"Use one of %s, or None." % (
isolation_level,
', '.join("'%s'" % s for s in sorted(self.isolation_levels))
))
self.isolation_level = isolation_level
kwargs.update(options)
return kwargs
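    # Illustrative mapping (settings values are made up, not from the source):
    # a DATABASES entry with NAME='mydb', USER='joe', PASSWORD='s3cret',
    # HOST='127.0.0.1', PORT='3306' and OPTIONS={'isolation_level': 'repeatable read'}
    # yields connect() kwargs {'db': 'mydb', 'user': 'joe', 'passwd': 's3cret',
    # 'host': '127.0.0.1', 'port': 3306, 'client_flag': CLIENT.FOUND_ROWS, ...},
    # with the isolation level applied later in init_connection_state().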
def get_new_connection(self, conn_params):
return Database.connect(**conn_params)
def init_connection_state(self):
assignments = []
if self.features.is_sql_auto_is_null_enabled:
# SQL_AUTO_IS_NULL controls whether an AUTO_INCREMENT column on
# a recently inserted row will return when the field is tested
# for NULL. Disabling this brings this aspect of MySQL in line
# with SQL standards.
assignments.append('SET SQL_AUTO_IS_NULL = 0')
if self.isolation_level:
assignments.append('SET SESSION TRANSACTION ISOLATION LEVEL %s' % self.isolation_level.upper())
if assignments:
with self.cursor() as cursor:
cursor.execute('; '.join(assignments))
def create_cursor(self, name=None):
cursor = self.connection.cursor()
return CursorWrapper(cursor)
def _rollback(self):
try:
BaseDatabaseWrapper._rollback(self)
except Database.NotSupportedError:
pass
def _set_autocommit(self, autocommit):
with self.wrap_database_errors:
self.connection.autocommit(autocommit)
def disable_constraint_checking(self):
"""
Disable foreign key checks, primarily for use in adding rows with
forward references. Always return True to indicate constraint checks
need to be re-enabled.
"""
self.cursor().execute('SET foreign_key_checks=0')
return True
def enable_constraint_checking(self):
"""
Re-enable foreign key checks after they have been disabled.
"""
# Override needs_rollback in case constraint_checks_disabled is
# nested inside transaction.atomic.
self.needs_rollback, needs_rollback = False, self.needs_rollback
try:
self.cursor().execute('SET foreign_key_checks=1')
finally:
self.needs_rollback = needs_rollback
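    # Typical usage pattern (illustrative, not from the original source):
    #   connection.disable_constraint_checking()
    #   try:
    #       ...  # bulk-load rows that contain forward references
    #   finally:
    #       connection.enable_constraint_checking()
    #   connection.check_constraints()  # raises IntegrityError on dangling FKs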
def check_constraints(self, table_names=None):
"""
Check each table name in `table_names` for rows with invalid foreign
key references. This method is intended to be used in conjunction with
`disable_constraint_checking()` and `enable_constraint_checking()`, to
determine if rows with invalid references were entered while constraint
checks were off.
"""
with self.cursor() as cursor:
if table_names is None:
table_names = self.introspection.table_names(cursor)
for table_name in table_names:
primary_key_column_name = self.introspection.get_primary_key_column(cursor, table_name)
if not primary_key_column_name:
continue
key_columns = self.introspection.get_key_columns(cursor, table_name)
for column_name, referenced_table_name, referenced_column_name in key_columns:
cursor.execute(
"""
SELECT REFERRING.`%s`, REFERRING.`%s` FROM `%s` as REFERRING
LEFT JOIN `%s` as REFERRED
ON (REFERRING.`%s` = REFERRED.`%s`)
WHERE REFERRING.`%s` IS NOT NULL AND REFERRED.`%s` IS NULL
""" % (
primary_key_column_name, column_name, table_name,
referenced_table_name, column_name, referenced_column_name,
column_name, referenced_column_name,
)
)
for bad_row in cursor.fetchall():
raise utils.IntegrityError(
"The row in table '%s' with primary key '%s' has an invalid "
"foreign key: %s.%s contains a value '%s' that does not "
"have a corresponding value in %s.%s."
% (
table_name, bad_row[0], table_name, column_name,
bad_row[1], referenced_table_name, referenced_column_name,
)
)
def is_usable(self):
try:
self.connection.ping()
except Database.Error:
return False
else:
return True
@cached_property
def mysql_server_info(self):
with self.temporary_connection() as cursor:
cursor.execute('SELECT VERSION()')
return cursor.fetchone()[0]
@cached_property
def mysql_version(self):
match = server_version_re.match(self.mysql_server_info)
if not match:
raise Exception('Unable to determine MySQL version from version string %r' % self.mysql_server_info)
return tuple(int(x) for x in match.groups())
@cached_property
def mysql_is_mariadb(self):
# MariaDB isn't officially supported.
return 'mariadb' in self.mysql_server_info.lower() | PypiClean |
/ClueDojo-1.4.3-1.tar.gz/ClueDojo-1.4.3-1/src/cluedojo/static/dojox/data/util/JsonQuery.js | if(!dojo._hasResource["dojox.data.util.JsonQuery"]){
dojo._hasResource["dojox.data.util.JsonQuery"]=true;
dojo.provide("dojox.data.util.JsonQuery");
dojo.declare("dojox.data.util.JsonQuery",null,{useFullIdInQueries:false,_toJsonQuery:function(_1,_2){
var _3=true;
var _4=this;
function _5(_6,_7){
var _8=_7.__id;
if(_8){
var _9={};
_9[_4.idAttribute]=_4.useFullIdInQueries?_7.__id:_7[_4.idAttribute];
_7=_9;
}
for(var i in _7){
var _a=_7[i];
var _b=_6+(/^[a-zA-Z_][\w_]*$/.test(i)?"."+i:"["+dojo._escapeString(i)+"]");
if(_a&&typeof _a=="object"){
_5(_b,_a);
}else{
if(_a!="*"){
_c+=(_3?"":"&")+_b+((!_8&&typeof _a=="string"&&_1.queryOptions&&_1.queryOptions.ignoreCase)?"~":"=")+(_4.simplifiedQuery?encodeURIComponent(_a):dojo.toJson(_a));
_3=false;
}
}
}
};
if(_1.query&&typeof _1.query=="object"){
var _c="[?(";
_5("@",_1.query);
if(!_3){
_c+=")]";
}else{
_c="";
}
_1.queryStr=_c.replace(/\\"|"/g,function(t){
return t=="\""?"'":t;
});
}else{
if(!_1.query||_1.query=="*"){
_1.query="";
}
}
var _d=_1.sort;
if(_d){
_1.queryStr=_1.queryStr||(typeof _1.query=="string"?_1.query:"");
_3=true;
for(i=0;i<_d.length;i++){
_1.queryStr+=(_3?"[":",")+(_d[i].descending?"\\":"/")+"@["+dojo._escapeString(_d[i].attribute)+"]";
_3=false;
}
if(!_3){
_1.queryStr+="]";
}
}
if(_2&&(_1.start||_1.count)){
_1.queryStr=(_1.queryStr||(typeof _1.query=="string"?_1.query:""))+"["+(_1.start||"")+":"+(_1.count?(_1.start||0)+_1.count:"")+"]";
}
if(typeof _1.queryStr=="string"){
_1.queryStr=_1.queryStr.replace(/\\"|"/g,function(t){
return t=="\""?"'":t;
});
return _1.queryStr;
}
return _1.query;
},jsonQueryPagination:true,fetch:function(_e){
this._toJsonQuery(_e,this.jsonQueryPagination);
return this.inherited(arguments);
},isUpdateable:function(){
return true;
},matchesQuery:function(_f,_10){
_10._jsonQuery=_10._jsonQuery||dojox.json.query(this._toJsonQuery(_10));
return _10._jsonQuery([_f]).length;
},clientSideFetch:function(_11,_12){
_11._jsonQuery=_11._jsonQuery||dojox.json.query(this._toJsonQuery(_11));
return this.clientSidePaging(_11,_11._jsonQuery(_12));
},querySuperSet:function(_13,_14){
if(!_13.query){
return _14.query;
}
return this.inherited(arguments);
}});
} | PypiClean |
/My-Serializer-For-Json-And-XML-For_Lab3-5-0.1.1.tar.gz/My-Serializer-For-Json-And-XML-For_Lab3-5-0.1.1/Serializators/JSON.py | from Serializators.Core import Serialize
from Serializators.Core.functions_for_deserialize import Deserialize
class JsonSerializer:
data_serializer = Serialize()
data_deserializer = Deserialize()
def dumps(self, obj):
packed = self.data_serializer.serialize(obj)
if isinstance(packed, (list, tuple)):
return self.__list_n_tuple_to_string_util(packed)
if isinstance(packed, dict):
return self.__dict_to_string_util(packed)
return self.__ser_primitive(obj)
def dump(self, obj, file):
file.write(self.dumps(obj))
def loads(self, string):
result, ind = self.__loads_with_index(string, 0)
return self.data_deserializer.deserialize(result)
def load(self, file):
return self.loads(file.read())
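    # Illustrative round-trip (added for clarity; assumes Serialize/Deserialize
    # pass primitives, lists and dicts through unchanged):
    #   s = JsonSerializer()
    #   text = s.dumps({'a': 1, 'b': [True, None, 'hi']})
    #   s.loads(text)  # -> {'a': 1, 'b': [True, None, 'hi']}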
def __list_n_tuple_to_string_util(self, collection):
if not collection:
return '[]'
result = '['
for item in collection:
if isinstance(item, dict):
result += f'{self.__dict_to_string_util(item)},'
elif isinstance(item, (list, tuple)):
result += f'{self.__list_n_tuple_to_string_util(item)},'
else:
result += f'{self.__ser_primitive(item)},'
return result[:-1] + ']'
def __dict_to_string_util(self, dictionary):
if not dictionary:
return '{}'
result = '{'
for key, value in dictionary.items():
if isinstance(value, dict):
result += f'"{key}": {self.__dict_to_string_util(value)},'
elif isinstance(value, (list, tuple)):
result += f'"{key}": {self.__list_n_tuple_to_string_util(value)},'
else:
result += f'"{key}": {self.__ser_primitive(value)},'
return result[:-1] + '}'
    def __ser_primitive(self, obj):
        # strings get an inner pair of single quotes so that loads() can
        # distinguish them from numbers, booleans and None
        if isinstance(obj, str):
            obj = f"'{obj}'"
        return f'"{str(obj)}"'
def __deserialize_list(self, string, index):
end_index = index + 1
bracket_count = 1
while bracket_count > 0 and end_index < len(string):
if string[end_index] == '[':
bracket_count += 1
if string[end_index] == ']':
bracket_count -= 1
end_index += 1
index += 1
result = []
while index < end_index:
if string[index] in (',', ' '):
index += 1
continue
if end_index - index < 2:
break
element, index = self.__loads_with_index(string, index)
result.append(element)
return result, end_index + 1
def __loads_with_index(self, string, index):
match string[index]:
case '"':
if string[index+1] == "'":
return self.__deserialize_string(string, index+2)
else:
return self.__deserialize_primitive(string, index)
case '[':
return self.__deserialize_list(string, index)
case '{':
return self.__deserialize_dict(string, index)
def __deserialize_dict(self,string,index):
end_index=index
bracket_count=1
while bracket_count>0 and end_index+1<len(string):
end_index+=1
if string[end_index]=='{':
bracket_count+=1
if string[end_index]=='}':
bracket_count-=1
index+=1
result={}
while index<end_index:
if string[index] in (',',' '):
index+=1
continue
key,index = self.__loads_with_index(string,index)
while string[index] in (':',' '):
index+=1
value,index=self.__loads_with_index(string,index)
result[key]=value
return result,end_index+1
def __string_catcher(self, string, index):
end_index = index
while string[end_index] != '"' and end_index < len(string):
end_index += 1
data_slice = string[index:end_index]
return data_slice, end_index + 3
def __deserialize_string(self, string, index):
end_index = index
        while end_index < len(string) and string[end_index] != "'":
end_index += 1
data_slice = string[index:end_index]
return data_slice, end_index + 3
def __deserialize_number(self, string, index):
end_index = index
        while end_index < len(string) and string[end_index] != '"':
end_index += 1
data_slice = string[index:end_index]
try:
if '.' in data_slice:
return float(data_slice), end_index + 1
else:
return int(data_slice), end_index + 1
        except ValueError:  # non-numeric payload; fall back to reading it as a string
return self.__string_catcher(string, index)
def __deserialize_primitive(self, string, index):
index += 1
if string[index] == 'N':
return None, index + 5
elif string[index] == 'T':
return True, index + 5
elif string[index] == 'F':
return False, index + 6
else:
return self.__deserialize_number(string, index) | PypiClean |
/EDDIE-Tool-1.0.0.tar.gz/EDDIE-Tool-1.0.0/eddietool/arch/Linux/df.py | __version__ = '$Revision: 895 $'
__copyright__ = 'Copyright (c) Chris Miles 1999-2005'
__author__ = 'Chris Miles'
__license__ = '''
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
'''
# Python Modules
import re, string
# Eddie Modules
from eddietool.common import datacollect, log, utils
# This fetches data by parsing system calls of common commands. This was done because
# it was quick and easy to implement and port to multiple platforms. I know this is
# a bit ugly, but will clean it up later with more efficient code that fetches data
# directly from /proc or the kernel. CM 19990929
class dfList(datacollect.DataCollect):
"""dfList provides access to disk usage statistics."""
def __init__(self):
apply( datacollect.DataCollect.__init__, (self,) )
##################################################################
# Public, thread-safe, methods
def __str__(self):
"""Create string to display disk usage stats."""
d = self.getHash()
rv = ""
for item in d.keys():
rv = rv + str(d[item]) + '\n'
return(rv)
def __getitem__(self, name):
"""
Extends DataCollect.__getitem__() to search mounthash if default
datahash fails.
The dfList object can be treated like a dictionary and keyed by
either device or mount point.
"""
try:
r = apply( datacollect.DataCollect.__getitem__, (self, name) )
except KeyError:
self.data_semaphore.acquire() # thread-safe access to self.data
try:
r = self.data.mounthash[name] # try to find mount point
except KeyError:
self.data_semaphore.release()
raise KeyError, "Key %s not found in data hashes" % (name)
self.data_semaphore.release()
return r
##################################################################
# Private methods. No thread safety if not using public methods.
def collectData(self):
"""
Collect disk usage data.
"""
# Get information about all local filesystems from 'df'.
rawList = utils.safe_popen('df -l', 'r')
rawList.readline() # skip header
self.data.datahash = {}
self.data.mounthash = {}
lines = rawList.read()
lines = re.sub( r'\n ', '', lines)
lines = string.split(lines, '\n')
for line in lines:
fields = string.split(line)
if len(fields) == 6:
p = df(fields)
self.data.datahash[fields[0]] = p # dictionary of filesystem devices
self.data.mounthash[fields[5]] = p # dictionary of mount points
utils.safe_pclose( rawList )
log.log( "<df>dfList.collectData(): filesystem data collected", 7 )
##################################################################
# Define single filesystem information objects.
class df:
"""df object holds stats on disk usage for a file system."""
def __init__(self, *arg):
self.raw = arg[0]
self.data = {}
self.data['fsname'] = self.raw[0] # Filesystem name (device)
self.data['size'] = int(self.raw[1]) # Size of filesystem
self.data['used'] = int(self.raw[2]) # kb used
self.data['avail'] = int(self.raw[3]) # kb free
self.data['pctused'] = float(self.raw[4][:-1]) # Percentage Used
self.data['mountpt'] = self.raw[5] # Mount point
    def __str__(self):
        s = "%-20s %10s %10s %10s %4s %-12s\n" % ("Filesystem","Size","Used","Available","Use%","Mounted on")
        s = s + "%-20s %10s %10s %10s %4s %-12s" % (self.data['fsname'],self.data['size'],self.data['used'],self.data['avail'],self.data['pctused'],self.data['mountpt'])
        return(s)
def getHash(self):
"""
Return a copy of the filesystem data dictionary.
"""
hash_copy = {}
hash_copy.update(self.data)
return hash_copy
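# Usage sketch (Python 2, matching this module; assumes the DataCollect base
# class triggers collectData() on access and that 'df -l' is available):
#     dflist = dfList()
#     print dflist            # all local filesystems
#     print dflist['/']       # keyed by mount point or by device name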
##
## END - df.py
## | PypiClean |
/DACBench-0.2.0.tar.gz/DACBench-0.2.0/dacbench/wrappers/policy_progress_wrapper.py | import matplotlib.pyplot as plt
import numpy as np
from gymnasium import Wrapper
class PolicyProgressWrapper(Wrapper):
"""
Wrapper to track progress towards optimal policy.
Can only be used if a way to obtain the optimal policy given an instance can be obtained.
"""
def __init__(self, env, compute_optimal):
"""
Initialize wrapper.
Parameters
----------
env : gym.Env
Environment to wrap
compute_optimal : function
Function to compute optimal policy
"""
super(PolicyProgressWrapper, self).__init__(env)
self.compute_optimal = compute_optimal
self.episode = []
self.policy_progress = []
def __setattr__(self, name, value):
"""
Set attribute in wrapper if available and in env if not.
Parameters
----------
name : str
Attribute to set
value
Value to set attribute to
"""
if name in [
"compute_optimal",
"env",
"episode",
"policy_progress",
"render_policy_progress",
]:
object.__setattr__(self, name, value)
else:
setattr(self.env, name, value)
def __getattribute__(self, name):
"""
Get attribute value of wrapper if available and of env if not.
Parameters
----------
name : str
Attribute to get
Returns
-------
value
Value of given name
"""
if name in [
"step",
"compute_optimal",
"env",
"episode",
"policy_progress",
"render_policy_progress",
]:
return object.__getattribute__(self, name)
else:
return getattr(self.env, name)
def step(self, action):
"""
Execute environment step and record distance.
Parameters
----------
action : int
action to execute
Returns
-------
np.array, float, bool, bool, dict
state, reward, terminated, truncated, metainfo
"""
state, reward, terminated, truncated, info = self.env.step(action)
self.episode.append(action)
if terminated or truncated:
optimal = self.compute_optimal(self.env.instance)
self.policy_progress.append(
np.linalg.norm(np.array(optimal) - np.array(self.episode))
)
self.episode = []
return state, reward, terminated, truncated, info
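    # Hypothetical usage sketch ("env" and "my_optimal_policy_fn" are stand-ins,
    # not part of this module):
    #     wrapped = PolicyProgressWrapper(env, compute_optimal=my_optimal_policy_fn)
    #     ... run episodes via wrapped.step(action) ...
    #     wrapped.render_policy_progress()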
def render_policy_progress(self):
"""Plot progress."""
plt.figure(figsize=(12, 6))
plt.plot(np.arange(len(self.policy_progress)), self.policy_progress)
plt.title("Policy progress over time")
plt.xlabel("Episode")
plt.ylabel("Distance to optimal policy")
plt.show() | PypiClean |
/Data-CAT-0.7.2.tar.gz/Data-CAT-0.7.2/LICENSE.md | The data-CAT is an Open Source project supported by the VU University Amsterdam, the Netherlands eScience Center (NLeSC)
and Scientific Computing & Modelling NV (SCM). The terms of the [LGPL-3.0 license]* apply. As an exception to the LGPL-3.0 license,
you agree to grant SCM a [BSD 3-Clause license]** to the contributions you commit on this Github or provide to SCM in another manner.
\* https://opensource.org/licenses/LGPL-3.0
** https://opensource.org/licenses/BSD-3-Clause
[LGPL-3.0 license]: https://opensource.org/licenses/LGPL-3.0 "LGPL-3.0 license"
[BSD 3-Clause license]: https://opensource.org/licenses/BSD-3-Clause "BSD 3-Clause license"
| PypiClean |
/ESMValTool-2.9.0-py3-none-any.whl/esmvaltool/cmorizers/data/formatters/datasets/hwsd.py | import logging
import os
import iris
from cf_units import Unit
from iris import NameConstraint
from esmvaltool.cmorizers.data import utilities as utils
logger = logging.getLogger(__name__)
def _extract_variable(short_name, var, cfg, filepath, out_dir):
"""Extract variable."""
raw_var = var.get('raw', short_name)
cube = iris.load_cube(filepath, NameConstraint(var_name=raw_var))
# Sum over levels
if short_name in ('cSoil', ):
level_coord = iris.coords.DimCoord([0, 1], long_name='level')
cube.add_dim_coord(level_coord, 0)
cube = cube.collapsed('level', iris.analysis.SUM)
# Fix coordinates
if var['mip'] != 'fx':
cube = iris.util.new_axis(cube)
time_dim = iris.coords.DimCoord(
[183.0],
bounds=[0.0, 366.0],
units=Unit('days since 2000-01-01 00:00:00'),
standard_name='time',
var_name='time',
long_name='time')
cube.add_dim_coord(time_dim, 0)
utils.convert_timeunits(cube, 1950)
utils.fix_coords(cube)
# Fix units
if 'kg C' in cube.units.origin:
cube.units = Unit(cube.units.origin.replace('C', ''))
cmor_info = cfg['cmor_table'].get_variable(var['mip'], short_name)
cube.convert_units(cmor_info.units)
# Fix metadata
attrs = cfg['attributes']
attrs['mip'] = var['mip']
utils.fix_var_metadata(cube, cmor_info)
utils.set_global_atts(cube, attrs)
# Save variable
utils.save_variable(cube,
short_name,
out_dir,
attrs,
unlimited_dimensions=['time'])
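# Sketch of the cfg layout this formatter expects (an inference from the key
# accesses in this file; real values come from the dataset's CMORizer config):
#     cfg = {
#         'filename': '<input .nc file name>',
#         'attributes': {<global attributes; 'mip' is filled in per variable>},
#         'cmor_table': <table exposing .get_variable(mip, short_name)>,
#         'variables': {'cSoil': {'raw': '<raw name>', 'mip': '<MIP table>'}},
#     }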
def cmorization(in_dir, out_dir, cfg, cfg_user, start_date, end_date):
"""Cmorization func call."""
filepath = os.path.join(in_dir, cfg['filename'])
logger.info("Reading file '%s'", filepath)
# Run the cmorization
for (short_name, var) in cfg['variables'].items():
logger.info("CMORizing variable '%s'", short_name)
_extract_variable(short_name, var, cfg, filepath, out_dir) | PypiClean |
/GenIce-1.0.11.tar.gz/GenIce-1.0.11/genice/lattices/Struct71.py | pairs="""
219 106
151 37
221 108
165 111
140 51
102 179
109 18
45 166
15 216
72 82
185 63
215 173
119 121
88 161
22 28
39 68
219 74
169 95
103 164
38 212
62 82
163 134
131 205
97 81
7 74
98 107
29 164
128 69
198 225
124 210
13 78
29 126
171 179
194 125
108 91
9 166
146 150
14 72
178 159
162 4
83 78
157 59
139 192
0 130
80 96
54 146
153 170
142 196
101 36
39 69
184 214
12 109
58 173
185 80
198 20
220 25
187 81
117 113
142 91
13 45
95 198
2 87
45 225
140 105
127 42
29 110
141 106
85 89
67 113
53 132
194 120
92 43
215 144
120 50
135 204
163 10
182 156
115 65
137 210
5 178
224 78
176 16
189 213
129 163
135 56
218 93
5 60
129 132
84 46
203 127
224 175
118 121
221 154
14 46
124 112
193 192
214 213
7 150
38 47
142 210
166 95
177 23
57 189
103 206
147 176
85 151
48 176
38 195
61 25
221 132
155 75
59 9
33 187
26 19
191 75
224 4
101 44
31 138
148 111
193 130
1 203
58 41
83 149
30 130
193 171
84 82
165 96
153 152
218 83
14 6
98 226
3 44
201 36
122 5
2 18
71 150
104 171
50 222
180 51
33 154
71 165
22 122
169 137
118 31
133 131
1 41
99 44
201 20
10 53
167 161
169 161
6 89
76 87
48 121
215 49
62 188
12 40
186 111
129 20
136 195
142 79
117 146
148 41
73 8
120 188
3 113
6 110
212 202
114 203
15 155
174 27
116 87
38 176
170 11
143 69
9 24
122 219
26 9
173 75
13 59
106 150
134 112
144 86
81 210
28 155
116 100
192 204
51 19
147 61
86 188
156 52
143 92
90 199
97 161
128 16
159 89
34 100
28 60
104 204
0 83
53 199
88 145
223 227
174 23
155 126
70 224
139 0
1 141
66 217
211 65
48 181
47 79
153 119
170 132
202 11
68 212
140 100
204 227
63 16
124 90
154 213
212 152
77 112
136 54
119 182
170 156
85 183
133 175
12 180
32 108
135 93
57 24
30 160
218 70
207 172
168 48
63 31
33 8
186 61
149 8
205 49
216 177
184 18
163 95
66 164
81 199
107 37
84 207
14 127
57 225
209 90
218 73
34 208
108 199
47 11
175 179
71 35
190 35
123 141
197 42
167 19
68 16
143 7
223 37
116 226
22 64
32 213
129 101
136 43
0 162
156 101
195 209
77 113
41 191
158 72
66 151
162 184
5 220
183 107
180 226
122 42
70 56
187 196
154 24
51 59
168 71
36 91
62 227
157 4
106 138
178 75
178 197
2 200
202 52
6 197
188 104
84 86
15 50
99 90
99 91
184 70
147 165
186 191
3 211
211 146
175 160
207 40
109 56
32 145
123 64
7 96
135 105
124 55
219 118
137 36
115 181
167 73
187 10
159 215
159 216
190 23
39 52
114 144
172 131
190 111
174 194
208 37
76 93
87 88
63 61
67 68
72 103
203 23
227 17
29 158
92 55
116 225
12 105
27 64
105 17
114 58
24 20
221 196
143 211
162 189
141 42
85 49
160 183
13 100
55 79
148 174
82 94
223 207
190 60
196 11
192 172
201 26
50 173
28 186
217 208
220 191
30 98
217 179
97 8
117 138
149 189
222 49
208 17
112 44
103 89
216 60
139 18
110 151
182 69
160 205
200 19
157 57
137 134
206 217
133 104
139 93
200 145
21 223
140 76
126 64
120 177
201 145
67 31
115 54
152 53
33 166
22 185
148 123
157 226
45 149
80 138
27 127
40 107
206 94
21 46
66 222
86 131
171 222
102 130
153 181
99 65
168 25
169 26
30 4
185 74
15 164
62 125
119 67
209 152
109 172
200 214
181 47
39 77
158 27
102 78
193 205
34 98
32 97
214 73
2 180
136 147
3 182
117 43
17 94
34 102
92 77
65 79
118 25
194 58
114 46
21 125
220 35
167 76
128 121
55 195
52 134
158 125
168 54
21 110
206 183
88 198
123 80
144 177
128 74
202 10
1 35
115 209
94 40
43 96
133 56
126 197
"""
waters="""
0.0 0.62071 0.71841
0.1875 0.2343 0.96776
0.1875 0.65987 0.06489
0.1875 0.9895 0.1875
0.3125 0.62206 0.56011
0.3125 0.2343 0.53224
0.1875 0.34518 0.31356
0.5 0.10549 0.11767
0.75 0.75115 0.75
0.5 0.74547 0.38136
0.8125 0.87512 0.53099
0.5 0.92062 0.60083
0.375 0.5726 0.12606
0.6875 0.65987 0.43511
0.3125 0.34518 0.18644
0.6875 0.34013 0.56489
0.6875 0.07464 0.49291
0.6875 0.51941 0.1875
0.1875 0.62206 0.93989
0.5 0.72027 0.10216
0.1875 0.8005 0.33641
0.0 0.39771 0.12367
0.625 0.21406 0.36864
0.5 0.27973 0.89784
0.3125 0.76571 0.46776
0.125 0.14427 0.63234
0.5 0.76941 0.20192
0.625 0.28713 0.14784
0.6875 0.2343 0.53224
0.8125 0.34518 0.31356
0.1875 0.57124 0.52958
0.0 0.10549 0.38234
0.125 0.78594 0.86864
0.625 0.78594 0.63136
0.875 0.5726 0.37394
0.3125 0.1995 0.83641
0.375 0.85573 0.13234
0.0 0.48465 0.28186
0.6875 0.0105 0.6875
0.6875 0.9895 0.3125
0.3125 0.51941 0.1875
0.0 0.25453 0.88136
0.25 0.24885 0.25
0.8125 0.07464 0.00709
0.125 0.93012 0.14917
0.8125 0.71657 0.49087
0.1875 0.37794 0.06011
0.5 0.97525 0.71876
0.375 0.06989 0.64917
0.125 0.42741 0.62606
0.8125 0.37289 0.68856
0.5 0.66358 0.2192
0.625 0.93012 0.35083
0.0 0.89452 0.61767
0.125 0.06989 0.85083
0.6875 0.98 0.9375
0.5 0.56989 0.87606
0.1875 0.71657 0.49087
0.0 0.32208 0.87367
0.5 0.67792 0.37367
0.5 0.25453 0.61864
0.875 0.14427 0.63234
0.6875 0.42876 0.02958
0.8125 0.12488 0.46901
0.75 0.24885 0.25
0.3125 0.98 0.9375
0.8125 0.42876 0.47042
0.0 0.04 0.375
0.8125 0.02 0.4375
0.5 0.02475 0.28124
0.5 0.62071 0.78159
0.375 0.14427 0.86767
0.5 0.37929 0.21841
0.625 0.71287 0.85216
0.5 0.12963 0.29808
0.0 0.27973 0.60216
0.8125 0.65987 0.06489
0.8125 0.9895 0.1875
0.6875 0.62206 0.56011
0.5 0.96001 0.875
0.8125 0.15987 0.16333
0.8125 0.84013 0.83667
0.5 0.43011 0.12394
0.8125 0.65482 0.68644
0.3125 0.42876 0.02958
0.1875 0.42876 0.47042
0.375 0.42741 0.87394
0.0 0.67792 0.12633
0.0 0.74547 0.11864
0.3125 0.37794 0.43989
0.0 0.92062 0.89917
0.3125 0.87512 0.96901
0.6875 0.02 0.0625
0.8125 0.62206 0.93989
0.5 0.48465 0.21814
0.8125 0.8005 0.33641
0.6875 0.12488 0.03099
0.875 0.78594 0.86864
0.125 0.5726 0.37394
0.1875 0.92537 0.99291
0.8125 0.62712 0.31144
0.25 0.89073 0.25
0.8125 0.57124 0.52958
0.5 0.39771 0.37633
0.6875 0.48059 0.8125
0.625 0.5726 0.12606
0.1875 0.15987 0.16333
0.1875 0.51941 0.3125
0.1875 0.84013 0.83667
0.3125 0.57124 0.97042
0.0 0.37929 0.28159
0.6875 0.1995 0.83641
0.875 0.93012 0.14917
0.0 0.02475 0.21876
0.1875 0.34013 0.93511
0.1875 0.0105 0.8125
0.0 0.66358 0.2808
0.0 0.07939 0.10083
0.1875 0.12488 0.46901
0.1875 0.02 0.4375
0.6875 0.37289 0.81144
0.3125 0.07464 0.49291
0.375 0.21406 0.36864
0.875 0.21406 0.13136
0.8125 0.92537 0.99291
0.8125 0.37794 0.06011
0.875 0.28713 0.35216
0.375 0.28713 0.14784
0.5 0.07939 0.39917
0.125 0.85573 0.36767
0.0 0.56989 0.62394
0.3125 0.48059 0.8125
0.1875 0.87512 0.53099
0.5 0.51536 0.78186
0.75 0.89073 0.25
0.6875 0.57124 0.97042
0.875 0.06989 0.85083
0.625 0.85573 0.13234
0.0 0.12963 0.20192
0.0 0.60229 0.87633
0.6875 0.62712 0.18856
0.125 0.21406 0.13136
0.5 0.89452 0.88234
0.5 0.04 0.125
0.3125 0.37289 0.81144
0.1875 0.76571 0.03224
0.1875 0.07464 0.00709
0.75 0.10928 0.75
0.8125 0.2343 0.96776
0.875 0.71287 0.64784
0.3125 0.12488 0.03099
0.0 0.43011 0.37606
0.0 0.96001 0.625
0.1875 0.98 0.5625
0.375 0.78594 0.63136
0.8125 0.28343 0.50913
0.375 0.93012 0.35083
0.3125 0.65987 0.43511
0.6875 0.34518 0.18644
0.3125 0.34013 0.56489
0.3125 0.51671 0.5625
0.8125 0.76571 0.03224
0.1875 0.65482 0.68644
0.875 0.85573 0.36767
0.6875 0.37794 0.43989
0.625 0.14427 0.86767
0.6875 0.76571 0.46776
0.6875 0.71657 0.00913
0.25 0.10928 0.75
0.6875 0.8005 0.1636
0.3125 0.92537 0.50709
0.8125 0.48059 0.6875
0.1875 0.51671 0.9375
0.0 0.33643 0.7192
0.6875 0.28343 0.99087
0.5 0.53342 0.625
0.625 0.06989 0.64917
0.5 0.33643 0.7808
0.1875 0.28343 0.50913
0.6875 0.51671 0.5625
0.3125 0.62712 0.18856
0.3125 0.0105 0.6875
0.3125 0.9895 0.3125
0.3125 0.48329 0.4375
0.3125 0.65482 0.81356
0.6875 0.15987 0.33667
0.8125 0.1995 0.6636
0.6875 0.84013 0.66333
0.625 0.42741 0.87394
0.125 0.71287 0.64784
0.5 0.23059 0.79808
0.0 0.23059 0.70192
0.0 0.53342 0.875
0.0 0.51536 0.71814
0.8125 0.34013 0.93511
0.8125 0.0105 0.8125
0.5 0.87037 0.70192
0.125 0.28713 0.35216
0.0 0.76941 0.29808
0.0 0.87037 0.79808
0.3125 0.71657 0.00913
0.3125 0.8005 0.1636
0.6875 0.92537 0.50709
0.3125 0.28343 0.99087
0.8125 0.51671 0.9375
0.1875 0.48059 0.6875
0.5 0.46659 0.375
0.1875 0.48329 0.0625
0.8125 0.51941 0.3125
0.0 0.97525 0.78124
0.6875 0.87512 0.96901
0.3125 0.02 0.0625
0.8125 0.98 0.5625
0.25 0.75115 0.75
0.375 0.71287 0.85216
0.1875 0.37289 0.68856
0.5 0.32208 0.62633
0.6875 0.48329 0.4375
0.6875 0.65482 0.81356
0.3125 0.15987 0.33667
0.1875 0.1995 0.6636
0.3125 0.84013 0.66333
0.875 0.42741 0.62606
0.0 0.46659 0.125
0.5 0.60229 0.62367
0.0 0.72027 0.39784
0.1875 0.62712 0.31144
0.8125 0.48329 0.0625
"""
coord= "relative"
cages="""
12 -0.25 -0.43317 -0.25
12 0.5 -0.5 1.0
14 0.0 -0.29673 -0.12212
16 0.0 -0.57224 -0.12744
12 0.5 -0.3541 1.00425
14 0.5 -0.29673 0.62212
15 0.5 0.15756 0.59669
15 -0.5 -0.15756 -0.59669
14 0.0 0.29673 0.12212
15 0.5 0.20097 0.09562
14 0.5 0.06099 -0.12496
12 0.0 0.5 0.5
16 0.5 -0.57224 0.62744
12 0.25 0.43317 0.25
12 0.25 0.71808 0.25
14 0.0 0.06099 0.62496
12 0.25 -0.07999 0.75
16 0.5 0.57224 0.37256
12 -0.25 -0.71808 -0.25
14 0.0 -0.06099 -0.62496
12 0.0 -0.3541 -0.50425
15 0.5 -0.20097 0.90438
12 0.25 -0.71808 0.75
15 0.0 -0.20097 -0.40438
12 -0.25 0.43317 -0.75
15 0.0 0.15756 -0.09669
12 0.25 0.07999 0.25
12 0.25 -0.43317 0.75
12 0.0 0.3541 0.50425
12 0.5 0.0 0.5
15 0.0 0.20097 0.40438
14 0.5 -0.06099 1.12496
14 0.5 0.29673 0.37788
12 0.5 0.3541 -0.00425
16 0.0 0.57224 0.12744
15 0.0 -0.15756 1.09669
12 -0.25 0.07999 -0.75
12 -0.25 -0.07999 -0.25
12 0.0 0.0 0.0
12 -0.25 0.71808 -0.75
"""
bondlen = 3
cell = """
13.48668961126573 49.74996750158407 17.7840545616594
"""
density = 0.5711335266668782
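# This lattice definition is consumed by the GenIce command line; an
# illustrative invocation (exact flags vary by GenIce version):
#     genice Struct71 > Struct71.gro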
from genice.cell import cellvectors
cell = cellvectors(a=13.48668961126573,
b=49.74996750158407,
c=17.7840545616594) | PypiClean |
/Flask_AdminLTE3-1.0.9-py3-none-any.whl/flask_adminlte3/static/plugins/moment/locale/ar-tn.js |
;(function (global, factory) {
typeof exports === 'object' && typeof module !== 'undefined'
&& typeof require === 'function' ? factory(require('../moment')) :
typeof define === 'function' && define.amd ? define(['../moment'], factory) :
factory(global.moment)
}(this, (function (moment) { 'use strict';
//! moment.js locale configuration
var arTn = moment.defineLocale('ar-tn', {
months: 'جانفي_فيفري_مارس_أفريل_ماي_جوان_جويلية_أوت_سبتمبر_أكتوبر_نوفمبر_ديسمبر'.split(
'_'
),
monthsShort: 'جانفي_فيفري_مارس_أفريل_ماي_جوان_جويلية_أوت_سبتمبر_أكتوبر_نوفمبر_ديسمبر'.split(
'_'
),
weekdays: 'الأحد_الإثنين_الثلاثاء_الأربعاء_الخميس_الجمعة_السبت'.split('_'),
weekdaysShort: 'أحد_إثنين_ثلاثاء_أربعاء_خميس_جمعة_سبت'.split('_'),
weekdaysMin: 'ح_ن_ث_ر_خ_ج_س'.split('_'),
weekdaysParseExact: true,
longDateFormat: {
LT: 'HH:mm',
LTS: 'HH:mm:ss',
L: 'DD/MM/YYYY',
LL: 'D MMMM YYYY',
LLL: 'D MMMM YYYY HH:mm',
LLLL: 'dddd D MMMM YYYY HH:mm',
},
calendar: {
sameDay: '[اليوم على الساعة] LT',
nextDay: '[غدا على الساعة] LT',
nextWeek: 'dddd [على الساعة] LT',
lastDay: '[أمس على الساعة] LT',
lastWeek: 'dddd [على الساعة] LT',
sameElse: 'L',
},
relativeTime: {
future: 'في %s',
past: 'منذ %s',
s: 'ثوان',
ss: '%d ثانية',
m: 'دقيقة',
mm: '%d دقائق',
h: 'ساعة',
hh: '%d ساعات',
d: 'يوم',
dd: '%d أيام',
M: 'شهر',
MM: '%d أشهر',
y: 'سنة',
yy: '%d سنوات',
},
week: {
dow: 1, // Monday is the first day of the week.
doy: 4, // The week that contains Jan 4th is the first week of the year.
},
});
return arTn;
}))); | PypiClean |
/Eddy's%20Memory%20Game-1.0.zip/Eddy's Memory Game-1.0/functions.py | __author__="Edward McKnight (EM-Creations.co.uk)"
__date__ ="$05-Jul-2011 09:48:41$"
def clearScreen(numlines=100):
import os
if os.name == "posix":
# Unix/Linux/MacOS/BSD/etc
os.system('clear')
elif os.name in ("nt", "dos", "ce"):
# DOS/Windows
os.system('CLS')
else:
# Fallback for other operating systems.
print '\n' * numlines
return;
def displayMainMenu():
print "|=======================================|"
print "|============== MAIN MENU ==============|"
print "|=======================================|"
print "| Option | Description |"
print "|---------------------------------------|"
print "| 1. | Set up new game |"
print "| 2. | Word Database |"
print "| 3. | Highscores |"
print "| 4. | Credits |"
print "| 5. | Exit |"
print "|=======================================|"
return;
def displayWordDBMenu():
print "|=======================================|"
print "|============ Word Database ============|"
print "|=======================================|"
print "| Option | Description |"
print "|---------------------------------------|"
print "| 1. | View Words |"
print "| 2. | Add word |"
print "| 3. | Remove Word |"
print "| 4. | Return to main menu |"
print "|=======================================|"
return;
def displayHighScores():
print "|==========================================|"
print "|============== Highscores ==============|"
print "|==========================================|"
print "| Rank | Name | Level Reached | Difficulty |"
print "|------------------------------------------|"
highScores = getHighScores()
i = 0;
while (i < len(highScores)):
print "| "+str(i+1)+" | "+highScores[i][0]+" | "+highScores[i][1]+" | "+highScores[i][2]
i += 1;
print "|==========================================|"
return;
def displayCurrentWords():
words = getCurrentWords()
print "|========= Current Words =========|"
i = 0
while (i < len(words)):
print str(i+1)+". "+words[i]
i += 1
return;
def getCurrentWords():
wordFile = open("words.emg", "r")
words = wordFile.read()
words = words.split(",")
wordFile.close()
return words;
def getHighScores():
hsFile = open("highscores.emg", "r")
highScores = hsFile.read()
highScores = highScores.split(";")
i = 0
while (i < len(highScores)):
highScores[i] = highScores[i].split(",")
i += 1;
highScores = sorted(highScores, key=lambda k: int(k[1]), reverse=True) # Sort the array so the person with the highest score is listed first
hsFile.close()
return highScores;
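# Expected highscores.emg layout (inferred from the parsing above and the
# writer in saveHighScore below): ';'-separated records of name,level,difficulty,
# e.g. alice,12,hard;bob,9,medium;carol,7,easy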
def saveHighScore(level, difficulty):
print
print "|========= Save HighScore =========|"
print "Please type in your name."
print
response = str(raw_input("Your Name: "))
# This is effectively a do while loop
response = response.strip()
if (len(response) < 3 or len(response) > 29 or not response.isalpha()):
print
print "Your name must be longer than 2 characters and shorter than 29 characters and must not contain numbers!"
while (len(response) < 3 or len(response) > 29 or not response.isalpha()):
response = str(raw_input("Your Name: "))
response = response.strip()
if (len(response) > 2 and len(response) < 29 and response.isalpha()):
break
else:
print
print "Your name must be longer than 2 characters and shorter than 29 characters and must not contain numbers!"
name = response.lower()
hsFile = open("highscores.emg", "a")
hsFile.write(";"+name+","+str(level)+","+str(difficulty))
hsFile.close()
print "Your highscore has been saved!"
return;
def addWord():
print "|========= Add Word =========|"
print "Please type in a word which is between 3 and 28 characters that you wish to add to the word database."
print
response = str(raw_input("New word: "))
# This is effectively a do while loop
response = response.strip()
if (len(response) < 3 or len(response) > 29 or not response.isalpha()):
print
print "The word must be longer than 2 characters and shorter than 29 characters and must not contain numbers!"
while (len(response) < 3 or len(response) > 29 or not response.isalpha()):
response = str(raw_input("New word: "))
response = response.strip()
if (len(response) > 2 and len(response) < 29 and response.isalpha()):
break
else:
print
print "The word must be longer than 2 characters and shorter than 29 characters and must not contain numbers!"
newWord = response.lower()
wordFile = open("words.emg", "a")
wordFile.write(","+newWord)
wordFile.close()
print "New word successfully added!"
return;
def removeWord():
print "|========= Remove Word =========|"
print "Please type in the word you want to remove from the word database."
print
response = str(raw_input("Word to remove: "))
# This is effectively a do while loop
response = response.strip()
response = response.lower()
words = getCurrentWords()
wordFound = response in words #Check if the response variable is found within the words list
if (wordFound):
words.remove(response)
wordFile = open("words.emg", "w")
wordFile.write(words[0])
i = 1
while (i < len(words)):
wordFile.write(","+words[i])
i += 1
wordFile.close()
print "\""+response+"\" successfully removed from the word database."
else:
print "\""+response+"\" is not in the word database."
return;
def displayCredits():
print "|======================================|"
print "|========= Eddy's Memory Game =========|"
print "|======================================|"
print "| Created by Edward McKnight |"
print "| Version 1.0 |"
print "| Programmed in Python |"
print "| www.EM-Creations.co.uk |"
print "|======================================|"
return;
def playWordGame(difficulty):
import random
import time
if (difficulty == 'easy'):
pauseTime = 7
elif (difficulty == 'medium'):
pauseTime = 5
elif (difficulty == 'hard'):
pauseTime = 3
print
print "You are about to play the word version of Eddy's Memory Game.\nYou will be given "+str(pauseTime)+" seconds to remember the sequence of words before having to retype them."
print "Note that after the first level you need to include a comma (,) between words!"
print
raw_input("Press enter when you're ready to begin...")
print
words = getCurrentWords()
wordSequence = ""
level = 1 # Set level to 1
while(True):
print "Level "+str(level)+":"
newWord = random.randrange(0, len(words))
newWord = words[newWord]
if (level > 1):
wordSequence = wordSequence+","+newWord
else:
wordSequence = newWord
print "Remember this word: "+newWord
time.sleep(pauseTime)
clearScreen()
userSequence = raw_input("Please type in the sequence of words: ")
if (userSequence != wordSequence):
print "Incorrect! You have lost! You got to level: "+str(level)
print "The correct answer was: "+wordSequence
print
response = str(raw_input("Would you like to save your score? (y/n): "))
# This is effectively a do while loop
if (response != 'y' and response != 'n'):
print
print "Please type either y or n."
while (response != 'y' and response != 'n'):
response = str(raw_input("Would you like to save your score? (y/n): "))
if (response == 'y' or response == 'n'):
break;
else:
print
print "Please type either y or n."
if (response == 'y'):
saveHighScore(level, difficulty)
print
raw_input("Press enter to return to the Main Menu...")
else:
raw_input("Press enter to return to the Main Menu...")
break;
break;
else:
print "Correct! Moving onto the next level..."
print
            level += 1  # Increment level
return;
def playNumberGame(difficulty):
import random
import time
if (difficulty == 'easy'):
pauseTime = 7
elif (difficulty == 'medium'):
pauseTime = 5
elif (difficulty == 'hard'):
pauseTime = 3
print
print "You are about to play the number version of Eddy's Memory Game.\nYou will be given "+str(pauseTime)+" seconds to remember the sequence of numbers before having to retype them."
print
raw_input("Press enter when you're ready to begin...")
numberSequence = ""
level = 1 # Set level to 1
while(True):
print "Level "+str(level)+":"
newNum = random.randrange(0, 9)
if (level > 1):
numberSequence = numberSequence+str(newNum)
else:
numberSequence = str(newNum)
print "Remember this number: "+str(newNum)
time.sleep(pauseTime)
clearScreen()
userSequence = raw_input("Please type in the sequence of numbers: ")
if (userSequence != numberSequence):
print "Incorrect! You have lost! You got to level: "+str(level)
print "The correct answer was: "+numberSequence
print
response = str(raw_input("Would you like to save your score? (y/n): "))
# This is effectively a do while loop
if (response != 'y' and response != 'n'):
print
print "Please type either y or n."
while (response != 'y' and response != 'n'):
response = str(raw_input("Would you like to save your score? (y/n): "))
if (response == 'y' or response == 'n'):
break;
else:
print
print "Please type either y or n."
if (response == 'y'):
saveHighScore(level, difficulty)
print
raw_input("Press enter to return to the Main Menu...")
else:
raw_input("Press enter to return to the Main Menu...")
break;
break;
else:
print "Correct! Moving onto the next level..."
print
            level += 1  # Increment level
return; | PypiClean |
/CircleBlock-1.1.0.tar.gz/CircleBlock-1.1.0/README.md | <div style="display: flex; justify-content: center;">
<img src="images/logo.png" alt="CircleBlock Logo">
</div>
<div style="display: flex; justify-content: center;">
<h1>CircleBlock</h1>
</div>
CircleBlock is a class that monitors the file system in the project root directory. When it detects a change, it automatically updates the exportable functions of each module in each package's `__init__.py` file.
[](README.md) [](README.ko.md)
## Features
1. Start/stop file system monitoring
2. Get a list of exportable functions within a module
3. Initialize and update all `__init__.py` files in a directory
## Installation
To install CircleBlock, use the following command:
```
pipenv install circleblock
```
## Usage (CLI)
### Start monitoring
To start monitoring the file system, use the following command:
```
ccbk run
```
### Stop monitoring
To stop monitoring the file system, use the following command:
```
ccbk stop
```
### Initialize and update `__init__.py` files
To initialize and update all `__init__.py` files in the project, use the following command:
```
ccbk --init
```
## Options
The options available for the `ccbk` command are as follows:
- `--project-root (-p)`: Project root directory path (default: current directory)
- `--log-level (-l)`: Log level (default: INFO)
- `--init (-i)`: Initialize and update all `__init__.py` files in the project (default: False)
## _Example_
Assume you have a project structure like this:
```
my_project/
├── package1/
│ ├── module1.py
│ ├── module2.py
│ └── __init__.py
└── package2/
├── module3.py
├── module4.py
└── __init__.py
```
If you run `ccbk run` in the `my_project` directory, CircleBlock will start monitoring the file system. Whenever there's a change in any of the modules, CircleBlock will automatically update the `__init__.py` files with the exportable functions.
For instance, if `module1.py` has the following content:
```
def func_a():
pass
def func_b():
pass
```
The `__init__.py` file in the `package1` directory will be updated with the following content:
```
from .module1 import (
func_a,
func_b,
)
```
This way, you can easily import these functions from the package itself:
```
from package1 import func_a, func_b
```
If you want to stop the file system monitoring, simply run the `ccbk stop` command. To initialize and update all `__init__.py` files in the project without starting the file system monitoring, use the `ccbk --init` command. | PypiClean |
/Nuitka_fixed-1.1.2-cp310-cp310-win_amd64.whl/nuitka/utils/StaticLibraries.py | import os
from nuitka.containers.OrderedSets import OrderedSet
from nuitka.PythonFlavors import (
isAnacondaPython,
isDebianPackagePython,
isNuitkaPython,
)
from nuitka.PythonVersions import (
getPythonABI,
getSystemPrefixPath,
python_version,
python_version_str,
)
from nuitka.Tracing import general
from .FileOperations import getFileContentByLine, getFileList
from .Utils import getLinuxDistribution, isDebianBasedLinux, isWin32Windows
_ldconf_paths = None
_static_lib_cache = {}
def locateStaticLinkLibrary(dll_name):
if dll_name not in _static_lib_cache:
_static_lib_cache[dll_name] = _locateStaticLinkLibrary(dll_name)
return _static_lib_cache[dll_name]
def _locateStaticLinkLibrary(dll_name):
    # singleton, pylint: disable=global-statement
global _ldconf_paths
if _ldconf_paths is None:
_ldconf_paths = OrderedSet()
        for conf_filename in getFileList("/etc/ld.so.conf.d", only_suffixes=".conf"):
            for conf_line in getFileContentByLine(conf_filename):
conf_line = conf_line.split("#", 1)[0]
conf_line = conf_line.strip()
if os.path.exists(conf_line):
_ldconf_paths.add(conf_line)
for ld_config_path in _ldconf_paths:
candidate = os.path.join(ld_config_path, "lib%s.a" % dll_name)
if os.path.exists(candidate):
return candidate
return None
_static_lib_python_path = False
def isDebianSuitableForStaticLinking():
dist_name, _base, dist_version = getLinuxDistribution()
if dist_name == "Debian":
if dist_version is None:
return True
try:
dist_version = tuple(int(x) for x in dist_version.split("."))
except ValueError:
# dist_version contains a non-numeric string such as "sid".
return True
return dist_version >= (10,)
else:
# TODO: Needs implementing potentially, Mint etc. are based
# on something that should be considered.
return True
def _getSystemStaticLibPythonPath():
# Return driven function with many cases, pylint: disable=too-many-branches,too-many-return-statements
sys_prefix = getSystemPrefixPath()
python_abi_version = python_version_str + getPythonABI()
if isNuitkaPython():
# Nuitka Python has this.
if isWin32Windows():
return os.path.join(
sys_prefix,
"libs",
"python" + python_abi_version.replace(".", "") + ".lib",
)
else:
return os.path.join(
sys_prefix,
"lib",
"libpython" + python_abi_version + ".a",
)
if isWin32Windows():
# The gcc used on Windows for Anaconda is far too old for winlibs gcc
# to use its library.
if isAnacondaPython():
return None
candidates = [
# Anaconda has this.
os.path.join(
sys_prefix,
"libs",
"libpython" + python_abi_version.replace(".", "") + ".dll.a",
),
# MSYS2 mingw64 Python has this.
os.path.join(
sys_prefix,
"lib",
"libpython" + python_abi_version + ".dll.a",
),
]
for candidate in candidates:
if os.path.exists(candidate):
return candidate
else:
candidate = os.path.join(
sys_prefix, "lib", "libpython" + python_abi_version + ".a"
)
if os.path.exists(candidate):
return candidate
# For Python2 this works. TODO: Figure out Debian and Python3.
if (
python_version < 0x300
and isDebianPackagePython()
and isDebianSuitableForStaticLinking()
):
candidate = locateStaticLinkLibrary("python" + python_abi_version)
else:
candidate = None
if candidate is not None and os.path.exists(candidate):
# Also check libz, can be missing
if not locateStaticLinkLibrary("z"):
general.warning(
"Error, missing 'libz-dev' installation needed for static lib-python."
)
return candidate
# This is not necessarily only for Python3 on Debian, but maybe others as well,
# but that's what's been tested.
if python_version >= 0x300 and isDebianPackagePython() and isDebianBasedLinux():
try:
import sysconfig
candidate = os.path.join(
sysconfig.get_config_var("LIBPL"),
"libpython" + python_abi_version + "-pic.a",
)
if os.path.exists(candidate):
return candidate
except ImportError:
# Cannot detect this properly for Python 2.6, but we don't care much
# about that anyway.
pass
return None
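# Usage sketch: callers go through the cached wrapper below; a None result means
# no usable static libpython was found for this Python flavor.
#     static_libpython = getSystemStaticLibPythonPath()
#     if static_libpython is None:
#         ...  # fall back to linking against the shared libpython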
def getSystemStaticLibPythonPath():
global _static_lib_python_path # singleton, pylint: disable=global-statement
if _static_lib_python_path is False:
_static_lib_python_path = _getSystemStaticLibPythonPath()
return _static_lib_python_path | PypiClean |
/MiWork-2021.2.20.20.8.11-py3-none-any.whl/miwork/exception.py |
from __future__ import absolute_import, division, print_function, unicode_literals
from six import PY2
if PY2:
def implements_to_string(cls):
cls.__unicode__ = cls.__str__
cls.__str__ = lambda x: x.__unicode__().encode('utf-8')
return cls
else:
def implements_to_string(x):
return x
@implements_to_string
class OpenLarkException(Exception):
    def __init__(self, *args, **kwargs):
        """Base Exception
        """
self.code = kwargs.pop('code', None) or getattr(self, 'code', None) or 0
self.msg = kwargs.pop('msg', None) or getattr(self, 'msg', None)
self.url = kwargs.pop('url', None) or getattr(self, 'url', None)
def __str__(self):
if PY2:
if self.url:
return u'<{} code={} msg="{}" url="{}">'.format(self.__class__.__name__, self.code, self.msg, self.url)
else:
return u'<{} code={} msg="{}">'.format(self.__class__.__name__, self.code, self.msg)
else:
if self.url:
return '<{} code={} msg="{}" url="{}">'.format(self.__class__.__name__, self.code, self.msg, self.url)
else:
return '<{} code={} msg="{}">'.format(self.__class__.__name__, self.code, self.msg)
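# Usage sketch: every subclass below carries a default code/msg and renders
# readably on both Python 2 and 3 via implements_to_string:
#     try:
#         raise LarkSendMessageFailException(url='https://example.com/api')  # illustrative URL
#     except OpenLarkException as e:
#         print(e)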
# --------- Parameter errors defined by the OpenLark SDK
class LarkInvalidArguments(OpenLarkException):
code = 999901
msg = 'open_lark invalid arguments'
class LarkInvalidCallback(OpenLarkException):
code = 999902
msg = 'open_lark callback error'
class LarkGetAppTicketFail(OpenLarkException):
code = 999903
msg = 'get app_ticket fail'
class LarkUnknownError(OpenLarkException):
code = 999904
msg = 'unknown error'
# --------- Bot and server-side exceptions
class LarkSendMessageFailException(OpenLarkException):
code = 10002
msg = '发送消息失败'
class LarkRequestParamsInvalidException(OpenLarkException):
code = 10003
msg = '请求参数不合法'
class LarkGetUserInfoFailOrUserIDNotExistException(OpenLarkException):
code = 10004
msg = '获取用户信息失败或者用户 ID 不存在'
class LarkConflictAppIDException(OpenLarkException):
code = 10005
msg = '生成 token 的 app_id 和相关 chat、open_id 的 app_id 不一致'
class LarkGetOpenChatIDFailException(OpenLarkException):
code = 10009
msg = '获取 open_chat_id 失败'
class LarkForbiddenSendMessageException(OpenLarkException):
code = 10010
msg = '禁止发送消息,请检查 scope 权限,机器人可见性范围'
class LarkGetAppAccessTokenFailException(OpenLarkException):
code = 10012
msg = '获取 app access token 失败'
class LarkGetTenantAccessTokenFailException(OpenLarkException):
code = 10013 # 10014
msg = '获取 tenant access token 失败'
class LarkWrongAppSecretException(OpenLarkException):
code = 10015
msg = 'app_secret 不正确'
class LarkSendAppTicketFailException(OpenLarkException):
code = 10016
msg = '发送 app_ticket 失败'
class LarkUnsupportedUrgentTypeException(OpenLarkException):
code = 10019
msg = '加急类型不支持'
class LarkWrongMessageIDException(OpenLarkException):
code = 10020
msg = '消息 ID 不正确'
class LarkForbiddenUrgentException(OpenLarkException):
code = 10023
msg = '没有加急 scope 权限'
class LarkInvalidOpenChatIDException(OpenLarkException):
code = 10029
msg = 'open_chat_id 不合法'
class LarkBotNotInChatException(OpenLarkException):
code = 10030
msg = '机器人不在群里'
class LarkAllOpenIDInvalidException(OpenLarkException):
code = 10032
msg = '所有 open_id 都不合法'
class LarkUnsupportedCrossTenantException(OpenLarkException):
code = 10034
msg = '不支持跨企业操作'
class LarkGetMessageIDFailException(OpenLarkException):
code = 10037
msg = '获取 message_id 失败'
class LarkGetSSOAccessTokenFailException(OpenLarkException):
code = 11000
msg = '获取 sso_access_token 失败'
class LarkGetCheckSecurityTokenFailException(OpenLarkException):
code = 11001
msg = '获取 CheckSecurityToken 失败'
class LarkCheckOpenChatIDFailException(OpenLarkException):
code = 11100
msg = 'open_chat_id 不合法或者 chat 不存在'
class LarkOpenIDNotExistException(OpenLarkException):
code = 11101
msg = 'open_id 不存在'
class LarkGetOpenIDFailException(OpenLarkException):
code = 11102
msg = '查询用户 open_id 失败'
class LarkOpenDepartmentIDNotExistException(OpenLarkException):
code = 11103
msg = 'open_department_id 不存在'
class LarkGetOpenDepartmentIDFailException(OpenLarkException):
code = 11104
msg = '查询用户 open_department_id 失败'
class LarkEmployeeIDNotExistException(OpenLarkException):
code = 11105
msg = 'user_id 不存在'
class LarkGetEmployeeIDFailException(OpenLarkException):
code = 11106
msg = '查询用户 user_id 失败'
class LarkUpdateChatNameFailException(OpenLarkException):
code = 11200
msg = '更新群名称失败'
class LarkBotNotGroupAdminException(OpenLarkException):
code = 11201 # 11208
msg = '机器人不是群主'
class LarkOnlyChatAdminCanInviteUserException(OpenLarkException):
code = 11202
msg = '只有群主才能拉用户进群'
class LarkForbiddenBotBatchSendMessageToUserException(OpenLarkException):
code = 11203
msg = '机器人没有给用户批量发送权限'
class LarkForbiddenBotBatchSendMessageToDepartmentException(OpenLarkException):
code = 11204
msg = '机器人没有给部门批量发送权限'
class LarkAppHasNoBotException(OpenLarkException):
code = 11205
msg = '应用没有机器人'
class LarkUserCannotGrantToChatAdminException(OpenLarkException):
code = 11206
msg = '用户不在群中不能被设置为群主'
class LarkAppUnavailableException(OpenLarkException):
code = 11207
msg = 'app 不可用'
class LarkAppNotExistException(OpenLarkException):
code = 11209
msg = 'app 不存在'
class LarkAppUsageInfoNotExistException(OpenLarkException):
code = 11210
msg = 'AppUsageInfo 不存在'
class LarkInviteUserToChatInvalidParamsException(OpenLarkException):
code = 11211
msg = '拉人进群参数错误'
class LarkRemoveUserFromChatInvalidParamsException(OpenLarkException):
code = 11212
msg = '踢人出群参数错误'
class LarkUpdateChatInvalidParamsException(OpenLarkException):
code = 11213
msg = '更新群参数错误'
class LarkUploadImageInvalidParamsException(OpenLarkException):
code = 11214
msg = '上传图片参数错误'
class LarkEmptyChatIDException(OpenLarkException):
code = 11215
msg = 'chat_id 为空'
class LarkGetChatIDFailException(OpenLarkException):
code = 11216
msg = '获取chat_id失败'
class LarkInviteBotToChatFailException(OpenLarkException):
code = 11217
msg = '拉机器人进群失败'
class LarkBotInChatFullException(OpenLarkException):
code = 11218
msg = '群机器人已满'
class LarkUnsupportedChatCrossTenantException(OpenLarkException):
code = 11219
msg = '不支持 chat 跨租户'
class LarkForbiddenBotDisbandChatException(OpenLarkException):
code = 11220
msg = '禁止机器人解散群'
class LarkBotForbiddenToGetImageBelongToThemException(OpenLarkException):
code = 11221
msg = '机器人不能获取不属于自己的图片'
class LarkOwnerOfBotIsNotInChatException(OpenLarkException):
code = 11222
msg = '机器人的 Owner 不在群里'
class LarkNotOpenApplicationSendMessagePermissionException(OpenLarkException):
code = 11223
msg = '没有打开应用发消息权限'
class LarkInvalidMessageIDException(OpenLarkException):
code = 11224
msg = 'message_id 参数错误'
class LarkAppIsNotVisibleToUserException(OpenLarkException):
code = 11225
msg = '你的应用对用户不可见'
class LarkInvalidAppIDException(OpenLarkException):
code = 11226
msg = 'app_id 参数不对或者没传'
class LarkImageKeyNotExistException(OpenLarkException):
code = 11227
msg = ' image_key 不存在'
class LarkBotIsNotMessageOwnerException(OpenLarkException):
code = 11234
msg = 'bot非消息owner'
class LarkBanAtALLException(OpenLarkException):
code = 11235
msg = '禁止@所有人'
class LarkUserNotActiveException(OpenLarkException):
code = 11236
msg = '用户已离职'
class LarkChatDisbandedException(OpenLarkException):
code = 11237
msg = '群聊已解散'
class LarkMessageTooOldException(OpenLarkException):
code = 11238
msg = '消息过久,不能撤销'
class LarkNoPermissionToGotException(OpenLarkException):
code = 11239
msg = '无权限获取'
class LarkInvalidTenantAccessTokenException(OpenLarkException):
code = 99991663
msg = 'tenant access token 无效'
class LarkInvalidAppAccessTokenException(OpenLarkException):
code = 99991664
msg = 'app access token 无效'
class LarkInvalidTenantCodeException(OpenLarkException):
code = 99991665
msg = 'tenant code 无效'
class LarkInvalidAppTicketException(OpenLarkException):
code = 99991666
msg = 'app ticket 无效'
class LarkFrequencyLimitException(OpenLarkException):
code = 99991400
msg = '发消息频率超过频控限制,目前每个AppID每个接口50/s、1000/min的限制'
class LarkInternalException(OpenLarkException):
code = 20000
msg = '内部异常'
# --------- Approval exceptions / approval error codes
# 40xx codes come from the approval v1 API
class LarkApprovalNotExistException(OpenLarkException):
code = 4002
msg = 'approval not exist'
class LarkApprovalSubscriptionExistException(OpenLarkException):
code = 4007
msg = 'subscription exist'
# 600xxx codes come from the approval v3 API
class LarkApprovalInvalidRequestParamsException(OpenLarkException):
code = 60001
msg = '请求参数错误'
class LarkApprovalApprovalCodeNotFoundException(OpenLarkException):
code = 60002
msg = '审批定义 approval_code 找不到'
class LarkApprovalInstanceCodeNotFoundException(OpenLarkException):
code = 60003
msg = '审批实例 instance_code 找不到'
class LarkApprovalUserNotFoundException(OpenLarkException):
code = 60004
msg = '用户找不到'
class LarkApprovalForbiddenException(OpenLarkException):
code = 60009
msg = '权限不足'
class LarkApprovalTaskIDNotFoundException(OpenLarkException):
code = 60010
msg = '审批任务 task_id 找不到'
class LarkApprovalDepartmentValidFailedException(OpenLarkException):
code = 60005
msg = '部门验证失败'
class LarkApprovalFormValidFailedException(OpenLarkException):
code = 60006
msg = '表单验证失败'
class LarkApprovalNeedPayException(OpenLarkException):
code = 60011
msg = '该审批为付费审批,免费版用户不能发起这个审批'
class LarkApprovalInstanceCodeConflictException(OpenLarkException):
code = 60012
msg = '审批实例 uuid 冲突'
# --------- Approval exceptions / approval error codes
# --------- Cloud space (Drive)
class LarkDriveWrongRequestJsonException(OpenLarkException):
code = 90201
msg = '请求体不是一个 json'
class LarkDriveWrongRangeException(OpenLarkException):
code = 90202
msg = '请求中 range 格式有误'
class LarkDriveFailException(OpenLarkException):
code = 90203
msg = '不是预期内的 fail'
class LarkDriveWrongRequestBodyException(OpenLarkException):
code = 90204
msg = '请求体有误'
class LarkDriveInvalidUsersException(OpenLarkException):
code = 90205
msg = '非法的 user'
class LarkDriveEmptySheetIDException(OpenLarkException):
code = 90206
msg = 'sheet_id 为空'
class LarkDriveEmptySheetTitleException(OpenLarkException):
code = 90207
msg = 'sheet 名称为空'
class LarkDriveSameSheetIDOrTitleException(OpenLarkException):
code = 90208
msg = '请求中有相同的 sheet_id 或 title'
class LarkDriveExistSheetIDException(OpenLarkException):
code = 90209
msg = 'sheet_id 已经存在'
class LarkDriveExistSheetTitleException(OpenLarkException):
code = 90210
msg = 'sheet title 已经存在'
class LarkDriveWrongSheetIDException(OpenLarkException):
code = 90211
msg = '错误的 sheet_id'
class LarkDriveWrongRowOrColException(OpenLarkException):
code = 90212
msg = '非法的行列'
class LarkDrivePermissionFailException(OpenLarkException):
code = 90213
msg = '没有文件的权限 forbidden'
class LarkDriveSpreadSheetNotFoundException(OpenLarkException):
code = 90214
msg = 'sheet 没有找到'
class LarkDriveSheetIDNotFoundException(OpenLarkException):
code = 90215
msg = 'sheet_id 没有找到'
class LarkDriveEmptyValueException(OpenLarkException):
code = 90216
msg = '请求中有空值'
class LarkDriveTooManyRequestException(OpenLarkException):
code = 90217
msg = '请求太频繁'
class LarkDriveTimeoutException(OpenLarkException):
code = 96402
msg = '超时'
class LarkDriveProcessingException(OpenLarkException):
code = 96403
msg = '请求正在处理中'
class LarkDriveLoginRequiredException(OpenLarkException):
code = 91404
msg = '需要登录'
class LarkDriveFailedException(OpenLarkException):
code = 90301 # 91201 / 96401
msg = '失败'
class LarkDriveOutOfLimitException(OpenLarkException):
code = 91206
msg = '超过限制'
class LarkDriveDuplicateException(OpenLarkException):
code = 91207
msg = '重复记录'
class LarkDriveForbiddenException(OpenLarkException):
code = 91002 # 90303 / 91204 / 91403
msg = '没有权限'
class LarkDriveInvalidOperationException(OpenLarkException):
code = 91003
msg = '操作异常'
class LarkDriveUserNoSharePermissionException(OpenLarkException):
code = 91004
msg = '用户没有共享权限'
class LarkDriveParamErrorException(OpenLarkException):
code = 90302 # 91001 / 91202 / 91401
msg = '参数错误'
class LarkDriveMetaDeletedException(OpenLarkException):
code = 90304 # 91205
msg = '文件已删除'
class LarkDriveMetaNotExistException(OpenLarkException):
code = 90305 # 91203 / 91402
msg = '文件不存在'
class LarkDriveReviewNotPassException(OpenLarkException):
code = 90306 # 91208
msg = '评论内容审核不通过'
class LarkDriveInternalErrorException(OpenLarkException):
code = 90399 # 95299 / 96201 / 96202 / 96001 / 95201 / 95201—95209
msg = '内部错误'
# --------- Cloud space (Drive)
# --------- Meeting rooms
class LarkMeetingRoomInvalidPageTokenException(OpenLarkException):
code = 100001
msg = 'page token 格式非法'
class LarkMeetingRoomInvalidFieldSelectionException(OpenLarkException):
code = 100002
msg = 'fields 中存在非法字段名'
class LarkMeetingRoomTimeFormatMustFollowRFC3339StandardException(OpenLarkException):
code = 100003
msg = '时间格式未遵循 RFC3339 标准'
class LarkMeetingRoomInvalidBuildingIDException(OpenLarkException):
code = 100004
msg = 'building ID 非法'
class LarkMeetingRoomInvalidRoomIDException(OpenLarkException):
code = 100005
msg = 'room ID 非法'
class LarkMeetingRoomInternalErrorException(OpenLarkException):
code = 105001
msg = '内部错误'
# --------- Meeting rooms
def gen_exception(code, url, msg=''):
    """Generate the exception matching a given error code.
:type code: int
:type url: str
:type msg: str
:rtype: OpenLarkException
"""
exceptions = {
        # Custom (SDK-defined)
999901: LarkInvalidArguments,
999902: LarkInvalidCallback,
999903: LarkGetAppTicketFail,
999904: LarkUnknownError,
        # Approval
4002: LarkApprovalNotExistException,
4007: LarkApprovalSubscriptionExistException,
60001: LarkApprovalInvalidRequestParamsException,
60002: LarkApprovalApprovalCodeNotFoundException,
60003: LarkApprovalInstanceCodeNotFoundException,
60004: LarkApprovalUserNotFoundException,
60009: LarkApprovalForbiddenException,
60010: LarkApprovalTaskIDNotFoundException,
60005: LarkApprovalDepartmentValidFailedException,
60006: LarkApprovalFormValidFailedException,
60011: LarkApprovalNeedPayException,
60012: LarkApprovalInstanceCodeConflictException,
        # Exceptions with extremely large numeric codes,
99991400: LarkFrequencyLimitException,
99991663: LarkInvalidTenantAccessTokenException,
99991664: LarkInvalidAppAccessTokenException,
99991665: LarkInvalidTenantCodeException,
99991666: LarkInvalidAppTicketException,
10002: LarkSendMessageFailException,
10003: LarkRequestParamsInvalidException,
10004: LarkGetUserInfoFailOrUserIDNotExistException,
10005: LarkConflictAppIDException,
10009: LarkGetOpenChatIDFailException,
10010: LarkForbiddenSendMessageException,
10012: LarkGetAppAccessTokenFailException,
10013: LarkGetTenantAccessTokenFailException,
10014: LarkGetTenantAccessTokenFailException,
10015: LarkWrongAppSecretException,
10016: LarkSendAppTicketFailException,
10019: LarkUnsupportedUrgentTypeException,
10020: LarkWrongMessageIDException,
10023: LarkForbiddenUrgentException,
10029: LarkInvalidOpenChatIDException,
10030: LarkBotNotInChatException,
10032: LarkAllOpenIDInvalidException,
10034: LarkUnsupportedCrossTenantException,
10037: LarkGetMessageIDFailException,
11000: LarkGetSSOAccessTokenFailException,
11001: LarkGetCheckSecurityTokenFailException,
11100: LarkCheckOpenChatIDFailException,
11101: LarkOpenIDNotExistException,
11102: LarkGetOpenIDFailException,
11103: LarkOpenDepartmentIDNotExistException,
11104: LarkGetOpenDepartmentIDFailException,
11105: LarkEmployeeIDNotExistException,
11106: LarkGetEmployeeIDFailException,
11200: LarkUpdateChatNameFailException,
11201: LarkBotNotGroupAdminException,
11208: LarkBotNotGroupAdminException,
11202: LarkOnlyChatAdminCanInviteUserException,
11203: LarkForbiddenBotBatchSendMessageToUserException,
11204: LarkForbiddenBotBatchSendMessageToDepartmentException,
11205: LarkAppHasNoBotException,
11206: LarkUserCannotGrantToChatAdminException,
11207: LarkAppUnavailableException,
11209: LarkAppNotExistException,
11210: LarkAppUsageInfoNotExistException,
11211: LarkInviteUserToChatInvalidParamsException,
11212: LarkRemoveUserFromChatInvalidParamsException,
11213: LarkUpdateChatInvalidParamsException,
11214: LarkUploadImageInvalidParamsException,
11215: LarkEmptyChatIDException,
11216: LarkGetChatIDFailException,
11217: LarkInviteBotToChatFailException,
11218: LarkBotInChatFullException,
11219: LarkUnsupportedChatCrossTenantException,
11220: LarkForbiddenBotDisbandChatException,
11221: LarkBotForbiddenToGetImageBelongToThemException,
11222: LarkOwnerOfBotIsNotInChatException,
11223: LarkNotOpenApplicationSendMessagePermissionException,
11224: LarkInvalidMessageIDException,
11225: LarkAppIsNotVisibleToUserException,
11226: LarkInvalidAppIDException,
11227: LarkImageKeyNotExistException,
11234: LarkBotIsNotMessageOwnerException,
11235: LarkBanAtALLException,
11236: LarkUserNotActiveException,
11237: LarkChatDisbandedException,
11238: LarkMessageTooOldException,
11239: LarkNoPermissionToGotException,
        # Cloud space (Drive)
90201: LarkDriveWrongRequestJsonException,
90202: LarkDriveWrongRangeException,
90203: LarkDriveFailException,
90204: LarkDriveWrongRequestBodyException,
90205: LarkDriveInvalidUsersException,
90206: LarkDriveEmptySheetIDException,
90207: LarkDriveEmptySheetTitleException,
90208: LarkDriveSameSheetIDOrTitleException,
90209: LarkDriveExistSheetIDException,
90210: LarkDriveExistSheetTitleException,
90211: LarkDriveWrongSheetIDException,
90212: LarkDriveWrongRowOrColException,
90213: LarkDrivePermissionFailException,
90214: LarkDriveSpreadSheetNotFoundException,
90215: LarkDriveSheetIDNotFoundException,
90216: LarkDriveEmptyValueException,
90217: LarkDriveTooManyRequestException,
96402: LarkDriveTimeoutException,
96403: LarkDriveProcessingException,
91404: LarkDriveLoginRequiredException,
91206: LarkDriveOutOfLimitException,
91207: LarkDriveDuplicateException,
91003: LarkDriveInvalidOperationException,
91004: LarkDriveUserNoSharePermissionException,
        # Meeting rooms
100001: LarkMeetingRoomInvalidPageTokenException,
100002: LarkMeetingRoomInvalidFieldSelectionException,
100003: LarkMeetingRoomTimeFormatMustFollowRFC3339StandardException,
100004: LarkMeetingRoomInvalidBuildingIDException,
100005: LarkMeetingRoomInvalidRoomIDException,
}
if code in exceptions:
return exceptions[code](code=code, msg=msg, url=url)
if 18000 <= code <= 20000:
return LarkInternalException(code=code, msg=msg, url=url)
if code in [4002, 60002]:
return LarkApprovalNotExistException(code=code, msg=msg, url=url)
if code in [90303, 91002, 91204, 91403]:
return LarkDriveForbiddenException(code=code, msg=msg, url=url)
if code in [90301, 91201, 96401]:
return LarkDriveFailedException(code=code, msg=msg, url=url)
if code in [90302, 91001, 91202, 91401]:
return LarkDriveParamErrorException(code=code, msg=msg, url=url)
if code in [90304, 91205]:
return LarkDriveMetaDeletedException(code=code, msg=msg, url=url)
if code in [90305, 91203, 91402]:
return LarkDriveMetaNotExistException(code=code, msg=msg, url=url)
if code in [90306, 91208]:
return LarkDriveReviewNotPassException(code=code, msg=msg, url=url)
if code in [90399, 95201, 95299, 96201, 96202, 96001] or (code >= 95201 and code <= 95209):
return LarkDriveInternalErrorException(code=code, msg=msg, url=url)
return OpenLarkException(code=code, msg=msg, url=url) | PypiClean |
/Flask_AdminLTE3-1.0.9-py3-none-any.whl/flask_adminlte3/static/plugins/jsgrid/jsgrid.min.js | !function(a,b,c){function d(a,c){var d=b(a);d.data(f,this),this._container=d,this.data=[],this.fields=[],this._editingRow=null,this._sortField=null,this._sortOrder=i,this._firstDisplayingPage=1,this._init(c),this.render()}var e="JSGrid",f=e,g="JSGridItem",h="JSGridEditRow",i="asc",j="desc",k="{first}",l="{pages}",m="{prev}",n="{next}",o="{last}",p="{pageIndex}",q="{pageCount}",r="{itemCount}",s="javascript:void(0);",t=function(a,c){return b.isFunction(a)?a.apply(c,b.makeArray(arguments).slice(2)):a},u=function(a){var c=b.Deferred();return a&&a.then?a.then(function(){c.resolve.apply(c,arguments)},function(){c.reject.apply(c,arguments)}):c.resolve(a),c.promise()},v={loadData:b.noop,insertItem:b.noop,updateItem:b.noop,deleteItem:b.noop};d.prototype={width:"auto",height:"auto",updateOnResize:!0,rowClass:b.noop,rowRenderer:null,rowClick:function(a){this.editing&&this.editItem(b(a.event.target).closest("tr"))},rowDoubleClick:b.noop,noDataContent:"Not found",noDataRowClass:"jsgrid-nodata-row",heading:!0,headerRowRenderer:null,headerRowClass:"jsgrid-header-row",headerCellClass:"jsgrid-header-cell",filtering:!1,filterRowRenderer:null,filterRowClass:"jsgrid-filter-row",inserting:!1,insertRowRenderer:null,insertRowClass:"jsgrid-insert-row",editing:!1,editRowRenderer:null,editRowClass:"jsgrid-edit-row",confirmDeleting:!0,deleteConfirm:"Are you sure?",selecting:!0,selectedRowClass:"jsgrid-selected-row",oddRowClass:"jsgrid-row",evenRowClass:"jsgrid-alt-row",cellClass:"jsgrid-cell",sorting:!1,sortableClass:"jsgrid-header-sortable",sortAscClass:"jsgrid-header-sort jsgrid-header-sort-asc",sortDescClass:"jsgrid-header-sort jsgrid-header-sort-desc",paging:!1,pagerContainer:null,pageIndex:1,pageSize:20,pageButtonCount:15,pagerFormat:"Pages: {first} {prev} {pages} {next} {last} {pageIndex} of {pageCount}",pagePrevText:"Prev",pageNextText:"Next",pageFirstText:"First",pageLastText:"Last",pageNavigatorNextText:"...",pageNavigatorPrevText:"...",pagerContainerClass:"jsgrid-pager-container",pagerClass:"jsgrid-pager",pagerNavButtonClass:"jsgrid-pager-nav-button",pagerNavButtonInactiveClass:"jsgrid-pager-nav-inactive-button",pageClass:"jsgrid-pager-page",currentPageClass:"jsgrid-pager-current-page",customLoading:!1,pageLoading:!1,autoload:!1,controller:v,loadIndication:!0,loadIndicationDelay:500,loadMessage:"Please, wait...",loadShading:!0,invalidMessage:"Invalid data entered!",invalidNotify:function(c){var d=b.map(c.errors,function(a){return a.message||null});a.alert([this.invalidMessage].concat(d).join("\n"))},onInit:b.noop,onRefreshing:b.noop,onRefreshed:b.noop,onPageChanged:b.noop,onItemDeleting:b.noop,onItemDeleted:b.noop,onItemInserting:b.noop,onItemInserted:b.noop,onItemEditing:b.noop,onItemUpdating:b.noop,onItemUpdated:b.noop,onItemInvalid:b.noop,onDataLoading:b.noop,onDataLoaded:b.noop,onOptionChanging:b.noop,onOptionChanged:b.noop,onError:b.noop,invalidClass:"jsgrid-invalid",containerClass:"jsgrid",tableClass:"jsgrid-table",gridHeaderClass:"jsgrid-grid-header",gridBodyClass:"jsgrid-grid-body",_init:function(a){b.extend(this,a),this._initLoadStrategy(),this._initController(),this._initFields(),this._attachWindowLoadResize(),this._attachWindowResizeCallback(),this._callEventHandler(this.onInit)},loadStrategy:function(){return this.pageLoading?new jsGrid.loadStrategies.PageLoadingStrategy(this):new 
jsGrid.loadStrategies.DirectLoadingStrategy(this)},_initLoadStrategy:function(){this._loadStrategy=t(this.loadStrategy,this)},_initController:function(){this._controller=b.extend({},v,t(this.controller,this))},renderTemplate:function(a,b,d){args=[];for(var e in d)args.push(d[e]);return args.unshift(a,b),a=t.apply(null,args),a===c||null===a?"":a},loadIndicator:function(a){return new jsGrid.LoadIndicator(a)},validation:function(a){return jsGrid.Validation&&new jsGrid.Validation(a)},_initFields:function(){var a=this;a.fields=b.map(a.fields,function(c){if(b.isPlainObject(c)){var d=c.type&&jsGrid.fields[c.type]||jsGrid.Field;c=new d(c)}return c._grid=a,c})},_attachWindowLoadResize:function(){b(a).on("load",b.proxy(this._refreshSize,this))},_attachWindowResizeCallback:function(){this.updateOnResize&&b(a).on("resize",b.proxy(this._refreshSize,this))},_detachWindowResizeCallback:function(){b(a).off("resize",this._refreshSize)},option:function(a,b){var c,d;return 1===arguments.length?this[a]:(c={option:a,oldValue:this[a],newValue:b},this._callEventHandler(this.onOptionChanging,c),this._handleOptionChange(c.option,c.newValue),d={option:c.option,value:c.newValue},void this._callEventHandler(this.onOptionChanged,d))},fieldOption:function(a,b,c){return a=this._normalizeField(a),2===arguments.length?a[b]:(a[b]=c,void this._renderGrid())},_handleOptionChange:function(a,b){switch(this[a]=b,a){case"width":case"height":this._refreshSize();break;case"rowClass":case"rowRenderer":case"rowClick":case"rowDoubleClick":case"noDataRowClass":case"noDataContent":case"selecting":case"selectedRowClass":case"oddRowClass":case"evenRowClass":this._refreshContent();break;case"pageButtonCount":case"pagerFormat":case"pagePrevText":case"pageNextText":case"pageFirstText":case"pageLastText":case"pageNavigatorNextText":case"pageNavigatorPrevText":case"pagerClass":case"pagerNavButtonClass":case"pageClass":case"currentPageClass":case"pagerRenderer":this._refreshPager();break;case"fields":this._initFields(),this.render();break;case"data":case"editing":case"heading":case"filtering":case"inserting":case"paging":this.refresh();break;case"loadStrategy":case"pageLoading":this._initLoadStrategy(),this.search();break;case"pageIndex":this.openPage(b);break;case"pageSize":this.refresh(),this.search();break;case"editRowRenderer":case"editRowClass":this.cancelEdit();break;case"updateOnResize":this._detachWindowResizeCallback(),this._attachWindowResizeCallback();break;case"invalidNotify":case"invalidMessage":break;default:this.render()}},destroy:function(){this._detachWindowResizeCallback(),this._clear(),this._container.removeData(f)},render:function(){return this._renderGrid(),this.autoload?this.loadData():b.Deferred().resolve().promise()},_renderGrid:function(){this._clear(),this._container.addClass(this.containerClass).css("position","relative").append(this._createHeader()).append(this._createBody()),this._pagerContainer=this._createPagerContainer(),this._loadIndicator=this._createLoadIndicator(),this._validation=this._createValidation(),this.refresh()},_createLoadIndicator:function(){return t(this.loadIndicator,this,{message:this.loadMessage,shading:this.loadShading,container:this._container})},_createValidation:function(){return t(this.validation,this)},_clear:function(){this.cancelEdit(),clearTimeout(this._loadingTimer),this._pagerContainer&&this._pagerContainer.empty(),this._container.empty().css({position:"",width:"",height:""})},_createHeader:function(){var 
a=this._headerRow=this._createHeaderRow(),c=this._filterRow=this._createFilterRow(),d=this._insertRow=this._createInsertRow(),e=this._headerGrid=b("<table>").addClass(this.tableClass).append(a).append(c).append(d),f=this._header=b("<div>").addClass(this.gridHeaderClass).addClass(this._scrollBarWidth()?"jsgrid-header-scrollbar":"").append(e);return f},_createBody:function(){var a=this._content=b("<tbody>"),c=this._bodyGrid=b("<table>").addClass(this.tableClass).append(a),d=this._body=b("<div>").addClass(this.gridBodyClass).append(c).on("scroll",b.proxy(function(a){this._header.scrollLeft(a.target.scrollLeft)},this));return d},_createPagerContainer:function(){var a=this.pagerContainer||b("<div>").appendTo(this._container);return b(a).addClass(this.pagerContainerClass)},_eachField:function(a){var c=this;b.each(this.fields,function(b,d){d.visible&&a.call(c,d,b)})},_createHeaderRow:function(){if(b.isFunction(this.headerRowRenderer))return b(this.renderTemplate(this.headerRowRenderer,this));var a=b("<tr>").addClass(this.headerRowClass);return this._eachField(function(c,d){var e=this._prepareCell("<th>",c,"headercss",this.headerCellClass).append(this.renderTemplate(c.headerTemplate,c)).appendTo(a);this.sorting&&c.sorting&&e.addClass(this.sortableClass).on("click",b.proxy(function(){this.sort(d)},this))}),a},_prepareCell:function(a,c,d,e){return b(a).css("width",c.width).addClass(e||this.cellClass).addClass(d&&c[d]||c.css).addClass(c.align?"jsgrid-align-"+c.align:"")},_createFilterRow:function(){if(b.isFunction(this.filterRowRenderer))return b(this.renderTemplate(this.filterRowRenderer,this));var a=b("<tr>").addClass(this.filterRowClass);return this._eachField(function(b){this._prepareCell("<td>",b,"filtercss").append(this.renderTemplate(b.filterTemplate,b)).appendTo(a)}),a},_createInsertRow:function(){if(b.isFunction(this.insertRowRenderer))return b(this.renderTemplate(this.insertRowRenderer,this));var a=b("<tr>").addClass(this.insertRowClass);return this._eachField(function(b){this._prepareCell("<td>",b,"insertcss").append(this.renderTemplate(b.insertTemplate,b)).appendTo(a)}),a},_callEventHandler:function(a,c){return a.call(this,b.extend(c,{grid:this})),c},reset:function(){return this._resetSorting(),this._resetPager(),this._loadStrategy.reset()},_resetPager:function(){this._firstDisplayingPage=1,this._setPage(1)},_resetSorting:function(){this._sortField=null,this._sortOrder=i,this._clearSortingCss()},refresh:function(){this._callEventHandler(this.onRefreshing),this.cancelEdit(),this._refreshHeading(),this._refreshFiltering(),this._refreshInserting(),this._refreshContent(),this._refreshPager(),this._refreshSize(),this._callEventHandler(this.onRefreshed)},_refreshHeading:function(){this._headerRow.toggle(this.heading)},_refreshFiltering:function(){this._filterRow.toggle(this.filtering)},_refreshInserting:function(){this._insertRow.toggle(this.inserting)},_refreshContent:function(){var a=this._content;if(a.empty(),!this.data.length)return a.append(this._createNoDataRow()),this;for(var b=this._loadStrategy.firstDisplayIndex(),c=this._loadStrategy.lastDisplayIndex(),d=b;c>d;d++){var e=this.data[d];a.append(this._createRow(e,d))}},_createNoDataRow:function(){var a=0;return this._eachField(function(){a++}),b("<tr>").addClass(this.noDataRowClass).append(b("<td>").addClass(this.cellClass).attr("colspan",a).append(this.renderTemplate(this.noDataContent,this)))},_createRow:function(a,c){var d;return 
b.isFunction(this.rowRenderer)?d=this.renderTemplate(this.rowRenderer,this,{item:a,itemIndex:c}):(d=b("<tr>"),this._renderCells(d,a)),d.addClass(this._getRowClasses(a,c)).data(g,a).on("click",b.proxy(function(b){this.rowClick({item:a,itemIndex:c,event:b})},this)).on("dblclick",b.proxy(function(b){this.rowDoubleClick({item:a,itemIndex:c,event:b})},this)),this.selecting&&this._attachRowHover(d),d},_getRowClasses:function(a,b){var c=[];return c.push((b+1)%2?this.oddRowClass:this.evenRowClass),c.push(t(this.rowClass,this,a,b)),c.join(" ")},_attachRowHover:function(a){var c=this.selectedRowClass;a.hover(function(){b(this).addClass(c)},function(){b(this).removeClass(c)})},_renderCells:function(a,b){return this._eachField(function(c){a.append(this._createCell(b,c))}),this},_createCell:function(a,c){var d,e=this._getItemFieldValue(a,c),f={value:e,item:a};return d=b.isFunction(c.cellRenderer)?this.renderTemplate(c.cellRenderer,c,f):b("<td>").append(this.renderTemplate(c.itemTemplate||e,c,f)),this._prepareCell(d,c)},_getItemFieldValue:function(a,b){for(var c=b.name.split("."),d=a[c.shift()];d&&c.length;)d=d[c.shift()];return d},_setItemFieldValue:function(a,b,c){for(var d=b.name.split("."),e=a,f=d[0];e&&d.length;)a=e,f=d.shift(),e=a[f];if(!e)for(;d.length;)a=a[f]={},f=d.shift();a[f]=c},sort:function(a,c){return b.isPlainObject(a)&&(c=a.order,a=a.field),this._clearSortingCss(),this._setSortingParams(a,c),this._setSortingCss(),this._loadStrategy.sort()},_clearSortingCss:function(){this._headerRow.find("th").removeClass(this.sortAscClass).removeClass(this.sortDescClass)},_setSortingParams:function(a,b){a=this._normalizeField(a),b=b||(this._sortField===a?this._reversedSortOrder(this._sortOrder):i),this._sortField=a,this._sortOrder=b},_normalizeField:function(a){return b.isNumeric(a)?this.fields[a]:"string"==typeof a?b.grep(this.fields,function(b){return b.name===a})[0]:a},_reversedSortOrder:function(a){return a===i?j:i},_setSortingCss:function(){var a=this._visibleFieldIndex(this._sortField);this._headerRow.find("th").eq(a).addClass(this._sortOrder===i?this.sortAscClass:this.sortDescClass)},_visibleFieldIndex:function(a){return b.inArray(a,b.grep(this.fields,function(a){return a.visible}))},_sortData:function(){var a=this._sortFactor(),b=this._sortField;b&&this.data.sort(function(c,d){return a*b.sortingFunc(c[b.name],d[b.name])})},_sortFactor:function(){return this._sortOrder===i?1:-1},_itemsCount:function(){return this._loadStrategy.itemsCount()},_pagesCount:function(){var a=this._itemsCount(),b=this.pageSize;return Math.floor(a/b)+(a%b?1:0)},_refreshPager:function(){var a=this._pagerContainer;a.empty(),this.paging&&a.append(this._createPager());var b=this.paging&&this._pagesCount()>1;a.toggle(b)},_createPager:function(){var a;return a=b.isFunction(this.pagerRenderer)?b(this.pagerRenderer({pageIndex:this.pageIndex,pageCount:this._pagesCount()})):b("<div>").append(this._createPagerByFormat()),a.addClass(this.pagerClass),a},_createPagerByFormat:function(){var a=this.pageIndex,c=this._pagesCount(),d=this._itemsCount(),e=this.pagerFormat.split(" ");return b.map(e,b.proxy(function(e){var f=e;return e===l?f=this._createPages():e===k?f=this._createPagerNavButton(this.pageFirstText,1,a>1):e===m?f=this._createPagerNavButton(this.pagePrevText,a-1,a>1):e===n?f=this._createPagerNavButton(this.pageNextText,a+1,c>a):e===o?f=this._createPagerNavButton(this.pageLastText,c,c>a):e===p?f=a:e===q?f=c:e===r&&(f=d),b.isArray(f)?f.concat([" "]):[f," "]},this))},_createPages:function(){var 
a=this._pagesCount(),b=this.pageButtonCount,c=this._firstDisplayingPage,d=[];c>1&&d.push(this._createPagerPageNavButton(this.pageNavigatorPrevText,this.showPrevPages));for(var e=0,f=c;b>e&&a>=f;e++,f++)d.push(f===this.pageIndex?this._createPagerCurrentPage():this._createPagerPage(f));return a>c+b-1&&d.push(this._createPagerPageNavButton(this.pageNavigatorNextText,this.showNextPages)),d},_createPagerNavButton:function(a,c,d){return this._createPagerButton(a,this.pagerNavButtonClass+(d?"":" "+this.pagerNavButtonInactiveClass),d?function(){this.openPage(c)}:b.noop)},_createPagerPageNavButton:function(a,b){return this._createPagerButton(a,this.pagerNavButtonClass,b)},_createPagerPage:function(a){return this._createPagerButton(a,this.pageClass,function(){this.openPage(a)})},_createPagerButton:function(a,c,d){var e=b("<a>").attr("href",s).html(a).on("click",b.proxy(d,this));return b("<span>").addClass(c).append(e)},_createPagerCurrentPage:function(){return b("<span>").addClass(this.pageClass).addClass(this.currentPageClass).text(this.pageIndex)},_refreshSize:function(){this._refreshHeight(),this._refreshWidth()},_refreshWidth:function(){var a="auto"===this.width?this._getAutoWidth():this.width;this._container.width(a)},_getAutoWidth:function(){var a=this._headerGrid,b=this._header;a.width("auto");var c=a.outerWidth(),d=b.outerWidth()-b.innerWidth();return a.width(""),c+d},_scrollBarWidth:function(){var a;return function(){if(a===c){var d=b("<div style='width:50px;height:50px;overflow:hidden;position:absolute;top:-10000px;left:-10000px;'></div>"),e=b("<div style='height:100px;'></div>");d.append(e).appendTo("body");var f=e.innerWidth();d.css("overflow-y","auto");var g=e.innerWidth();d.remove(),a=f-g}return a}}(),_refreshHeight:function(){var a,b=this._container,c=this._pagerContainer,d=this.height;b.height(d),"auto"!==d&&(d=b.height(),a=this._header.outerHeight(!0),c.parents(b).length&&(a+=c.outerHeight(!0)),this._body.outerHeight(d-a))},showPrevPages:function(){var a=this._firstDisplayingPage,b=this.pageButtonCount;this._firstDisplayingPage=a>b?a-b:1,this._refreshPager()},showNextPages:function(){var a=this._firstDisplayingPage,b=this.pageButtonCount,c=this._pagesCount();this._firstDisplayingPage=a+2*b>c?c-b+1:a+b,this._refreshPager()},openPage:function(a){1>a||a>this._pagesCount()||(this._setPage(a),this._loadStrategy.openPage(a))},_setPage:function(a){var b=this._firstDisplayingPage,c=this.pageButtonCount;this.pageIndex=a,b>a&&(this._firstDisplayingPage=a),a>b+c-1&&(this._firstDisplayingPage=a-c+1),this._callEventHandler(this.onPageChanged,{pageIndex:a})},_controllerCall:function(a,c,d,e){if(d)return b.Deferred().reject().promise();this._showLoading();var f=this._controller;if(!f||!f[a])throw Error("controller has no method '"+a+"'");return u(f[a](c)).done(b.proxy(e,this)).fail(b.proxy(this._errorHandler,this)).always(b.proxy(this._hideLoading,this))},_errorHandler:function(){this._callEventHandler(this.onError,{args:b.makeArray(arguments)})},_showLoading:function(){this.loadIndication&&(clearTimeout(this._loadingTimer),this._loadingTimer=setTimeout(b.proxy(function(){this._loadIndicator.show()},this),this.loadIndicationDelay))},_hideLoading:function(){this.loadIndication&&(clearTimeout(this._loadingTimer),this._loadIndicator.hide())},search:function(a){return this._resetSorting(),this._resetPager(),this.loadData(a)},loadData:function(a){a=a||(this.filtering?this.getFilter():{}),b.extend(a,this._loadStrategy.loadParams(),this._sortingParams());var 
c=this._callEventHandler(this.onDataLoading,{filter:a});return this._controllerCall("loadData",a,c.cancel,function(a){a&&(this._loadStrategy.finishLoad(a),this._callEventHandler(this.onDataLoaded,{data:a}))})},getFilter:function(){var a={};return this._eachField(function(b){b.filtering&&this._setItemFieldValue(a,b,b.filterValue())}),a},_sortingParams:function(){return this.sorting&&this._sortField?{sortField:this._sortField.name,sortOrder:this._sortOrder}:{}},getSorting:function(){var a=this._sortingParams();return{field:a.sortField,order:a.sortOrder}},clearFilter:function(){var a=this._createFilterRow();return this._filterRow.replaceWith(a),this._filterRow=a,this.search()},insertItem:function(a){var c=a||this._getValidatedInsertItem();if(!c)return b.Deferred().reject().promise();var d=this._callEventHandler(this.onItemInserting,{item:c});return this._controllerCall("insertItem",c,d.cancel,function(a){a=a||c,this._loadStrategy.finishInsert(a),this._callEventHandler(this.onItemInserted,{item:a})})},_getValidatedInsertItem:function(){var a=this._getInsertItem();return this._validateItem(a,this._insertRow)?a:null},_getInsertItem:function(){var a={};return this._eachField(function(b){b.inserting&&this._setItemFieldValue(a,b,b.insertValue())}),a},_validateItem:function(a,c){var d=[],e={item:a,itemIndex:this._rowIndex(c),row:c};if(this._eachField(function(f){if(f.validate&&(c!==this._insertRow||f.inserting)&&(c!==this._getEditRow()||f.editing)){var g=this._getItemFieldValue(a,f),h=this._validation.validate(b.extend({value:g,rules:f.validate},e));this._setCellValidity(c.children().eq(this._visibleFieldIndex(f)),h),h.length&&d.push.apply(d,b.map(h,function(a){return{field:f,message:a}}))}}),!d.length)return!0;var f=b.extend({errors:d},e);return this._callEventHandler(this.onItemInvalid,f),this.invalidNotify(f),!1},_setCellValidity:function(a,b){a.toggleClass(this.invalidClass,!!b.length).attr("title",b.join("\n"))},clearInsert:function(){var a=this._createInsertRow();this._insertRow.replaceWith(a),this._insertRow=a,this.refresh()},editItem:function(a){var b=this.rowByItem(a);b.length&&this._editRow(b)},rowByItem:function(a){return a.jquery||a.nodeType?b(a):this._content.find("tr").filter(function(){return b.data(this,g)===a})},_editRow:function(a){if(this.editing){var b=a.data(g),c=this._callEventHandler(this.onItemEditing,{row:a,item:b,itemIndex:this._itemIndex(b)});if(!c.cancel){this._editingRow&&this.cancelEdit();var d=this._createEditRow(b);this._editingRow=a,a.hide(),d.insertBefore(a),a.data(h,d)}}},_createEditRow:function(a){if(b.isFunction(this.editRowRenderer))return b(this.renderTemplate(this.editRowRenderer,this,{item:a,itemIndex:this._itemIndex(a)}));var c=b("<tr>").addClass(this.editRowClass);return this._eachField(function(b){var d=this._getItemFieldValue(a,b);this._prepareCell("<td>",b,"editcss").append(this.renderTemplate(b.editTemplate||"",b,{value:d,item:a})).appendTo(c)}),c},updateItem:function(a,b){1===arguments.length&&(b=a);var c=a?this.rowByItem(a):this._editingRow;return(b=b||this._getValidatedEditedItem())?this._updateRow(c,b):void 0},_getValidatedEditedItem:function(){var a=this._getEditedItem();return this._validateItem(a,this._getEditRow())?a:null},_updateRow:function(a,c){var d=a.data(g),e=this._itemIndex(d),f=b.extend(!0,{},d,c),h=this._callEventHandler(this.onItemUpdating,{row:a,item:f,itemIndex:e,previousItem:d});return this._controllerCall("updateItem",f,h.cancel,function(g){var h=b.extend(!0,{},d);f=g||b.extend(!0,d,c);var 
i=this._finishUpdate(a,f,e);this._callEventHandler(this.onItemUpdated,{row:i,item:f,itemIndex:e,previousItem:h})})},_rowIndex:function(a){return this._content.children().index(b(a))},_itemIndex:function(a){return b.inArray(a,this.data)},_finishUpdate:function(a,b,c){this.cancelEdit(),this.data[c]=b;var d=this._createRow(b,c);return a.replaceWith(d),d},_getEditedItem:function(){var a={};return this._eachField(function(b){b.editing&&this._setItemFieldValue(a,b,b.editValue())}),a},cancelEdit:function(){this._editingRow&&(this._getEditRow().remove(),this._editingRow.show(),this._editingRow=null)},_getEditRow:function(){return this._editingRow&&this._editingRow.data(h)},deleteItem:function(b){var c=this.rowByItem(b);if(c.length&&(!this.confirmDeleting||a.confirm(t(this.deleteConfirm,this,c.data(g)))))return this._deleteRow(c)},_deleteRow:function(a){var b=a.data(g),c=this._itemIndex(b),d=this._callEventHandler(this.onItemDeleting,{row:a,item:b,itemIndex:c});return this._controllerCall("deleteItem",b,d.cancel,function(){this._loadStrategy.finishDelete(b,c),this._callEventHandler(this.onItemDeleted,{row:a,item:b,itemIndex:c})})}},b.fn.jsGrid=function(a){var e=b.makeArray(arguments),g=e.slice(1),h=this;return this.each(function(){var e,i=b(this),j=i.data(f);if(j)if("string"==typeof a){if(e=j[a].apply(j,g),e!==c&&e!==j)return h=e,!1}else j._detachWindowResizeCallback(),j._init(a),j.render();else new d(i,a)}),h};var w={},x=function(a){var c;b.isPlainObject(a)?c=d.prototype:(c=w[a].prototype,a=arguments[1]||{}),b.extend(c,a)},y={},z=function(a){var c=b.isPlainObject(a)?a:y[a];if(!c)throw Error("unknown locale "+a);A(jsGrid,c)},A=function(a,c){b.each(c,function(c,d){return b.isPlainObject(d)?void A(a[c]||a[c[0].toUpperCase()+c.slice(1)],d):void(a.hasOwnProperty(c)?a[c]=d:a.prototype[c]=d)})};a.jsGrid={Grid:d,fields:w,setDefaults:x,locales:y,locale:z,version:"1.5.3"}}(window,jQuery),function(a,b){function c(a){this._init(a)}c.prototype={container:"body",message:"Loading...",shading:!0,zIndex:1e3,shaderClass:"jsgrid-load-shader",loadPanelClass:"jsgrid-load-panel",_init:function(a){b.extend(!0,this,a),this._initContainer(),this._initShader(),this._initLoadPanel()},_initContainer:function(){this._container=b(this.container)},_initShader:function(){this.shading&&(this._shader=b("<div>").addClass(this.shaderClass).hide().css({position:"absolute",top:0,right:0,bottom:0,left:0,zIndex:this.zIndex}).appendTo(this._container))},_initLoadPanel:function(){this._loadPanel=b("<div>").addClass(this.loadPanelClass).text(this.message).hide().css({position:"absolute",top:"50%",left:"50%",zIndex:this.zIndex}).appendTo(this._container)},show:function(){var a=this._loadPanel.show(),b=a.outerWidth(),c=a.outerHeight();a.css({marginTop:-c/2,marginLeft:-b/2}),this._shader.show()},hide:function(){this._loadPanel.hide(),this._shader.hide()}},a.LoadIndicator=c}(jsGrid,jQuery),function(a,b){function c(a){this._grid=a}function d(a){this._grid=a,this._itemsCount=0}c.prototype={firstDisplayIndex:function(){var a=this._grid;return a.option("paging")?(a.option("pageIndex")-1)*a.option("pageSize"):0},lastDisplayIndex:function(){var a=this._grid,b=a.option("data").length;return a.option("paging")?Math.min(a.option("pageIndex")*a.option("pageSize"),b):b},itemsCount:function(){return this._grid.option("data").length},openPage:function(){this._grid.refresh()},loadParams:function(){return{}},sort:function(){return this._grid._sortData(),this._grid.refresh(),b.Deferred().resolve().promise()},reset:function(){return 
this._grid.refresh(),b.Deferred().resolve().promise()},finishLoad:function(a){this._grid.option("data",a)},finishInsert:function(a){var b=this._grid;b.option("data").push(a),b.refresh()},finishDelete:function(a,b){var c=this._grid;c.option("data").splice(b,1),c.reset()}},d.prototype={firstDisplayIndex:function(){return 0},lastDisplayIndex:function(){return this._grid.option("data").length},itemsCount:function(){return this._itemsCount},openPage:function(){this._grid.loadData()},loadParams:function(){var a=this._grid;return{pageIndex:a.option("pageIndex"),pageSize:a.option("pageSize")}},reset:function(){return this._grid.loadData()},sort:function(){return this._grid.loadData()},finishLoad:function(a){this._itemsCount=a.itemsCount,this._grid.option("data",a.data)},finishInsert:function(){this._grid.search()},finishDelete:function(){this._grid.search()}},a.loadStrategies={DirectLoadingStrategy:c,PageLoadingStrategy:d}}(jsGrid,jQuery),function(a){var b=function(a){return"undefined"!=typeof a&&null!==a},c={string:function(a,c){return b(a)||b(c)?b(a)?b(c)?(""+a).localeCompare(""+c):1:-1:0},number:function(a,b){return a-b},date:function(a,b){return a-b},numberAsString:function(a,b){return parseFloat(a)-parseFloat(b)}};a.sortStrategies=c}(jsGrid,jQuery),function(a,b,c){function d(a){this._init(a)}d.prototype={_init:function(a){b.extend(!0,this,a)},validate:function(a){var c=[];return b.each(this._normalizeRules(a.rules),function(d,e){if(!e.validator(a.value,a.item,e.param)){var f=b.isFunction(e.message)?e.message(a.value,a.item):e.message;c.push(f)}}),c},_normalizeRules:function(a){return b.isArray(a)||(a=[a]),b.map(a,b.proxy(function(a){return this._normalizeRule(a)},this))},_normalizeRule:function(a){if("string"==typeof a&&(a={validator:a}),b.isFunction(a)&&(a={validator:a}),!b.isPlainObject(a))throw Error("wrong validation config specified");return a=b.extend({},a),b.isFunction(a.validator)?a:this._applyNamedValidator(a,a.validator)},_applyNamedValidator:function(a,c){delete a.validator;var d=e[c];if(!d)throw Error('unknown validator "'+c+'"');return b.isFunction(d)&&(d={validator:d}),b.extend({},d,a)}},a.Validation=d;var e={required:{message:"Field is required",validator:function(a){return a!==c&&null!==a&&""!==a}},rangeLength:{message:"Field value length is out of the defined range",validator:function(a,b,c){return a.length>=c[0]&&a.length<=c[1]}},minLength:{message:"Field value is too short",validator:function(a,b,c){return a.length>=c}},maxLength:{message:"Field value is too long",validator:function(a,b,c){return a.length<=c}},pattern:{message:"Field value is not matching the defined pattern",validator:function(a,b,c){return"string"==typeof c&&(c=new RegExp("^(?:"+c+")$")),c.test(a)}},range:{message:"Field value is out of the defined range",validator:function(a,b,c){return a>=c[0]&&a<=c[1]}},min:{message:"Field value is too small",validator:function(a,b,c){return a>=c}},max:{message:"Field value is too large",validator:function(a,b,c){return c>=a}}};a.validators=e}(jsGrid,jQuery),function(a,b,c){function d(a){b.extend(!0,this,a),this.sortingFunc=this._getSortingFunc()}d.prototype={name:"",title:null,css:"",align:"",width:100,visible:!0,filtering:!0,inserting:!0,editing:!0,sorting:!0,sorter:"string",headerTemplate:function(){return this.title===c||null===this.title?this.name:this.title},itemTemplate:function(a){return a},filterTemplate:function(){return""},insertTemplate:function(){return""},editTemplate:function(a,b){return 
this._value=a,this.itemTemplate(a,b)},filterValue:function(){return""},insertValue:function(){return""},editValue:function(){return this._value},_getSortingFunc:function(){var c=this.sorter;if(b.isFunction(c))return c;if("string"==typeof c)return a.sortStrategies[c];throw Error('wrong sorter for the field "'+this.name+'"!')}},a.Field=d}(jsGrid,jQuery),function(a,b){function c(a){d.call(this,a)}var d=a.Field;c.prototype=new d({autosearch:!0,readOnly:!1,filterTemplate:function(){if(!this.filtering)return"";var a=this._grid,b=this.filterControl=this._createTextBox();return this.autosearch&&b.on("keypress",function(b){13===b.which&&(a.search(),b.preventDefault())}),b},insertTemplate:function(){return this.inserting?this.insertControl=this._createTextBox():""},editTemplate:function(a){if(!this.editing)return this.itemTemplate.apply(this,arguments);var b=this.editControl=this._createTextBox();return b.val(a),b},filterValue:function(){return this.filterControl.val()},insertValue:function(){return this.insertControl.val()},editValue:function(){return this.editControl.val()},_createTextBox:function(){return b("<input>").attr("type","text").prop("readonly",!!this.readOnly)}}),a.fields.text=a.TextField=c}(jsGrid,jQuery),function(a,b,c){function d(a){e.call(this,a)}var e=a.TextField;d.prototype=new e({sorter:"number",align:"right",readOnly:!1,filterValue:function(){return this.filterControl.val()?parseInt(this.filterControl.val()||0,10):c},insertValue:function(){return this.insertControl.val()?parseInt(this.insertControl.val()||0,10):c},editValue:function(){return this.editControl.val()?parseInt(this.editControl.val()||0,10):c},_createTextBox:function(){return b("<input>").attr("type","number").prop("readonly",!!this.readOnly)}}),a.fields.number=a.NumberField=d}(jsGrid,jQuery),function(a,b){function c(a){d.call(this,a)}var d=a.TextField;c.prototype=new d({insertTemplate:function(){return this.inserting?this.insertControl=this._createTextArea():""},editTemplate:function(a){if(!this.editing)return this.itemTemplate.apply(this,arguments);var b=this.editControl=this._createTextArea();return b.val(a),b},_createTextArea:function(){return b("<textarea>").prop("readonly",!!this.readOnly)}}),a.fields.textarea=a.TextAreaField=c}(jsGrid,jQuery),function(a,b,c){function d(a){if(this.items=[],this.selectedIndex=-1,this.valueField="",this.textField="",a.valueField&&a.items.length){var b=a.items[0][a.valueField];this.valueType=typeof b===f?f:g}this.sorter=this.valueType,e.call(this,a)}var e=a.NumberField,f="number",g="string";d.prototype=new e({align:"center",valueType:f,itemTemplate:function(a){var d,e=this.items,f=this.valueField,g=this.textField;d=f?b.grep(e,function(b){return b[f]===a})[0]||{}:e[a];var h=g?d[g]:d;return h===c||null===h?"":h},filterTemplate:function(){if(!this.filtering)return"";var a=this._grid,b=this.filterControl=this._createSelect();return this.autosearch&&b.on("change",function(){a.search()}),b},insertTemplate:function(){return this.inserting?this.insertControl=this._createSelect():""},editTemplate:function(a){if(!this.editing)return this.itemTemplate.apply(this,arguments);var b=this.editControl=this._createSelect();return a!==c&&b.val(a),b},filterValue:function(){var a=this.filterControl.val();return this.valueType===f?parseInt(a||0,10):a},insertValue:function(){var a=this.insertControl.val();return this.valueType===f?parseInt(a||0,10):a},editValue:function(){var a=this.editControl.val();return this.valueType===f?parseInt(a||0,10):a},_createSelect:function(){var 
a=b("<select>"),c=this.valueField,d=this.textField,e=this.selectedIndex;return b.each(this.items,function(f,g){var h=c?g[c]:f,i=d?g[d]:g,j=b("<option>").attr("value",h).text(i).appendTo(a);j.prop("selected",e===f)}),a.prop("disabled",!!this.readOnly),a}}),a.fields.select=a.SelectField=d}(jsGrid,jQuery),function(a,b,c){function d(a){e.call(this,a)}var e=a.Field;d.prototype=new e({sorter:"number",align:"center",autosearch:!0,itemTemplate:function(a){return this._createCheckbox().prop({checked:a,disabled:!0})},filterTemplate:function(){if(!this.filtering)return"";var a=this._grid,c=this.filterControl=this._createCheckbox();return c.prop({readOnly:!0,indeterminate:!0}),c.on("click",function(){var a=b(this);
a.prop("readOnly")?a.prop({checked:!1,readOnly:!1}):a.prop("checked")||a.prop({readOnly:!0,indeterminate:!0})}),this.autosearch&&c.on("click",function(){a.search()}),c},insertTemplate:function(){return this.inserting?this.insertControl=this._createCheckbox():""},editTemplate:function(a){if(!this.editing)return this.itemTemplate.apply(this,arguments);var b=this.editControl=this._createCheckbox();return b.prop("checked",a),b},filterValue:function(){return this.filterControl.get(0).indeterminate?c:this.filterControl.is(":checked")},insertValue:function(){return this.insertControl.is(":checked")},editValue:function(){return this.editControl.is(":checked")},_createCheckbox:function(){return b("<input>").attr("type","checkbox")}}),a.fields.checkbox=a.CheckboxField=d}(jsGrid,jQuery),function(a,b){function c(a){d.call(this,a),this._configInitialized=!1}var d=a.Field;c.prototype=new d({css:"jsgrid-control-field",align:"center",width:50,filtering:!1,inserting:!1,editing:!1,sorting:!1,buttonClass:"jsgrid-button",modeButtonClass:"jsgrid-mode-button",modeOnButtonClass:"jsgrid-mode-on-button",searchModeButtonClass:"jsgrid-search-mode-button",insertModeButtonClass:"jsgrid-insert-mode-button",editButtonClass:"jsgrid-edit-button",deleteButtonClass:"jsgrid-delete-button",searchButtonClass:"jsgrid-search-button",clearFilterButtonClass:"jsgrid-clear-filter-button",insertButtonClass:"jsgrid-insert-button",updateButtonClass:"jsgrid-update-button",cancelEditButtonClass:"jsgrid-cancel-edit-button",searchModeButtonTooltip:"Switch to searching",insertModeButtonTooltip:"Switch to inserting",editButtonTooltip:"Edit",deleteButtonTooltip:"Delete",searchButtonTooltip:"Search",clearFilterButtonTooltip:"Clear filter",insertButtonTooltip:"Insert",updateButtonTooltip:"Update",cancelEditButtonTooltip:"Cancel edit",editButton:!0,deleteButton:!0,clearFilterButton:!0,modeSwitchButton:!0,_initConfig:function(){this._hasFiltering=this._grid.filtering,this._hasInserting=this._grid.inserting,this._hasInserting&&this.modeSwitchButton&&(this._grid.inserting=!1),this._configInitialized=!0},headerTemplate:function(){this._configInitialized||this._initConfig();var a=this._hasFiltering,b=this._hasInserting;return this.modeSwitchButton&&(a||b)?a&&!b?this._createFilterSwitchButton():b&&!a?this._createInsertSwitchButton():this._createModeSwitchButton():""},itemTemplate:function(a,c){var d=b([]);return this.editButton&&(d=d.add(this._createEditButton(c))),this.deleteButton&&(d=d.add(this._createDeleteButton(c))),d},filterTemplate:function(){var a=this._createSearchButton();return this.clearFilterButton?a.add(this._createClearFilterButton()):a},insertTemplate:function(){return this._createInsertButton()},editTemplate:function(){return this._createUpdateButton().add(this._createCancelEditButton())},_createFilterSwitchButton:function(){return this._createOnOffSwitchButton("filtering",this.searchModeButtonClass,!0)},_createInsertSwitchButton:function(){return this._createOnOffSwitchButton("inserting",this.insertModeButtonClass,!1)},_createOnOffSwitchButton:function(a,c,d){var e=d,f=b.proxy(function(){g.toggleClass(this.modeOnButtonClass,e)},this),g=this._createGridButton(this.modeButtonClass+" "+c,"",function(b){e=!e,b.option(a,e),f()});return f(),g},_createModeSwitchButton:function(){var 
a=!1,c=b.proxy(function(){d.attr("title",a?this.searchModeButtonTooltip:this.insertModeButtonTooltip).toggleClass(this.insertModeButtonClass,!a).toggleClass(this.searchModeButtonClass,a)},this),d=this._createGridButton(this.modeButtonClass,"",function(b){a=!a,b.option("inserting",a),b.option("filtering",!a),c()});return c(),d},_createEditButton:function(a){return this._createGridButton(this.editButtonClass,this.editButtonTooltip,function(b,c){b.editItem(a),c.stopPropagation()})},_createDeleteButton:function(a){return this._createGridButton(this.deleteButtonClass,this.deleteButtonTooltip,function(b,c){b.deleteItem(a),c.stopPropagation()})},_createSearchButton:function(){return this._createGridButton(this.searchButtonClass,this.searchButtonTooltip,function(a){a.search()})},_createClearFilterButton:function(){return this._createGridButton(this.clearFilterButtonClass,this.clearFilterButtonTooltip,function(a){a.clearFilter()})},_createInsertButton:function(){return this._createGridButton(this.insertButtonClass,this.insertButtonTooltip,function(a){a.insertItem().done(function(){a.clearInsert()})})},_createUpdateButton:function(){return this._createGridButton(this.updateButtonClass,this.updateButtonTooltip,function(a,b){a.updateItem(),b.stopPropagation()})},_createCancelEditButton:function(){return this._createGridButton(this.cancelEditButtonClass,this.cancelEditButtonTooltip,function(a,b){a.cancelEdit(),b.stopPropagation()})},_createGridButton:function(a,c,d){var e=this._grid;return b("<input>").addClass(this.buttonClass).addClass(a).attr({type:"button",title:c}).on("click",function(a){d(e,a)})},editValue:function(){return""}}),a.fields.control=a.ControlField=c}(jsGrid,jQuery); | PypiClean |
/Eureqa-1.76.0.tar.gz/Eureqa-1.76.0/eureqa/analysis_templates/execution.py |
from progress_update import ProgressUpdate
from parameters_values import ParametersValues
from parameter_validation_result import ParameterValidationResult
from eureqa.utils import Throttle
import json
from eureqa.utils import utils
class Execution:
"""Represents an analysis template execution on the server.
:param dict body: Class metadata as dictionary
:param str template_id: The id of the analysis_template the execution belongs to.
:param Eureqa eureqa: A eureqa connection.
:var str `~eureqa.analysis_templates.execution.Execution.template_id`: The id of the analysis_template the execution belongs to.
:var str `~eureqa.analysis_templates.execution.Execution.analysis_id`: The id of the analysis the execution belongs to.
:var str state: The current state of the execution.
:var list `~eureqa.analysis_templates.execution.Execution.parameters`: The list of parameter values for the execution.
:var list `~eureqa.analysis_templates.execution.Execution.progress_updates`: The list of updates for the execution.
"""
def __init__(self, body, template_id, eureqa):
"""For internal use only"""
self._id = body['id']
self.template_id = template_id
self.analysis_id = body['analysis_id']
self.parameters = ParametersValues._from_json(eureqa, body, execution=self).parameters
self._add_self_to_data_file_parameters()
self._eureqa = eureqa
self._body = body
@property
@Throttle()
def state(self):
return self._get_updated_self()._body['state']
def _get_updated_self(self):
""" Internal. Return a new Execution object representing the current state on the server """
self._eureqa._session.report_progress('Getting details for execution: \'%s\' of analysis_template: \'%s\'.' % (self._id, self.template_id))
updated_execution = self._eureqa._session.execute('/analysis_templates/%s/executions/%s' %
(utils.quote(self.template_id),
utils.quote(self._id)), 'GET')
return Execution(updated_execution, self.template_id, self._eureqa)
def _is_running(self):
"""Internal (to programatically drive an analysis template). Return
true if the execution is currently executing or validating"""
the_state = self.state # NB actually makes a RPC
return the_state in { 'VALIDATING', 'RUNNING' }
def _is_done(self):
"""Internal (to programatically drive an analysis template). Return
true if the execution is currently executing or validating"""
the_state = self.state # NB actually makes a RPC
return the_state in { 'DONE', 'ERRORED', 'ABORTED' }
def _is_waiting_on_confirmation(self):
"""Internal (to programatically drive an analysis template). Return
true if the execution is waiting on the user to accept results of confirmation"""
the_state = self.state # NB actually makes a RPC
return the_state == 'WAITING_FOR_CONFIRMATION'
@property
def progress_updates(self):
"""Get all progress updates for an execution of an analysis template."""
self._eureqa._session.report_progress('Getting progress_updates for execution: \'%s\' of analysis_template: \'%s\'.' % (self._id, self.template_id))
results = self._eureqa._session.execute('/analysis_templates/%s/executions/%s/progress_updates' %
(utils.quote(self.template_id),
utils.quote(self._id)), 'GET')
return [ProgressUpdate(x) for x in results]
@property
@Throttle()
def validation_results(self):
"""Get all validation results for the execution of an analysis template."""
def make_parameter(body):
r = ParameterValidationResult(type='UNKNOWN')
r._from_json(body)
return r
return [make_parameter(x) for x in self._get_updated_self()._body['validation_results']]
def _to_json(self):
body = {
'template_id': self.template_id,
'analysis_id': self.analysis_id,
'state': self._body['state'],
'parameter': ParametersValues(self.parameters)._to_json(),
'progress_updates': [x._to_json() for x in self.progress_updates]
}
return body
def __str__(self):
return json.dumps(self._to_json(), indent=4)
def get_analysis(self):
"""Retrieves the analysis that belongs to the execution."""
return self._eureqa.get_analysis(self.analysis_id)
def get_analysis_template(self):
"""Retrieves the analysis that belongs to the execution."""
return self._eureqa._get_analysis_template(self.template_id)
def update_progress(self, message):
"""Create a progress update for an execution of an analysis template.
:param str message: The progress message
"""
self._eureqa._session.report_progress('Updating progress of execution: \'%s\' of analysis_template: \'%s\'.' % (self._id, self.template_id))
self._eureqa._session.execute('/analysis_templates/%s/executions/%s/progress_updates' %
(utils.quote(self.template_id),
utils.quote(self._id)), 'POST', { 'message': message })
def report_validation_result(self, type, message=None, details=None, parameter_id=None):
"""
Report an info/warning/error message about the specified parameter to be shown to the user in validation review
        :param str type: Whether this result is INFO, WARNING, or ERROR
:param str message: The result message
:param str details: (optional) Detailed message about the result
        :param str parameter_id: (optional) the analysis template parameter that this validation result refers to
"""
self._eureqa._session.report_progress('Reporting validation result for execution: \'%s\' of analysis_template: \'%s\'.' % (self._id, self.template_id))
validation_result = ParameterValidationResult(type, message, details, parameter_id)
self._eureqa._session.execute('/analysis_templates/%s/executions/%s/validation_result' %
(utils.quote(self.template_id),
utils.quote(self._id)), 'POST', validation_result._to_json())
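    # A hedged example (assumes this execution is in its validation phase;
    # the message text and 'my_param' id are hypothetical):
    #
    #     execution.report_validation_result('WARNING',
    #         message='input column has missing values',
    #         parameter_id='my_param')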
def report_fatal_error(self, error):
"""Notifies the server that an error occurred during the execution
and terminates the script execution.
:param str error: The error that occurred during the execution.
"""
self._eureqa._session.report_progress('Reporting error in execution: \'%s\' of analysis_template: \'%s\'.' % (self._id, self.template_id))
self._eureqa._session.execute('/analysis_templates/%s/executions/%s/error' %
(utils.quote(self.template_id),
utils.quote(self._id)), 'POST', {'message': error})
self._hard_exit()
def _report_validation_completion(self):
"""Notifies the server that the validation portion has completed."""
self._eureqa._session.report_progress('Reporting completion of validation: \'%s\' of analysis_template: \'%s\'.' % (self._id, self.template_id))
self._eureqa._session.execute('/analysis_templates/%s/executions/%s/validation_completion' %
(utils.quote(self.template_id),
utils.quote(self._id)), 'POST', {})
def _abort(self):
"""Notifies the server that something bad happened to the execution."""
self._eureqa._session.report_progress('Aborting execution \'%s\' of analysis_template: \'%s\'.' % (self._id, self.template_id))
self._eureqa._session.execute('/analysis_templates/%s/executions/%s/abort' %
(utils.quote(self.template_id),
utils.quote(self._id)), 'POST', {})
def _report_validation_results_confirmed(self):
"""Internal (to programatically drive an analysis template). Reports
that that the user confirmed validation results"""
self._eureqa._session.report_progress('Reporting validation results were confirmed for execution: \'%s\' of analysis_template: \'%s\'.' % (self._id, self.template_id))
self._eureqa._session.execute('/analysis_templates/%s/executions/%s/validation_results_confirmed' %
(utils.quote(self.template_id),
utils.quote(self._id)), 'POST', '{}')
def _report_completion(self):
"""Notifies the server that the execution completed
and terminates the script execution.
"""
self._eureqa._session.report_progress('Reporting completion of execution: \'%s\' of analysis_template: \'%s\'.' % (self._id, self.template_id))
self._eureqa._session.execute('/analysis_templates/%s/executions/%s/complete' %
(utils.quote(self.template_id),
utils.quote(self._id)), 'POST', {})
def _wait_until_finished(self):
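        # NB: this polls self.state in a loop; every check is an RPC, but the
        # call rate is limited by the Throttle decorator on the state property.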
while not self._is_done():
if self._is_waiting_on_confirmation():
self._report_validation_results_confirmed()
def _hard_exit(self):
import traceback
with open('/tmp/we_are_quitting.txt', 'wb') as fd:
fd.write(traceback.format_exc() + '\n')
import sys
sys.exit()
def _add_self_to_data_file_parameters(self):
if self.parameters is not None:
for p_id in self.parameters:
param = self.parameters[p_id]
                if param._type == "data_file":
param._execution = self | PypiClean |
/Metamorf-0.4.4.2.tar.gz/Metamorf-0.4.4.2/metamorf/tools/log.py | from metamorf.tools.filecontroller import FileControllerFactory
from metamorf.constants import *
from datetime import datetime
import os
class Log:
def __init__(self):
        os.system('')  # no-op shell call that enables ANSI color escape codes on Windows consoles
self.file_log = FileControllerFactory().get_file_reader(FILE_TYPE_LOG)
self.file_log.set_file_location(ACTUAL_PATH, LOG_FILE_NAME)
self.file_log.setup_writer(FILE_WRITER_APPEND)
file_controller_properties = FileControllerFactory().get_file_reader(FILE_TYPE_YML)
file_controller_properties.set_file_location(os.path.join(PACKAGE_PATH, PROPERTIES_FILE_PATH), PROPERTIES_FILE_NAME)
self.properties_file = file_controller_properties.read_file()
def log(self, subject: str, message: str, level: int):
date_now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
self.file_log.write_file(date_now + " [" + subject + "] " + message + "\n")
message = self._get_message_with_color(level, message)
if 'options' not in self.properties_file: return
if 'log_print' not in self.properties_file['options']: return
if level == LOG_LEVEL_ONLY_LOG: return
        if self.properties_file['options'].get('print_debug') == 'N' and level == LOG_LEVEL_DEBUG: return
        if self.properties_file['options']['log_print'] == "Y":
            print(COLOR_LIGHT_GRAY + date_now + " " + COLOR_DEFAULT + "["
                  + COLOR_DARK_GRAY + subject +
                  COLOR_DEFAULT + "]" + COLOR_DEFAULT + ": " + message)
def close(self):
self.file_log.close()
def _get_message_with_color(self, level: int, message: str):
        if level == LOG_LEVEL_DEBUG:
            result = COLOR_BLUE + message + COLOR_DEFAULT
        elif level == LOG_LEVEL_INFO or level == LOG_LEVEL_ONLY_LOG:
            result = COLOR_DEFAULT + message + COLOR_DEFAULT
        elif level == LOG_LEVEL_WARNING:
            result = COLOR_YELLOW + message + COLOR_DEFAULT
        elif level == LOG_LEVEL_ERROR:
            result = COLOR_LIGHT_RED + message + COLOR_DEFAULT
        elif level == LOG_LEVEL_CRITICAL:
            result = COLOR_RED + message + COLOR_DEFAULT
        elif level == LOG_LEVEL_OK:
            result = COLOR_GREEN + message + COLOR_DEFAULT
        else:
            result = COLOR_RED + message + COLOR_DEFAULT
return result | PypiClean |
/Astropysics-1.0.tar.gz/Astropysics-1.0/astropysics/coords/coordsys.py | #TODO: implement polar motion lookup techniques
from __future__ import division,with_statement
from ..constants import pi
from ..utils import add_docs
import numpy as np
_twopi = 2*pi
_pio2 = pi/2
try:
#requires Python 2.6
from abc import ABCMeta
from abc import abstractmethod
from abc import abstractproperty
from collections import Sequence,MutableSequence
except ImportError: #support for earlier versions
abstractmethod = lambda x:x
abstractproperty = property
ABCMeta = type
class MutableSequence(object):
__slots__=('__weakref__',) #support for weakrefs as necessary
class Sequence(object):
__slots__=('__weakref__',) #support for weakrefs as necessary
class AngularCoordinate(object):
"""
A class representing an angular value.
Arithmetic operators can be applied to the coordinate, and will be applied
directly to the numerical value in radians. For + and -, two angular
coordinates may be used, although for -, an AngularSeparation object will
be returned.
"""
import re as _re
__slots__=('_decval','_range')
    #this disturbingly complex RE matches anything that looks like a standard sexagesimal or similar string
__acregex = _re.compile(r'(?:([+-])?(\d+(?:[.]\d*)?)(hours|h|degrees|d|radians|rads|rad|r| |:(?=\d+:\d+[.]?\d*$)))?(?:(\d+(?:[.]\d*)?)(m|\'|[:]| ))?(?:(\d+(?:[.]\d*)?)(s|"|$))?$')
#and this one matches all things that look like raw numbers
__decregex = _re.compile(r'[+-]?\d+([.]\d*)?$')
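    #example inputs (taken from the docstring below): the first RE accepts
    #sexagesimal-like strings such as '12d25m12.5s', '3:30:30', or '-3:30:30';
    #the second accepts plain decimal strings such as '1.1'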
def __init__(self,inpt=None,sghms=None,range=None,radians=False):
"""
The input parser is very adaptable, and can be in any of the following
forms for `inpt`:
* A float value
if `radians` is True, this will be interpreted as decimal radians,
otherwise, it is in degrees.
* An :class:`AngularCoordinate` object
A copy of the input object will be created.
* None
The default of 0 will be used.
* A 3-tuple
If `sghms` is True, the tuple will be interpreted as
(hours,min,sec), otherwise, (degrees,min,sec).
* A string of the form ##.##
If `radians` is True, this will be cast to a float and used as
decimal radians, otherwise, it is in degrees.
* A string of the form ##.##d or ##.##degrees
The numerical part will be cast to a float and used as degrees.
* A string of the form ##.##h or ##.##hours
The numerical part will be cast to a float and used as hours.
* A string of the form ##.##radians,##.##rads, or ##.##r
The numerical part will be cast to a float and used as radians.
* A string of the form (+/-)##h##m##.##s
The numerical parts will be treated as hours,minutes, and seconds.
* A string of the form (+/-)##d##m##.##s or (+/-)##d##'##.##"
The numerical parts will be treated as degrees,minutes, and seconds.
* A string of the form (+/-)##:##:##.## or (+/-)## ## ##.##
Sexigesimal form. If `sghms` is None the presence of a a + or - sign
idicates that it should be interpreted as degrees, minutes, and
seconds. If the sign is absent, the numerical portions will be
treated as hours,min,sec. thewise, if `sghms` evaluates to True, the
numerical parts will be treated as hours,minutes, and seconds, and
if `sghms` evaluates to False, degrees,minutes, and seconds.
:param inpt: The coordinate value -- valid forms are described above.
:param sghms:
If True, ambiguous sexigesimal inputs should be hours, minutes, and
seconds instead of degrees,arcmin, and arcsec
:type sghms: boolean
:param range:
Sets the valid range of coordinates. Either a
2-sequence (lowerdegrees,upperdegrees) or None (for no limit)
:param radians:
If True, ambiguous inputs are treated as radians rather than
degrees.
:type radians: boolean
**Examples**
>>> from math import pi
>>> ac = AngularCoordinate(2.5)
>>> print ac
+2d30'00.00"
>>> print AngularCoordinate(ac)
+2d30'00.00"
>>> print AngularCoordinate(pi,radians=True)
+180d00.00"
>>> print AngularCoordinate('1.1')
+1d6'00.00"
>>> print AngularCoordinate('1.1',radians=True)
+63d1'31.29"
>>> print AngularCoordinate('12d25m12.5s')
+12d25'12.50"
>>> print AngularCoordinate('3:30:30',sghms=True)
+52d37'30.00"
>>> print AngularCoordinate('3:30:30',sghms=False)
+3d30'30.00"
>>> print AngularCoordinate('-3:30:30',sghms=None)
-3d30'30.00"
>>> print AngularCoordinate('+3:30:30',sghms=None)
+3d30'30.00"
>>> print AngularCoordinate('3:30:30',sghms=None)
+52d37'30.00"
"""
from operator import isSequenceType
self._range = None
if isinstance(inpt,AngularCoordinate):
self._decval = inpt._decval
self._range = inpt._range
return
elif inpt is None:
self._decval = 0
elif isinstance(inpt,basestring):
sinpt = inpt.strip()
decm = self.__decregex.match(sinpt)
if decm:
if radians:
self.radians = float(decm.group(0))
else:
self.degrees = float(decm.group(0))
else:
acm = self.__acregex.match(sinpt)
if acm:
sgn,dec1,mark1,dec2,mark2,dec3,mark3 = acm.group(1,2,3,4,5,6,7)
val = (0 if dec1 is None else float(dec1)) + \
(0 if dec2 is None else float(dec2)/60) + \
(0 if dec3 is None else float(dec3)/3600)
if sgn == '-':
val *= -1
if mark1 == ':' or mark1 == ' ':
if sghms is None:
if sgn is None:
self.hours = val
else: #'+' or '-'
self.degrees = val
elif sghms:
self.hours = val
else:
self.degrees = val
elif mark1 == 'hours' or mark1 == 'h':
self.hours = val
elif mark1 == 'degrees' or mark1 == 'd':
self.degrees = val
elif mark1 == 'radians' or mark1 == 'rad' or mark1 == 'rads' or mark1=='r':
self.radians = val
else:
try:
if radians:
self.radians = float(val)
else:
self.degrees = float(val)
except ValueError:
raise ValueError('invalid string input for AngularCoordinate')
else:
raise ValueError('Invalid string input for AngularCoordinate: '+inpt)
elif isSequenceType(inpt) and len(inpt)==3:
if sghms:
self.hrsminsec = inpt
else:
self.degminsec = inpt
elif radians:
self._decval = float(inpt)
else:
from math import radians
self._decval = radians(inpt)
self.range = range
def _setDegminsec(self,dms):
if not hasattr(dms, '__iter__') or len(dms)!=3:
raise ValueError('Must set degminsec as a length-3 iterator')
self.degrees = abs(dms[0])+abs(dms[1])/60.+abs(dms[2])/3600.
if dms[0]<0:
self._decval*=-1
def _getDegminsec(self):
fulldeg = abs(self.degrees)
deg = int(fulldeg)
fracpart = fulldeg-deg
min = int(fracpart*60.)
sec = fracpart*3600.-min*60.
return -deg if self.degrees < 0 else deg,min,sec
degminsec = property(_getDegminsec,_setDegminsec,doc="""
    The value of this :class:`AngularCoordinate` as a (degrees,minutes,seconds)
tuple, with degrees and minutes as integers and seconds as a float.
""")
dms = degminsec
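    # e.g., consistent with the class docstring examples:
    #     AngularCoordinate(2.5).degminsec  ->  (2, 30, 0.0)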
def _setHrsminsec(self,dms):
if not hasattr(dms, '__iter__') or len(dms)!=3:
raise ValueError('Must set hrsminsec as a length-3 iterator')
self.degrees = 15*(dms[0]+dms[1]/60.+dms[2]/3600.)
def _getHrsminsec(self):
factorized = self.degrees/15.
hrs = int(factorized)
mspart = factorized - hrs
min = int(mspart*60.)
sec = mspart*3600.-min*60.
return hrs,min,sec
hrsminsec = property(_getHrsminsec,_setHrsminsec,doc="""
The value of this :class:`AngularCoordinate` as an (hours,minutes,seconds)
tuple, with hours and minutes as integers and seconds as a float.
""")
hms = hrsminsec
def _setDecdeg(self,deg):
rads = deg*pi/180.
if self.range is not None:
rads = self._checkRange(rads)
self._decval = rads
def _getDecdeg(self):
return self._decval*180/pi
degrees = property(_getDecdeg,_setDecdeg,doc="""
The value of this :class:`AngularCoordinate` in decimal degrees.
""")
d = degrees
def _setRad(self,rads):
if self.range is not None:
rads = self._checkRange(rads)
self._decval = rads
def _getRad(self):
return self._decval
radians = property(_getRad,_setRad,doc="""
The value of this :class:`AngularCoordinate` in decimal radians.
""")
r = radians
def _setDechr(self,hr):
rads = hr*pi/12
if self.range is not None:
rads = self._checkRange(rads)
self._decval = rads
def _getDechr(self):
return self._decval*12/pi
hours = property(_getDechr,_setDechr,doc="""
The value of this :class:`AngularCoordinate` in decimal hours.
""")
h = hours
def _checkRange(self,rads):
"""
Checks if the input value is in range - returns the new value, or raises
a :exc:`ValueError`.
"""
if self._range is not None:
low,up,cycle = self._range
if cycle is None:
if low <= rads <= up:
return rads
else:
raise ValueError('Attempted to set angular coordinate outside range')
else:
if cycle > 0:
#this means use "triangle wave" pattern with the given quarter-period
from math import sin,asin
offset = low/(low-up)-0.5
return (up-low)*(asin(sin(pi*(2*rads/cycle+offset)))/pi+0.5)+low
else:
return (rads-low)%(up-low)+low
else:
return rads
def _setRange(self,newrng):
oldrange = self._range
try:
if newrng is None:
self._range = None
else:
from math import radians
newrng = tuple(newrng)
if len(newrng) == 2:
if newrng[1]-newrng[0] == 360:
newrng = (newrng[0],newrng[1],0)
else:
newrng = (newrng[0],newrng[1],None)
elif len(newrng)==3:
pass
else:
raise TypeError('range is not a 2 or 3-sequence')
if newrng[0] > newrng[1]:
raise ValueError('lower edge of range is not <= upper')
newrng = ( radians(newrng[0]),radians(newrng[1]), \
None if newrng[2] is None else radians(newrng[2]) )
self._range = newrng
self._decval = self._checkRange(self._decval)
except ValueError,e:
self._range = oldrange
if e.args[0] == 'lower edge of range is not <= upper':
raise e
else:
raise ValueError('Attempted to set range when value is out of range')
def _getRange(self):
if self._range is None:
return None
else:
from math import degrees
if self._range[2] is None:
return degrees(self._range[0]),degrees(self._range[1])
else:
return degrees(self._range[0]),degrees(self._range[1]),degrees(self._range[2])
range = property(_getRange,_setRange,doc="""
The acceptable range of angles for this :class:`AngularCoordinate`. This can
be set as a 2-sequence (lower,upper), or as a 3-sequence (lower,upper,cycle),
    where cycle can be:
* 0: Angle values are coerced to lie in the range (default for
2-sequence if upper-lower is 360 degrees)
* None: A :exc:`ValueError` will be raised if out-of-range (default for
2-sequence otherwise)
* A positive scalar: Values are coerced in a triangle wave scheme, with
      the scalar specifying the period (e.g. for the latitude, (-90,90,360)
      would give the correct behavior).
""")
def __str__(self):
return self.getDmsStr(sep=('d',"'",'"'))
def __eq__(self,other):
if hasattr(other,'_decval'):
return self._decval==other._decval
else:
return self._decval==other
def __ne__(self,other):
return not self.__eq__(other)
def __add__(self,other):
if hasattr(other,'_decval'):
res = self.__class__()
res._decval = self._decval + other._decval
else:
res = self.__class__()
res._decval = self._decval + other
return res
def __sub__(self,other):
if isinstance(other,AngularCoordinate):
from math import degrees
res = AngularSeparation(degrees(other._decval),degrees(self._decval))
else:
res = AngularCoordinate()
res._decval = self._decval - other
return res
def __mul__(self,other):
res = self.__class__()
res._decval = self._decval*other
return res
def __div__(self,other):
res = self.__class__()
res._decval = self._decval/other
return res
def __truediv__(self,other):
res = self.__class__()
        res._decval = self._decval/other
return res
def __pow__(self,other):
res = self.__class__()
res._decval = self._decval**other
return res
def __float__(self):
return self.degrees
def getDmsStr(self,secform='%05.2f',sep=(unichr(176),"'",'"'), sign=True,
canonical=False, inclzero=True):
"""
Generates the string representation of this AngularCoordinate as
degrees, arcminutes, and arcseconds.
:param secform: a formatter for the seconds
:type secform: string
:param sep:
            The separator between components - defaults to degree sign, ' and "
symbols.
:type sep: string or 3-tuple of strings
:param sign: Forces sign to be present before degree component.
:type sign: boolean
:param canonical: forces [+/-]dd:mm:ss.ss , overriding other arguments
:param inclzero:
If True, a "0" is included whenever even if the degrees or minutes
are 0. Otherise, the "0" and the corresponding separator are
omitted from the string.
:type inclzero: bool
:returns: String representation of this object.
"""
d,m,s = self.degminsec
if canonical:
secform = '%05.2f'
sep = (':',':','')
sign = True
s = secform%s
if int(float(s))>=60:
s = secform%0
m += 1
if m==60:
m = 0
d += 1
d,m=str(abs(d)),str(m)
if isinstance(sep,basestring):
if sep == 'dms':
sep = ('d','m','s')
sep = (sep,sep)
tojoin = []
if sign and self._decval >= 0:
tojoin.append('+')
if self._decval<0:
tojoin.append('-')
        if inclzero or d != '0':
tojoin.append(d)
tojoin.append(sep[0])
        if inclzero or m != '0':
tojoin.append(m)
tojoin.append(sep[1])
tojoin.append(s)
if len(sep)>2:
tojoin.append(sep[2])
return ''.join(tojoin)
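    # e.g., with the default secform ('%05.2f'):
    #     AngularCoordinate(2.5).getDmsStr(sep='dms')  ->  '+2d30m00.00s'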
def getHmsStr(self,secform = None,sep = ('h','m','s'), canonical = False,
inclzero=True):
"""
Generates the string representation of this AngularCoordinate as hours,
minutes, and seconds.
:param secform: a formatter for the seconds component
:type secform: string
:param sep:
    The separator between components - defaults to 'h', 'm', and 's'.
:type sep: string or 3-tuple of strings
:param canonical: forces hh:mm:ss.ss , overriding other arguments
:param inclzero:
    If True, a "0" is included even if the hours or minutes
    are 0. Otherwise, the "0" and the corresponding separator are
    omitted from the string.
:type inclzero: bool
:returns: String representation of this object.
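An illustrative sketch (assuming the angle was constructed from a
decimal degree value)::

    ang = AngularCoordinate(45) #45 degrees = 3 hours
    ang.getHmsStr(canonical=True) #'3:0:00.00'
    ang.getHmsStr(secform='%05.2f') #'3h0m00.00s'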
"""
h,m,s = self.hrsminsec
if canonical:
secform = '%05.2f'
sep = (':',':','')
s = str(s) if secform is None else secform % s
#this is for the s=60 case
if int(float(s))>=60:
news = float(s) - 60
s = str(news) if secform is None else secform%news
m += 1
if m==60:
m = 0
h += 1
if h==24:
h = 0
h,m=str(h),str(m)
if isinstance(sep,basestring):
    if sep == 'hms':
        sep = ('h','m','s')
    else:
        sep = (sep,sep)
tojoin = []
if inclzero or h != '0':
tojoin.append(h)
tojoin.append(sep[0])
if inclzero or m != '0':
tojoin.append(m)
tojoin.append(sep[1])
tojoin.append(s)
if len(sep)>2:
tojoin.append(sep[2])
return ''.join(tojoin)
class AngularSeparation(AngularCoordinate):
"""
This class represents a separation between two angular coordinates on the
unit sphere.
A constructor is available, but the most natural way to generate this object
is to use the subtraction (-) operator on two :class:`AngularCoordinate`
objects or two :class:`LatLongCoordinates` objects.
"""
def __init__(self,*args):
"""
Input arguments can be either:
* AngularSeparation(:class:`AngularSeparation` object)
Generates a copy of the provided object.
* AngularSeparation(sep)
Generates a separation of the provided distance with no starting point.
* AngularSeparation(start,end)
Computes the separation from the start and end objects, which must
be :class:`AngularCoordinate` objects.
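An illustrative sketch (scalar inputs are taken as degrees)::

    a = AngularCoordinate(10)
    b = AngularCoordinate(25)
    sep = b - a #AngularSeparation of 15 degrees
    sep2 = AngularSeparation(10,25) #equivalent, from raw degree values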
"""
if len(args) == 1:
a = args[0]
if a.__class__ == self.__class__:
self._decval = args[0]._decval
self._range = args[0]._range
return
sep = a._decval if hasattr(a,'_decval') else a
elif len(args) == 2:
a0,a1 = args
a0 = a0._decval if hasattr(a0,'_decval') else a0
a1 = a1._decval if hasattr(a1,'_decval') else a1
sep = a1 - a0
else:
raise ValueError('improper number of inputs to AngularSeparation')
AngularCoordinate.__init__(self,sep)
def __add__(self,other):
if isinstance(other,AngularCoordinate) and not self.__class__ == other.__class__:
res = AngularCoordinate()
res._decval = self._decval+other._decval
return res
else:
return AngularCoordinate.__add__(self,other)
#comparisons
def __lt__(self,other):
return self._decval < other._decval
def __le__(self,other):
return self._decval <= other._decval
def __gt__(self,other):
return self._decval > other._decval
def __ge__(self,other):
return self._decval >= other._decval
def __eq__(self,other):
return self._decval == other._decval
def __ne__(self,other):
return self._decval != other._decval
def _getArcsec(self):
return self.degrees*3600
def _setArcsec(self,val):
self.degrees = val/3600
arcsec = property(_getArcsec,_setArcsec,doc=None)
def _getArcmin(self):
return self.degrees*60
def _setArcmin(self,val):
self.degrees = val/60
arcmin = property(_getArcmin,_setArcmin,doc=None)
def projectedSeparation(self,zord,usez=False,**kwargs):
"""
Computes the physical projected separation assuming a given distance.
kwargs are passed into :func:`cosmo_z_to_dist` if `usez` is True.
:param zord: Redshift or distance
:type zord: scalar number
:param usez:
If True, the input will be interpreted as a redshift, and kwargs
will be passed into the distance calculation. The result will be in
pc. Otherwise, `zord` will be interpreted as a distance.
:type usez: boolean
:returns: a float value for the separation (in pc if redshift is used)
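An illustrative sketch::

    sep = AngularSeparation(1/60.) #one arcminute
    sep.projectedSeparation(.05,usez=True) #projected size in pc at z=.05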
"""
from .funcs import angular_to_physical_size
return angular_to_physical_size(self.arcsec,zord,usez=usez,**kwargs)
def separation3d(self,zord1,zord2,usez=False,**kwargs):
"""
computes the 3d separation assuming the two points at the ends of this
:class:`AngularSeparation` are at the distances `zord1` and `zord2`.
:param zord1: Redshift or distance for start point
:type zord1: scalar number
:param zord2: Redshift or distance for end point
:type zord2: scalar number
:param usez:
If True, the inputs will be interpreted as redshifts, and kwargs
will be passed into the distance calculation. The result will be in
pc. Otherwise, `zord1` and `zord2` will be interpreted as distances.
:type usez: boolean
:returns: a float value for the separation (in pc if redshift is used)
"""
from math import sin,cos,sqrt
if usez:
d1 = cosmo_z_to_dist(zord1,disttype=2,**kwargs)*1e6 #pc
d2 = cosmo_z_to_dist(zord2,disttype=2,**kwargs)*1e6 #pc
else:
if len(kwargs)>0:
raise TypeError('if not using redshift, kwargs should not be provided')
d1 = zord1
d2 = zord2
costerm = 2*d1*d2*cos(self._decval)
return sqrt(d1*d1+d2*d2-costerm)
#<-----------------------------Coordinate systems------------------------------>
class _CoosysMeta(ABCMeta):
"""
Metaclass for CoordinateSystem class and subclasses - needed to support
:class:`CoordinateSystem.registerTransform` decorator.
"""
def __init__(cls,name,bases,dct):
ABCMeta.__init__(cls,name,bases,dct)
import inspect
for k,v in inspect.getmembers(cls):
if isinstance(v,_TransformerMethodDeco):
for vfc,vtc in zip(v.fromclasses,v.toclasses):
fromclass = cls if vfc == 'self' else vfc
toclass = cls if vtc == 'self' else vtc
CoordinateSystem.registerTransform(fromclass,toclass,v.f,v.transtype)
setattr(cls,k,staticmethod(v.f))
class _TransformerMethodDeco(object):
"""
A class representing methods used for registering transforms for the class
they are in.
"""
def __init__(self,f,fromclass,toclass,transtype=None):
self.f = f
self.fromclasses = [fromclass]
self.toclasses = [toclass]
self.transtype = transtype
#Default for optimizing convert functions - currently False because it's not smart enough
_convertoptimizedefault = False
class CoordinateSystem(object):
"""
Base class of all coordinate systems. This class also holds the static
methods that manage conversion between coordinate systems.
*Subclassing*
* Subclasses of :class:`CoordinateSystem` must override :meth:`__init__` to
set initial values.
* :class:`CoordinateSystem` objects are intended to be quite small, so
unless there is a reason to do otherwise, subclasses should have a
:attr:`__slots__` class attribute (it should be a sequence of all the
valid attribute names for the object - see
http://docs.python.org/reference/datamodel.html for an explanation of the
`__slots__` mechanism).
* The :attr:`transweight` class variable can be set to determine the
weighting of this class when computing coordinate transformation pathways.
Note that *smaller* weights are preferred paths (e.g. a larger weight is
less likely to be visited). See
:meth:`CoordinateSystem.getTransformGraph` for more details.
"""
from collections import defaultdict as _defaultdict
__metaclass__ = _CoosysMeta
__slots__ = tuple()
@abstractmethod
def __init__(self):
raise NotImplementedError
_converters = _defaultdict(dict) #first index is from, second is to
_transtypes = dict()
@staticmethod
def registerTransform(fromclass,toclass,func=None,transtype=None,
overwrite=True):
"""
Register a function to transform coordinates from one system to another.
The transformation function is called as func(fromobject) and should
return a new object of type `toclass`. If called with no arguments,
the function should raise a :exc:`NotImplementedError` or behave in a
manner defined in subclasses (see e.g. :class:`LatLongCoordinates`).
If `transtype` is not None, the output of the transformation function is
filtered through the function applied for that type using
:meth:`astropysics.CoordinateSystem.addTransType` .
If the transformation function `func` is None, the function is taken to
be a decorator, and if it is a method of a subclass of
:class:`CoordinateSystem`, `fromclass` or `toclass` may be the string
'self' . In this case, the function will use the class itself as the
from or to class. The function will then be treated as a static method
(e.g. it should not have `self` as the first argument).
:param fromclass: The class to transform from.
:type fromclass: subclass of :class:`CoordinateSystem` or 'self'
:param toclass: The class to transform to.
:type toclass: subclass of :class:`CoordinateSystem` or 'self'
:param func: the function to perform the transform or None if decorator.
:type func: a callable or None
:param transtype:
A transformation type that will be used to determine how the
transform is performed, or None for no transform type. (see
:meth:`astropysics.CoordinateSystem.addTransType` for details).
:type transtype: string or None
:param overwrite:
If True, any already existing function will be silently overridden.
Otherwise, a ValueError is raised.
:type overwrite: boolean
**Examples**::
class MyCoordinates(CoordinateSystem):
...
class YourCoordinates(CoordinateSystem):
...
def transformer(mycooobj):
...
return yourcoordobj
CoordinateSystem.registerTransform(MyCoordinates,YourCoordinates,transformer)
class TheirCoordinates(CoordinateSystem):
@CoordinateSystem.registerTransform(MyCoordinates,'self')
@classmethod
def fromMy(cls,mycoordinates):
...
return theircoordobj
"""
if func is None:
if fromclass != 'self':
if not issubclass(fromclass,CoordinateSystem):
raise TypeError('from class for registerTransform must be a CoordinateSystem')
if toclass != 'self':
if not issubclass(toclass,CoordinateSystem):
raise TypeError('to class for registerTransform must be a CoordinateSystem')
def make_or_extend_trans_meth_deco(f):
if isinstance(f,_TransformerMethodDeco):
f.fromclasses.append(fromclass)
f.toclasses.append(toclass)
elif callable(f):
return _TransformerMethodDeco(f,fromclass,toclass,transtype)
else:
raise TypeError('Tried to apply registerTransform to a non-callable')
return make_or_extend_trans_meth_deco
else:
if not issubclass(fromclass,CoordinateSystem) or not issubclass(toclass,CoordinateSystem):
raise TypeError('to/from classes for registerTransform must be CoordinateSystems')
if not overwrite and (toclass in CoordinateSystem._converters[fromclass]):
#format requires 2.6
#raise ValueError('function already exists to convert {0} to {1}'.format(fromclass,toclass))
raise ValueError('function already exists to convert %s to %s'%(fromclass,toclass))
if transtype is not None:
try:
ttf = CoordinateSystem._transtypes[transtype]
except KeyError:
raise KeyError('coordinate transformation type %s does not exist'%transtype)
lfunc = lambda cobj:ttf(func(cobj),cobj,toclass)
lfunc.basetrans = func
lfunc.transtype = transtype
CoordinateSystem._converters[fromclass][toclass] = lfunc
else:
func.transtype = None
CoordinateSystem._converters[fromclass][toclass] = func
CoordinateSystem._invalidateTransformCache()
@staticmethod
def getTransform(fromclass,toclass):
"""
Returns the transformation function to go from `fromclass` to `toclass`.
"""
return CoordinateSystem._converters[fromclass][toclass]
@staticmethod
def listAllTransforms():
"""
Returns a list of 2-tuples (fromclass,toclass) of all the coordinate
system combinations that have registered transformation functions.
"""
trlist = []
for fr,l in CoordinateSystem._converters.iteritems():
for li in l:
trlist.append((fr,li))
return trlist
@staticmethod
def listTransformsTo(toclass):
"""
Returns a list of classes that can be transformed to the supplied class.
"""
flist = []
for fr,l in CoordinateSystem._converters.iteritems():
for li in l:
if li is toclass:
flist.append(fr)
return flist
@staticmethod
def listTransformsFrom(fromclass):
"""
Returns a list of classes that can be transformed from the supplied
class.
"""
if fromclass in CoordinateSystem._converters:
return list(CoordinateSystem._converters[fromclass])
else:
return []
@staticmethod
def delTransform(fromclass,toclass):
"""
Deletes the transformation function to go from `fromclass` to `toclass`.
"""
del CoordinateSystem._converters[fromclass][toclass]
CoordinateSystem._invalidateTransformCache()
@staticmethod
def addTransType(funcorname):
"""
Registers a new transformation type. Transformation types are used to
implement particular types of transformations without repeating similar
code in each of the actual transformation functions.
The transformation type function (`funcorname`) will be called as
transfunc(trans,coords,toclass), where trans is the output of the actual
registered transformation function, coords is the coordinate object that
is being converted, and toclass is the target class.
:param funcorname:
The function to register as the transformation type function. If a
string, the string is taken to be the name to use for the
transformation (intended for use as a function decorator).
Otherwise, the transformation type name is taken from the name of
the function with any initial _ removed.
:type funcorname: callable or string
:returns:
The function itself, to allow for use as a decorator. Note that this
means that if used as a decorator inside a class, the function will
be assigned as an *instance* method. Alternatively, if `funcorname`
is a string, a function to be called on the transformation function
will be returned (see second example).
**Examples**::
@addTransType
def trans1(trans,coord,tocls):
return tocls(trans*coord.val)
@addTransType('trans1')
def _transfunc(trans,coord,tocls):
return tocls(trans*coord.val)
"""
def regtrans(func,typename):
if typename in CoordinateSystem._transtypes:
#go through and re-assign all existing transes to use the new one
for k,v in CoordinateSystem._converters.iteritems():
for k2,v2 in v.items():
if v2.transtype == typename:
btfunc = v2.basetrans
#default arguments bind the *current* btfunc/k2, avoiding the
#late-binding closure bug when several transforms are re-assigned
coof = lambda cobj,btfunc=btfunc,k2=k2:func(btfunc(cobj),cobj,k2)
coof.transtype = typename
coof.basetrans = btfunc
CoordinateSystem._converters[k][k2] = coof
CoordinateSystem._transtypes[typename] = func
return func
if isinstance(funcorname,basestring):
typename = funcorname
return lambda f:regtrans(f,typename)
elif callable(funcorname):
typename = funcorname.func_name
if typename.startswith('_'):
typename = typename[1:]
return regtrans(funcorname,typename)
else:
raise TypeError('funcorname is neither a callable nor a string')
@staticmethod
def getTransformPath(fromsys,tosys):
"""
Determines the transformation path from one coordinate system to another
for use with :meth:`convert`.
:param fromsys: The starting coordinate system class
:param tosys: The target coordinate system class
:returns:
A list of coordinate classes with the shortest path from `fromsys`
to `tosys` (*including* `fromsys` and `tosys`) or a callable with
the transformation if a single-step direct transformation is
available
:except NotImplementedError: If no path can be found.
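An illustrative sketch with two hypothetical registered systems ``SysA``
and ``SysB``::

    path = CoordinateSystem.getTransformPath(SysA,SysB)
    if callable(path):
        bobj = path(aobj) #direct single-step transform
    else:
        for cls in path[1:]:
            aobj = aobj.convert(cls) #walk the chain one system at a time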
"""
if tosys in CoordinateSystem._converters[fromsys]:
return CoordinateSystem._converters[fromsys][tosys]
else:
failstr = 'cannot convert coordinate system %s to %s'%(fromsys.__name__,tosys.__name__)
try:
import networkx as nx
g = CoordinateSystem.getTransformGraph()
#LooseVersion avoids string comparisons misordering e.g. '1.10' vs '1.4'
from distutils.version import LooseVersion
if LooseVersion(nx.__version__) > LooseVersion('1.4'):
path = nx.shortest_path(g,fromsys,tosys,weight=True)
else:
path = nx.shortest_path(g,fromsys,tosys,weighted=True)
if not path:
raise NotImplementedError(failstr+'; no transform path could be found')
return path
except ImportError,e:
if e.args[0] == 'No module named networkx':
raise NotImplementedError(failstr+'; networkx not installed')
else:
raise
_transgraph = None
@staticmethod
def getTransformGraph():
"""
Returns a `networkx <http://networkx.lanl.gov/>`_ :class:`DiGraph` object
representing a graph of the registered coordinate systems and the
transformations between them.
:except ImportError: If networkx is not installed.
"""
import networkx as nx
if CoordinateSystem._transgraph is None:
CoordinateSystem._transgraph = g = nx.DiGraph()
transes = []
for a,b in CoordinateSystem.listAllTransforms():
avgweight = (getattr(a,'transweight',1) +
getattr(b,'transweight',1))/2
transes.append((a,b,dict(weight=avgweight)))
g.add_edges_from(transes)
return CoordinateSystem._transgraph.copy()
_transformcache = _defaultdict(dict)
@staticmethod
def _invalidateTransformCache():
"""
Called when transforms are changed to invalidate the caches
"""
from collections import defaultdict
CoordinateSystem._transformcache = defaultdict(dict)
CoordinateSystem._transgraph = None
def convert(self,tosys):
"""
Converts this coordinate from its current system to a new
:class:`CoordinateSystem` object.
:param tosys:
The new coordinate system class. Should be a subclass of
:class:`CoordinateSystem` .
:returns: A new object of a class determined by `tosys`
:except: raises :exc:`NotImplementedError` if conversion is not present
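An illustrative sketch using two systems defined later in this module::

    gc = GCRSCoordinates(10,20) #ra,dec interpreted as degrees
    ic = gc.convert(ICRSCoordinates)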
"""
convpath = CoordinateSystem.getTransformPath(self.__class__,tosys)
if callable(convpath):
return convpath(self)
else:
currobj = self
currsys = self.__class__
for intersys in convpath[1:-1]:
currobj = CoordinateSystem._converters[currsys][intersys](currobj)
currsys = intersys
return CoordinateSystem._converters[currsys][tosys](currobj)
class EpochalCoordinates(CoordinateSystem):
"""
A base class for :class:`CoordinateSystem` classes that have *changeable*
epochs associated with them.
*Subclassing*
Subclasses must implement these methods:
* :meth:`__init__` from :class:`CoordinateSystem`
* :meth:`transformToEpoch` -- see the method's entry for details.
Furthermore, subclasses should set :attr:`julianepoch` to determine if they
are Julian or Besselian.
"""
#TODO:figure out if there's a way to put this back in to save space - right
#now if two __slots__ classes are mixed together, this is thrown:
#TypeError: Error when calling the metaclass bases multiple bases have instance lay-out conflict
#__slots__ = ('_epoch',)
julianepoch = True
"""
If True, the epoch is Julian, otherwise, Besselian
"""
def __getstate__(self):
return {'_epoch':self._epoch}
def __setstate__(self,d):
self._epoch = d['_epoch']
def _getEpoch(self):
return self._epoch
def _setEpoch(self,val):
if val is None:
self._epoch = None
else:
if val == 'now':
from ..obstools import jd_to_epoch
val = jd_to_epoch(None,self.julianepoch)
if not hasattr(self,'_epoch') or self._epoch is None:
self._epoch = float(val)
else:
self.transformToEpoch(float(val))
epoch = property(_getEpoch,_setEpoch,doc="""
Epoch for this coordinate as a float.
Setting with the string 'now' will set the epoch to the time at the moment
the command is executed.
If set, this coordinate will be transformed to the new epoch, unless the
current Epoch is None, in which case the epoch will be set with no
transformation. If transformation is *not* desired, first set the epoch to
None, and then set to the new epoch.
Julian vs. Besselian is determined by the :attr:`julianepoch` attribute.
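An illustrative sketch (for an epochal coordinate object ``c``)::

    c.epoch = 2000. #transforms in place if an epoch was already set
    c.epoch = None #clears the epoch without transforming
    c.epoch = 'now' #the epoch at the moment the statement executes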
""")
def _getEpochstr(self):
#format requires 2.6
#return '{0}{1}'.format('J' if self.julianepoch else 'B',self._epoch)
if self._epoch is None:
return ''
else:
return '%s%s'%('J' if self.julianepoch else 'B',self._epoch)
def _setEpochstr(self,val):
self.epoch = val
epochstr = property(_getEpochstr,_setEpochstr,doc="""
A string representation of the epoch of this object with a J or B prefixed
for julian or besselian epochs.
""")
def _getJdepoch(self):
from ..obstools import epoch_to_jd
return epoch_to_jd(self._epoch,self.julianepoch)
def _setJdepoch(self,val):
from ..obstools import jd_to_epoch
self._epoch = jd_to_epoch(val,self.julianepoch)
jdepoch = property(_getJdepoch,_setJdepoch,doc="""
Julian Date of the epoch for this object.
""")
def _getMjdepoch(self):
from ..obstools import epoch_to_jd
return epoch_to_jd(self._epoch,self.julianepoch,mjd=True)
def _setMjdepoch(self,val):
from ..obstools import jd_to_epoch
self._epoch = jd_to_epoch(val,self.julianepoch,mjd=True)
mjdepoch = property(_getMjdepoch,_setMjdepoch,doc="""
Modified Julian Date of the epoch for this object.
""")
@abstractmethod
def transformToEpoch(self,newepoch):
"""
Subclasses should implement this method to transform their coordinates
to a new epoch. At the end of this method after the necessary data
transformations are performed, subclasses should call
``EpochalCoordinates.transformToEpoch(self,newepoch)``.
"""
self._epoch = newepoch
class RectangularCoordinates(CoordinateSystem):
"""
Rectangular/Cartesian Coordinates in three dimensions. Coordinates are
accessed via the attributes :attr:`x`, :attr:`y`, and :attr:`z`.
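For example::

    v = RectangularCoordinates(1,2,2)
    v.length #3.0
    v2 = v + RectangularCoordinates(0,0,1) #x=1,y=2,z=3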
"""
__slots__ = ('x','y','z')
def __init__(self,x,y,z):
#: x cartesian coordinate value
self.x = x
#: y cartesian coordinate value
self.y = y
#: z cartesian coordinate value
self.z = z
def __getstate__(self):
#TODO: watch if this creates problems by not being a dict
return dict(x=self.x,y=self.y,z=self.z)
def __setstate__(self,d):
self.x = d['x']
self.y = d['y']
self.z = d['z']
def __str__(self):
return '%s: x=%f,y=%f,z=%f'%(self.__class__.__name__,self.x,self.y,self.z)
def __add__(self,other):
from copy import deepcopy
if hasattr(other,'x') and hasattr(other,'y') and hasattr(other,'z'):
new = deepcopy(self)
new.x = self.x+other.x
new.y = self.y+other.y
new.z = self.z+other.z
return new
raise TypeError('Object of type %s does not have x,y, and z for operand +'%other.__class__)
def __sub__(self,other):
from copy import deepcopy
if hasattr(other,'x') and hasattr(other,'y') and hasattr(other,'z'):
new = deepcopy(self)
new.x = self.x-other.x
new.y = self.y-other.y
new.z = self.z-other.z
return new
raise TypeError('Object of type %s does not have x,y, and z for operand -'%other.__class__)
def _getLength(self):
from math import sqrt
return sqrt(self.x**2+self.y**2+self.z**2)
def _setLength(self,val):
scaling = val/self._getLength()
self.x *= scaling
self.y *= scaling
self.z *= scaling
length = property(_getLength,_setLength,doc="""
The length of this coordinate's vector, i.e. the distance from the origin. If
set, the direction will be preserved but the vector will be scaled to the
provided value.""")
CartesianCoordinates = RectangularCoordinates
class _LatLongMeta(_CoosysMeta):
def __init__(cls,name,bases,dct):
_CoosysMeta.__init__(cls,name,bases,dct)
if cls._longlatnames_[0] is not None:
setattr(cls,cls._longlatnames_[0],cls.long)
setattr(cls,cls._longlatnames_[0]+'err',cls.longerr)
if cls._longlatnames_[1] is not None:
setattr(cls,cls._longlatnames_[1],cls.lat)
setattr(cls,cls._longlatnames_[1]+'err',cls.laterr)
class LatLongCoordinates(CoordinateSystem):
"""
This object represents an angular location on a sphere as represented in
spherical coordinates with a latitude and longitude, and optionally a
distance. Subclasses specify details such as transformations or epochs.
A :class:`LatLongCoordinates` system is designed to use the transformation
type (see :meth:`CoordinateSystem.addTransType`) 'smatrix'. Thus, if the
transformation between two :class:`LatLongCoordinates` subclasses can be
represented as a unitary matrix operating on position vectors on the unit
sphere, the transformation function can be written as::
@CoordinateSystem.registerTransform(InLLCoords,OutLLCoords,transtype='smatrix')
def transform(incoord):
... compute the elements of a 3x3 transformation matrix...
return np.mat([[a,b,c],[d,e,f],[g,h,i]])
*Subclassing*
Subclasses of :class:`LatLongCoordinates` can have the class attribute
:attr:`_longlatnames_` as a 2-tuple of strings (longname,latname), with
names for the two coordinates, e.g. ('ra','dec'). They can also include the
:attr:`_longrange_` attribute, which specifies the range of valid values for
the longitude (latitude is always -90 to 90 degrees), or None to place no
restriction. See :class:`CoordinateSystem` for additional subclassing
information.
"""
__slots__ = ('_lat','_long','_laterr','_longerr','_dpc')
__metaclass__ = _LatLongMeta
_longlatnames_ = ('longitude','latitude')
_longrange_ = None
def __init__(self,long=0,lat=0,longerr=None,laterr=None,distancepc=None):
"""
See the associated attribute docstrings for the meaning of the inputs.
"""
self._lat = AngularCoordinate(range=(-90,90,360))
self._long = AngularCoordinate(range=self._longrange_)
self.distancepc = distancepc
if hasattr(lat,'lat') and hasattr(lat,'long'):
if long == 0 and laterr is None and longerr is None:
self.lat = lat.lat
self.long = lat.long
self.laterr = lat.laterr
self.longerr = lat.longerr
else:
raise ValueError("can't provide a LatLongCoordinates as a constructor and set other values simultaneously")
else:
self.lat = lat
self.long = long
self.laterr = laterr
self.longerr = longerr
def __getstate__(self):
return dict([(k,getattr(self,k)) for k in LatLongCoordinates.__slots__])
def __setstate__(self,d):
for k in LatLongCoordinates.__slots__:
setattr(self,k,d[k])
def _getDistancepc(self):
if callable(self._dpc):
return self._dpc()
else:
return self._dpc
def _setDistancepc(self,val):
if val is None:
self._dpc = None
elif callable(val):
self._dpc = val
else:
try:
self._dpc = (float(val[0]),float(val[1]))
except (TypeError,IndexError):
self._dpc = (float(val),0)
distancepc = property(_getDistancepc,_setDistancepc,doc="""
Parallax distance to object in parsecs, or None to assume infinity. Set as
either a float, a 2-tuple (distance,distance_error), or a no-argument
callable that returns such a tuple. Getter always returns 2-tuple or None.
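An illustrative sketch (for a coordinate object ``c``)::

    c.distancepc = 10 #10 pc, no error
    c.distancepc = (10,0.5) #10 +/- 0.5 pc
    c.distancepc #-> (10.0, 0.5)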
""")
def _getDistanceau(self):
from ..constants import aupercm,cmperpc
auperpc = cmperpc * aupercm
if self._dpc is None:
return None
elif callable(self._dpc):
res = self._dpc()
else:
res = self._dpc
return (res[0]*auperpc,res[1]*auperpc)
def _setDistanceau(self,val):
from ..constants import cmperau,pcpercm
pcperau = cmperau * pcpercm
if val is None:
self._dpc = None
elif callable(val):
self._dpc = lambda: tuple((v*pcperau for v in val()))
else:
try:
self._dpc = (float(val[0])*pcperau,float(val[1])*pcperau)
except (TypeError,IndexError):
self._dpc = (float(val)*pcperau,0)
distanceau = property(_getDistanceau,_setDistanceau,doc="""
Parallax distance to object in AU, or None to assume infinity. Set as either
a float, a 2-tuple (distance,distance_error), or a no-argument callable that
returns such a tuple. Getter always returns 2-tuple or None.
""")
def _getLat(self):
return self._lat
def _setLat(self,val):
if isinstance(val,AngularCoordinate):
rads = val.radians%_twopi
else:
rads = AngularCoordinate(val).radians%_twopi
#fix for radian range
if rads > 3*pi/2:
rads -= _twopi
elif rads > pi/2:
rads = pi - rads
self._lat.radians = rads
lat = property(_getLat,_setLat,doc="""
Latitude of this object as a :class:`AngularCoordinate` object. May be set
using any valid input form for :class:`AngularCoordinate`.
""")
def _getLong(self):
return self._long
def _setLong(self,val):
if isinstance(val,AngularCoordinate):
self._long.radians = val.radians%_twopi
else:
self._long.radians = AngularCoordinate(val).radians%_twopi
long = property(_getLong,_setLong,doc="""
Longitude of this object as a :class:`AngularCoordinate` object. May be set
using any valid input form for :class:`AngularCoordinate`.
""")
def _getLaterr(self):
return self._laterr
def _setLaterr(self,val):
if val is None:
self._laterr = None
elif isinstance(val,AngularSeparation):
self._laterr = val
else:
self._laterr = AngularSeparation(val)
laterr = property(_getLaterr,_setLaterr,doc="""
Latitude error for this object as a :class:`AngularSeparation` object. May
be set using any valid input form for :class:`AngularSeparation`.
""")
def _getLongerr(self):
return self._longerr
def _setLongerr(self,val):
if val is None:
self._longerr = None
elif isinstance(val,AngularSeparation):
self._longerr = val
else:
self._longerr = AngularSeparation(val)
longerr = property(_getLongerr,_setLongerr,doc="""
Longitude error for this object as a :class:`AngularSeparation` object. May
be set using any valid input form for :class:`AngularSeparation`.
""")
def __str__(self):
lat,long = self._lat.d,self._long.d
#lat,long = self._lat.getDmsStr(),self._long.getDmsStr()
#this requires 2.6 - switch later maybe
#return '{0}: {1[0]}={2},{1[1]}={3}'.format(self.__class__.__name__,self._longlatnames_,long,lat)
return '%s: %s=%f,%s=%f'%(self.__class__.__name__,self._longlatnames_[0],long,self._longlatnames_[1],lat)
def getCoordinateString(self,sep=' ',labels=False,canonical=False,hmslong=False):
coords = []
if hmslong:
    #getHmsStr produces no sign, so no sign argument is passed
    coords.append(self._long.getHmsStr(canonical=canonical))
else:
    coords.append(self._long.getDmsStr(canonical=canonical,sign=False))
if labels:
    coords[-1] = self._longlatnames_[0]+'='+coords[-1]
coords.append(self._lat.getDmsStr(canonical=canonical))
if labels:
    coords[-1] = self._longlatnames_[1]+'='+coords[-1]
return sep.join(coords)
def __eq__(self,other):
if hasattr(other,'lat') and hasattr(other,'long'):
return self._lat==other.lat and self._long==other.long
else:
return False
def __ne__(self,other):
return not self.__eq__(other)
def __sub__(self,other):
if isinstance(other,LatLongCoordinates) or (hasattr(other,'lat') and hasattr(other,'long')):
from math import cos,degrees,acos,asin,sin,sqrt
b1 = self._lat.radians
b2 = other.lat.radians
db = abs(b2 - b1)
dl = abs(other.long.radians - self._long.radians)
#haversin(theta) = (1-cos(theta))/2 = sin^2(theta/2)
#the sin form has better numerical accuracy for theta ~ 0, the cos form for theta ~ pi/2
haversin = lambda t:(1-cos(t))/2 if pi/4 < (t%pi) < 3*pi/4 else sin(t/2)**2
hdb = haversin(db)
hdl = haversin(dl)
havsep = hdb + cos(b1)*cos(b2)*hdl
#archaversin
sep = acos(1 - 2*havsep) if 0.25 < havsep <= 0.75 else 2*asin(havsep**0.5)
#straightforward definition without the tweaks using haversin - this
#is in principle faster, but in practice it ends up only about
#10% faster due to the other overhead
#sep = acos(sin(b1)*sin(b2)+cos(b1)*cos(b2)*cos(dl))
return AngularSeparation(degrees(sep))
# #small angle version
# from math import cos,degrees,sqrt
# clat = cos((self._lat.radians+other._lat.radians)/2)
# dcorrlong = (self._long.radians - other._long.radians)*clat
# dlat = self._lat.radians-other._lat.radians
# sep = AngularSeparation(degrees(sqrt(dlat*dlat+dcorrlong*dcorrlong)))
# return sep
else:
raise ValueError("unsupported operand type(s) for -: '%s' and '%s'"%(self.__class__,other.__class__))
@staticmethod
@CoordinateSystem.addTransType
def _smatrix(m,coord,tocls):
newcoord = tocls()
newcoord.lat = coord._lat
newcoord.laterr = coord._laterr
newcoord.long = coord._long
newcoord.longerr = coord._longerr
newcoord.matrixRotate(m)
return newcoord
def matrixRotate(self,matrix,apply=True,fixrange=True,unitarycheck=False):
"""
Applies the supplied unitary rotation matrix to these coordinates.
:param matrix: the transformation matrix in cartesian coordinates
:type matrix: a 3x3 :class:`numpy.matrix`
:param apply:
If True, the transform will be applied inplace to the coordinates
for this object
:type apply: boolean
:param fixrange:
If True the latitude is automatically fixed to be on (-pi/2,pi/2)
and the longitude is on (0,2pi). Otherwise the raw coordinate is
output.
:type fixrange: boolean
:param unitarycheck:
If True and the matrix is not unitary, a ValueError will be raised.
Otherwise no check is performed.
:type unitarycheck: boolean
:returns:
(lat,long) as decimal radians after the transformation matrix is
applied or (lat,long,laterr,longerr) if errors are nonzero
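An illustrative sketch, rotating by 90 degrees about the z-axis using
the :func:`astropysics.utils.rotation_matrix` helper employed elsewhere
in this module::

    from astropysics.utils import rotation_matrix
    c.matrixRotate(rotation_matrix(90,axis='z'))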
"""
#for single values, math module is much faster than numpy
from math import sin,cos,atan2,sqrt
m = np.asmatrix(matrix)
if unitarycheck:
mdagger = m.H
rtol = 1e-5 if unitarycheck is True else unitarycheck
if not np.allclose(mdagger*m,m*mdagger,rtol):
raise ValueError('matrix not unitary')
lat = self.lat.radians
long = self.long.radians
laterr = 0 if self.laterr is None else self.laterr.radians
longerr = 0 if self.longerr is None else self.longerr.radians
sb = sin(lat)
cb = cos(lat)
sl = sin(long)
cl = cos(long)
#spherical w/ r=1 > cartesian
x = cb*cl
y = cb*sl
z = sb
#do transform
v = np.matrix((x,y,z)).T
xp,yp,zp = (m*v).A1
#cartesian > spherical
sp = sqrt(xp*xp+yp*yp) #cylindrical radius
latp = atan2(zp,sp)
longp = atan2(yp,xp)
#propagate errors if they are present
if laterr != 0 or longerr != 0:
#TODO: check formulae
#all of these are first order taylor expansions about the value
dx = sqrt((laterr*sb*cl)**2+(longerr*cb*sl)**2)
dy = sqrt((laterr*sb*sl)**2+(longerr*cb*cl)**2)
dz = abs(laterr*cb)
dv = np.matrix((dx,dy,dz)).T #column vector, matching v above
dxp,dyp,dzp = np.sqrt(np.power(m,2)*np.power(dv,2)).A1
#intermediate variables for dlatp - each of the partial derivatives
chi = 1/(1+(zp/sp)**2)
#common factor chi not included below
dbdx = x*z*sp**-3
dbdy = y*z*sp**-3
dbdz = 1/sp
dlatp = chi*sqrt((dxp*dbdx)**2 + (dyp*dbdy)**2 + (dzp*dbdz)**2)
dlongp = sqrt((dxp*yp*xp**-2)**2 + (dyp/xp)**2)/(1 + (yp/xp)**2) #indep of z
else:
laterr = None
if fixrange:
ao = (latp+_pio2)/_twopi
latp = _twopi*abs((ao-np.floor(ao+0.5)))-_pio2
longp = longp % _twopi
if apply:
self.lat.radians = latp
self.long.radians = longp
if laterr is not None:
self.laterr.radians = dlatp
self.longerr.radians = dlongp
if laterr is None:
return latp,longp
else:
return latp,longp,dlatp,dlongp
def convert(self,tosys,optimize=_convertoptimizedefault):
"""
Converts this coordinate from its current system to a new
:class:`CoordinateSystem` object possibly with optimizations for
matrix-based transformation of :class:`LatLongCoordinates` objects.
.. warning::
The transformation optimizations used if `optimize` is True are only
correct if the conversion matrices are independent of the
coordinate values themselves (e.g. are linear and
epoch-independent). Examples that are good for optimization include
FK4->FK5->ICRS (all are epoch-independent).
In the future, this system will be smarter about
knowing when it is safe to optimize, but for now, only use it if
you're sure it will work correctly.
:param tosys:
The new coordinate system class. Should be a subclass of
:class:`CoordinateSystem` .
:param bool optimize:
If True, speed up the transformation by composing matrices where
possible. If False, the standard transformation is performed.
:returns: A new object of a class determined by `tosys`
:except: raises NotImplementedError if converters are not present
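An illustrative sketch (:class:`FK5Coordinates` is defined later in this
module, and its chain from ICRS is matrix-based)::

    c = ICRSCoordinates(10,20)
    c2 = c.convert(FK5Coordinates,optimize=True) #composes the matrices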
"""
if tosys is self.__class__:
return self
if optimize:
cache = CoordinateSystem._transformcache['smatrix']
#add this object to the cache if its missing
if self.__class__ not in cache:
cache[self.__class__] = {}
if tosys not in cache[self.__class__]:
convs = CoordinateSystem.getTransformPath(self.__class__,tosys)
if callable(convs): #direct transform
convs = [convs]
else:
convclasses = convs
convfuncs = [CoordinateSystem._converters[c1][c2] for c1,c2 in zip(convclasses[:-1],convclasses[1:])]
convs = []
#now we populate convs with converter functions that are
#either multiplied-together matrices if they are smatrix
#converters or the actual converter function otherwise
combinedmatrix = None
lastcls = None
for cls,cfunc in zip(convclasses[:-1],convfuncs):
#note that cls here is the *previous* conversion's end
#class/current conversion's start class...
if cfunc.transtype=='smatrix':
mt = cfunc.basetrans(self)
if hasattr(mt,'nocache') and mt.nocache:
cache = None
if combinedmatrix is None:
combinedmatrix = mt
else:
combinedmatrix = mt * combinedmatrix
else:
    if combinedmatrix is None:
        convs.append(cfunc)
    else:
        convs.append(_OptimizerSmatrixer(combinedmatrix,cls))
        convs.append(cfunc)
        combinedmatrix = None #reset so later smatrix groups start fresh
if combinedmatrix is not None:
convs.append(_OptimizerSmatrixer(combinedmatrix,convclasses[-1]))
#now cache this transform for future use unless it was banned above
if cache is not None:
cache[self.__class__][tosys] = convs
else:
convs = cache[self.__class__][tosys]
#now actually do the transforms
coord = self
for conv in convs:
coord = conv(coord)
return coord
else:
return CoordinateSystem.convert(self,tosys)
class _OptimizerSmatrixer(object):
"""
Used internally to do the optimization of :meth:`LatLongCoordinates.convert`.
"""
transtype = 'smatrix'
def __init__(self,combinedmatrix,tocls):
self.combinedmatrix = combinedmatrix
self.tocls = tocls
def __call__(self,coord):
return LatLongCoordinates._smatrix(self.combinedmatrix,coord,self.tocls)
def basetrans(self,coords):
return self.combinedmatrix
class EpochalLatLongCoordinates(LatLongCoordinates,EpochalCoordinates):
"""
A Coordinate system where the coordinates change as a function of time.
The origin and orientation of some coordinate systems are tied to the motion
of the Earth and Solar System and hence must be updated as time passes.
In general this only accounts for epoch-related changes in its own
coordinate system. If (for example) one has a :class:`ITRSCoordinates`
coordinate, changing the epoch only updates for polar motion. To properly
update all epoch-related effects such as precession/nutation and earth rotation, the
coordinate should be transformed to :class:`ICRSCoordinates` , update the
epoch, and transform back to :class:`ITRSCoordinates` .
"""
__slots__ = tuple()
julianepoch = True
def __init__(self,long=0,lat=0,longerr=None,laterr=None,epoch=None,distancepc=None):
"""
See the associated attribute docstrings for the meaning of the inputs.
"""
LatLongCoordinates.__init__(self,long,lat,longerr,laterr,distancepc)
self._epoch = epoch
def __getstate__(self):
d = LatLongCoordinates.__getstate__(self)
d.update(EpochalCoordinates.__getstate__(self))
return d
def __setstate__(self,d):
LatLongCoordinates.__setstate__(self,d)
EpochalCoordinates.__setstate__(self,d)
@add_docs(LatLongCoordinates.convert)
def convert(self,tosys,optimize=_convertoptimizedefault):
''
res = LatLongCoordinates.convert(self,tosys,optimize)
if issubclass(tosys,EpochalLatLongCoordinates):
res._epoch = self._epoch
return res
convert.__doc__ = LatLongCoordinates.convert.__doc__
class EquatorialCoordinatesBase(EpochalLatLongCoordinates):
"""
This object represents an angular location on the unit sphere, specified in
right ascension and declination. Some of the subclasses are not strictly
speaking equatorial, but they are close, or are tied to the equatorial
position of a particular epoch.
This is a superclass for all other Equatorial Coordinate systems -
particular reference systems implement the :meth:`transformToEpoch` method.
See the docstring for :class:`EpochalLatLongCoordinates` for subclassing
suggestions.
"""
__slots__ = tuple()
_longlatnames_ = ('ra','dec')
_longrange_ = (0,360)
def __str__(self):
rastr = self.ra.getHmsStr(canonical=True)
decstr = self.dec.getDmsStr(canonical=True)
#2.6 required for format
#return '{3}: {0} {1} ({2})'.format(rastr,decstr,self.epoch,self.__class__.__name__)
return '%s: %s %s %s'%(self.__class__.__name__,rastr,decstr,self.epochstr)
def __init__(self,*args,**kwargs):
"""
Input for equatorial coordinates. Can follow any of the following forms:
* EquatorialCoordinatesBase()
* EquatorialCoordinatesBase(:class:`EquatorialCoordinatesBase`)
* EquatorialCoordinatesBase('rastr decstr')
* EquatorialCoordinatesBase((ra,dec))
* EquatorialCoordinatesBase(ra,dec)
* EquatorialCoordinatesBase(ra,dec,raerr,decerr)
* EquatorialCoordinatesBase(ra,dec,raerr,decerr,epoch)
* EquatorialCoordinatesBase(ra,dec,raerr,decerr,epoch,distancepc)
Note that the default epoch is 2000 if not otherwise specified. To
disable epoch transformations, set the epoch to None. If scalar values
are provided, they are assumed to be degrees.
"""
posargs = {}
if len(args) == 0:
pass
if len(args) == 1:
if isinstance(args[0],EquatorialCoordinatesBase):
EpochalLatLongCoordinates.__init__(self, args[0].ra, args[0].dec,
args[0].raerr, args[0].decerr,
args[0].epoch, args[0].distancepc)
return
elif isinstance(args[0],basestring):
sargs = args[0].split()
posargs['ra'] = sargs[0]
posargs['dec'] = sargs[1]
else:
posargs['ra'],posargs['dec'] = args[0]
elif len(args) == 2:
posargs['ra'] = args[0]
posargs['dec'] = args[1]
elif len(args) == 4:
posargs['ra'] = args[0]
posargs['dec'] = args[1]
posargs['raerr'] = args[2]
posargs['decerr'] = args[3]
elif len(args) == 5:
posargs['ra'] = args[0]
posargs['dec'] = args[1]
posargs['raerr'] = args[2]
posargs['decerr'] = args[3]
posargs['epoch'] = args[4]
elif len(args) == 6:
posargs['ra'] = args[0]
posargs['dec'] = args[1]
posargs['raerr'] = args[2]
posargs['decerr'] = args[3]
posargs['epoch'] = args[4]
posargs['distancepc'] = args[5]
for k,v in posargs.iteritems():
if k in kwargs:
raise ValueError('got multiple values for argument '+k)
kwargs[k] = v
kwargs.setdefault('ra',0)
kwargs.setdefault('dec',0)
kwargs.setdefault('raerr',None)
kwargs.setdefault('decerr',None)
kwargs.setdefault('epoch',2000)
EpochalLatLongCoordinates.__init__(self,kwargs['ra'],kwargs['dec'],kwargs['raerr'],kwargs['decerr'])
if 'epoch' in kwargs:
self.epoch = kwargs['epoch']
if 'distancepc' in kwargs:
if 'distanceau' in kwargs:
raise TypeError("can't specify distance in both pc and au")
self.distancepc = kwargs['distancepc']
elif 'distanceau' in kwargs:
self.distanceau = kwargs['distanceau']
else:
self._dpc = None
@add_docs(EpochalLatLongCoordinates.convert)
def convert(self,tosys,optimize=_convertoptimizedefault):
''
res = EpochalLatLongCoordinates.convert(self,tosys,optimize)
if self._dpc is not None:
res._dpc = self._dpc
return res
convert.__doc__ = EpochalLatLongCoordinates.convert.__doc__
class ICRSCoordinates(EquatorialCoordinatesBase):
"""
Equatorial Coordinates tied to the International Celestial Reference System
(ICRS). Strictly speaking this is not an Equatorial system, as it is an
inertial frame that only aligns with Earth's equator at J2000, but it is
nearly an equatorial system at J2000.
.. note::
Technically, this is actually the Barycentric Celestial Reference System
(BCRS), distinguished from ICRS by having accounted for space motion. In
astropysics, instead, space motion is already accounted for by using
a :class:`astropysics.coords.ephems.EphemerisObject` object, which yields
coordinates (often :class:`ICRSCoordinates`) at the epoch of
observation.
.. warning::
Aberration of starlight is not yet implemented in transformations
to/from ICRS.
"""
__slots__ = tuple()
def transformToEpoch(self,newepoch):
"""
ICRS is an inertial frame, so no transformation is necessary
"""
EpochalLatLongCoordinates.transformToEpoch(self,newepoch)
#Conversions to/from ICRS are in the other coordinate systems
def __makeFrameBias():
from ..utils import rotation_matrix
#using USNO circular 179 for frame bias -- all in milliarcsec
da0 = -14.6
xi0 = -16.6170
eta0 = -6.8192
#mas->degrees
return rotation_matrix(-eta0/3600000,axis='x') *\
rotation_matrix(xi0/3600000,axis='y') *\
rotation_matrix(da0/3600000,axis='z')
frameBiasJ2000 = __makeFrameBias()
"""
Frame bias matrix such that vJ2000 = B*vICRS .
"""
#make it a static method just for clarity even though it isn't really visible
__makeFrameBias = staticmethod(__makeFrameBias)
class RectangularICRSCoordinates(RectangularCoordinates,EpochalCoordinates):
"""
Rectangular coordinates aligned to the ICRS with origin at the solar system
barycenter. The positive z-axis points to the north celestial pole and the
positive x-axis points toward the (0,0) point of the equatorial ICRS.
.. note::
Units for the coordinates are specified via the :attr:`unit` attribute.
When converting *from* :class:`ICRSCoordinates`, distances default to AU
if less than 1000 AU, otherwise, pc. If a distance is not present, the
default distance is 1 (unspecified unit).
"""
__slots__ = tuple()
julianepoch = True
def __init__(self,x,y,z,epoch=None,unit='pc'):
RectangularCoordinates.__init__(self,x,y,z)
self._epoch = epoch
self._unit = None
self.unit = unit
def __getstate__(self):
d = RectangularCoordinates.__getstate__(self)
d.update(EpochalCoordinates.__getstate__(self))
return d
def __setstate__(self,d):
RectangularCoordinates.__setstate__(self,d)
EpochalCoordinates.__setstate__(self,d)
def __str__(self):
if self.epoch is None:
epochstr = ''
else:
epochstr = ' ('+self.epochstr+')'
return RectangularCoordinates.__str__(self) + epochstr
def transformToEpoch(self,newepoch):
EpochalCoordinates.transformToEpoch(self,newepoch)
def _getUnit(self):
return self._unit
def _setUnit(self,val):
from ..constants import auperpc
if val is None:
self._unit = None
elif self._unit is None and (val in ('au','pc')):
self._unit = val
elif val=='au':
if self._unit == 'pc':
self.x *= auperpc
self.y *= auperpc
self.z *= auperpc
self._unit = val
elif val == 'pc':
if self._unit == 'au':
self.x /= auperpc
self.y /= auperpc
self.z /= auperpc
self._unit = val
else:
raise ValueError("unit must be 'au' or 'pc' - got %s"%val)
unit = property(_getUnit,_setUnit,doc="""The unit for these coordinates.
Must be 'au', 'pc', or None - setting to anything else will raise a
:exc:`ValueError`. If not None, setting to a new unit will convert the
values from AU to pc or vice versa.
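An illustrative sketch::

    r = RectangularICRSCoordinates(1,0,0,unit='pc')
    r.unit = 'au' #r.x is now ~206264.8 (AU per pc)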
""")
@CoordinateSystem.registerTransform('self',ICRSCoordinates)
def _toICRS(ric):
from math import asin,atan2,degrees
x,y,z = ric.x,ric.y,ric.z
r = (x*x+y*y+z*z)**0.5
dec = degrees(asin(z/r))
ra = degrees(atan2(y,x))
if ric.unit is None:
return ICRSCoordinates(ra,dec,epoch=ric.epoch)
elif ric.unit == 'pc':
return ICRSCoordinates(ra,dec,distancepc=r,epoch=ric.epoch)
elif ric.unit == 'au':
return ICRSCoordinates(ra,dec,distanceau=r,epoch=ric.epoch)
else:
raise NotImplementedError('Unrecognized unit %s in RectICRS->ICRS'%ric.unit)
@CoordinateSystem.registerTransform(ICRSCoordinates,'self')
def _fromICRS(ic):
from math import sin,cos
ra,dec = ic.ra.r,ic.dec.r
if ic.distanceau is None:
r = 1
unit = None
elif ic.distanceau[0]>1000: #distanceau is a (value,error) tuple
r = ic.distancepc[0]
unit = 'pc'
else:
r = ic.distanceau[0]
unit = 'au'
x = r*cos(ra)*cos(dec)
y = r*sin(ra)*cos(dec)
z = r*sin(dec)
return RectangularICRSCoordinates(x,y,z,ic.epoch,unit)
class GCRSCoordinates(EquatorialCoordinatesBase):
"""
Geocentric Celestial Reference System equatorial coordinates. The
orientation of this coordinate system is fixed to the ICRS orientation, but
with its origin at the Earth geocenter.
.. warning::
Aberration of starlight is not yet included in transforms.
"""
__slots__ = tuple()
def transformToEpoch(self,newepoch):
"""
Transforms from the current epoch to a new epoch by converting to ICRS
and back again in the new epoch.
"""
c1 = self.convert(ICRSCoordinates)
c1.epoch = newepoch
c2 = c1.convert(GCRSCoordinates)
self._lat = c2._lat
self._long = c2._long
self._laterr = c2._laterr
self._longerr = c2._longerr
self._dpc = c2._dpc
EpochalLatLongCoordinates.transformToEpoch(self,newepoch)
#TODO:implement direct spherical transformations if needed/wanted for precision
# @CoordinateSystem.registerTransform('self',ICRSCoordinates,transtype='smatrix')
# def _toICRS(gc):
# return np.eye(3).view(np.matrix)
# @CoordinateSystem.registerTransform(ICRSCoordinates,'self',transtype='smatrix')
# def _fromICRS(ic):
# return np.eye(3).view(np.matrix)
class RectangularGCRSCoordinates(RectangularCoordinates,EpochalCoordinates):
"""
Rectangular coordinates aligned to the GCRS with origin at the Earth
geocenter. The positive z-axis points to the north celestial pole and the
positive x-axis points toward the (0,0) point of the equatorial GCRS (and
thus, also ICRS).
The coordinates at this location are actually an "Astrometric place" - the
actual location relative to the geocenter. This is distinct from
:class:`GCRSCoordinates` in that :class:`GCRSCoordinates` *includes*
aberration and light deflection, while :class:`RectangularGCRSCoordinates`
does not.
.. note::
Units for the coordinates are specified via the :attr:`unit` attribute.
When converting *from* :class:`GCRSCoordinates`, distances default to AU
if less than 1000 AU, otherwise, pc. If a distance is not present, the
default distance is 1 (unspecified unit).
"""
__slots__ = tuple()
julianepoch = True
def __init__(self,x,y,z,epoch=None,unit='pc'):
RectangularCoordinates.__init__(self,x,y,z)
self._epoch = epoch
self._unit = None
self.unit = unit
def __getstate__(self):
d = RectangularCoordinates.__getstate__(self)
d.update(EpochalCoordinates.__getstate__(self))
return d
def __setstate__(self,d):
RectangularCoordinates.__setstate__(self,d)
EpochalCoordinates.__setstate__(self,d)
def __str__(self):
if self.epoch is None:
epochstr = ''
else:
epochstr = ' ('+self.epochstr+')'
return RectangularCoordinates.__str__(self) + epochstr
def transformToEpoch(self,newepoch):
EpochalCoordinates.transformToEpoch(self,newepoch)
def _getUnit(self):
return self._unit
def _setUnit(self,val):
from ..constants import auperpc
if val is None:
self._unit = None
elif self._unit is None and (val in ('au','pc')):
self._unit = val
elif val=='au':
if self._unit == 'pc':
self.x *= auperpc
self.y *= auperpc
self.z *= auperpc
self._unit = val
elif val == 'pc':
if self._unit == 'au':
self.x /= auperpc
self.y /= auperpc
self.z /= auperpc
self._unit = val
else:
raise ValueError("unit must be 'au' or 'pc' - got %s"%val)
unit = property(_getUnit,_setUnit,doc="""The unit for these coordinates.
Must be 'au', 'pc', or None - setting to anything else will raise a
:exc:`ValueError`. If not None, setting to a new unit will convert the
values from AU to pc or vice versa.
""")
@CoordinateSystem.registerTransform('self',GCRSCoordinates)
def _toGCRS(rgc):
from math import asin,atan2,degrees
#TODO:implement aberration and light deflection
x,y,z = rgc.x,rgc.y,rgc.z
r = (x*x+y*y+z*z)**0.5
dec = degrees(asin(z/r))
ra = degrees(atan2(y,x))
if rgc.unit is None:
return GCRSCoordinates(ra,dec,epoch=rgc.epoch)
elif rgc.unit == 'pc':
return GCRSCoordinates(ra,dec,distancepc=r,epoch=rgc.epoch)
elif rgc.unit == 'au':
return GCRSCoordinates(ra,dec,distanceau=r,epoch=rgc.epoch)
else:
raise NotImplementedError('Unrecognized unit %s in RectGCRS->GCRS'%rgc.unit)
@CoordinateSystem.registerTransform(GCRSCoordinates,'self')
def _fromGCRS(gc):
from math import sin,cos
#TODO:implement aberration and light deflection
ra,dec = gc.ra.r,gc.dec.r
if gc.distanceau is None:
r = 1
unit = None
elif gc.distanceau[0]>1000: #distanceau is a (value,error) tuple
r = gc.distancepc[0]
unit = 'pc'
else:
r = gc.distanceau[0]
unit = 'au'
x = r*cos(ra)*cos(dec)
y = r*sin(ra)*cos(dec)
z = r*sin(dec)
return RectangularGCRSCoordinates(x,y,z,gc.epoch,unit)
@CoordinateSystem.registerTransform('self',RectangularICRSCoordinates)
def _toRectICRS(rgc):
from .ephems import earth_pos_vel
from ..obstools import epoch_to_jd
from ..constants import auperpc
x = rgc.x
y = rgc.y
z = rgc.z
unit = rgc.unit
epoch = rgc.epoch
if epoch is None:
raise ValueError('cannot transform GCRS to ICRS without an epoch')
if unit is None: #infinitely far, so no corrections
return RectangularICRSCoordinates(x,y,z,epoch,unit=None)
else: #do parallax correction
xe,ye,ze = earth_pos_vel(epoch_to_jd(epoch),True)[0]
if unit == 'au':
xp = x - xe
yp = y - ye
zp = z - ze
elif unit == 'pc':
xp = x - xe/auperpc
yp = y - ye/auperpc
zp = z - ze/auperpc
else:
raise NotImplementedError('Unit %s not supported by GCRS->ICRS'%unit)
return RectangularICRSCoordinates(xp,yp,zp,epoch,unit=unit)
@CoordinateSystem.registerTransform(RectangularICRSCoordinates,'self')
def _fromRectICRS(ric):
from .ephems import earth_pos_vel
from ..obstools import epoch_to_jd
from ..constants import auperpc
x = ric.x
y = ric.y
z = ric.z
unit = ric.unit
epoch = ric.epoch
if epoch is None:
raise ValueError('cannot transform ICRS to GCRS without an epoch')
if unit is None: #infinitely far, so no corrections
return RectangularGCRSCoordinates(x,y,z,epoch,unit=None)
else: #do parallax correction
xe,ye,ze = earth_pos_vel(epoch_to_jd(epoch),True)[0]
if unit == 'au':
xp = x - xe
yp = y - ye
zp = z - ze
elif unit == 'pc':
xp = x - xe/auperpc
yp = y - ye/auperpc
zp = z - ze/auperpc
else:
raise NotImplementedError('Unit %s not supported by ICRS->GCRS'%unit)
return RectangularGCRSCoordinates(xp,yp,zp,epoch,unit=unit)
def _precession_matrix_J2000_Capitaine(epoch):
"""
Computes the precession matrix from J2000 to the given Julian Epoch.
Expression from Capitaine et al. 2003 as written in the USNO
Circular 179. This should match the IAU 2006 standard from SOFA
(although this has not yet been tested)
"""
from ..utils import rotation_matrix
T = (epoch-2000.0)/100.0
#from USNO circular
pzeta = (-0.0000003173,-0.000005971,0.01801828,0.2988499,2306.083227,2.650545)
pz = (-0.0000002904,-0.000028596,0.01826837,1.0927348,2306.077181,-2.650545)
ptheta = (-0.0000001274,-0.000007089,-0.04182264,-0.4294934,2004.191903,0)
zeta = np.polyval(pzeta,T)/3600.0
z = np.polyval(pz,T)/3600.0
theta = np.polyval(ptheta,T)/3600.0
return rotation_matrix(-z,'z') *\
rotation_matrix(theta,'y') *\
rotation_matrix(-zeta,'z')
def _load_nutation_data(datafn,seriestype):
"""
Loads nutation series from saved data files.
Seriestype can be 'lunisolar' or 'planetary'
"""
from ..utils.io import get_package_data
if seriestype == 'lunisolar':
dtypes = [('nl',int),
('nlp',int),
('nF',int),
('nD',int),
('nOm',int),
('ps',float),
('pst',float),
('pc',float),
('ec',float),
('ect',float),
('es',float)]
elif seriestype == 'planetary':
dtypes = [('nl',int),
('nF',int),
('nD',int),
('nOm',int),
('nme',int),
('nve',int),
('nea',int),
('nma',int),
('nju',int),
('nsa',int),
('nur',int),
('nne',int),
('npa',int),
('sp',int),
('cp',int),
('se',int),
('ce',int)]
else:
raise ValueError('requested invalid nutation series type')
lines = [l for l in get_package_data(datafn).split('\n') if not l.startswith('#') if not l.strip()=='']
lists = [[] for n in dtypes]
for l in lines:
for i,e in enumerate(l.split(' ')):
lists[i].append(dtypes[i][1](e))
return np.rec.fromarrays(lists,names=[e[0] for e in dtypes])
_nut_data_00a_ls = _load_nutation_data('iau00a_nutation_ls.tab','lunisolar')
_nut_data_00a_pl = _load_nutation_data('iau00a_nutation_pl.tab','planetary')
def _nutation_components20062000A(epoch):
"""
:returns: eps,dpsi,deps in radians
"""
from ..obstools import epoch_to_jd
from .funcs import obliquity
epsa = obliquity(epoch_to_jd(epoch),2006)
raise NotImplementedError('2006/2000A nutation model not implemented')
return epsa,dpsi,deps
_nut_data_00b = _load_nutation_data('iau00b_nutation.tab','lunisolar')
def _nutation_components2000B(intime,asepoch=True):
"""
:param intime: time to compute the nutation components as a JD or epoch
:type intime: scalar
:param asepoch: if True, `intime` is interpreted as an epoch, otherwise JD
:type asepoch: bool
:returns: eps,dpsi,deps in radians
"""
from ..constants import asecperrad
from ..obstools import epoch_to_jd,jd2000
from .funcs import obliquity
if asepoch:
jd = epoch_to_jd(intime)
else:
jd = intime
epsa = np.radians(obliquity(jd,2000))
t = (jd-jd2000)/36525
#Fundamental (Delaunay) arguments from Simon et al. (1994) via SOFA
#Mean anomaly of moon
el = ((485868.249036 + 1717915923.2178*t)%1296000)/asecperrad
#Mean anomaly of sun
elp = ((1287104.79305 + 129596581.0481*t)%1296000)/asecperrad
#Mean argument of the latitude of Moon
F = ((335779.526232 + 1739527262.8478*t)%1296000)/asecperrad
#Mean elongation of the Moon from Sun
D = ((1072260.70369 + 1602961601.2090*t)%1296000)/asecperrad
#Mean longitude of the ascending node of Moon
Om = ((450160.398036 + -6962890.5431*t)%1296000)/asecperrad
#compute nutation series using array loaded from data directory
dat = _nut_data_00b
arg = dat.nl*el + dat.nlp*elp + dat.nF*F + dat.nD*D + dat.nOm*Om
sarg = np.sin(arg)
carg = np.cos(arg)
p1uasecperrad = asecperrad*1e7 #0.1 microarcsec per rad
dpsils = np.sum((dat.ps + dat.pst*t)*sarg + dat.pc*carg)/p1uasecperrad
depsls = np.sum((dat.ec + dat.ect*t)*carg + dat.es*sarg)/p1uasecperrad
#fixed offset in place of planetary terms
masecperrad = asecperrad*1e3 #milliarcsec per rad
dpsipl = -0.135/masecperrad
depspl = 0.388/masecperrad
return epsa,dpsils+dpsipl,depsls+depspl #all in radians
def _nutation_matrix(epoch):
"""
Nutation matrix generated from nutation components.
Matrix converts from mean coordinate to true coordinate as
r_true = M * r_mean
"""
from ..utils import rotation_matrix
#TODO: implement higher precision 2006/2000A model if requested/needed
epsa,dpsi,deps = _nutation_components2000B(epoch) #all in radians
return rotation_matrix(-(epsa + deps),'x',False) *\
rotation_matrix(-dpsi,'z',False) *\
rotation_matrix(epsa,'x',False)
def _load_CIO_locator_data(datafn):
"""
Loads CIO locator series terms from saved data files.
returns polycoeffs,termsarr (starting with 0th)
"""
from ..utils.io import get_package_data
lines = [l for l in get_package_data(datafn).split('\n') if not l.startswith('#') if not l.strip()=='']
coeffs = []
sincs = []
coscs = []
orders = []
inorder = False
for l in lines:
if 'Polynomial coefficients:' in l:
polys = l.replace('Polynomial coefficients:','').split(',')
polys = np.array(polys,dtype=float)
elif 'order' in l:
if inorder:
orders.append((np.array(coeffs,dtype=int),
np.array(sincs,dtype=float),
np.array(coscs,dtype=float)))
coeffs = []
sincs = []
coscs = []
inorder = True
elif inorder:
ls = l.split()
coeffs.append(ls[:8])
sincs.append(ls[8])
coscs.append(ls[9])
if inorder:
orders.append((np.array(coeffs,dtype=int),
np.array(sincs,dtype=float),
np.array(coscs,dtype=float)))
return polys,orders
_CIO_locator_data = _load_CIO_locator_data('iau00_cio_locator.tab')
class CIRSCoordinates(EquatorialCoordinatesBase):
"""
Represents an object as equatorial coordinates in the Celestial Intermediate
Reference System. This is the post-2000 IAU system for equatorial
coordinates that separates the coordinate system from the dynamically
complicated and somewhat imprecisely-defined ideas of the equinox and
ecliptic. This system's fundamental plane is the equator of the Celestial
Intermediate Pole (CIP) and the origin of RA is at the Celestial
Intermediate Origin (CIO).
Changes to the :attr:`epoch` will result in the coordinates being updated
for precession nutation. Nutation currently uses the IAU 2000B model that
should be good to ~1 mas. If aberration or annual parallax corrections are
necessary, convert to :class:`ICRSCoordinates`, change the epoch, and then
convert back to :class:`CIRSCoordinates`.
To convert from these coordinates to :class:`HorizontalCoordinates`
appropriate for observed coordinates, site information is necessary. Hence,
the transformations from equatorial to horizontal coordinates are performed
by the :class:`~astropysics.obstools.Site` class in the
:mod:`~astropysics.obstools` module, and attempting to directly convert will
raise a :exc:`TypeError`.
"""
def transformToEpoch(self,newepoch):
"""
Transforms these :class:`EquatorialCoordinates` to a new epoch using the
IAU 2000 precessions from Capitaine, N. et al. 2003 as written in the
USNO Circular 179.
"""
if self.epoch is not None and newepoch is not None:
M = self._CMatrix(self.epoch).T
Mn = self._CMatrix(newepoch)
self.matrixRotate(Mn*M)
#this sets the epoch
EpochalLatLongCoordinates.transformToEpoch(self,newepoch)
@staticmethod
def _CMatrix(epoch):
"""
The GCRS->CIRS transformation matrix
"""
B = ICRSCoordinates.frameBiasJ2000
if epoch is None:
return B
else:
from math import sin,cos,atan,atan2,sqrt
from ..utils import rotation_matrix
P = _precession_matrix_J2000_Capitaine(epoch)
N = _nutation_matrix(epoch)
x,y,z = (N*P*B).A[2]
xsq,ysq = x**2,y**2
bz = 1/(1+z)
s = CIRSCoordinates._CIOLocator(epoch)
#matrix components - see Circular 179 or IERS Conventions 2003
a,b,c = 1-bz*xsq , -bz*x*y , -x
d,e,f = -bz*x*y , 1 - bz*ysq , -y
g,h,i = x , y , 1 - bz*(xsq+ysq)
#return rotation_matrix(-s,'z',degrees=False)*np.mat([[a,b,c],
# [d,e,f],
# [g,h,i]])
si = sin(s)
co = cos(s)
M = [[a*co - d*si,b*co - e*si,c*co - f*si],
[a*si + d*co,b*si + e*co,c*si + f*co],
[ g, h, i ]]
return np.mat(M)
# #SOFA implementation using spherical angles - numerically identical
# r2 = x*x + y*y
# e = atan2(y,x) if r2 != 0 else 0
# d = atan(sqrt(r2/(1-r2)))
# return rotation_matrix(-(e+s),'z',False) *\
# rotation_matrix(d,'y',False) *\
# rotation_matrix(e,'z',False)
@staticmethod
def _CIOLocator(epoch):
"""
Returns the CIO locator s for the provided epoch. s is the difference in
RA between the GCRS and CIP points for the ascending node of the CIP
equator.
"""
#from ..obstools import jd2000,epoch_to_jd
from ..constants import asecperrad
from .ephems import _mean_anomaly_of_moon,_mean_anomaly_of_sun,\
_mean_long_of_moon_minus_ascnode,_long_earth,\
_mean_elongation_of_moon_from_sun,_long_venus,\
_mean_long_asc_node_moon,_long_prec
#first need to find x and y for the CIP, as s+XY/2 is needed
B = ICRSCoordinates.frameBiasJ2000
P = _precession_matrix_J2000_Capitaine(epoch)
N = _nutation_matrix(epoch)
#N*P*B takes GCRS to true, so CIP is bottom row
x,y,z = (N*P*B).A[2]
#T = (epoch_to_jd(epoch) - jd2000)/36525
T = (epoch-2000)/100
fundargs = [] #fundamental arguments
fundargs.append(_mean_anomaly_of_moon(T))
fundargs.append(_mean_anomaly_of_sun(T))
fundargs.append(_mean_long_of_moon_minus_ascnode(T))
fundargs.append(_mean_elongation_of_moon_from_sun(T))
fundargs.append(_mean_long_asc_node_moon(T))
fundargs.append(_long_venus(T))
fundargs.append(_long_earth(T))
fundargs.append(_long_prec(T))
fundargs = np.array(fundargs)
polys,orders = _CIO_locator_data
newpolys = polys.copy() #copy 0-values to add to
for i,o in enumerate(orders):
ns,sco,cco = o
a = np.dot(ns,fundargs)
newpolys[i] += np.sum(sco*np.sin(a) + cco*np.cos(a))
return np.polyval(newpolys[::-1],T)/asecperrad - x*y/2.0
@CoordinateSystem.registerTransform(GCRSCoordinates,'self',transtype='smatrix')
def _fromGCRS(gcrsc):
return CIRSCoordinates._CMatrix(gcrsc.epoch)
@CoordinateSystem.registerTransform('self',GCRSCoordinates,transtype='smatrix')
def _toGCRS(cirssys):
return CIRSCoordinates._CMatrix(cirssys.epoch).T
class EquatorialCoordinatesEquinox(EquatorialCoordinatesBase):
"""
Represents an object in *mean* geocentric apparent equatorial coordinates,
using the pre-IAU2000 systems where the plane of the ecliptic is the
fundamental plane and the origin is at the equinox of date (as set by
:attr:`epoch`).
Changes to the :attr:`epoch` will result in the coordinates being updated
for precession, but not nutation, nor annual aberration. Neither is
planned by the primary author of this package, as IAU 2000 recommends using
only CIO-based systems, but if someone actually wants equinox-based
nutation, feel free to implement it and pass it along.
To convert from these coordinates to :class:`HorizontalCoordinates`
appropriate for observed coordinates, site information is necessary. Hence,
the transformations from equatorial to horizontal coordinates are performed
by the :class:`~astropysics.obstools.Site` class in the
:mod:`astropysics.obstools` module, and attempting to directly convert will
raise a :exc:`TypeError`.
"""
transweight = 1.1 #Equinox-based transforms are to be avoided in favor of CIRS
__slots__ = tuple()
def transformToEpoch(self,newepoch):
"""
Transforms these :class:`EquatorialCoordinates` to a new epoch using the
IAU 2000 precessions from Capitaine, N. et al. 2003 as written in the
USNO Circular 179.
"""
if self.epoch is not None and newepoch is not None:
#convert from current to J2000
B = _nutation_matrix(self.epoch) *\
_precession_matrix_J2000_Capitaine(self.epoch).T
#transpose==inv; matrix is real unitary
#convert to new epoch
A = _nutation_matrix(newepoch) *\
_precession_matrix_J2000_Capitaine(newepoch)
self.matrixRotate(A*B)
EpochalLatLongCoordinates.transformToEpoch(self,newepoch)
@CoordinateSystem.registerTransform(GCRSCoordinates,'self',transtype='smatrix')
def _fromGCRS(gcrsc):
B = ICRSCoordinates.frameBiasJ2000
if gcrsc.epoch is None:
return B
else:
P = _precession_matrix_J2000_Capitaine(gcrsc.epoch)
N = _nutation_matrix(gcrsc.epoch)
return N*P*B
@CoordinateSystem.registerTransform('self',GCRSCoordinates,transtype='smatrix')
def _toGCRS(eqsys):
return EquatorialCoordinatesEquinox._fromGCRS(eqsys).T
@CoordinateSystem.registerTransform('self',CIRSCoordinates,transtype='smatrix')
def _toCIRS(eqsys):
if eqsys.epoch is None:
return np.eye(3).view(np.matrix)
else:
from ..obstools import epoch_to_jd
from .funcs import equation_of_the_origins
from ..utils import rotation_matrix
jd = epoch_to_jd(eqsys.epoch)
eqo = equation_of_the_origins(jd)*15. #hours>degrees
return rotation_matrix(-eqo,'z',True)
@CoordinateSystem.registerTransform(CIRSCoordinates,'self',transtype='smatrix')
def _fromCIRS(cirssys):
return EquatorialCoordinatesEquinox._toCIRS(cirssys).T
class ITRSCoordinates(EpochalLatLongCoordinates):
"""
Coordinates based on the International Terrestrial Reference System. The
particular implementation here assumes ITRS matches the WGS84 coordinates
(used by GPS) - for astronomical purposes, this is a perfectly good
assumption.
Epoch transformations in this system only adjust for polar motion - to
account for earth rotation, transform back to
:class:`CIRSCoordinates` or :class:`EquatorialCoordinatesEquinox`,
change the epoch, then transform back to :class:`ITRSCoordinates`.
Because polar motion is not fully predictable, a number of methods are
available for approximating it. To choose a method, set the
:attr:`ITRSCoordinates.polarmotion` class attribute -- this will also affect
all future transformations to :class:`ITRSCoordinates` from other coordinate
systems. The following are valid values:
* None
Assumes the pole locations are fixed to the CIP at all times, aside
from the tiny effect of s' (the TIO-CIO shift).
* A 2-tuple of callables (xp,yp)
They will be called as xp(epoch) and yp(epoch) and the result will
be assumed to give the x and y coordinates of the poles in the CIP
frame.
.. note::
The transformations from CIRS and Equinox systems to ITRS technically
involve converting to TIRS (the Terrestrial Intermediate Reference
System), distinguished from ITRS by no polar motion correction. While
there is no class explicitly representing TIRS, ITRS with :attr:`epoch`
set to None is equivalent to TIRS.
"""
__slots__ = ('_dpc',) #note: tuple('_dpc') would wrongly create four one-letter slots
#_dpc is included for transformation to/from Equatorial-like systems
_longrange_ = (-180,180)
polarmotion = None
"""
Technique of computing poles (see :class:`ITRSCoordinates` documentation)
"""
@staticmethod
def _TIOLocator(epoch):
"""
s-prime, the offset between the zero of longitude for the CIO of CIRS and
the TIO of TIRS (Terrestrial Intermediate Reference System) - TIRS and
ITRS differ by polar motion. Return value in radians.
This is really, really small, and isn't really necessary except for
completeness
"""
from ..constants import asecperrad
T = (epoch-2000)/100
return -47e-6*T/asecperrad
@staticmethod
def _WMatrix(epoch):
from ..utils import rotation_matrix
sp = ITRSCoordinates._TIOLocator(epoch)
if ITRSCoordinates.polarmotion is None:
xp = 0
yp = 0
else: #assume 2-sequence (xp,yp)
xp,yp = ITRSCoordinates.polarmotion
if callable(xp):
xp = xp(epoch)
if callable(yp):
yp = yp(epoch)
#TODO: test if the following, linear matrix is good enough to not
#bother with the "right" one:
#[[1,-sp,-xp],
# [sp,1,yp],
# [xp,-yp,1]] #can also do sp->0
return rotation_matrix(-yp,'x') *\
rotation_matrix(-xp,'y') *\
rotation_matrix(sp,'z')
def transformToEpoch(self,newepoch):
"""
Transforms these :class:`ITRSCoordinates` to a new epoch, adjusting the
coordinate values for polar motion.
"""
if self.epoch is not None and newepoch is not None:
M = self._WMatrix(self.epoch).T
Mn = self._WMatrix(newepoch)
self.matrixRotate(Mn*M)
#this sets the epoch
EpochalLatLongCoordinates.transformToEpoch(self,newepoch)
@add_docs(EpochalLatLongCoordinates.convert)
def convert(self,tosys,optimize=_convertoptimizedefault):
''
res = EpochalLatLongCoordinates.convert(self,tosys,optimize)
if self._dpc is not None:
res._dpc = self._dpc
return res
@CoordinateSystem.registerTransform(CIRSCoordinates,'self',transtype='smatrix')
def _fromEqC(eqc):
from .funcs import earth_rotation_angle
from ..obstools import epoch_to_jd
from ..utils import rotation_matrix
epoch = eqc.epoch
if epoch is not None:
jd = epoch_to_jd(eqc.epoch)
era = earth_rotation_angle(jd,degrees=True)
W = ITRSCoordinates._WMatrix(eqc.epoch)
return W*rotation_matrix(era)
else:
return np.eye(3).view(np.matrix)
@CoordinateSystem.registerTransform(EquatorialCoordinatesEquinox,'self',transtype='smatrix')
def _fromEqE(eqe):
from .funcs import greenwich_sidereal_time
from ..utils import rotation_matrix
from ..obstools import epoch_to_jd
epoch = eqe.epoch
if epoch is not None:
jd = epoch_to_jd(eqe.epoch)
try:
gst = greenwich_sidereal_time(jd,True)*15. #hours -> degrees
except Exception as e:
from warnings import warn
warn('temporarily bypassing problem with greenwich_sidereal_time:%s'%e)
gst = greenwich_sidereal_time(jd,'simple')*15. #hours -> degrees
W = ITRSCoordinates._WMatrix(eqe.epoch)
return W*rotation_matrix(gst)
else:
return np.eye(3).view(np.matrix)
@CoordinateSystem.registerTransform('self',CIRSCoordinates,transtype='smatrix')
def _toEqC(itrsc):
#really we want inverse, but rotations are unitary -> inv==transpose
#we provide itrsc in the call because the epoch is needed
return ITRSCoordinates._fromEqC(itrsc).T
@CoordinateSystem.registerTransform('self',EquatorialCoordinatesEquinox,transtype='smatrix')
def _toEqE(itrsc):
#really we want inverse, but rotations are unitary -> inv==transpose
#we provide itrsc in the call because the epoch is needed
return ITRSCoordinates._fromEqE(itrsc).T
class FK5Coordinates(EquatorialCoordinatesEquinox):
"""
Equatorial Coordinates fixed to the FK5 reference system.
"""
__slots__ = tuple()
@staticmethod
def _precessionMatrixJ(epoch1,epoch2):
"""
Computes the precession matrix from one Julian epoch to another
"""
from ..utils import rotation_matrix
T = (epoch1 - 2000)/100
dt = (epoch2 - epoch1)/100
pzeta = (0.017998,0.000344,0.30188,-0.000139,1.39656,2306.2181)
temp = pzeta[5] + T*(pzeta[4]+T*pzeta[3])
zeta = dt*(temp + dt*((pzeta[2]+pzeta[1]*T) + dt*pzeta[0]))/3600
pz = (0.018203,-0.000066,1.09468)
z = dt*(temp + dt*((pz[2]+pz[1]*T) + dt*pz[0]))/3600
ptheta = (-0.041833,-0.000217,-0.42665,-0.000217,-0.85330,2004.3109)
temp = ptheta[5] + T*(ptheta[4]+T*ptheta[3])
theta = dt*(temp + dt*((ptheta[2]+ptheta[1]*T) + dt*ptheta[0]))/3600
return rotation_matrix(-z,'z') *\
rotation_matrix(theta,'y') *\
rotation_matrix(-zeta,'z')
def transformToEpoch(self,newepoch):
"""
Transforms these :class:`EquatorialCoordinates` to a new epoch. Uses the
algorithm resolved by the IAU in 1976 as written in Meeus, as well as
Lieske, J.H. 1979.
According to SOFA, this formulation becomes less valid at the following levels:
* 1960 CE to 2040 CE
< 0.1"
* 1640 CE to 2360 CE
< 1"
* 500 BCE to 3000 CE
< 3"
* 1200 BCE to 3900 CE
> 10"
* 4200 BCE to 5600 CE
> 100"
* 6800 BCE to 8200 CE
> 1000"
"""
if self.epoch is not None and newepoch is not None:
self.matrixRotate(self._precessionMatrixJ(self.epoch,newepoch))
EpochalLatLongCoordinates.transformToEpoch(self,newepoch)
@CoordinateSystem.registerTransform(ICRSCoordinates,'self',transtype='smatrix')
def _fromICRS(icrsc):
"""
B-matrix from USNO circular 179
"""
from ..utils import rotation_matrix
eta0 = -19.9/3600000
xi0 = 9.1/3600000
da0 = -22.9/3600000
B = rotation_matrix(-eta0,'x') *\
rotation_matrix(xi0,'y') *\
rotation_matrix(da0,'z')
epoch = icrsc.epoch
if icrsc.epoch is None:
return B
else:
return FK5Coordinates._precessionMatrixJ(2000,icrsc.epoch)*B
@CoordinateSystem.registerTransform('self',ICRSCoordinates,transtype='smatrix')
def _toICRS(fk5c):
return FK5Coordinates._fromICRS(fk5c).T
class FK4Coordinates(EquatorialCoordinatesEquinox):
"""
Equatorial Coordinates fixed to the FK4 reference system. Note that this
implementation does *not* correct for the elliptic terms of aberration
as of yet.
Epoch is Besselian.
"""
__slots__ = tuple()
julianepoch = False
def __init__(self,*args,**kwargs):
"""
Input for FK4 coordinates. Can follow any of the following forms:
* EquatorialCoordinatesBase()
* EquatorialCoordinatesBase(:class:`EquatorialCoordinatesBase`)
* EquatorialCoordinatesBase('rastr decstr')
* EquatorialCoordinatesBase((ra,dec))
* EquatorialCoordinatesBase(ra,dec)
* EquatorialCoordinatesBase(ra,dec,raerr,decerr)
* EquatorialCoordinatesBase(ra,dec,raerr,decerr,epoch)
* EquatorialCoordinatesBase(ra,dec,raerr,decerr,epoch,distancepc)
The epoch of FK4 coordinates defaults to B1950.
"""
args = list(args)
args.insert(0,self)
EquatorialCoordinatesEquinox.__init__(*args,**kwargs)
if self._epoch==2000.:
self._epoch = 1950.
def transformToEpoch(self,newepoch):
"""
Transforms these :class:`EquatorialCoordinates` to a new epoch. Uses the
method of Newcomb (pre-IAU1976) to compute precession.
"""
if self.epoch is not None and newepoch is not None:
self.matrixRotate(self._precessionMatrixB(self.epoch,newepoch))
EpochalLatLongCoordinates.transformToEpoch(self,newepoch)
@staticmethod
def _precessionMatrixB(epoch1,epoch2):
"""
computes the precession matrix from one Besselian epoch to another using
Newcomb's method.
"""
from ..utils import rotation_matrix
#tropical years
t1 = (epoch1-1850.0)/1000.0
t2 = (epoch2-1850.0)/1000.0
dt = t2 - t1
zeta1 = 23035.545 + t1*139.720+0.060*t1*t1
zeta2 = 30.240 - 0.27*t1
zeta3 = 17.995
pzeta = (zeta3,zeta2,zeta1,0)
zeta = np.polyval(pzeta,dt)/3600
z1 = 23035.545 + t1*139.720 + 0.060*t1*t1
z2 = 109.480 + 0.39*t1
z3 = 18.325
pz = (z3,z2,z1,0)
z = np.polyval(pz,dt)/3600
theta1 = 20051.12 - 85.29*t1 - 0.37*t1*t1
theta2 = -42.65 - 0.37*t1
theta3 = -41.8
ptheta = (theta3,theta2,theta1,0)
theta = np.polyval(ptheta,dt)/3600
return rotation_matrix(-z,'z') *\
rotation_matrix(theta,'y') *\
rotation_matrix(-zeta,'z')
@CoordinateSystem.registerTransform('self',FK5Coordinates,transtype='smatrix')
def _toFK5(fk4c):
from ..obstools import epoch_to_jd,jd_to_epoch
#B1950->J2000 matrix from Murray 1989 A&A 218,325
B = np.mat([[0.9999256794956877,-0.0111814832204662,-0.0048590038153592],
[0.0111814832391717,0.9999374848933135,-0.0000271625947142],
[0.0048590037723143,-0.0000271702937440,0.9999881946023742]])
if fk4c.epoch is not None and fk4c.epoch != 1950:
jd = epoch_to_jd(fk4c.epoch,False)
jepoch = jd_to_epoch(jd)
T = (jepoch - 1950)/100
#now add in correction terms for FK4 rotating system
B[0,0] += -2.6455262e-9*T
B[0,1] += -1.1539918689e-6*T
B[0,2] += 2.1111346190e-6*T
B[1,0] += 1.1540628161e-6*T
B[1,1] += -1.29042997e-8*T
B[1,2] += 2.36021478e-8*T
B[2,0] += -2.1112979048e-6*T
B[2,1] += -5.6024448e-9*T
B[2,2] += 1.02587734e-8*T
PB = FK4Coordinates._precessionMatrixB(fk4c.epoch,1950)
return B*PB
else:
return B
@CoordinateSystem.registerTransform(FK5Coordinates,'self',transtype='smatrix')
def _fromFK5(fk5c):
#need inverse because Murray's matrix is *not* a true rotation matrix
return FK4Coordinates._toFK5(fk5c).I
class EclipticCoordinatesCIRS(EpochalLatLongCoordinates):
"""
Ecliptic Coordinates (beta, lambda) such that the fundamental plane passes
through the ecliptic at the current epoch.
Note that because the concept of the ecliptic can be complicated or even
ill-defined, ecliptic coordinates in astropysics are simply defined as
tied to a particular set of equatorial coordinates with a given obliquity
model. For :class:`EclipticCoordinatesCIRS`, the equatorial
coordinates are :class:`CIRSCoordinates` with obliquity
given by the IAU 2006 obliquity model (see :func:`obliquity`)
"""
__slots__ = ()
_longlatnames_ = ('lamb','beta')
_longrange_ = (0,360)
obliqyear = 2006
def __init__(self,lamb=0,beta=0,lamberr=None,betaerr=None,epoch=2000,
distanceau=None):
"""
See the associated attribute docstrings for the meaning of the inputs.
"""
EpochalLatLongCoordinates.__init__(self,lamb,beta,lamberr,betaerr,epoch)
self.distanceau = distanceau
@CoordinateSystem.registerTransform('self',CIRSCoordinates,transtype='smatrix')
def _toEq(eclsc):
from .funcs import obliquity
from ..utils import rotation_matrix
return rotation_matrix(-obliquity(eclsc.jdepoch,EclipticCoordinatesCIRS.obliqyear),'x')
@CoordinateSystem.registerTransform(CIRSCoordinates,'self',transtype='smatrix')
def _fromEq(eqc):
from .funcs import obliquity
from ..utils import rotation_matrix
return rotation_matrix(obliquity(eqc.jdepoch,EclipticCoordinatesCIRS.obliqyear),'x')
def transformToEpoch(self,newepoch):
if self.epoch is not None and newepoch is not None:
eqc = self.convert(CIRSCoordinates)
eqc.epoch = newepoch
newval = eqc.convert(self.__class__)
self._lat._decval = newval._lat._decval
self._long._decval = newval._long._decval
self._laterr._decval = newval._laterr._decval
self._longerr._decval = newval._longerr._decval
EpochalLatLongCoordinates.transformToEpoch(self,newepoch)
class EclipticCoordinatesEquinox(EpochalLatLongCoordinates):
"""
Ecliptic Coordinates (beta, lambda) such that the fundamental plane passes
through the ecliptic at the current epoch.
Note that because the concept of the ecliptic can be complicated or even
ill-defined, ecliptic coordinates in astropysics are simply defined as tied
to a particular set of equatorial coordinates with a given obliquity model.
For :class:`EclipticCoordinatesEquinox`, the equatorial coordinates are
:class:`EquatorialCoordinatesEquinox` with obliquity given by the IAU 1980
obliquity model (see :func:`~astropysics.coords.funcs.obliquity`)
"""
__slots__ = ()
_longlatnames_ = ('lamb','beta')
_longrange_ = (0,360)
obliqyear = 1980
def __init__(self,lamb=0,beta=0,lamberr=None,betaerr=None,epoch=2000,
distanceau=None):
"""
See the associated attribute docstrings for the meaning of the inputs.
"""
EpochalLatLongCoordinates.__init__(self,lamb,beta,lamberr,betaerr,epoch)
self.distanceau = distanceau
@CoordinateSystem.registerTransform('self',EquatorialCoordinatesEquinox,transtype='smatrix')
def _toEq(eclsc):
from .funcs import obliquity
from ..utils import rotation_matrix
return rotation_matrix(-obliquity(eclsc.jdepoch,EclipticCoordinatesEquinox.obliqyear),'x')
@CoordinateSystem.registerTransform(EquatorialCoordinatesEquinox,'self',transtype='smatrix')
def _fromEq(eqc):
from .funcs import obliquity
from ..utils import rotation_matrix
return rotation_matrix(obliquity(eqc.jdepoch,EclipticCoordinatesEquinox.obliqyear),'x')
def transformToEpoch(self,newepoch):
if self.epoch is not None and newepoch is not None:
eqc = self.convert(EquatorialCoordinatesEquinox)
eqc.epoch = newepoch
newval = eqc.convert(self.__class__)
self._lat._decval = newval._lat._decval
self._long._decval = newval._long._decval
self._laterr._decval = newval._laterr._decval
self._longerr._decval = newval._longerr._decval
EpochalLatLongCoordinates.transformToEpoch(self,newepoch)
class RectangularGeocentricEclipticCoordinates(RectangularCoordinates,EpochalCoordinates):
"""
Rectangular coordinates oriented so that the x-y plane lies in the plane of
the ecliptic at the specified epoch. Distances are in AU. Origin is at the
center of mass of the Earth.
Note that the epoch should not be set directly - if precession is desired,
convert to an Ecliptic coordinate system, do the precession, and
convert back.
"""
__slots__ = tuple()
julianepoch = True
def __init__(self,x,y,z,epoch=None):
RectangularCoordinates.__init__(self,x,y,z)
self._epoch = epoch
def __getstate__(self):
d = RectangularCoordinates.__getstate__(self)
d.update(EpochalCoordinates.__getstate__(self))
return d
def __setstate__(self,d):
RectangularCoordinates.__setstate__(self,d)
EpochalCoordinates.__setstate__(self,d)
def __str__(self):
if self.epoch is None:
epochstr = ''
else:
epochstr = ' ('+self.epochstr+')'
return RectangularCoordinates.__str__(self) + epochstr
def transformToEpoch(self,newepoch):
EpochalCoordinates.transformToEpoch(self,newepoch)
@CoordinateSystem.registerTransform('self',EclipticCoordinatesCIRS)
def _toEcC(rec):
from math import asin,atan2,degrees
x,y,z = rec.x,rec.y,rec.z
r = (x*x+y*y+z*z)**0.5
beta = degrees(asin(z/r))
lamb = degrees(atan2(y,x))
return EclipticCoordinatesCIRS(lamb,beta,distanceau=r,epoch=rec.epoch)
@CoordinateSystem.registerTransform('self',EclipticCoordinatesEquinox)
def _toEcQ(rec):
from math import asin,atan2,degrees
x,y,z = rec.x,rec.y,rec.z
r = (x*x+y*y+z*z)**0.5
beta = degrees(asin(z/r))
lamb = degrees(atan2(y,x))
return EclipticCoordinatesEquinox(lamb,beta,distanceau=r,epoch=rec.epoch)
@CoordinateSystem.registerTransform(EclipticCoordinatesCIRS,'self')
def _fromEcC(ec):
from math import sin,cos,degrees
l,b = ec.lamb.r,ec.beta.r
if ec.distanceau is None:
r = 1
else:
r = ec.distanceau[0]
x = r*cos(l)*cos(b)
y = r*sin(l)*cos(b)
z = r*sin(b)
return RectangularGeocentricEclipticCoordinates(x,y,z,ec.epoch)
@CoordinateSystem.registerTransform(EclipticCoordinatesEquinox,'self')
def _fromEcQ(ec):
from math import sin,cos,degrees
l,b = ec.lamb.r,ec.beta.r
if ec.distanceau is None:
r = 1
else:
r = ec.distanceau[0]
x = r*cos(l)*cos(b)
y = r*sin(l)*cos(b)
z = r*sin(b)
return RectangularGeocentricEclipticCoordinates(x,y,z,ec.epoch)
class GalacticCoordinates(EpochalLatLongCoordinates):
__slots__ = tuple()
_longlatnames_ = ('l','b')
_longrange_ = (0,360)
_ngp_J2000 = FK5Coordinates(192.859508, 27.128336,epoch=2000)
_long0_J2000 = AngularCoordinate(122.932)
_ngp_B1950 = FK4Coordinates(192.25, 27.4,epoch=1950)
_long0_B1950 = AngularCoordinate(123)
def transformToEpoch(self,newepoch):
"""
Galactic coordinates are nominally inertial, although the definition is
a bit unclear in that regard.
"""
EpochalLatLongCoordinates.transformToEpoch(self,newepoch)
@CoordinateSystem.registerTransform(FK5Coordinates,'self',transtype='smatrix')
def _fromFK5(fk5coords):
from ..utils import rotation_matrix
epoch = 2000 if fk5coords.epoch is None else fk5coords.epoch
mat = rotation_matrix(180 - GalacticCoordinates._long0_J2000.d,'z') *\
rotation_matrix(90 - GalacticCoordinates._ngp_J2000.dec.d,'y') *\
rotation_matrix(GalacticCoordinates._ngp_J2000.ra.d,'z') *\
FK5Coordinates._precessionMatrixJ(epoch,2000)
mat.nocache = True #can't cache because of the need to get the epoch
return mat
@CoordinateSystem.registerTransform('self',FK5Coordinates,transtype='smatrix')
def _toFK5(galcoords):
return GalacticCoordinates._fromFK5(galcoords).T
@CoordinateSystem.registerTransform(FK4Coordinates,'self',transtype='smatrix')
def _fromFK4(fk4coords):
from ..utils import rotation_matrix
epoch = 1950 if fk4coords.epoch is None else fk4coords.epoch
mat = rotation_matrix(180 - GalacticCoordinates._long0_B1950.d,'z') *\
rotation_matrix(90 - GalacticCoordinates._ngp_B1950.dec.d,'y') *\
rotation_matrix(GalacticCoordinates._ngp_B1950.ra.d,'z') *\
FK4Coordinates._precessionMatrixB(epoch,1950)
mat.nocache = True #can't cache because of the need to get the epoch
return mat
@CoordinateSystem.registerTransform('self',FK4Coordinates,transtype='smatrix')
def _toFK4(galcoords):
return GalacticCoordinates._fromFK4(galcoords).T
class SupergalacticCoordinates(EpochalLatLongCoordinates):
__slots__ = tuple()
_longlatnames_ = ('sgl','sgb')
_longrange_ = (0,360)
_nsgp_gal = GalacticCoordinates(47.37,6.32) #glactc is 47.47=WRONG
_sg0_gal = GalacticCoordinates(137.37,0)
def transformToEpoch(self,newepoch):
"""
Supergalactic coordinates are nominally inertial, although the
definition is a bit unclear in that regard.
"""
EpochalLatLongCoordinates.transformToEpoch(self,newepoch)
@CoordinateSystem.registerTransform('self',GalacticCoordinates,transtype='smatrix')
def _toGal(sgalcoords):
return SupergalacticCoordinates._fromGal(sgalcoords).T
@CoordinateSystem.registerTransform(GalacticCoordinates,'self',transtype='smatrix')
def _fromGal(galcoords):
from ..utils import rotation_matrix
z1r = rotation_matrix(SupergalacticCoordinates._nsgp_gal.l.d,'z')
yr = rotation_matrix(90 - SupergalacticCoordinates._nsgp_gal.b.d,'y')
z2r = rotation_matrix(180 - SupergalacticCoordinates._sg0_gal.l.d +\
SupergalacticCoordinates._nsgp_gal.l.d,'z')
return z2r*yr*z1r
class HorizontalCoordinates(LatLongCoordinates):
"""
This object represents an angular location on the unit sphere, with the
north pole of the coordinate position fixed to the local zenith
To convert from other :class:`Coordinate` types to horizontal positions, see
:class:`astropysics.obstools.Site`, as site information is required for
these corrections
"""
__slots__ = tuple()
_longlatnames_ = ('az','alt')
_longrange_ = (0,360)
def __init__(self,alt=0,az=0,alterr=None,azerr=None,distancepc=None):
"""
See the associated attribute docstrings for the meaning of the inputs.
"""
LatLongCoordinates.__init__(self,az,alt,azerr,alterr,distancepc)
@CoordinateSystem.registerTransform(EquatorialCoordinatesEquinox,'self')
@CoordinateSystem.registerTransform(CIRSCoordinates,'self')
def _toHoriz(incoosys=None):
raise TypeError('use astropysics.obstools.Site methods to transform celestial to terrestrial coordinates')
@CoordinateSystem.registerTransform('self',EquatorialCoordinatesEquinox)
@CoordinateSystem.registerTransform('self',CIRSCoordinates)
def _fromHoriz(incoosys=None):
raise TypeError('use astropysics.obstools.Site methods to transform terrestrial to celestial coordinates')
#Now that all the coordinate systems have been made, add the diagram to the docs
#That shows the graph of the built-in transforms
postbuiltin = """
A similar diagram can be generated after the user has created and registered
custom coordinates and transforms::
from networkx import to_agraph,relabel_nodes,draw_networkx
from astropysics.coords import CoordinateSystem
graph = CoordinateSystem.getTransformGraph()
dotgraph = to_agraph(relabel_nodes(graph,lambda n:n.__name__))
dotgraph.graph_attr.update(dict(size='12.0, 12.0',fontsize=12))
dotgraph.write('mygraph.dot')
draw_networkx(graph)
This will save a graphviz dot file and display the graph with matplotlib,
showing both builtin and custom-added coordinates and transforms.
"""
try:
from networkx import to_agraph,relabel_nodes
graph = to_agraph(relabel_nodes(CoordinateSystem.getTransformGraph(),lambda n:n.__name__))
graph.graph_attr.update(dict(size=r'12.0, 12.0',fontsize=12))
transstr="""
Built-in Transforms
^^^^^^^^^^^^^^^^^^^
A number of coordinate systems are provided built into astropysics. Most of
these have pre-defined standard transformations. The built-in coordinate classes
with defined transformations are shown in the diagram below.
.. graphviz::
"""+graph.string().replace('\n','\n ')+postbuiltin
__doc__ = __doc__.replace('{transformdiagram}',transstr)
del to_agraph,relabel_nodes,graph
except ImportError:
#if networkx or pygraphviz isn't present, drop the diagram but add a warning that it's missing
warningstr = """
Builtin Coordinate System Transforms
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. warning::
A diagram showing the relationships between the pre-defined transformations
should be here, but this copy of the documentation was built without
`networkx <http://networkx.lanl.gov/>`_ and `pygraphviz
<http://networkx.lanl.gov/pygraphviz/>`_ available to build the diagram.
Please re-build this file after those packages are installed to see the
diagram.
"""+postbuiltin
__doc__ = __doc__.replace('{transformdiagram}',warningstr)
del warningstr
#<--------------------------Convenience Functions------------------------------>
def angular_string_to_dec(instr,hms=True,degrees=True):
"""
Convenience function to convert an angular coordinate string to a decimal
value.
:param hms:
If True, the coordinate will be assumed to be h:m:s, otherwise d:m:s.
This will be ignored if the coordinates are specified as ##h##m##s or
##d##m##s, or if the input is not in sexagesimal form.
:type hms: boolean
:param degrees:
If True, the output will be decimal degrees, otherwise radians.
:type degrees: boolean
:returns: Decimal value in radians or degrees
"""
ac = AngularCoordinate(instr)
if degrees:
return ac.degrees
else:
return ac.radians
def objects_to_coordinate_arrays(posobjs,coords='auto',degrees=True):
"""
converts a sequence of position objects into an array of coordinates.
`coords` determines the order of the output coordinates - it can be a
comma-separated list of coordinate names or if 'auto', it will be 'lat,long'
for all coordinate systems except for Equatorial, which will use 'ra,dec'.
If `degrees` is True, returned arrays are in degrees, otherwise radians.
"""
if coords=='auto':
coordnames = None
else:
coordnames = coords.split(',')
coords = []
if degrees:
for o in posobjs:
if coordnames is None:
if isinstance(o,EquatorialCoordinates):
coords.append((o.ra.d,o.dec.d))
else:
coords.append((o.lat.d,o.long.d))
else:
coords.append([getattr(o,c).d for c in coordnames])
else:
for o in posobjs:
if coordnames is None:
if isinstance(o,EquatorialCoordinates):
coords.append((o.ra.r,o.dec.r))
else:
coords.append((o.lat.r,o.long.r))
else:
coords.append([getattr(o,c).r for c in coordnames])
return np.array(coords).T | PypiClean |
/KnowledgeCore-2.8.10.tar.gz/KnowledgeCore-2.8.10/README.md | KnowledgeCore
==============

KnowledgeCore is a RDFlib-backed minimalistic knowledge base, initially designed
for robots (in particular human-robot interaction or multi-robot interaction).
It features full [ROS](https://www.ros.org) support.
It stores triples (like RDF/OWL triples), and provides an [API](doc/api.md)
accessible via a simple socket protocol or a [ROS wrapper](#ros-usage).
[pykb](https://github.com/severin-lemaignan/pykb) provides an idiomatic Python
binding over the socket interface, making it easy to integrate the knowledge base into your application.
A similar API wrapper exists for ROS as well (see example below).
It integrates with the [reasonable](https://github.com/gtfierro/reasonable) OWL2
RL reasoner to provide OWL2 semantics and fast knowledge materialisation.
Example
-------
This example uses the ROS API (see below), with some Pythonic syntatic sugar:
```python
from knowledge_core.api import KB
rospy.init_node("test_knowledge_base")
kb = KB()
def on_robot_entering_antonio_property(evt):
print("A robot entered Antonio's %s: %s" (evt[0]["place"], evt[0]["robot"]))
kb += "ari rdf:type Robot"
kb += ["antonio looksAt ari", "ari isIn kitchen"]
kb.subscribe(["?robot isIn ?place", "?place belongsTo antonio", "?robot rdf:type Robot"], onRobotEnteringAntonioProperty)
kb += "kitchen belongsTo antonio"
# try as well:
# kb -= "antonio looksAt ari" to remove facts
# kb["* rdf:type Robot"] to query the knowledge base
rospy.spin()
```
will print:
```
A robot entered Antonio's kitchen: ari
```
Installation
------------
**KnowledgeCore only supports Python 3**
### Prerequisite
`rdflib >= 6.0.0`:
```
$ pip install rdflib
```
For reasoning (optional):
```
$ pip install reasonable
```
### Installation
From `pypi`:
```
$ pip install knowledge_core
```
From source:
```
$ git clone https://github.com/severin-lemaignan/knowledge_core.git
$ cd knowledge_core
$ python setup.py install
$ knowledge_core
```
If using ROS, you can also use your regular catkin workflow.
Documentation
-------------
### General usage
**If you are a roboticist, [jump to ROS usage](#ros-usage)**
You can use `KnowledgeCore` either as a server, accessible from multiple
applications (clients), or in *embedded* mode (which does not require to start a
server process, but is limited to one single client). Note that the embedded
mode is only available for Python applications.
In both case, and if your application is written in Python, it is highly recommended
to use [pykb](https://github.com/severin-lemaignan/pykb) to interact the
knowledge base.
#### Server mode
To start the knowledge base as a server, simply type:
```
$ knowledge_core
```
(run `knowledge_core --help` for available options)
Then:
```python
import kb
with kb.KB() as kb:
#...
```
See usage examples on the [pykb](https://github.com/severin-lemaignan/pykb) page, or in the `KnowledgeCore` [unit-tests](testing).
#### Embedded mode
No need to start `KnowledgeCore`. Simply use the following code to start using the
knowledge base in your code:
```python
import kb
with kb.KB(embedded=True) as kb:
#...
```
#### Interacting with KnowledgeCore from other languages
- from C++: check [liboro](https://github.com/severin-lemaignan/liboro)
- from any other language: the communication with the server relies on a simply
socket-based text protocol. Feel free to get in touch if you need help to add
support for your favourite language!
#### How do I get that fancy image on top of the README?
Check [oro-view](https://github.com/severin-lemaignan/oro-view) ;-)
### ROS usage
**Please first read the general [API introduction](doc/api.md), as this applies to the ROS interface as well.**
To start the ROS node:
```
rosrun knowledge_core knowledge_core
```
**Note that, in general, you want to use the 'Pythonic' wrapper built on top of the low-level ROS topics/services API. See example above. This Pythonic interface follows the [`pykb`](https://gitlab/interaction/pykb/) API (except in a few corner cases that are not supported by the ROS interface).**
`knowledge_core` exposes two topics, `/kb/add_facts` and
`/kb/remove_facts`, to add/remove triples to the knowledge base. Both topics
expect a simple string with 3 tokens separated by spaces (if the object is a
literal string, use double quotes to escape it).
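For example, assuming these topics accept a plain `std_msgs/String` payload
(an assumption — verify the actual message type with `rostopic info
/kb/add_facts` on a running node), a fact could be published from Python like
this:

```python
import rospy
from std_msgs.msg import String

rospy.init_node("kb_publisher_example")
# assumes the topic type is std_msgs/String
pub = rospy.Publisher("/kb/add_facts", String, queue_size=1)
rospy.sleep(0.5)  # give the publisher time to connect before publishing
pub.publish(String(data="ari rdf:type Robot"))
```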
It also exposes the following services:
- `/kb/revise` to add/remove facts using a synchronous interface
- `/kb/query` to perform simple queries
- `/kb/sparql` to perform complex queries (full SPARQL end-point)
- `/kb/events` to subscribe to 'events' by providing a (set of) partially-bound
triples. Calling the service returns an event *id*. Subscribe then to
`/kb/events/<id>` to be notified every time a new instance/class matches the
provided pattern
- `/kb/manage` to manage the knowledge base (including e.g. clearing all the
facts)
Features
--------
### Server-Client or embedded
`KnowledgeCore` can be run as a stand-alone (socket) server, or directly embedded
in Python applications.
### Multi-models
`KnowledgeCore` is intended for dynamic environments, with possibly several
contexts/agents requiring separate knowledge models.
New models can be created at any time and each operation (like knowledge
addition/retraction/query) can operate on a specific subset of models.
Each model is also independently classified by the reasoner.
### Event system
`KnowledgeCore` provides a mechanism to *subscribe* to some conditions (like: an
instance of a given type is added to the knowledge base, some statement becomes
true, etc.) and get notified back.
### Reasoning
`KnowledgeCore` provides RDFS/OWL reasoning capabilities via the
[reasonable](https://github.com/gtfierro/reasonable) reasoner.
See [reasonable README](https://github.com/gtfierro/reasonable#owl-2-rules) for
the exact level of support of the different OWL2 RL rules.
### Transient knowledge
`KnowledgeCore` allows attaching 'lifespans' to statements: after a given duration,
they are automatically collected.
**[this functionality is currently disabled. Please open an issue if you need it
urgently]**
### Ontology walking
`KnowledgeCore` exposes several methods to explore the different ontological models
of the knowledge base. It is compatible with the visualization tool
[oro-view](https://github.com/severin-lemaignan/oro-view).
| PypiClean |
/EC_MS-0.7.5.tar.gz/EC_MS-0.7.5/src/EC_MS/Time_Response.py | from __future__ import division, print_function
import os
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import odeint
from . import Chem
from .Molecules import Molecule
from .Plotting import plot_operation
def fit_exponential(t, y, zero_time_axis=False):
"""Return (tao, y0, y1) for best fit of y = y0 + (y1-y0) * exp(-t/tao)
Args:
t (vector): time
y (vector): values
zero_time_axis (boolean): whether to subtract t[0] from t. False by default
"""
if zero_time_axis:
t = t - t[0] # zero time axis
tau_i = t[-1] / 10 # guess at time constant
# tau_i = t[-1] #often can't solve with this guess. A smaller tau helps.
y0_i = y[-1] # guess at approach value
y1_i = y[0] # guess at true initial value
pars_i = [tau_i, y0_i, y1_i]
def exp_fun(x, tau, y0, y1):
z = y0 + (y1 - y0) * np.exp(-x / tau)
# print([tau,y0,y1]) #for diagnosing curve_fit problems
return z
pars, pcov = curve_fit(exp_fun, t, y, p0=pars_i)
# pars = [tau, y0, y1]
return pars
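# A minimal usage sketch for fit_exponential (hypothetical data, not part of
# the original module): recover the time constant from a noisy decay.
# t = np.linspace(0, 10, 200)
# y = 2.0 + (5.0 - 2.0)*np.exp(-t/1.5) + 0.01*np.random.randn(200)
# tau, y0, y1 = fit_exponential(t, y)
# print(tau) # should come out close to 1.5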
def fit_step(t, y, tpulse=0, step="fall", ax=None, spec="r--", label=None, verbose=1):
"""
use step='rise' to fit onset and step='fall' to fit tail
assumes that data starts from the start of the pulse
"""
if verbose:
print("\n\nfunction 'fit_step' at your service!\n")
# zero time axis
t0 = t[0]
t = t - t0
print("t0 = " + str(t0))
if type(t) is list:
t = np.array(t) # 17B02
if step == "fall":
I_tail = np.array([I for (I, t_I) in enumerate(t) if tpulse < t_I])
# print(I_tail)
t_fit = t[I_tail] - tpulse
elif step == "rise":
if tpulse == 0:
tpulse = t[-1]
I_tail = np.array([I for (I, t_I) in enumerate(t) if t_I < tpulse])
t_fit = t[I_tail]
else:
print("use step='rise' to fit onset and step='fall' to fit tail")
pars = fit_exponential(t_fit, y[I_tail])
if ax:
tau = pars[0]
y0 = pars[1]
y1 = pars[2]
y_fit = y0 + (y1 - y0) * np.exp(-t_fit / tau)
t_fit = t_fit + t0 # put time axis back
if step == "fall":
t_fit = t_fit + tpulse
ax.plot(t_fit, y_fit, spec, label=label)
if label:
if label == "tau":
label = "tau = {0:5.2f} s".format(tau)
I_text = int(len(t_fit) / 2)
t_text = t_fit[I_text]
y_text = y_fit[I_text]
ax.text(t_text, y_text, label, color=spec[0])
if verbose:
print("tau = " + str(pars[0]) + " s")
print("\nfunction 'fit_step' finished!\n\n")
return pars
def stagnant_diffusion_ode(C, T, pars): # Note that C comes before T here!
"""
Scott's master p. 52 and appendix C. Z is the new X.
returns rate of change dC/dT of concentration profile for
non-dimensionalized stagnant sniffer diffusion problem.
C = C(X) where X goes from 0 (membrane) to 1 (electrode)
T is time non-dimensionalized on the diffusion scale t0 = L²/D
pars:
[0] alpha = h*L/D is the system parameter.
[1] J_fun returns the flux from the electrode as a function of T. The flux
scale J0 is used to define the concentration scale, C0 = J0*L/D
#modified 17B02 to enable carrier gas introduction of element using Cg
# pars as a list rather than a dict is slightly faster (see fig06.out),
# but I like the dict so that I can remember what's what.
"""
dZ = pars["dZ"] # assigning this here replaces two lookups with one.
C_N = C[-1] + pars["J_fun"](T) * dZ # boundary condition dC/dZ = J(T) at electrode
C_ = (
C[0] - pars["alpha"] * (C[0] - pars["Cg"]) * dZ
) # boundary condition dC/dZ = -alpha*(C-Cg) at membrane
C_up = np.append(C[1:], C_N)
C_down = np.append(C_, C[:-1])
d2CdZ2 = (C_up - 2 * C + C_down) * pars["1/dZ**2"] # second derivative of C wrt Z
dCdT = d2CdZ2 # Fick's second law
return dCdT
def solve_stagnant(
alpha=1, J_fun=None, Cg=1, Tspan=[0, 10], startstate="zero", N=30, flux=1, verbose=1
):
"""solves the stagnant sniffer partial differential equations.
pars[0][0] is alpha = h*L/D is the system parameter.
pars[0][1] is J_fun. Returns the flux from the electrode as a function of T.
Tspan is [Tstart, Tfinish] on the diffusion timescale t0 = L²/D
C0 = C0(X) is start concentration profile. If size(C0) = 1, assumes a
uniform concentration.
N is descretization (read from C0)
flux = 0 to return entire concentration profile (on C0 = J0*L/D scale)
flux = 1 to return flux through membrane (on J0 scale)
"""
if verbose:
print("\n\nfunction 'solve_stagnant' at your service!\n")
if startstate == "zero":
C0 = np.zeros([N])
elif startstate == "steady":
C0 = 1 / alpha + np.arange(N) / (N + 1) # Scott's MSc thesis, p. 53 (np.arange keeps the array length at N)
elif startstate == "saturated":
C0 = np.ones([N]) * Cg
elif np.size(startstate) == 1:
C0 = np.ones([N]) * startstate
else:
C0 = startstate
N = np.size(C0)
if np.size(Tspan) == 2:
Tspan = np.linspace(Tspan[0], Tspan[1], 100)
dZ = 1 / N # N+1? N-1? I can never remember what's most 'correct'
pars = {"alpha": alpha, "J_fun": J_fun, "Cg": Cg, "dZ": dZ, "1/dZ**2": 1 / dZ ** 2}
CC = odeint(stagnant_diffusion_ode, C0, Tspan, args=(pars,))
# 16J18_02h10: this crashes the kernel, I don't know why... 18h56 found it! c before t!
J = (
CC[:, 1] - CC[:, 0]
) * N # (positive) J = dC/dZ with dC = C0 - C_ and dZ = 1 / N
# J is a function of T
if verbose:
print("solution shape: " + str(np.shape(CC)))
print("\nfunction 'solve_stagnant' finished!\n\n")
if flux:
return Tspan, J
else:
return Tspan, CC
def stagnant_pulse(*args, **kwargs):
print(
"\n\n'stagnant_pulse' has been renamed 'stagnant_operator'. Remember that next time!"
)
return stagnant_operator(*args, **kwargs)
def stagnant_operator(
tj=None,
tpulse=10,
tspan=None,
j_el=-1,
L=100e-6,
A=0.196e-4,
q0=1.5e15 / Chem.NA,
p_m=1e5,
mol="H2",
p_gas=0,
normalize=False,
D=None,
kH=None,
n_el=None,
Temp=None,
unit="pmol/s",
flux_direction="out",
verbose=True,
ax=None,
plot_type=None,
startstate="zero",
N=30,
colormap="plasma",
aspect="auto",
):
"""
Models a pulse of current towards a specified product in our EC-MS setup.
Theory in chapter 2 of Scott's masters thesis.
all arguments are in pure SI units. The electrode output can either be given
as a steady-state square pulse of electrical current (tpulse, j_el, n_el),
or as a measured current (tj[1]) as a function of time (tj[0])
#tj[1] should have units A/m^2. 1 mA/cm^2 is 10 A/m^2
#17B02: p_gas is the partial pressure of the analyte in the carrier gas.
# this enables, e.g., CO depletion modelling.
"""
if verbose:
print("\n\nfunction 'stagnant_operator' at your service!\n")
if type(mol) is str:
mol = Molecule(mol)
if Temp is not None:
mol.set_temperature(Temp)
else:
Temp = 298.15 # standard temperature in K
if D is None:
D = mol.D
if kH is None:
kH = mol.kH
if n_el is None and not normalize:
n_el = mol.n_el
if tspan is None:
if tj is None:
tspan = [-0.1 * tpulse, 1.2 * tpulse]
else:
tspan = [tj[0][0], tj[0][-1]]
h = kH * Chem.R * Temp * q0 / (p_m * A) # mass transfer coefficient
alpha = L * h / D # system parameter
# non-dimensional scales:
t0 = L ** 2 / D
if tj is None:
if normalize:
j0 = 1
else:
j0 = j_el / (n_el * Chem.Far)
else:
t = tj[0]
if normalize:
j0 = 1
j = tj[1] / np.max(np.abs(tj[1]))
else:
j = tj[1] / (n_el * Chem.Far) # A/m^2 --> mol/(m^2*s)
j0 = max(np.abs(j))
c0 = j0 * L / D
tau = L ** 2 / (2 * D) + L / h
# from the approximate analytical solution, Scott's thesis appendix D
Tpulse = tpulse / t0
Tspan = (
np.linspace(tspan[0], tspan[1], 1000) / t0
) # why do I give so many time points?
if tj is None:
def J_fun(T):
if T < 0:
return 0
if T < Tpulse:
return 1
return 0
else:
T_in = t / t0
J_in = j / max(np.abs(j))
# print('max(J_in) = ' + str(max(J_in)))
def J_fun(T):
if T < T_in[0]: # assume no current outside of the input tj data
return 0
if T < T_in[-1]:
return np.interp(T, T_in, J_in)
return 0
c_gas = p_gas / (Chem.R * Temp)
cg = c_gas / kH # concentration analyte in equilibrium with carrier gas, 17B02
Cg = (
cg / c0
) # non-dimensionalized concentration analyte at equilibrium with carrier gas, 17B02
# pars = ([alpha, J_fun, Cg],) #odeint needs the tuple. 17A12: Why ?!
[T, CC] = solve_stagnant(
alpha=alpha,
J_fun=J_fun,
Cg=Cg,
Tspan=Tspan,
startstate=startstate,
flux=False,
N=N,
)
cc = CC * c0
t = T * t0
j = h * (cc[:, 0] - cg) # mass transport at the membrane
# j1 = D * (cc[:,1] - cc[:,0])
# fick's first law at the membrane gives the same j :)
if verbose:
print(
"q0 = "
+ str(q0)
+ " mol/s, h = "
+ str(h)
+ " m/s, alpha = "
+ str(alpha)
+ ", j0 = "
+ str(j0)
+ " mol/(m^2*s), max(j)/j0 = "
+ str(max(j) / j0)
+ ", t0 = "
+ str(t0)
+ " s, c0 = "
+ str(c0)
+ " mol/m^3"
+ ", tau (analytical) = "
+ str(tau)
+ " s"
+ ", cg = "
+ str(cg)
+ " mM"
)
# get ready to plot:
N = np.shape(cc)[1]
z = np.arange(N) / (N - 1) * L
# this will only be used for heatmap, so it's okay if dx isn't quite right.
if "cm^2" not in unit:
j = j * A
if unit[0] == "u":
j = j * 1e6
elif unit[0] == "n":
j = j * 1e9
elif unit[0] == "p":
j = j * 1e12
if flux_direction == "in":
j = -j
if normalize:
s_int = np.trapz(j, t)
if verbose:
print("normalizing from area = " + str(s_int))
j = j / s_int
# plotting was moved on 17G30 some legacy code here:
if plot_type is not None and ax is not None:
print("We recommend you plot seperately, using the function 'plot_operation'.")
axes = plot_operation(
cc=cc,
t=t,
z=z,
j=j,
ax=ax,
plot_type=plot_type,
colormap=colormap,
aspect=aspect,
verbose=verbose,
)
if verbose:
print("\nfunction 'stagnant_operator' finished!\n\n")
return t, j, axes
results = {"t": t, "z": z, "j": j, "cc": cc, "dimensions": "tz"}
return results
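# Minimal usage sketch (illustrative parameter values only, not a validated
# setup): model the MS flux response to a 10 s pulse of -10 A/m^2 (-1 mA/cm^2)
# producing H2 in the stagnant thin-layer cell.
# results = stagnant_operator(tpulse=10, j_el=-10, mol="H2", tspan=[-2, 30])
# t, flux = results["t"], results["j"] # flux in pmol/s by default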
def flow_diffusion_ode(C, X, pars):
"""
Scott's master, p. 60. X is the new Y and Z is the new X.
"""
C_N = C[-1]
C_ = C[0] - pars["alpha"] * (C[0] - pars["Cg"]) * pars["dZ"]
C_up = np.append(C[1:], C_N)
C_down = np.append(C_, C[:-1])
d2CdZ2 = (C_up - 2 * C + C_down) * pars["1/dZ**2"]
# I assume multiplication to be faster than division
dCdX = d2CdZ2 * pars["1/beta"]
return dCdX
def solve_flow(
alpha=1, beta=1, Cg=0, N=30, flux=False, verbose=True, Xspan=[0, 1], C0="uniform"
):
"""
This solves the flow ODE and returns either flux through membrane as a
function of position (Xspan and J),
or the concentration profile (Xspan and CC)
It assumes steady state. I think I can use this and then convolute if I
need time dependence.
"""
if verbose:
print("\n\nfunction 'solve_flow' at your service!\n")
if C0 == "uniform":
C0 = np.array([1] * N)
# nothing else really makes sense, since c0 defines a scale.
if np.size(Xspan) == 2:
Xspan = np.linspace(Xspan[0], Xspan[1], 100)
dZ = 1 / N # N+1? N-1? I can never remember what's most 'correct'
pars = {
"alpha": alpha,
"1/beta": 1 / beta,
"Cg": Cg,
"dZ": dZ,
"1/dZ**2": 1 / dZ ** 2,
}
CC = odeint(flow_diffusion_ode, C0, Xspan, args=(pars,))
# 16J18_02h10: this crashes the kernel, I don't know why... 18h56 found it! c before t!
J = (
CC[:, 1] - CC[:, 0]
) * N # (positive) J = dC/dZ with dC = C0 - C_ and dZ = 1 / N
# J is a function of X.
if verbose:
print("solution shape: " + str(np.shape(CC)))
print("\nfunction 'solve_flow' finished!\n\n")
if flux:
return Xspan, J
else:
return Xspan, CC
pass
def flow_operator(
mode="steady", # in steady mode it's not really an operator.
system="chip", #
A_el=0.196e-4,
A=0.196e-4,
q0=1.5e15 / Chem.NA,
Temp=None, # universal pars
L=100e-6,
w=5e-3,
w2=5e-3,
F=1e-9, # geometry pars #w and w2 changed from 0.5e-3 to 5e-3 on 17K28.
c0=None,
j0=None,
j_el=-1, # inlet flow pars
p_m=1e5, # chip pars
phi=0.5,
dp=20e-9,
Lp=100e-6, # DEMS pars
mol="H2",
D=None,
kH=None,
n_el=None,
M=None, # mol pars
p_gas=0,
normalize=False,
unit="pmol/s",
flux_direction="out",
N=100, # solver pars
verbose=True,
):
"""
Follows the recipe in Scott's MSc, page 61, for calculating collection
efficiency in a flow system by solving a differential equation. This
can be used to compare different types of EC-MS
"""
if verbose:
print("\n\nfunction 'flow_operator' at your service!\n")
if type(mol) is str:
mol = Molecule(mol)
if Temp is not None:
mol.set_temperature(Temp)
else:
Temp = 298.15 # standard temperature in K
if D is None:
D = mol.D # diffusion constant in electrolyte / [m^2/s]
if kH is None:
kH = mol.kH # dimensionless henry's law constant
if M is None:
M = Chem.Mass(mol.name) * 1e-3 # molar mass / [kg/mol]
# print(c0)
if n_el is None and c0 is None and not normalize:
n_el = mol.n_el
if c0 is None:
if j0 is None:
j0 = j_el * A_el / (n_el * Chem.Far)
c0 = j0 / F
# concentration is production rate over flow rate: [mol/s] / [m^3/s)] = [mol/m^3] )
if system == "chip":
h = kH * Chem.R * Temp * q0 / (p_m * A) # Scott's MSc, page 50
elif system == "DEMS":
h = (
kH * phi * dp / (3 * Lp) * np.sqrt(8 / np.pi * Chem.R * Temp / M)
) # Scott's MSc, page 49
v0 = F / (L * w2)
alpha = h * L / D
beta = v0 * L ** 2 / (D * w) # There is a mistake in Scott's MSc page60!
# There I got the non-dimensionalization wrong, and wrote beta = v0*w**2/(D*L)
# in fact, beta = v0*L**2/(D*w)
Xspan = np.linspace(0, 1, 1000)
X, CC = solve_flow(alpha=alpha, beta=beta, Xspan=Xspan, N=N, verbose=verbose)
x = X * w
cc = CC * c0
j = cc[:, 0] * h # mol/m^3 * m/s = mol/(m^2*s)
Z = np.linspace(0, 1, N)
z = Z * L
eta_m = 1 - np.trapz(CC[-1, :], Z) # Scott's MSc, page 61
eta_m_check = (
w2 * np.trapz(j, x) / (c0 * F)
) # m*mol/(m^2*s)*m / ((mol/m^3)*m^3/s) = 1
qm = c0 * F * eta_m
if verbose:
print("portion not escaped = " + str(eta_m))
print("portion collected = " + str(eta_m_check) + "\n\n")
if system == "chip":
eta_v = 1
elif system == "DEMS":
p_w = Chem.p_vap(mol="H2O", T=Temp, unit="Pa")
M_w = Chem.Mass("H2O") * 1e-3
j_w = (
A
* p_w
/ (Chem.R * Temp)
* phi
* dp
/ (3 * Lp)
* np.sqrt(8 / np.pi * Chem.R * Temp / M_w)
)
eta_v = q0 / (qm + j_w) # fixed 17H10
eta = eta_m * eta_v
if verbose:
print(
"q0 = "
+ str(q0)
+ " mol/s, h = "
+ str(h)
+ " m/s, alpha = "
+ str(alpha)
+ ", j0 = "
+ str(j0)
+ " mol/s, max(c)/c0 = "
+ str(np.max(np.max(cc)) / c0)
+ ", kH = "
+ str(kH)
+ ", eta = "
+ str(eta)
+ ", mol = "
+ str(mol.name)
+ ", system = "
+ str(system)
+ ""
+ ", beta = "
+ str(beta)
+ ""
+ ", v0 = "
+ str(v0)
)
if verbose:
print("\nfunction 'flow_operator' at finished!\n\n")
results = {
"x": x,
"z": z,
"j": j,
"cc": cc,
"eta_m": eta_m,
"eta_v": eta_v,
"eta": eta,
"dimensions": "xz",
}
return results
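# Hedged usage sketch (defaults for geometry and flow, for illustration only):
# compare the steady-state collection efficiency of the two supported systems.
# res_chip = flow_operator(system="chip", mol="H2", verbose=False)
# res_dems = flow_operator(system="DEMS", mol="H2", verbose=False)
# print(res_chip["eta"], res_dems["eta"])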
def delta_response(
L=100e-6,
q0=1e15 / Chem.NA,
mol="H2",
D=None,
kH=None,
n_el=None,
A=0.196e-4,
p_m=1e5,
Temp=298.15,
verbose=True,
tspan="auto",
N_t=1000,
):
"""
Returns the output when a stagnant_operator operates on a delta function.
There's probably a much smarter way to do it, but for now I'll just do
triangle pulse of width tau/250
"""
if D is None or kH is None:
if type(mol) is str:
mol = Molecule(mol)
if D is None:
D = mol.D
if kH is None:
kH = mol.kH
if verbose:
print("calculating a delta function response.")
h = kH * Chem.R * Temp * q0 / (p_m * A) # mass transfer coefficient
tau = L / h + L ** 2 / (2 * D)
if tspan == "auto":
tspan = [0, 4 * tau]
t = np.linspace(tspan[0], tspan[1], N_t)
j = np.append(np.array([1]), np.zeros(N_t - 1))
tj = [t, j]
return stagnant_operator(
tj=tj,
normalize=True,
tspan=tspan,
L=L,
A=A,
q0=q0,
p_m=p_m,
D=D,
kH=kH,
n_el=n_el,
Temp=Temp,
verbose=True,
plot_type=None,
)
if __name__ == "__main__":
pass
## | PypiClean |
/CmlArabicReaderThird-0.1-py3-none-any.whl/ThirdcamelArabicReader/camel_tools/morphology/cachetools/func.py |
__all__ = ("fifo_cache", "lfu_cache", "lru_cache", "mru_cache", "rr_cache", "ttl_cache")
import collections
import functools
import math
import random
import time
try:
from threading import RLock
except ImportError: # pragma: no cover
from dummy_threading import RLock
from . import FIFOCache, LFUCache, LRUCache, MRUCache, RRCache, TTLCache
from . import keys
_CacheInfo = collections.namedtuple(
"CacheInfo", ["hits", "misses", "maxsize", "currsize"]
)
class _UnboundCache(dict):
@property
def maxsize(self):
return None
@property
def currsize(self):
return len(self)
class _UnboundTTLCache(TTLCache):
def __init__(self, ttl, timer):
TTLCache.__init__(self, math.inf, ttl, timer)
@property
def maxsize(self):
return None
def _cache(cache, typed):
maxsize = cache.maxsize
def decorator(func):
key = keys.typedkey if typed else keys.hashkey
lock = RLock()
stats = [0, 0]
def wrapper(*args, **kwargs):
k = key(*args, **kwargs)
with lock:
try:
v = cache[k]
stats[0] += 1
return v
except KeyError:
stats[1] += 1
v = func(*args, **kwargs)
# in case of a race, prefer the item already in the cache
try:
with lock:
return cache.setdefault(k, v)
except ValueError:
return v # value too large
def cache_info():
with lock:
hits, misses = stats
maxsize = cache.maxsize
currsize = cache.currsize
return _CacheInfo(hits, misses, maxsize, currsize)
def cache_clear():
with lock:
try:
cache.clear()
finally:
stats[:] = [0, 0]
wrapper.cache_info = cache_info
wrapper.cache_clear = cache_clear
wrapper.cache_parameters = lambda: {"maxsize": maxsize, "typed": typed}
functools.update_wrapper(wrapper, func)
return wrapper
return decorator
def fifo_cache(maxsize=128, typed=False):
"""Decorator to wrap a function with a memoizing callable that saves
up to `maxsize` results based on a First In First Out (FIFO)
algorithm.
"""
if maxsize is None:
return _cache(_UnboundCache(), typed)
elif callable(maxsize):
return _cache(FIFOCache(128), typed)(maxsize)
else:
return _cache(FIFOCache(maxsize), typed)
def lfu_cache(maxsize=128, typed=False):
"""Decorator to wrap a function with a memoizing callable that saves
up to `maxsize` results based on a Least Frequently Used (LFU)
algorithm.
"""
if maxsize is None:
return _cache(_UnboundCache(), typed)
elif callable(maxsize):
return _cache(LFUCache(128), typed)(maxsize)
else:
return _cache(LFUCache(maxsize), typed)
def lru_cache(maxsize=128, typed=False):
"""Decorator to wrap a function with a memoizing callable that saves
up to `maxsize` results based on a Least Recently Used (LRU)
algorithm.
"""
if maxsize is None:
return _cache(_UnboundCache(), typed)
elif callable(maxsize):
return _cache(LRUCache(128), typed)(maxsize)
else:
return _cache(LRUCache(maxsize), typed)
def mru_cache(maxsize=128, typed=False):
"""Decorator to wrap a function with a memoizing callable that saves
up to `maxsize` results based on a Most Recently Used (MRU)
algorithm.
"""
if maxsize is None:
return _cache(_UnboundCache(), typed)
elif callable(maxsize):
return _cache(MRUCache(128), typed)(maxsize)
else:
return _cache(MRUCache(maxsize), typed)
def rr_cache(maxsize=128, choice=random.choice, typed=False):
"""Decorator to wrap a function with a memoizing callable that saves
up to `maxsize` results based on a Random Replacement (RR)
algorithm.
"""
if maxsize is None:
return _cache(_UnboundCache(), typed)
elif callable(maxsize):
return _cache(RRCache(128, choice), typed)(maxsize)
else:
return _cache(RRCache(maxsize, choice), typed)
def ttl_cache(maxsize=128, ttl=600, timer=time.monotonic, typed=False):
"""Decorator to wrap a function with a memoizing callable that saves
up to `maxsize` results based on a Least Recently Used (LRU)
algorithm with a per-item time-to-live (TTL) value.
"""
if maxsize is None:
return _cache(_UnboundTTLCache(ttl, timer), typed)
elif callable(maxsize):
return _cache(TTLCache(128, ttl, timer), typed)(maxsize)
else:
return _cache(TTLCache(maxsize, ttl, timer), typed) | PypiClean |
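# Illustrative usage of the decorators above (example only, not part of the
# vendored module):
# @lru_cache(maxsize=None) # unbounded memoization
# def fib(n):
# return n if n < 2 else fib(n - 1) + fib(n - 2)
#
# @ttl_cache(maxsize=64, ttl=5)
# def lookup(key): # results are cached and expire after 5 seconds
# return expensive_backend_call(key) # hypothetical backend function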
/IdracRedfishSupport-0.0.8.tar.gz/IdracRedfishSupport-0.0.8/GetIdracServerSlotInformationREDFISH.py |
import argparse
import getpass
import json
import logging
import os
import re
import requests
import sys
import time
import warnings
from datetime import datetime
from pprint import pprint
warnings.filterwarnings("ignore")
parser=argparse.ArgumentParser(description="Python script using Redfish API with OEM extension to get server slot information. Slot information includes: Fan, CPU, DIMM, PCI, Backplane, PSU")
parser.add_argument('-ip',help='iDRAC IP address', required=False)
parser.add_argument('-u', help='iDRAC username', required=False)
parser.add_argument('-p', help='iDRAC password. If you do not pass in argument -p, script will prompt to enter user password which will not be echoed to the screen.', required=False)
parser.add_argument('-x', help='Pass in X-Auth session token for executing Redfish calls. All Redfish calls will use X-Auth token instead of username/password', required=False)
parser.add_argument('--ssl', help='SSL cert verification for all Redfish calls, pass in value \"true\" or \"false\". By default, this argument is not required and script ignores validating SSL cert for all Redfish calls.', required=False)
parser.add_argument('--script-examples', help='Get executing script examples', action="store_true", dest="script_examples", required=False)
parser.add_argument('--get-text', help='Get all server slot information and echo output to the terminal, also redirect output to text file', dest="get_text", action="store_true", required=False)
parser.add_argument('--get-xml', help='Get all server slot information and redirect to XML file. NOTE: This argument will not echo slot information to the terminal.', dest="get_xml", action="store_true", required=False)
args=vars(parser.parse_args())
logging.basicConfig(format='%(message)s', stream=sys.stdout, level=logging.INFO)
def script_examples():
print("""\n- GetIdracServerSlotInformationREDFISH.py -ip 192.168.0.120 -u root -p calvin --get-text, this example will get slot information for all server devices and redirect output to text file.
\n- GetIdracServerSlotInformationREDFISH.py -ip 192.168.0.120 -u root -p calvin --get-xml, this example will get slot information for all server devices and redirect output to XML file.""")
sys.exit(0)
def check_supported_idrac_version():
if args["x"]:
response = requests.get('https://%s/redfish/v1/Dell/Systems/System.Embedded.1/DellSlotCollection' % idrac_ip, verify=verify_cert, headers={'X-Auth-Token': args["x"]})
else:
response = requests.get('https://%s/redfish/v1/Dell/Systems/System.Embedded.1/DellSlotCollection' % idrac_ip, verify=verify_cert,auth=(idrac_username, idrac_password))
data = response.json()
if response.status_code == 401:
logging.warning("\n- WARNING, status code %s returned. Incorrect iDRAC username/password or invalid privilege detected." % response.status_code)
sys.exit(0)
if response.status_code != 200:
logging.warning("\n- WARNING, iDRAC version installed does not support this feature using Redfish API")
sys.exit(0)
def get_server_slot_info():
try:
os.remove(idrac_ip + "_server_slot_info.txt")
except:
pass
open_file = open("%s_server_slot_info.txt" % idrac_ip,"w")
get_datetime = datetime.now()
current_date_time = "- Data collection timestamp: %s-%s-%s %s:%s:%s\n" % (get_datetime.month, get_datetime.day, get_datetime.year, get_datetime.hour, get_datetime.minute, get_datetime.second)
open_file.writelines(current_date_time)
open_file.writelines("\n\n")
if args["x"]:
response = requests.get('https://%s/redfish/v1/Dell/Systems/System.Embedded.1/DellSlotCollection' % idrac_ip, verify=verify_cert, headers={'X-Auth-Token': args["x"]})
else:
response = requests.get('https://%s/redfish/v1/Dell/Systems/System.Embedded.1/DellSlotCollection' % idrac_ip, verify=verify_cert,auth=(idrac_username, idrac_password))
data = response.json()
if response.status_code != 200:
logging.error("\n- FAIL, GET request failed, status code %s returned. Detailed error results: \n%s" % (response.status_code,data))
sys.exit(0)
if data['Members'] == []:
logging.error("- FAIL, no data detected for Members property. Manually execute GET on URI 'https://%s/redfish/v1/Dell/Systems/System.Embedded.1/DellSlotCollection' in browser to check. If no data detected, reboot server and run Collecting Inventory to refresh the configuration database for iDRAC, try GET again." % idrac_ip)
sys.exit(0)
for i in data['Members']:
pprint(i), print("\n")
for ii in i.items():
server_slot_entry = ("%s: %s" % (ii[0],ii[1]))
open_file.writelines("%s\n" % server_slot_entry)
open_file.writelines("\n")
number_list = [i for i in range (1,100001) if i % 50 == 0]
for seq in number_list:
if args["x"]:
response = requests.get('https://%s/redfish/v1/Dell/Systems/System.Embedded.1/DellSlotCollection?$skip=%s' % (idrac_ip, seq), verify=verify_cert, headers={'X-Auth-Token': args["x"]})
else:
response = requests.get('https://%s/redfish/v1/Dell/Systems/System.Embedded.1/DellSlotCollection?$skip=%s' % (idrac_ip, seq), verify=verify_cert,auth=(idrac_username, idrac_password))
data = response.json()
if response.status_code != 200:
if "query parameter $skip is out of range" in data["error"]["@Message.ExtendedInfo"][0]["Message"]:
open_file.close()
logging.info("\n- INFO, iDRAC Server Slot Information also captured in \"%s_server_slot_info.txt\" file" % idrac_ip)
sys.exit(0)
else:
logging.error("\n- FAIL, GET request failed using skip query parameter, status code %s returned. Detailed error results: \n%s" % (response.status_code,data))
sys.exit(0)
if "Members" not in data or data["Members"] == [] or response.status_code == 400:
break
for i in data['Members']:
pprint(i), print("\n")
for ii in i.items():
server_slot_entry = ("%s: %s" % (ii[0],ii[1]))
open_file.writelines("%s\n" % server_slot_entry)
open_file.writelines("\n")
logging.info("\n- INFO, iDRAC Server Slot Information also captured in \"%s_server_slot_info.txt\" file" % idrac_ip)
open_file.close()
def get_server_slot_info_xml():
logging.info("\n- INFO, collecting server slot information and converting to XML format, copy to XML file")
try:
os.remove(idrac_ip+"_server_slot_info.xml")
except:
pass
open_file = open("%s_server_slot_info.xml" % idrac_ip,"a")
open_file.writelines("<CIM>\n")
if args["x"]:
response = requests.get('https://%s/redfish/v1/Dell/Systems/System.Embedded.1/DellSlotCollection' % idrac_ip, verify=verify_cert, headers={'X-Auth-Token': args["x"]})
else:
response = requests.get('https://%s/redfish/v1/Dell/Systems/System.Embedded.1/DellSlotCollection' % idrac_ip, verify=verify_cert,auth=(idrac_username, idrac_password))
data = response.json()
if response.status_code != 200:
logging.error("\n- FAIL, GET request failed, status code %s returned. Detailed error results: \n%s" % (response.status_code,data))
sys.exit(0)
if data['Members'] == []:
logging.error("- FAIL, no data detected for Members property. Manually execute GET on URI 'https://%s/redfish/v1/Dell/Systems/System.Embedded.1/DellSlotCollection' in browser to check. if no data detected, reboot server and run Collecting Inventory to refresh the configuration database for iDRAC" % idrac_ip)
sys.exit(0)
for i in data['Members']:
create_dict = {}
for ii in i.items():
if ii[0] == "Id":
create_dict[ii[0]] = str(ii[1])
elif ii[0] == "EmptySlot":
create_dict[ii[0]] = str(ii[1])
elif ii[0] == "NumberDescription":
if ii[1] == "":
create_dict["Slot Number"] = "NA"
else:
create_dict["Slot Number"] = str(ii[1])
create_string = "<VALUE.NAMEDINSTANCE>\n<INSTANCENAME DEVICENAME=\""+create_dict["Id"]+"\">\n<KEYBINDING PROPERTY=\"Slot Number\">\n<VALUE>"+create_dict["Slot Number"]+"</VALUE>\n</KEYBINDING>\n</INSTANCENAME>\n<PROPERTY PROPERTY=\"EmptySlot\">\n<VALUE>"+create_dict["EmptySlot"]+"</VALUE>\n</PROPERTY>\n</VALUE.NAMEDINSTANCE>"
open_file.writelines(create_string)
number_list = [i for i in range (1,100001) if i % 50 == 0]
for seq in number_list:
if args["x"]:
response = requests.get('https://%s/redfish/v1/Dell/Systems/System.Embedded.1/DellSlotCollection?$skip=%s' % (idrac_ip, seq), verify=verify_cert, headers={'X-Auth-Token': args["x"]})
else:
response = requests.get('https://%s/redfish/v1/Dell/Systems/System.Embedded.1/DellSlotCollection?$skip=%s' % (idrac_ip, seq), verify=verify_cert,auth=(idrac_username, idrac_password))
data = response.json()
if response.status_code != 200:
if "query parameter $skip is out of range" in data["error"]["@Message.ExtendedInfo"][0]["Message"]:
open_file.writelines("\n</CIM>")
open_file.close()
logging.info("\n- PASS, iDRAC Server Slot Information captured in \"%s_server_slot_info.xml\" file" % idrac_ip)
sys.exit(0)
else:
logging.error("\n- FAIL, GET request failed using skip query parameter, status code %s returned. Detailed error results: \n%s" % (response.status_code,data))
sys.exit(0)
if "Members" not in data or data["Members"] == [] or response.status_code == 400:
break
if data['Members'] == []:
logging.error("- FAIL, no data detected for Members property. Manually execute GET on URI 'https://%s/redfish/v1/Dell/Systems/System.Embedded.1/DellSlotCollection' in browser to check. If no data detected, reboot server and run Collecting Inventory to refresh the configuration database for iDRAC, try GET again." % idrac_ip)
sys.exit(0)
for i in data['Members']:
create_dict = {}
for ii in i.items():
if ii[0] == "Id":
create_dict[ii[0]] = str(ii[1])
elif ii[0] == "EmptySlot":
create_dict[ii[0]] = str(ii[1])
elif ii[0] == "NumberDescription":
create_dict["Slot Number"] = str(ii[1])
create_string = "<VALUE.NAMEDINSTANCE>\n<INSTANCENAME DEVICENAME=\""+create_dict["Id"]+"\">\n<KEYBINDING PROPERTY=\"Slot Number\">\n<VALUE>"+create_dict["Slot Number"]+"</VALUE>\n</KEYBINDING>\n</INSTANCENAME>\n<PROPERTY PROPERTY=\"EmptySlot\">\n<VALUE>"+create_dict["EmptySlot"]+"</VALUE>\n</PROPERTY>\n</VALUE.NAMEDINSTANCE>"
open_file.writelines(create_string)
logging.info("\n- INFO, iDRAC Server Slot Information captured in \"%s_server_slot_info.xml\" file" % idrac_ip)
open_file.writelines("\n</CIM>")
open_file.close()
if __name__ == "__main__":
if args["script_examples"]:
script_examples()
if args["ip"] or args["ssl"] or args["u"] or args["p"] or args["x"]:
idrac_ip = args["ip"]
idrac_username = args["u"]
if args["p"]:
idrac_password = args["p"]
if not args["p"] and not args["x"] and args["u"]:
idrac_password = getpass.getpass("\n- Argument -p not detected, pass in iDRAC user %s password: " % args["u"])
if args["ssl"]:
if args["ssl"].lower() == "true":
verify_cert = True
elif args["ssl"].lower() == "false":
verify_cert = False
else:
verify_cert = False
else:
verify_cert = False
check_supported_idrac_version()
else:
logging.error("\n- FAIL, invalid argument values or not all required parameters passed in. See help text or argument --script-examples for more details.")
sys.exit(0)
if args["get_text"]:
get_server_slot_info()
elif args["get_xml"]:
get_server_slot_info_xml()
else:
logging.error("\n- FAIL, invalid argument values or not all required parameters passed in. See help text or argument --script-examples for more details.") | PypiClean |
/DeSide-1.0.2.tar.gz/DeSide-1.0.2/docs/datasets.md
Datasets
========
Datasets used in DeSide
***
## scRNA-seq datasets
| Dataset ID | Journal | DOI | Publish Date | Reported cells (total) | Organism | Tissue | Data location | Sequencing method | #patients |
|-------------|----------------------|-----------------------------|--------------|------------------|------------------------|----------------------------------|-----------------------------------|-------------------------|-----------|
| hnscc_cillo_01 | Immunity | 10.1016/j.immuni.2019.11.014 | 20200107 | 131,224 | Human | Head and Neck Cancer (HNSC) | GSE139324 | 10x Single Cell 3' v2 | 26 |
| pdac_pengj_02 | Cell Res | 10.1038/s41422-019-0195-y | 20190704 | 57,530 | Human | Pancreatic Ductal Adenocarcinoma (PDAC)| [Link](https://bigd.big.ac.cn/gsa/browse/CRA001160_) | 10x Single Cell 3' v2 | 22 |
| hnscc_puram_03 | Cell | 10.1016/j.cell.2017.10.044 | 20171130 | 5,902 | Human | Head and Neck Cancer (HNSC) | GSE103322 | Smart-seq2 | 16 |
| pdac_steele_04 | Nat Cancer | 10.1038/s43018-020-00121-4 | 20201026 | 124,898 | Human | Pancreatic Ductal Adenocarcinoma (PDAC)| GSE155698 | 10x Single Cell 3' v2 | 15 |
| luad_kim_05 | Nat Commun | 10.1038/s41467-020-16164-1 | 20200508 | 208,506 | Human | Lung Adenocarcinoma (LUAD) | GSE131907 | 10x Single Cell 3' v2 | 13 |
| nsclc_guo_06 | Nature Medicine | 10.1038/s41591-018-0045-3 | 20180625 | 12,346 | Human | Non-Small-Cell Lung Cancer (NSCLC) | GSE99254 | Smart-Seq2 | 13 |
| pan_cancer_07 | Nat Genet | 10.1038/s41588-020-00726-6 | 20201030 | 53,513 | Human | Cancer cell lines | GSE157220 | Illumina NextSeq 500 | - |
- The number of **reported cells** may include cells that do not originate from solid tumors; such cells were removed during dataset integration.
## Merged datasets and Synthetic datasets
| Dataset name | #samples | Sampling method | Filtering | #cell types | #genes | Input dataset | GEPs <br/>(type, format) | Dataset type | Notation |
|:--------------------------------------:|----------|-----------------|-----------|-------------|--------|--------------------------------|:-------------------------------:|:-----------------------------:|:----------:|
| TCGA | 7,699 | - | - | - | 19,712 | - | MCT, `TPM` | Downloaded from TCGA | DA |
| merged_7_sc_datasets | 135,049 | - | - | 11 | 12,114 | 7 collected scRNA-seq datasets | Single cell, <br/>`log2(TPM+1)` | Raw dataset from scRNA-seq | S0 |
| SCT_POS_N10K | 110,000 | n_base=100 | - | 11 | 12,114 | S0 | SCT, `log2(TPM+1)` | Used to simulate MCT datasets | S1 |
| Mixed_N100K_random | 100,000 | Random | No | 11 | 12,114 | S1 | MCT, `log2(TPM+1)` | Training set | D0 |
| Mixed_N100K_segment | 100,000 | Segment | Yes | 11 | 6,168 | S1 | MCT, `log2(TPM+1)` | Training set | D1 |
| Mixed_N100K_segment_<br/>without_filtering | 100,000 | Segment | No | 11 | 12,114 | S1 | MCT, `log2(TPM+1)` | Training set | D2 |
| Test_set_random | 3,000 | Random | No | 11 | 12,114 | S1 | MCT, `log2(TPM+1)` | Test set | T0 |
| Test_set1 | 3,000 | Segment | Yes | 11 | 6,168 | S1 | MCT, `log2(TPM+1)` | Test set | T1 |
| Test_set2 | 3,000 | Segment | No | 11 | 12,114 | S1 | MCT, `log2(TPM+1)` | Test set | T2 |
| SCT_POS_N100 | 1100 | n_base=100 | - | 11 | 12,114 | S0 | SCT, `log2(TPM+1)` | Test set | T3 |
- MCT: Bulk gene expression profile with multiple different cell types
- SCT: Bulk gene expression profile with single cell type (scGEP)
- GEPs: Gene expression profiles
## Download
- TCGA: [download link](https://figshare.com/articles/dataset/Merged_gene_expression_profiles_TPM_/23047547)
- merged_7_sc_datasets (S0): [download link](https://figshare.com/articles/dataset/Dataset_S0/23283908)
- SCT_POS_N10K (S1): [download link](https://figshare.com/articles/dataset/Dataset_S1/23043560)
- Mixed_N100K_random (D0): [download link](https://figshare.com/articles/dataset/Dataset_D0/23283932)
- Mixed_N100K_segment (D1): [download link](https://figshare.com/articles/dataset/Dataset_D1/23047391)
- Mixed_N100K_segment_without_filtering (D2): [download link](https://figshare.com/articles/dataset/Dataset_D2/23284256)
- All Test Sets: [download link](https://figshare.com/articles/dataset/All_Test_Sets/23283884)
- Test_set_random (T0)
- Test_set1 (T1)
- Test_set2 (T2)
- SCT_POS_N100 (T3)
`.h5ad` files can be opened by the function `scanpy.read_h5ad()` in [Scanpy](https://scanpy.readthedocs.io/en/stable/) or the class [`deside.utility.read_file.ReadH5AD`](https://deside.readthedocs.io/en/latest/func/utility.html#deside.utility.read_file.ReadH5AD) in DeSide.
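
For example, a minimal loading sketch in Python (the file name below is illustrative):

```python
import scanpy as sc

adata = sc.read_h5ad("Mixed_N100K_segment.h5ad")  # returns an AnnData object
print(adata)             # summary: n_obs x n_vars plus stored annotations
print(adata.obs.head())  # per-sample annotations (contents depend on the dataset)
```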
/Choco-1.0.5.tar.gz/Choco-1.0.5/choco/util.py
import re
import collections
import codecs
import os
from choco import compat
import operator
def update_wrapper(decorated, fn):
decorated.__wrapped__ = fn
decorated.__name__ = fn.__name__
return decorated
class PluginLoader(object):
def __init__(self, group):
self.group = group
self.impls = {}
def load(self, name):
if name in self.impls:
return self.impls[name]()
else:
import pkg_resources
for impl in pkg_resources.iter_entry_points(
self.group,
name):
self.impls[name] = impl.load
return impl.load()
else:
from choco import errors
raise errors.RuntimeException(
"Can't load plugin %s %s" %
(self.group, name))
def register(self, name, modulepath, objname):
def load():
mod = __import__(modulepath)
for token in modulepath.split(".")[1:]:
mod = getattr(mod, token)
return getattr(mod, objname)
self.impls[name] = load
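# Illustrative sketch (module path and class name are hypothetical): register
# an implementation under a short name, or let load() fall back to a
# setuptools entry point declared in the same group.
#
#     loader = PluginLoader('choco.cache')
#     loader.register('beaker', 'choco.ext.beaker_cache', 'BeakerCacheImpl')
#     impl_class = loader.load('beaker')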
def verify_directory(dir):
"""create and/or verify a filesystem directory."""
tries = 0
while not os.path.exists(dir):
try:
tries += 1
os.makedirs(dir, compat.octal("0775"))
except:
if tries > 5:
raise
def to_list(x, default=None):
if x is None:
return default
if not isinstance(x, (list, tuple)):
return [x]
else:
return x
class memoized_property(object):
"""A read-only @property that is only evaluated once."""
def __init__(self, fget, doc=None):
self.fget = fget
self.__doc__ = doc or fget.__doc__
self.__name__ = fget.__name__
def __get__(self, obj, cls):
if obj is None:
return self
obj.__dict__[self.__name__] = result = self.fget(obj)
return result
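# Illustrative sketch: the first access runs fget and stores the result in the
# instance's __dict__, which shadows this (non-data) descriptor on all later
# accesses.  The class below is hypothetical.
#
#     class Template(object):
#         @memoized_property
#         def source(self):
#             print("parsing...")
#             return "<parse result>"
#
#     t = Template()
#     t.source    # prints "parsing..." and caches the value on t
#     t.source    # now served straight from t.__dict__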
class memoized_instancemethod(object):
"""Decorate a method memoize its return value.
Best applied to no-arg methods: memoization is not sensitive to
argument values, and will always return the same value even when
called with different arguments.
"""
def __init__(self, fget, doc=None):
self.fget = fget
self.__doc__ = doc or fget.__doc__
self.__name__ = fget.__name__
def __get__(self, obj, cls):
if obj is None:
return self
def oneshot(*args, **kw):
result = self.fget(obj, *args, **kw)
memo = lambda *a, **kw: result
memo.__name__ = self.__name__
memo.__doc__ = self.__doc__
obj.__dict__[self.__name__] = memo
return result
oneshot.__name__ = self.__name__
oneshot.__doc__ = self.__doc__
return oneshot
class SetLikeDict(dict):
"""a dictionary that has some setlike methods on it"""
def union(self, other):
"""produce a 'union' of this dict and another (at the key level).
values in the second dict take precedence over that of the first"""
x = SetLikeDict(**self)
x.update(other)
return x
class FastEncodingBuffer(object):
"""a very rudimentary buffer that is faster than StringIO,
but doesn't crash on unicode data like cStringIO."""
def __init__(self, encoding=None, errors='strict', as_unicode=False):
self.data = collections.deque()
self.encoding = encoding
if as_unicode:
self.delim = compat.u('')
else:
self.delim = ''
self.as_unicode = as_unicode
self.errors = errors
self.write = self.data.append
def truncate(self):
self.data = collections.deque()
self.write = self.data.append
def getvalue(self):
if self.encoding:
return self.delim.join(self.data).encode(self.encoding,
self.errors)
else:
return self.delim.join(self.data)
class LRUCache(dict):
"""A dictionary-like object that stores a limited number of items,
discarding lesser used items periodically.
this is a rewrite of LRUCache from Myghty to use a periodic timestamp-based
paradigm so that synchronization is not really needed. the size management
is inexact.
"""
class _Item(object):
def __init__(self, key, value):
self.key = key
self.value = value
self.timestamp = compat.time_func()
def __repr__(self):
return repr(self.value)
def __init__(self, capacity, threshold=.5):
self.capacity = capacity
self.threshold = threshold
def __getitem__(self, key):
item = dict.__getitem__(self, key)
item.timestamp = compat.time_func()
return item.value
def values(self):
return [i.value for i in dict.values(self)]
def setdefault(self, key, value):
if key in self:
return self[key]
else:
self[key] = value
return value
def __setitem__(self, key, value):
item = dict.get(self, key)
if item is None:
item = self._Item(key, value)
dict.__setitem__(self, key, item)
else:
item.value = value
self._manage_size()
def _manage_size(self):
while len(self) > self.capacity + self.capacity * self.threshold:
bytime = sorted(dict.values(self),
key=operator.attrgetter('timestamp'), reverse=True)
for item in bytime[self.capacity:]:
try:
del self[item.key]
except KeyError:
# if we couldn't find a key, most likely some other thread
# broke in on us. loop around and try again
break
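# Illustrative sketch (not part of the module): reads refresh an item's
# timestamp, and once the size drifts past capacity * (1 + threshold) the
# stalest items are trimmed away, back down to capacity.
#
#     cache = LRUCache(100)
#     cache["template.html"] = "<compiled template>"
#     _ = cache["template.html"]       # touching the item keeps it fresh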
# Regexp to match python magic encoding line
_PYTHON_MAGIC_COMMENT_re = re.compile(
r'[ \t\f]* \# .* coding[=:][ \t]*([-\w.]+)',
re.VERBOSE)
def parse_encoding(fp):
"""Deduce the encoding of a Python source file (binary mode) from magic
comment.
It does this in the same way as the `Python interpreter`__
.. __: http://docs.python.org/ref/encodings.html
The ``fp`` argument should be a seekable file object in binary mode.
"""
pos = fp.tell()
fp.seek(0)
try:
line1 = fp.readline()
has_bom = line1.startswith(codecs.BOM_UTF8)
if has_bom:
line1 = line1[len(codecs.BOM_UTF8):]
m = _PYTHON_MAGIC_COMMENT_re.match(line1.decode('ascii', 'ignore'))
if not m:
try:
import parser
parser.suite(line1.decode('ascii', 'ignore'))
except (ImportError, SyntaxError):
# Either it's a real syntax error, in which case the source
# is not valid python source, or line2 is a continuation of
# line1, in which case we don't want to scan line2 for a magic
# comment.
pass
else:
line2 = fp.readline()
m = _PYTHON_MAGIC_COMMENT_re.match(
line2.decode('ascii', 'ignore'))
if has_bom:
if m:
raise SyntaxError(
"python refuses to compile code with both a UTF8"
" byte-order-mark and a magic encoding comment")
return 'utf_8'
elif m:
return m.group(1)
else:
return None
finally:
fp.seek(pos)
def sorted_dict_repr(d):
"""repr() a dictionary with the keys in order.
Used by the lexer unit test to compare parse trees based on strings.
"""
keys = list(d.keys())
keys.sort()
return "{" + ", ".join(["%r: %r" % (k, d[k]) for k in keys]) + "}"
def restore__ast(_ast):
"""Attempt to restore the required classes to the _ast module if it
appears to be missing them
"""
if hasattr(_ast, 'AST'):
return
_ast.PyCF_ONLY_AST = 2 << 9
m = compile("""\
def foo(): pass
class Bar(object): pass
if False: pass
baz = 'choco'
1 + 2 - 3 * 4 / 5
6 // 7 % 8 << 9 >> 10
11 & 12 ^ 13 | 14
15 and 16 or 17
-baz + (not +18) - ~17
baz and 'foo' or 'bar'
(choco is baz == baz) is not baz != choco
choco > baz < choco >= baz <= choco
choco in baz not in choco""", '<unknown>', 'exec', _ast.PyCF_ONLY_AST)
_ast.Module = type(m)
for cls in _ast.Module.__mro__:
if cls.__name__ == 'mod':
_ast.mod = cls
elif cls.__name__ == 'AST':
_ast.AST = cls
_ast.FunctionDef = type(m.body[0])
_ast.ClassDef = type(m.body[1])
_ast.If = type(m.body[2])
_ast.Name = type(m.body[3].targets[0])
_ast.Store = type(m.body[3].targets[0].ctx)
_ast.Str = type(m.body[3].value)
_ast.Sub = type(m.body[4].value.op)
_ast.Add = type(m.body[4].value.left.op)
_ast.Div = type(m.body[4].value.right.op)
_ast.Mult = type(m.body[4].value.right.left.op)
_ast.RShift = type(m.body[5].value.op)
_ast.LShift = type(m.body[5].value.left.op)
_ast.Mod = type(m.body[5].value.left.left.op)
_ast.FloorDiv = type(m.body[5].value.left.left.left.op)
_ast.BitOr = type(m.body[6].value.op)
_ast.BitXor = type(m.body[6].value.left.op)
_ast.BitAnd = type(m.body[6].value.left.left.op)
_ast.Or = type(m.body[7].value.op)
_ast.And = type(m.body[7].value.values[0].op)
_ast.Invert = type(m.body[8].value.right.op)
_ast.Not = type(m.body[8].value.left.right.op)
_ast.UAdd = type(m.body[8].value.left.right.operand.op)
_ast.USub = type(m.body[8].value.left.left.op)
_ast.Or = type(m.body[9].value.op)
_ast.And = type(m.body[9].value.values[0].op)
_ast.IsNot = type(m.body[10].value.ops[0])
_ast.NotEq = type(m.body[10].value.ops[1])
_ast.Is = type(m.body[10].value.left.ops[0])
_ast.Eq = type(m.body[10].value.left.ops[1])
_ast.Gt = type(m.body[11].value.ops[0])
_ast.Lt = type(m.body[11].value.ops[1])
_ast.GtE = type(m.body[11].value.ops[2])
_ast.LtE = type(m.body[11].value.ops[3])
_ast.In = type(m.body[12].value.ops[0])
_ast.NotIn = type(m.body[12].value.ops[1])
def read_file(path, mode='rb'):
fp = open(path, mode)
try:
data = fp.read()
return data
finally:
fp.close()
def read_python_file(path):
fp = open(path, "rb")
try:
encoding = parse_encoding(fp)
data = fp.read()
if encoding:
data = data.decode(encoding)
return data
finally:
        fp.close()
/Flask-FlatPages-0.8.1.tar.gz/Flask-FlatPages-0.8.1/README.rst
===============
Flask-FlatPages
===============
.. image:: https://github.com/flask-FlatPages/flask-FlatPages/actions/workflows/test.yml/badge.svg?branch=master
:target: https://github.com/flask-FlatPages/flask-FlatPages/actions/workflows/test.yml/
.. image:: https://img.shields.io/pypi/v/Flask-FlatPages.svg
:target: https://pypi.python.org/pypi/Flask-FlatPages
Provides flat static pages to a Flask_ application, based on text files
as opposed to a relational database.
* Works on Python 2.7 and 3.6+
* BSD licensed
* Latest documentation `on Read the Docs`_
* Source, issues and pull requests `on Github`_
* Releases `on PyPI`_
* Install with ``pip install Flask-FlatPages`` (a minimal usage sketch follows the links below)
.. _Flask: http://flask.pocoo.org/
.. _on Read the Docs: http://flask-flatpages.readthedocs.org/
.. _on Github: https://github.com/SimonSapin/Flask-FlatPages/
.. _on PyPI: http://pypi.python.org/pypi/Flask-FlatPages
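
A minimal usage sketch, based on the project's documented API (the
configuration value and page path below are illustrative)::

    from flask import Flask
    from flask_flatpages import FlatPages

    app = Flask(__name__)
    app.config['FLATPAGES_EXTENSION'] = '.md'
    pages = FlatPages(app)

    @app.route('/<path:path>/')
    def page(path):
        return pages.get_or_404(path).html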
/NlvWxPython-4.2.0-cp37-cp37m-win_amd64.whl/wx/lib/popupctl.py
import wx
from wx.lib.buttons import GenButtonEvent
class PopButton(wx.Control):
def __init__(self,*_args,**_kwargs):
wx.Control.__init__(self, *_args, **_kwargs)
self.up = True
self.didDown = False
self.Bind(wx.EVT_LEFT_DOWN, self.OnLeftDown)
self.Bind(wx.EVT_LEFT_UP, self.OnLeftUp)
self.Bind(wx.EVT_MOTION, self.OnMotion)
self.Bind(wx.EVT_PAINT, self.OnPaint)
def Notify(self):
evt = GenButtonEvent(wx.wxEVT_COMMAND_BUTTON_CLICKED, self.GetId())
evt.SetIsDown(not self.up)
evt.SetButtonObj(self)
evt.SetEventObject(self)
self.GetEventHandler().ProcessEvent(evt)
def OnEraseBackground(self, event):
pass
def OnLeftDown(self, event):
if not self.IsEnabled():
return
self.didDown = True
self.up = False
self.CaptureMouse()
self.GetParent().textCtrl.SetFocus()
self.Refresh()
event.Skip()
def OnLeftUp(self, event):
if not self.IsEnabled():
return
if self.didDown:
self.ReleaseMouse()
if not self.up:
self.Notify()
self.up = True
self.Refresh()
self.didDown = False
event.Skip()
def OnMotion(self, event):
if not self.IsEnabled():
return
if event.LeftIsDown():
if self.didDown:
x,y = event.GetPosition()
w,h = self.GetClientSize()
if self.up and x<w and x>=0 and y<h and y>=0:
self.up = False
self.Refresh()
return
if not self.up and (x<0 or y<0 or x>=w or y>=h):
self.up = True
self.Refresh()
return
event.Skip()
def OnPaint(self, event):
dc = wx.BufferedPaintDC(self)
if self.up:
flag = wx.CONTROL_CURRENT
else:
flag = wx.CONTROL_PRESSED
wx.RendererNative.Get().DrawComboBoxDropButton(self, dc, self.GetClientRect(), flag)
#---------------------------------------------------------------------------
# Tried to use wxPopupWindow but the control misbehaves on MSW
class PopupDialog(wx.Dialog):
def __init__(self,parent,content = None):
wx.Dialog.__init__(self,parent,-1,'', style = wx.BORDER_SIMPLE|wx.STAY_ON_TOP)
self.ctrl = parent
self.win = wx.Window(self,-1,pos = (0,0),style = 0)
if content:
self.SetContent(content)
def SetContent(self,content):
self.content = content
self.content.Reparent(self.win)
self.content.Show(True)
self.win.SetClientSize(self.content.GetSize())
self.SetSize(self.win.GetSize())
def Display(self):
pos = self.ctrl.ClientToScreen( (0,0) )
dSize = wx.GetDisplaySize()
selfSize = self.GetSize()
tcSize = self.ctrl.GetSize()
pos.x -= (selfSize.width - tcSize.width) // 2
if pos.x + selfSize.width > dSize.width:
pos.x = dSize.width - selfSize.width
if pos.x < 0:
pos.x = 0
pos.y += tcSize.height
if pos.y + selfSize.height > dSize.height:
pos.y = dSize.height - selfSize.height
if pos.y < 0:
pos.y = 0
self.Move(pos)
self.ctrl.FormatContent()
self.ShowModal()
#---------------------------------------------------------------------------
class PopupControl(wx.Control):
def __init__(self,*_args,**_kwargs):
wx.Control.__init__(self, *_args, **_kwargs)
self.textCtrl = wx.TextCtrl(self, wx.ID_ANY, '', pos = (0,0))
self.bCtrl = PopButton(self, wx.ID_ANY, style=wx.BORDER_NONE)
self.pop = None
self.content = None
self.Bind(wx.EVT_SIZE, self.OnSize)
self.bCtrl.Bind(wx.EVT_BUTTON, self.OnButton, self.bCtrl)
self.Bind(wx.EVT_SET_FOCUS, self.OnFocus)
self.SetInitialSize(_kwargs.get('size', wx.DefaultSize))
self.SendSizeEvent()
def OnFocus(self,evt):
# embedded control should get focus on TAB keypress
self.textCtrl.SetFocus()
evt.Skip()
def OnSize(self, evt):
# layout the child widgets
w,h = self.GetClientSize()
self.textCtrl.SetSize(0, 0, w - self.marginWidth - self.buttonWidth, h)
self.bCtrl.SetSize(w - self.buttonWidth, 0, self.buttonWidth, h)
def DoGetBestSize(self):
# calculate the best size of the combined control based on the
# needs of the child widgets.
tbs = self.textCtrl.GetBestSize()
return wx.Size(tbs.width + self.marginWidth + self.buttonWidth,
tbs.height)
def OnButton(self, evt):
if not self.pop:
if self.content:
self.pop = PopupDialog(self,self.content)
del self.content
else:
print('No Content to pop')
if self.pop:
self.pop.Display()
def Enable(self, flag):
wx.Control.Enable(self,flag)
self.textCtrl.Enable(flag)
self.bCtrl.Enable(flag)
def SetPopupContent(self, content):
if not self.pop:
self.content = content
self.content.Show(False)
else:
self.pop.SetContent(content)
def FormatContent(self):
pass
def PopDown(self):
if self.pop:
self.pop.EndModal(1)
def SetValue(self, value):
self.textCtrl.SetValue(value)
def GetValue(self):
return self.textCtrl.GetValue()
def SetFont(self, font):
self.textCtrl.SetFont(font)
def GetFont(self):
return self.textCtrl.GetFont()
def _get_marginWidth(self):
if 'wxMac' in wx.PlatformInfo:
return 6
else:
return 3
marginWidth = property(_get_marginWidth)
def _get_buttonWidth(self):
return 20
buttonWidth = property(_get_buttonWidth)
# an alias
PopupCtrl = PopupControl
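# Illustrative usage sketch (the content class is hypothetical): embed the
# control in a frame and hand it the window that should pop up under the
# text field.
#
#     popup = PopupControl(parent_frame, size=(200, -1))
#     popup.SetPopupContent(MyCalendarPanel(popup))
#     # clicking the drop button now shows the panel below the control;
#     # the content can dismiss itself by calling popup.PopDown()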
/EnergyCapSdk-8.2304.4743.tar.gz/EnergyCapSdk-8.2304.4743/energycap/sdk/models/bill_entry_response.py
from msrest.serialization import Model
class BillEntryResponse(Model):
"""BillEntryResponse.
:param bill_id: The bill identifier
:type bill_id: int
:param account_id: The account identifier
:type account_id: int
:param vendor_id: The vendor identifier
:type vendor_id: int
:param needs_to_open_batch: Indicates if a new bill batch needs to be
opened to place this bill in
:type needs_to_open_batch: bool
:param begin_date: The bill's begin date
:type begin_date: datetime
:param end_date: The bill's end date
:type end_date: datetime
:param billing_period: The bill's billing period
:type billing_period: int
:param days: The bill's number of days
:type days: int
:param total_cost: The bill's total cost
:type total_cost: float
:param due_date:
:type due_date: ~energycap.sdk.models.BillHeaderChild
:param statement_date:
:type statement_date: ~energycap.sdk.models.BillHeaderChild
:param invoice_number:
:type invoice_number: ~energycap.sdk.models.BillHeaderChild
:param control_code:
:type control_code: ~energycap.sdk.models.BillHeaderChild
:param next_reading:
:type next_reading: ~energycap.sdk.models.BillHeaderChild
:param account_period_name:
:type account_period_name: ~energycap.sdk.models.BillHeaderChild
:param account_period_number:
:type account_period_number: ~energycap.sdk.models.BillHeaderChild
:param account_period_year:
:type account_period_year: ~energycap.sdk.models.BillHeaderChild
:param estimated:
:type estimated: ~energycap.sdk.models.BillHeaderChild
:param bill_note: The bill's note
:type bill_note: str
:param void: Indicates if the bill has been voided
:type void: bool
:param from_vendor: Indicates if the bill is from a vendor
:type from_vendor: bool
:param observation_method:
:type observation_method: ~energycap.sdk.models.ObservationMethodChild
:param approved: Indicates if the bill has been approved
:type approved: bool
:param has_been_split: Indicates if the bill has been split
:type has_been_split: bool
:param export_hold: Indicates if the bill is being withheld from bill
exports
:type export_hold: bool
:param ap_exported: Indicates if the bill has been ap exported
:type ap_exported: bool
:param gl_exported: Indicates if the bill has been gl exported
:type gl_exported: bool
:param accrual: Indicates if the bill is an accrual bill
:type accrual: bool
:param check_number: The number of the check that the bill was paid with
:type check_number: str
:param check_date: The date and time of the check
:type check_date: datetime
:param pay_status: The payment status of the bill
:type pay_status: str
:param cleared_date: The date and time that the check cleared
:type cleared_date: datetime
:param bill_image_url: The fully qualified url to the bill image
:type bill_image_url: str
:param general_ledger_code: The general ledger code of the bill's
account-level details ("Mixed" if there is more than one)
:type general_ledger_code: str
:param meters: The billing account's meters
:type meters: list[~energycap.sdk.models.BillEntryMeterChild]
:param account_body_lines: The bill's account-level details
:type account_body_lines: list[~energycap.sdk.models.BillEntryBodyLine]
:param vendor_body_lines: The bill's vendor template details
:type vendor_body_lines: list[~energycap.sdk.models.BillEntryBodyLine]
:param batch:
:type batch: ~energycap.sdk.models.BatchChild
"""
_attribute_map = {
'bill_id': {'key': 'billId', 'type': 'int'},
'account_id': {'key': 'accountId', 'type': 'int'},
'vendor_id': {'key': 'vendorId', 'type': 'int'},
'needs_to_open_batch': {'key': 'needsToOpenBatch', 'type': 'bool'},
'begin_date': {'key': 'beginDate', 'type': 'iso-8601'},
'end_date': {'key': 'endDate', 'type': 'iso-8601'},
'billing_period': {'key': 'billingPeriod', 'type': 'int'},
'days': {'key': 'days', 'type': 'int'},
'total_cost': {'key': 'totalCost', 'type': 'float'},
'due_date': {'key': 'dueDate', 'type': 'BillHeaderChild'},
'statement_date': {'key': 'statementDate', 'type': 'BillHeaderChild'},
'invoice_number': {'key': 'invoiceNumber', 'type': 'BillHeaderChild'},
'control_code': {'key': 'controlCode', 'type': 'BillHeaderChild'},
'next_reading': {'key': 'nextReading', 'type': 'BillHeaderChild'},
'account_period_name': {'key': 'accountPeriodName', 'type': 'BillHeaderChild'},
'account_period_number': {'key': 'accountPeriodNumber', 'type': 'BillHeaderChild'},
'account_period_year': {'key': 'accountPeriodYear', 'type': 'BillHeaderChild'},
'estimated': {'key': 'estimated', 'type': 'BillHeaderChild'},
'bill_note': {'key': 'billNote', 'type': 'str'},
'void': {'key': 'void', 'type': 'bool'},
'from_vendor': {'key': 'fromVendor', 'type': 'bool'},
'observation_method': {'key': 'observationMethod', 'type': 'ObservationMethodChild'},
'approved': {'key': 'approved', 'type': 'bool'},
'has_been_split': {'key': 'hasBeenSplit', 'type': 'bool'},
'export_hold': {'key': 'exportHold', 'type': 'bool'},
'ap_exported': {'key': 'apExported', 'type': 'bool'},
'gl_exported': {'key': 'glExported', 'type': 'bool'},
'accrual': {'key': 'accrual', 'type': 'bool'},
'check_number': {'key': 'checkNumber', 'type': 'str'},
'check_date': {'key': 'checkDate', 'type': 'iso-8601'},
'pay_status': {'key': 'payStatus', 'type': 'str'},
'cleared_date': {'key': 'clearedDate', 'type': 'iso-8601'},
'bill_image_url': {'key': 'billImageUrl', 'type': 'str'},
'general_ledger_code': {'key': 'generalLedgerCode', 'type': 'str'},
'meters': {'key': 'meters', 'type': '[BillEntryMeterChild]'},
'account_body_lines': {'key': 'accountBodyLines', 'type': '[BillEntryBodyLine]'},
'vendor_body_lines': {'key': 'vendorBodyLines', 'type': '[BillEntryBodyLine]'},
'batch': {'key': 'batch', 'type': 'BatchChild'},
}
def __init__(self, **kwargs):
super(BillEntryResponse, self).__init__(**kwargs)
self.bill_id = kwargs.get('bill_id', None)
self.account_id = kwargs.get('account_id', None)
self.vendor_id = kwargs.get('vendor_id', None)
self.needs_to_open_batch = kwargs.get('needs_to_open_batch', None)
self.begin_date = kwargs.get('begin_date', None)
self.end_date = kwargs.get('end_date', None)
self.billing_period = kwargs.get('billing_period', None)
self.days = kwargs.get('days', None)
self.total_cost = kwargs.get('total_cost', None)
self.due_date = kwargs.get('due_date', None)
self.statement_date = kwargs.get('statement_date', None)
self.invoice_number = kwargs.get('invoice_number', None)
self.control_code = kwargs.get('control_code', None)
self.next_reading = kwargs.get('next_reading', None)
self.account_period_name = kwargs.get('account_period_name', None)
self.account_period_number = kwargs.get('account_period_number', None)
self.account_period_year = kwargs.get('account_period_year', None)
self.estimated = kwargs.get('estimated', None)
self.bill_note = kwargs.get('bill_note', None)
self.void = kwargs.get('void', None)
self.from_vendor = kwargs.get('from_vendor', None)
self.observation_method = kwargs.get('observation_method', None)
self.approved = kwargs.get('approved', None)
self.has_been_split = kwargs.get('has_been_split', None)
self.export_hold = kwargs.get('export_hold', None)
self.ap_exported = kwargs.get('ap_exported', None)
self.gl_exported = kwargs.get('gl_exported', None)
self.accrual = kwargs.get('accrual', None)
self.check_number = kwargs.get('check_number', None)
self.check_date = kwargs.get('check_date', None)
self.pay_status = kwargs.get('pay_status', None)
self.cleared_date = kwargs.get('cleared_date', None)
self.bill_image_url = kwargs.get('bill_image_url', None)
self.general_ledger_code = kwargs.get('general_ledger_code', None)
self.meters = kwargs.get('meters', None)
self.account_body_lines = kwargs.get('account_body_lines', None)
self.vendor_body_lines = kwargs.get('vendor_body_lines', None)
        self.batch = kwargs.get('batch', None)
/JaqalPaq-1.2.0a1.tar.gz/JaqalPaq-1.2.0a1/examples/HeH+ (Helium Hydride)/JaqalPaq_HeH+_Untapered_Exemplar.ipynb
# Helium Hydride (Untapered HeH+) Exemplar
## Step 0: Import various libraries
```
# Imports for QSCOUT
import jaqalpaq
from jaqalpaq.core import circuitbuilder
from jaqalpaq.run import run_jaqal_circuit
from qscout.v1.std.jaqal_gates import ALL_GATES as std
from qscout.v1.std import noisy
# Imports for basic mathematical functionality
from math import pi
import numpy as np
# Imports for OpenFermion(-PySCF)
import openfermion as of
from openfermion.chem import MolecularData
from openfermionpyscf import run_pyscf
# Import for VQE optimizer
from scipy import optimize
```
## Step 1: SCF calculation to assemble the second-quantized Hamiltonian
```
# Set the basis set, spin, and charge of the HeH+ molecule
basis = 'sto-3g'
multiplicity = 1
charge = 1 #Charge is 1 for HeH+
# Set calculation parameters
run_scf = 1
run_fci = 1
delete_input = True
delete_output = False
# Generate molecule at some bond length (0.8 Angstroms here)
geometry = [('He', (0., 0., 0.)), ('H', (0., 0., 0.5))]
molecule = MolecularData(geometry, basis, multiplicity, charge, filename='./HeH+_4_sto-3g_single_0.8') #Set file location of data
# Run pyscf to generate new molecular data for sto-3g HeH+
molecule = run_pyscf(molecule, run_scf=run_scf, run_fci=run_fci, verbose=False)
print("Bond Length in Angstroms: {}".format(0.8))
print("FCI (Exact) energy in Hartrees: {}".format(molecule.fci_energy))
```
## Step 2: Convert the fermionic Hamiltonian to a qubit Hamiltonian
```
# Get the fermionic Hamiltonian for HeH+ and map it using the BK encoding
hamiltonian = molecule.get_molecular_hamiltonian()
hamiltonian_ferm = of.get_fermion_operator(hamiltonian)
hamiltonian_bk = of.bravyi_kitaev(hamiltonian_ferm)
hamiltonian_jw = of.jordan_wigner(hamiltonian_ferm)
terms = []
cs = []
for term in hamiltonian_jw.terms:
paulis = [None, None, None, None]
for pauli in term:
paulis[pauli[0]] = pauli[1]
terms += [paulis]
cs += [hamiltonian_jw.terms[term]]
```
## Step 3: Define UCC Ansatz circuit in JaqalPaq
```
def ansatz(theta, term):
builder = circuitbuilder.CircuitBuilder(native_gates=std)
# Create a qubit register
q = builder.register('q', 4)
# Define a hadamard macro
hadamard = circuitbuilder.SequentialBlockBuilder()
hadamard.gate('Sy', 'a')
hadamard.gate('Px', 'a')
builder.macro('hadamard', ['a'], hadamard)
# Define a CNOT macro
CNOT = circuitbuilder.SequentialBlockBuilder()
CNOT.gate('Sy', 'control')
CNOT.gate('MS', 'control', 'target', 0, np.pi/2)
Sxd_block = CNOT.block(parallel=True)
Sxd_block.gate('Sxd', 'control')
Sxd_block.gate('Sxd', 'target')
CNOT.gate('Syd', 'control')
builder.macro('CNOT', ['control', 'target'], CNOT)
# Prepare the Hartree Fock state
builder.gate('prepare_all')
builder.gate('Px', q[0])
builder.gate('Px', q[1])
# Apply the UCC Ansatz exp[-i*theta(X1 Y0)]
builder.gate('hadamard', q[0])
builder.gate('hadamard', q[1])
builder.gate('hadamard', q[2])
builder.gate('Sxd', q[3])
builder.gate('CNOT', q[0], q[1])
builder.gate('CNOT', q[1], q[2])
builder.gate('CNOT', q[2], q[3])
builder.gate('Rz', q[3], theta)
builder.gate('CNOT', q[2], q[3])
builder.gate('CNOT', q[1], q[2])
builder.gate('CNOT', q[0], q[1])
builder.gate('hadamard', q[0])
builder.gate('hadamard', q[1])
builder.gate('hadamard', q[2])
builder.gate('Sx', q[3])
# Change basis for measurement depending on term
for j, qubit in enumerate(term):
if qubit == 'X':
builder.gate('hadamard', ('array_item', q, j)),
if qubit == 'Y':
builder.gate('Sxd', ('array_item', q, j)),
builder.gate('measure_all')
circuit = builder.build()
return circuit
```
## Step 4: Define functions to calculate energy expectation value of Ansatz state
```
def ansatz_sampling(theta, sample_noise):
term_probs = []
for i in range(len(terms)):
if sample_noise:
probs = np.zeros(16)
circuit = ansatz(theta, terms[i])
sim_result = run_jaqal_circuit(circuit) #Run circuit on emulator
sim_probs = sim_result.subcircuits[0].probability_by_int
sample = np.random.choice(16, size=n_samples, p=sim_probs) #Sample from distribution to get probabilities
for count in sample:
probs[count] += 1 #Increment state counter
probs = probs/n_samples #Determine probabilities from sampling
term_probs += [probs] #Combine lists of probs of each term in Hamiltonian
else: #Exact solution without sampling
circuit = ansatz(theta, terms[i])
sim_result = run_jaqal_circuit(circuit) #Run circuit on emulator
sim_probs = sim_result.subcircuits[0].probability_by_int
term_probs += [sim_probs]
return term_probs
# Calculate energy of one term of the Hamiltonian for one possible state
def term_energy(term, state, coefficient, prob):
parity = 1
for i in range(len(term)):
#Change parity if state is occupied and is acted on by a pauli operator
if term[i] != None and state[i] == '1':
parity = -1*parity
return coefficient*prob*parity
# Calculate energy of the molecule for a given value of theta
def calculate_energy(theta, sample_noise):
energy = 0
probs = ansatz_sampling(theta[0], sample_noise) #Convert tuple (from optimization) to float for circuit
for i in range(len(terms)): #For each term in the hamiltonian
for j in range(len(probs[0])): #For each possible state
term = terms[i]
state = '{0:04b}'.format(j)[::-1] #convert state to binary (# of qubits)
coefficient = cs[i].real
prob = probs[i][j]
#print(term, state, coefficient, prob)
energy += term_energy(term, state, coefficient, prob)
return energy
```
## Step 5: Minimize the energy expectation value in 𝜃
```
# Minimize the energy using classical optimization
n_samples = 10000
optimize.minimize(fun=calculate_energy, x0=[0.01], args=(False,), method="COBYLA")  # args must be a one-element tuple
```
## Step 6: Loop over previous steps to calculate ground state energy at different bond lengths
```
# Set the basis set, spin, and charge of the HeH+ molecule
basis = 'sto-3g'
multiplicity = 1
charge = 1
# Set calculation parameters
run_scf = 1
run_fci = 1
delete_input = True
delete_output = False
optimized_energies = [[], []]
exact_energies = []
# Loop over bond lengths from 0.5 to 1.5 angstroms
n_samples = 10000 # Sample circuit
n_pts = 11 #Number of points
bond_lengths = np.linspace(0.5,1.5,n_pts)
for diatomic_bond_length in bond_lengths:
# Generate molecule at some bond length
geometry = [('He', (0., 0., 0.)), ('H', (0., 0., diatomic_bond_length))]
description=str(round(diatomic_bond_length, 2))
molecule = MolecularData(
geometry, basis, multiplicity, charge,
description=description,
filename='./HeH+_4_sto-3g_single_dissociation')
# Run pyscf to generate new molecular data for sto-3g HeH+
molecule = run_pyscf(molecule, run_scf=run_scf, run_fci=run_fci, verbose=False)
# Get the fermionic Hamiltonian for HeH+ and map it using the BK encoding
hamiltonian = molecule.get_molecular_hamiltonian()
hamiltonian_ferm = of.get_fermion_operator(hamiltonian)
hamiltonian_jw = of.jordan_wigner(hamiltonian_ferm)
hamiltonian_bk = of.bravyi_kitaev(hamiltonian_ferm)
# Define terms and coefficients of our Hamiltonian
terms = []
cs = []
for term in hamiltonian_jw.terms:
paulis = [None, None, None, None]
for pauli in term:
paulis[pauli[0]] = pauli[1]
        terms += [paulis]
cs += [hamiltonian_jw.terms[term]]
# Minimize the expectation value of the energy using a classical optimizer (SPSA)
exact_energies.append(molecule.fci_energy)
print("R={}\t Exact Energy: {}".format(str(round(diatomic_bond_length, 2)), molecule.fci_energy))
for i in range(2):
        result = optimize.minimize(fun=calculate_energy, x0=[0.01], args=(i,), method="COBYLA")  # args must be a one-element tuple
optimized_energies[i].append(result.fun)
print("R={}\t Optimized Energy: {}\t Sampling Noise: {}".format(str(round(diatomic_bond_length, 2)), result.fun, bool(i)))
print('\n')
import matplotlib
import matplotlib.pyplot as pyplot
# Plot the various energies for different bond lengths
fig = pyplot.figure(figsize=(10,7))
pyplot.rcParams['font.size']=18
bkcolor = '#ffffff'
ax = fig.add_subplot(1, 1, 1)
pyplot.subplots_adjust(left=.2)
ax.set_xlabel('R (Angstroms)')
ax.set_ylabel(r'E (Hartrees)')
ax.set_title(r'HeH+ 4-qubit bond dissociation curve')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
bond_lengths = [float(x) for x in bond_lengths]
ax.plot(bond_lengths, optimized_energies[0], 'o', label='UCCSD', color='red')
ax.plot(bond_lengths, optimized_energies[1], 'x', label='UCCSD with Sampling Noise', color='blue')
ax.plot(bond_lengths, exact_energies, '-', label='Full-CI', color='black')
ax.legend(frameon=False)
pyplot.show()
```
/EntropyEncoding-0.0.4.tar.gz/EntropyEncoding-0.0.4/README.md

# EntropyEncoding
## Description
This package implements an encoding that bypasses antivirus entropy checks.
I have researched entropy-bypass techniques and found that people often add low-entropy data to get past entropy checks. Adding data can be avoided altogether: a simple entropy encoding that reduces the entropy score is more efficient.
Adding low-entropy data has two drawbacks:
1. you get a larger file
2. you do not change the payload's own entropy (if the antivirus software splits the file into chunks for entropy calculation, it will probably still see high entropy on a payload chunk)
## Requirements
This package require:
- python3
- python3 Standard Library
## Installation
```bash
python3 -m pip install EntropyEncoding
```
```bash
git clone "https://github.com/mauricelambert/EntropyEncoding.git"
cd "EntropyEncoding"
python3 -m pip install .
```
## Usages
```python
from EntropyEncoding import *
payload = b"shellcode_payload 0000111122223333444455556666777788889999AAAABBBBCCCCDDDDEEEEFFFF" * 120
key = bytes([0,255,127,55,155,25,225,10,220,40,190,26,100,70,90,45,235,32,64,128,215,28,46,158,123,13,8,5,168,191,69])
encrypted_payload = bytes([key[i % len(key)] ^ x for i, x in enumerate(payload)])
print(shannon_entropy(encrypted_payload)) # 7.753825816757683, good encryption or compression have an entropy score > 7.9 and < 8
# Malicious entropy is detected by antivirus software when entropy score is greater than ~= 7.2
# This encrypted payload will be detected as malicious entropy by antivirus software
encoded_shellcode = entropy_encode(encrypted_payload)
encoded2_shellcode = entropy_encode2(encrypted_payload)
print(encoded_shellcode)
print(encoded2_shellcode)
assert entropy_decode(encoded_shellcode) == encrypted_payload
assert entropy_decode2(encoded2_shellcode) == encrypted_payload
print(shannon_entropy(encoded_shellcode)) # 5.770760744294572, entropy score is smaller than 7.2, antivirus software will not detect this payload with entropy checks
print(shannon_entropy(encoded2_shellcode)) # 5.767383412620195, entropy score is smaller than 7.2, antivirus software will not detect this payload with entropy checks
r"""
I get entropy score from Windows executable, average score is ~= 5 (so 5.7 can be a legitimate entropy score):
>>> from glob import iglob
>>> from statistics import mean
>>> from EntropyEncoding import *
>>> entropy = []
>>> for a in iglob(r"C:\Windows\System32\*.exe"): entropy.append(shannon_entropy(open(a, "rb").read()))
...
>>> max(entropy)
7.932014219115418
>>> min(entropy)
1.6379445326685684
>>> mean(entropy)
5.063622509688209
>>>
"""
```
Tests results:
```
~# python3 EntropyEncoding.py
Entropy for non-encoded secrets: 4.521591372417719
Entropy for non-encoded encrypted secrets: 7.951320327821406
Entropy for entropy-encoded encrypted secrets: 5.774096152750044
Entropy for non-encoded exe: 5.22055339277441
Entropy for non-encoded encrypted exe: 7.914685739354301
Entropy for entropy-encoded encrypted exe: 5.759477906043907
~#
```
## Links
- [Pypi](https://pypi.org/project/EntropyEncoding)
- [Github](https://github.com/mauricelambert/EntropyEncoding)
- [Documentation](https://mauricelambert.github.io/info/python/security/EntropyEncoding.html)
## License
Licensed under the [GPL, version 3](https://www.gnu.org/licenses/).
/BitArray2D-2.1.tar.gz/BitArray2D-2.1/BitArray2D.py
__version__ = '2.1'
__author__ = "Avinash Kak ([email protected])"
__date__ = '2011-March-22'
__url__ = 'http://RVL4.ecn.purdue.edu/~kak/dist2d/BitArray2D-2.1.html'
__copyright__ = "(C) 2011 Avinash Kak. Python Software Foundation."
__doc__ = '''
BitArray2D.py
Version: ''' + __version__ + '''
Author: Avinash Kak ([email protected])
Date: ''' + __date__ + '''
@title
CHANGE LOG:
Version 2.1:
BitArray2D is a renamed version of Bit2DArray. This name change
was made in response to user requests. BitArray2D is Python 3.x
compliant. It should work with both Python 2.x and Python 3.x.
@title
INSTALLATION:
The BitArray2D class was packaged using Distutils. For
installation, execute the following command-line in the source
directory (this is the directory that contains the setup.py file
after you have downloaded and uncompressed the tar archive):
python setup.py install
You have to have root privileges for this to work. On Linux
distributions, this will install the module file at a location that
looks like
/usr/lib/python2.7/site-packages/
If you do not have root access, you have the option of working
directly off the directory in which you downloaded the software by
simply placing the following statements at the top of your scripts
that use the BitArray2D class
import sys
sys.path.append( "pathname_to_BitArray2D_directory" )
To uninstall the module, simply delete the source directory, locate
where BitArray2D was installed with "locate BitArray2D" and delete
those files. As mentioned above, the full pathname to the installed
version is likely to look like
/usr/lib/python2.6/site-packages/BitArray2D*
If you want to carry out a non-standard install of BitArray2D, look
    up the on-line information on Distutils by pointing your browser to
http://docs.python.org/dist/dist.html
@title
INTRODUCTION:
The BitArray2D class is for a memory-efficient packed representation
of 2D bit arrays and for logical and other operations (such as blob
dilations, erosions, etc.) on such arrays. The implementation of the
class takes advantage of the facilities of the BitVector class for
the memory representation and for the allowed operations.
Operations supported on 2D bit arrays:
__str__
__getitem__
__setitem__
__getslice__
__eq__
__ne__
__and__
__or__
__xor__
__invert__
deep_copy
size
read_bit_array_from_char_file
read_bit_array_from_binary_file
write_bit_array_to_char_file
write_bit_array_to_packed_binary_file
shift
dilate
erode
@title
CONSTRUCTING 2D BIT ARRAYS:
You can construct a 2D bit array in four different ways:
(1) You can construct a packed 2D bit array of all zeros by a
call like
ba = BitArray2D( rows = 20, columns = 10 )
This will create a 2D array of size 20x10. You can then set
the individual bits in this array using syntax that is shown
later in this documentation.
The following call will return an empty BitArray2D instance:
ba = BitArray2D( rows=0, columns=0 )
(2) You can construct a 2D bit array from a string in the following
manner:
ba = BitArray2D( bitstring = "111\n110\n011" )
This will create the following bit array in the memory:
111
110
011
There is no limit on the either the row size or the column size
of the bit array created in this manner. However, the rows
must all be of exactly the same size. An exception is thrown
when that condition is violated.
Note that even though you are supplying to the BitArray2D
constructor a string made of ASCII 1's and 0's, the 2D bit
array that is created is stored in a packed form. So a row
with, say, sixteen 1's and/or 0's will be stored as just two
bytes in the memory. So a 16x16 bit array will occupy only
32 bytes in the memory.
(3) You can create a 2D bit array by reading the bits directly from
a text file by calls that look like
ba = BitArray2D( filename = "data.txt" )
ba.read_bit_array_from_char_file()
You first have to create an empty BitArray2D instance, as in
the first statement above, and then call the method
read_bit_array_from_char_file() on the instance.
Even though the text file supplies ASCII 1's and 0's, the
internal representation of the bit array will be packed, as
mentioned for the case (2) above.
Note that when you create a bit array in this manner, the
newline character in the text file is used as a row delimiter.
(4) You can create a 2D bit array by reading the bits directly from
a binary file through calls like:
ba = BitArray2D( filename = "data_binary.dat" )
ba.read_bit_array_from_binary_file(rows = 5, cols = 8)
Since you are creating a bit array from a binary file, you
cannot designate any particular byte as a row delimiter. That's
the reason for why you must now specify the number of rows and
the number of columns to the read method in the second
statement. If the number of bits in the binary file is less
than what you need for the 2D bit array in the second statement
above, an exception is thrown.
         To illustrate creating a bit array by reading a file in the binary
mode, assume that the file has the characters 'hello' in it
and you read this file into a bit array by calling:
ba = BitArray2D( filename = "hello_file.dat" )
ba.read_bit_array_from_binary_file(rows = 5, cols = 8)
If you now say
print ba
you will see the following array displayed in your terminal
window:
01101000
01100101
01101100
01101100
01101111
These are the ASCII representations of the characters 'h', 'e',
'l', 'l', and 'o'.
@title
OPERATIONS SUPPORTED BY THE BitArray2D CLASS:
@title
DISPLAYING BIT ARRAYS:
(5) Since the BitArray2D class implements the __str__ method, a bit
array can be displayed in a terminal window by
print( ba )
where ba is an instance of BitArray2D. This will display in
your terminal window the bit array ba, with each row of the
array in a separate line. (Obviously, this is not what you
would do for very large bit arrays. But, for diagnostic work,
such displays can be very helpful.) You can always obtain the
string representation of a bit array by
str( ba )
In the string representation, the rows are separated by the
newline character.
@title
ACCESSING AND SETTING INDIVIDUAL BITS AND SLICES:
(6) You can access any individual bit of a 2D bit array by
bit = ba[ godel(i,j) ]
The call on the right will return the bit in the i-th row and
the j-th column of the BitArray2D ba. This assumes that you
have specifically imported the name 'godel' from the BitArray2D
module. If that is not the case, the above call will look like
bit = ba[ BitArray2D.godel(i,j) ]
The function godel(a,b) is used to map a pair of integers a and
b into a unique integer with the help of the Godel pairing
formula.
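         (A common concrete realization of such a pairing, shown here
         only for illustration, is godel(a,b) = 2**a * (2*b + 1) - 1,
         which maps every pair of non-negative integers to a distinct
         non-negative integer.)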
(7) Any single bit of a bit array ba at row index i and column index
j can be set to 1 or 0 by
ba[i,j] = 1_or_0
(8) A slice of a bit array, defined by the corner coordinates (i,j)
and (k,l), can be retrieved by
from BitArray2D import godel
ba[ godel(i,j) : godel(k,l) ]
In the implementation of the __getslice__ method that handles
the above invocation, calls to ungodel(m) are used to recover
the components i and j of a pair whose Godel map is m. To
demonstrate the working of slice retrieval:
ba1 = BitArray2D( bitstring = \
"111111\n110111\n111111\n111111\n111111\n111111" )
         ba2 = ba1[godel(2,3) : godel(4,5)]
         print( ba2 )
yields
11
11
(9) You can also carry out slice assignment by using syntax like
from BitArray2D import godel
ba1[ godel(i,j) : godel(k,l) ] = ba2
where the 2D bit array ba1 is presumably larger than the 2D bit
array ba2. The above call will replace the rectangular region
of ba1 that is defined by the corner coordinates (i,j) and
         (k,l) by the bit array ba2, assuming that the row width of
         ba2 is (k-i) and the column width is (l-j). So in a call like
ba1 = BitArray2D( bitstring = "101\n110\n111" ) # 101
# 110
# 111
ba2 = BitArray2D( rows = 5, columns = 5 ) # 00000
# 00000
# 00000
# 00000
# 00000
         ba2[ godel(2,2) : godel(2+ba1.rows, 2+ba1.columns) ] = ba1
print( ba2 ) # 00000
# 00000
# 00101
# 00110
# 00111
(10) You can construct a deep copy of a bit array by
ba2 = ba1.deep_copy()
The bit array in ba2 will be exactly the same as in ba1, except
that the two bit arrays will be two different objects in
memory.
@title
LOGICAL OPERATIONS ON 2D BIT ARRAYS:
(11) You can carry out all of the logical operations on 2D bit arrays:
result_ba = ba1 & ba2 # for bitwise AND
result_ba = ba1 | ba2 # for bitwise OR
result_ba = ba1 ^ ba2 # for bitwise XOR
result_ba = ~ ba # for bitwise negation
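As a small illustration (the two operands must have the same
number of rows and columns, or an exception is raised):
ba1 = BitArray2D( bitstring = "10\n01" )
ba2 = BitArray2D( bitstring = "11\n00" )
print( ba1 & ba2 )     # 10
                       # 00
print( ba1 ^ ba2 )     # 01
                       # 01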
@title
COMPARING 2D BIT ARRAYS:
(12) Given two 2D bit arrays, you can test for equality and inequality
through the following boolean comparisons:
ba1 == ba2
ba1 != ba2
@title
OTHER SUPPORTED OPERATIONS:
(13) You can shift a bit array by
ba.shift( rowshift = m, colshift = n )
where m is the number of positions by which you want to shift
the array row-wise and n the number of positions column-wise.
The values for m and n are allowed to be negative. A positive
value for m shifts the bit array downwards and a positive
value for n shifts it rightwards.
What may make this method confusing at the beginning is the
orientation of the positive row direction and the positive
column direction. The origin of the array is at the upper left
hand corner of your display. Rows are positive going downwards
and columns are positive going rightwards:
X-----> +ve col direction
|
|
|
V
+ve row direction
So a positive value for rowshift will shift the array downwards
and a positive value for colshift will shift it rightwards.
Just remember that if you want the shifts to seem more
intuitive, use negative values for the rowshift argument.
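Here is a small example of these conventions in action:
ba = BitArray2D( bitstring = "100\n000\n000" )
ba.shift( rowshift = 1, colshift = 1 )
print( ba )            # 000
                       # 010
                       # 000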
(14) In order to patch small holes, you can dilate the blobs made up
of 1's that are connected through the neighborhood relationship
by calling dilate():
result_ba = ba.dilate( m )
The returned bit array is an OR of the bit arrays obtained by
shifting ba by up to m positions in every direction, row-wise
and column-wise (diagonal shifts included).
(15) The opposite of dilate is erode. An erosion operation shrinks
the blobs by deleting the 1's at the boundary, up to a depth
determined by the argument supplied to erode():
result_ba = ba.erode( m )
Logically, the array returned by the above call is an AND of
the bit arrays obtained by shifting ba by up to m positions in
every direction, row-wise and column-wise.
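To make the two operations concrete: dilating a lone 1 with
m = 1 grows it into a 3x3 block, and eroding that block with
m = 1 recovers the lone 1:
ba = BitArray2D( bitstring = "00000\n00000\n00100\n00000\n00000" )
print( ba.dilate(1) )  # 00000
                       # 01110
                       # 01110
                       # 01110
                       # 00000
print( ba.dilate(1).erode(1) )   # recovers the original single 1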
(16) You can write a bit array directly to a text file if you want
the bits to be written out as ASCII 1's and 0's:
ba.write_bit_array_to_char_file("out_file.txt")
This can be a useful thing to do when you are playing with
small to medium sized bit arrays. This call will deposit the
newline character at the end of each row of the bit array.
Subsequently, you can re-create the bit array in the memory by
reading the file with the calls
ba = BitArray2D( filename = "filename.txt" )
ba.read_bit_array_from_char_file()
that were mentioned earlier in item (3) above.
(17) You can write a bit array in its packed binary representation
to a file (that would obviously be a binary file) by calling
ba.write_bit_array_to_packed_binary_file("filename.dat")
The overall size of bit array ba must be a multiple of 8 for
this write function to work. If this condition is not met, the
function will throw an exception.
When writing an internally generated bit array out to a disk
file, the implementation of the write function opens the file
in the binary mode. This is particularly important on Windows
machines since, if the file were to be opened in the text mode,
the bit pattern 00001010 ('\\n') in a bit array will be written
out as 0000110100001010 ('\\r\\n').
A binary file created by the above call can be read back into
the memory by the calls shown in item (4) above:
ba = BitArray2D( filename = "filename.dat" )
ba.read_bit_array_from_binary_file(rows = 5, columns = 8)
As mentioned in (4) above, since no bytes can serve as row
delimiters for binary files, you have to tell the read function
how many rows and columns to read off the file.
@title
HOW A BIT ARRAY IS STORED:
Through the facilities provided by the BitVector class, the bits of
a bit array are stored in 16-bit unsigned ints.
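Since each row is held in its own BitVector, a back-of-the-envelope
estimate (ignoring Python object overhead) is that a bit array of
size rows x columns occupies about rows * ceil(columns/16) such
16-bit ints.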
@title
ABOUT THE AUTHOR:
Avi Kak is the author of "Programming with Objects: A Comparative
Presentation of Object-Oriented Programming with C++ and Java",
published by John-Wiley in 2003. This book presents a new approach
to the combined learning of two large object-oriented languages,
C++ and Java. It is being used as a text in a number of
educational programs around the world. This book has also been
translated into Chinese. Avi Kak is also the author of "Scripting
with Objects: A Comparative Presentation of Object-Oriented
Scripting with Perl and Python," published in 2008 by John-Wiley.
@title
SOME EXAMPLE CODE:
import BitArray2D
from BitArray2D import godel
print("\nConstructing an empty 2D bit array:")
ba = BitArray2D.BitArray2D( rows=0, columns=0 )
print(ba)
print("\nConstructing a bit array of size 10x10 with zero bits -- ba:")
ba = BitArray2D.BitArray2D( rows = 10, columns = 10 )
print(ba)
print("\nConstructing a bit array from a bit string -- ba2:")
ba2 = BitArray2D.BitArray2D( bitstring = "111\n110\n111" )
print(ba2)
print("\nPrint a specific bit in the array -- bit at 1,2 in ba2:")
print( ba2[ godel(1,2) ] )
print("\nSet a specific bit in the array --- set bit (0,1) of ba2:")
ba2[0,1] = 0
print(ba2)
print("\nExperiments in slice getting and setting:")
print("Printing an array -- ba3:")
ba3 = BitArray2D.BitArray2D( bitstring = "111111\n110111\n111111\n111111\n111111\n111111" )
print(ba3)
ba4 = ba3[godel(2,3) : godel(4,5)]
print("Printing a slice of the larger array -- slice b4 of ba3:")
print(ba4)
ba5 = BitArray2D.BitArray2D( rows = 5, columns = 5 )
print("\nPrinting an array for demonstrating slice setting:")
print(ba5)
ba5[godel(2, 2+ba2.rows) : godel(2,2+ba2.columns)] = ba2
print("\nSetting a slice of the array - setting slice of ba5 to ba2:")
print(ba5)
print("\nConstructing a deep copy of ba, will call it ba6:")
ba6 = ba.deep_copy()
ba6[ godel(3,3+ba2.rows) : godel(3,3+ba2.columns) ] = ba2
print("Setting a slice of the larger array -- set slice of ba6 to ba2:")
print(ba6)
(For a more complete working example, see the
example code in the BitArray2DDemo.py file in the
Examples sub-directory.)
'''
from BitVector import __version__ as bitvector_version
if int(bitvector_version.split('.')[0]) < 3:
    raise ImportError("The imported BitVector module must be of version 3.0 or higher")
import BitVector
import re
class BitArray2D( object ): #(A1)
def __init__( self, *args, **kwargs ): #(A2)
if args: #(A3)
raise ValueError( #(A4)
'''BitArray2D constructor can only be called with
keyword arguments for the following keywords:
rows, columns, filename, bitstring''')
allowed_keys = 'bitstring','filename','rows','columns' #(A5)
keywords_used = kwargs.keys() #(A6)
for keyword in keywords_used: #(A7)
if keyword not in allowed_keys: #(A8)
raise ValueError("Wrong keyword used") #(A9)
filename = rows = columns = bitstring = None #(A10)
if 'filename' in kwargs : filename = kwargs.pop('filename')
if 'rows' in kwargs : rows = kwargs.pop('rows')
if 'columns' in kwargs : columns = kwargs.pop('columns')
if 'bitstring' in kwargs : bitstring = kwargs.pop('bitstring')
#(A11 -- A14)
self.filename = None #(A15)
self.rows = None #(A16)
self.columns = None #(A17)
self.bitstring = None #(A18)
self.FILEIN = None #(A19)
if filename: #(A20)
if rows or columns or bitstring: #(A21)
raise ValueError(
'''When filename is specified, you cannot
give values to any other constructor args''') #(A22)
self.filename = filename #(A23)
self.rowVectors = []; self.rows = 0; self.columns = 0 #(A24)
import sys #(A25)
try: #(A26)
if sys.version_info[0] == 3: #(A27)
self.FILEIN = open( filename, encoding='utf-8' )#(A28)
else: #(A29)
self.FILEIN = open( filename, 'rb' ) #(A30)
except IOError as e: #(A31)
print(e.strerror) #(A32)
return #(A33)
elif rows is not None and rows >= 0: #(A34)
if filename or bitstring: #(A35)
raise ValueError(
'''When number of rows is specified, you cannot
give values to any other constructor args except
for columns''') #(A36)
if columns is None or columns < 0: #(A37)
raise ValueError(
'''When number of rows is specified, you must also
specify a value for the number of columns''') #(A38)
self.rows = rows; self.columns = columns #(A39)
self.rowVectors = [ BitVector.BitVector( size = self.columns ) \
for i in range( self.rows ) ] #(A40)
return #(A41)
elif bitstring or bitstring == '': #(A42)
self.rowVectors = [ BitVector.BitVector( bitstring = bits ) \
for bits in re.split( '\n', bitstring ) ] #(A43)
self.rows = len( self.rowVectors ) #(A44)
self.columns = self.rowVectors[0].length() #(A45)
rowVecSizes = [ len(x) for x in self.rowVectors ] #(A46)
if max( rowVecSizes ) != min( rowVecSizes ): #(A47)
raise AttributeError("Your row sizes do not match") #(A48)
def _add_row(self, bitvector): #(B1)
if self.columns == 0: #(B2)
self.columns = bitvector.length() #(B3)
elif self.columns != bitvector.length(): #(B4)
raise ValueError("Size wrong for the new row") #(B5)
self.rowVectors.append( bitvector ) #(B6)
self.rows += 1 #(B7)
def __str__( self ): #(C1)
'To create a print representation'
if self.rows==0 and self.columns==0: #(C2)
return '' #(C3)
return '\n'.join( map( str, self.rowVectors ) ) #(C4)
def __getitem__( self, pos ): #(D1)
'Get the bit from the designated position'
if not isinstance( pos, slice ): #(D2)
row,col = ungodel(pos) #(D3)
if row >= self.rows or row < -self.rows: #(D4)
raise ValueError( "row index range error" ) #(D5)
if col >= self.columns or col < -self.columns: #(D6)
raise ValueError( "column index range error" ) #(D7)
if row < 0: row = self.rows + row #(D8)
if col < 0: col = self.columns + col #(D9)
return self.rowVectors[row][col] #(D10)
else: #(D11)
if pos.start is None: #(D12)
start = 0,0 #(D13)
else: #(D14)
start = ungodel(pos.start) #(D15)
if pos.stop is None: #(D16)
stop = self.rows,self.columns #(D17)
else: #(D18)
stop = ungodel(pos.stop) #(D19)
result = BitArray2D( rows=0, columns=0 ) #(D20)
for i in range( start[0], stop[0] ): #(D21)
result._add_row( BitVector.BitVector( bitstring = \
str(self.rowVectors[i][start[1]:stop[1]])) ) #(D22)
return result #(D23)
def __setitem__(self, pos, item): #(E1)
'''
This is needed for both slice assignments and for index-based
assignments. It checks the type of pos and item to see if the call
is for slice assignment. For slice assignment, the second argument
must be of type slice '[m:n]' whose two numbers m and n are
produced by calling godel() on the two corners of the rectangular
regions whose values you want to set by calling this function. So
for slice assignments, think of pos as consisting of
'[(i,j):(k,l)]' where (i,j) defines one corner of the slice and
(k,l) the other corner. As you would expect, for slice assignments,
the argument item must be of type BitArray2D. For index-based
assignment, the pos consists of the tuple (i,j), the point in the
array where you want to change the value. Again, for index-based
assignment, the last argument will either be 1 or 0.
'''
if (not isinstance( item, BitArray2D )): #(E2)
if isinstance( pos, slice ): #(E3)
raise ValueError("Second arg wrong for assignment") #(E4)
i,j = pos #(E5)
self.rowVectors[i][j] = item #(E6)
# The following section is for slice assignment:
if isinstance(pos,slice): #(E7)
if (not isinstance( item, BitArray2D )): #(E8)
raise TypeError('For slice assignment, \
the right hand side must be a BitArray2D') #(E9)
arg1, arg2 = pos.start, pos.stop #(E10)
i,j = ungodel(arg1) #(E11)
k,l = ungodel(arg2) #(E12)
for m in range(i,j): #(E13)
self.rowVectors[m][k:l] = item.rowVectors[m-i] #(E14)
def __getslice__(self, arg1, arg2): #(F1)
'''
A slice of a 2D array is defined as a rectangular region whose one
corner is at the (i,j) coordinates, which is represented by the
mapped integer arg1 produced by calling godel(i,j). The other
corner of the slice is at the coordinates (k,l) that is represented
by the integer arg2 produced by calling godel(k,l). The slice is
returned as a new BitArray2D instance.
'''
i,j = ungodel(arg1) #(F2)
k,l = ungodel(arg2) #(F3)
sliceArray = BitArray2D( rows=0, columns=0 ) #(F4)
if k > self.rows: k = self.rows #(F5)
if l > self.columns: l = self.columns #(F6)
for x in range(i,k): #(F7)
bv = self.rowVectors[x] #(F8)
sliceArray._add_row( bv[j:l] ) #(F9)
return sliceArray #(F10)
def __eq__(self, other): #(G1)
if self.size() != other.size(): return False #(G2)
if self.rowVectors != other.rowVectors: return False #(G3)
return True #(G4)
def __ne__(self, other): #(H1)
return not self == other #(H2)
def __and__(self, other): #(I1)
'''
Take a bitwise 'AND' of the bit array on which the method is
invoked with the argument bit array. Return the result as a new
bit array.
'''
if self.rows != other.rows or self.columns != other.columns: #(I2)
raise ValueError("Arguments to AND must be of same size")#(I3)
resultArray = BitArray2D(rows=0,columns=0) #(I4)
list(map(resultArray._add_row, \
[self.rowVectors[i] & other.rowVectors[i] \
for i in range(self.rows)])) #(I5)
return resultArray #(I6)
def __or__(self, other): #(J1)
'''
Take a bitwise 'OR' of the bit array on which the method is
invoked with the argument bit array. Return the result as a new
bit array.
'''
if self.rows != other.rows or self.columns != other.columns: #(J2)
raise ValueError("Arguments to OR must be of same size") #(J3)
resultArray = BitArray2D(rows=0,columns=0) #(J4)
list(map(resultArray._add_row, \
[self.rowVectors[i] | other.rowVectors[i] \
for i in range(self.rows)])) #(J5)
return resultArray
def __xor__(self, other): #(K1)
'''
Take a bitwise 'XOR' of the bit array on which the method is
invoked with the argument bit array. Return the result as a new
bit array.
'''
if self.rows != other.rows or self.columns != other.columns: #(K2)
raise ValueError("Arguments to XOR must be of same size")#(K3)
resultArray = BitArray2D(rows=0,columns=0) #(K4)
list(map(resultArray._add_row, \
[self.rowVectors[i] ^ other.rowVectors[i] \
for i in range(self.rows)])) #(K5)
return resultArray #(K6)
def __invert__(self): #(L1)
'''
Invert the bits in the bit array on which the method is invoked
and return the result as a new bit array.
'''
resultArray = BitArray2D(rows=0,columns=0) #(L2)
list(map(resultArray._add_row, [~self.rowVectors[i] \
for i in range(self.rows)])) #(L3)
return resultArray #(L4)
def deep_copy(self): #(M1)
'Make a deep copy of a bit array'
resultArray = BitArray2D(rows=0,columns=0) #(M2)
list(map(resultArray._add_row, [x.deep_copy() \
for x in self.rowVectors])) #(M3)
return resultArray #(M4)
def size(self): #(N1)
return self.rows, self.columns #(N2)
def read_bit_array_from_char_file(self): #(P1)
'''
This assumes that the bit array is stored in the form of
ASCII characters 1 and 0 in a text file. We further assume
that the different rows are separated by the newline character.
'''
error_str = "You need to first construct a BitArray2D" + \
"instance with a filename as argument" #(P2)
if not self.FILEIN: #(P3)
raise SyntaxError( error_str ) #(P4)
allbits = self.FILEIN.read() #(P5)
rows = filter( None, re.split('\n', allbits) ) #(P6)
list(map(self._add_row, [BitVector.BitVector( bitstring = x ) \
for x in rows])) #(P7)
def write_bit_array_to_char_file(self, file_out): #(Q1)
'''
Note that this write function for depositing a bit array into
text file uses the newline as the row delimiter.
'''
FILEOUT = open( file_out, 'w' ) #(Q2)
for bitvec in self.rowVectors: #(Q3)
    FILEOUT.write( str(bitvec) + "\n" ) #(Q4)
FILEOUT.close() #(Q5)
def read_bit_array_from_binary_file(self, rows, columns): #(R1)
'''
This assumes that the bit array is stored in a packed binary
format, as written out by write_bit_array_to_packed_binary_file().
Since no byte can serve as a row delimiter in such a file, you
must tell this method how many rows and columns to read.
'''
error_str = "You need to first construct a BitArray2D " + \
            "instance with a filename as argument" #(R2)
if not self.filename: #(R3)
raise SyntaxError( error_str ) #(R4)
import os.path #(R5)
filesize = os.path.getsize( self.filename ) #(R6)
if (rows * columns) % 8 != 0: #(R7)
raise ValueError("In binary file input mode, rows*cols must" \
+ " be a multiple of 8" ) #(R8)
if filesize < int(rows*columns/8): #(R9)
raise ValueError("File has insufficient bytes" ) #(R10)
bitstring = '' #(R11)
i = 0 #(R12)
while i < rows*columns/8: #(R13)
i += 1 #(R14)
byte = self.FILEIN.read(1) #(R15)
hexvalue = hex( ord( byte ) ) #(R16)
hexvalue = hexvalue[2:] #(R17)
if len( hexvalue ) == 1: #(R18)
hexvalue = '0' + hexvalue #(R19)
bitstring += BitVector._hexdict[ hexvalue[0] ] #(R20)
bitstring += BitVector._hexdict[ hexvalue[1] ] #(R21)
bv = BitVector.BitVector( bitstring = bitstring ) #(R22)
list(map(self._add_row, [ bv[m*columns : m*columns+columns] \
for m in range(rows) ])) #(R23)
def write_bit_array_to_packed_binary_file(self, file_out): #(S1)
'''
This creates a packed disk storage for your bit array. But for
this to work, the total number of bits in your bit array must be a
multiple of 8 since all file I/O is byte oriented. Also note that
now you cannot use any byte as a row delimiter. So when reading
such a file back into a bit array, you have to tell the read
function how many rows and columns to create.
'''
err_str = '''Only a bit array whose total number of bits
is a multiple of 8 can be written to a file.''' #(S2)
if self.rows * self.columns % 8: #(S3)
raise ValueError( err_str ) #(S4)
FILEOUT = open( file_out, 'wb' ) #(S5)
bitstring = '' #(S6)
for bitvec in self.rowVectors: #(S7)
bitstring += str(bitvec) #(S8)
compositeBitVec = BitVector.BitVector(bitstring = bitstring) #(S9)
compositeBitVec.write_to_file( FILEOUT ) #(S10)
def shift( self, rowshift, colshift ): #(T1)
'''
What may make this method confusing at the beginning is the
orientation of the positive row direction and the positive
column direction. The origin of the array is at the upper
left hand corner of your display. Rows are positive going
downwards and columns are positive going rightwards:
X-----> +ve col direction
|
|
|
V
+ve row direction
So a positive value for rowshift will shift the array downwards
and a positive value for colshift will shift it rightwards.
Just remember that if you want the shifts to seem more intuitive,
use negative values for the rowshift argument.
'''
if rowshift >= 0: #(T2)
self.rowVectors[rowshift : self.rows] = \
self.rowVectors[: self.rows-rowshift] #(T3)
self.rowVectors[:rowshift] = \
[BitVector.BitVector(size = self.columns) \
for i in range(rowshift)] #(T4)
if colshift >= 0:
    for bitvec in self.rowVectors[:]: #(T5)
        bitvec.shift_right(colshift)
else:
    for bitvec in self.rowVectors[:]: #(T6)
        bitvec.shift_left(abs(colshift)) #(T7)
else: #(T8)
rowshift = abs(rowshift) #(T9)
self.rowVectors[:self.rows-rowshift] = \
self.rowVectors[rowshift : self.rows] #(T10)
self.rowVectors[self.rows-rowshift:] = \
[BitVector.BitVector(size = self.columns) \
for i in range(rowshift)] #(T11)
if colshift >= 0:
    for bitvec in self.rowVectors[:]: #(T12)
        bitvec.shift_right(colshift)
else: #(T13)
    for bitvec in self.rowVectors[:]: #(T14)
        bitvec.shift_left(abs(colshift)) #(T15)
return self #(T16)
def dilate( self, m ): #(U1)
accumArray = BitArray2D(rows=self.rows, columns=self.columns)#(U2)
for i in range(-m,m+1): #(U3)
for j in range(-m,m+1): #(U4)
temp = self.deep_copy() #(U5)
accumArray |= temp.shift(i,j) #(U6)
return accumArray #(U7)
def erode( self, m ): #(V1)
# An AND-accumulation must not start from an all-zero array; starting
# from the array itself also covers the (0,0) shift:
accumArray = self.deep_copy() #(V2)
for i in range(-m,m+1): #(V3)
for j in range(-m,m+1): #(V4)
temp = self.deep_copy() #(V5)
accumArray &= temp.shift(i,j) #(V6)
return accumArray #(V7)
#------------------------ End of Class Definition -----------------------
#--------------------------- Ancillary Functions ------------------------
def godel(i,j): #(W1)
return 2**i*(2*j + 1)-1 #(W2)
def ungodel(m): #(X1)
i,q = 0,m+1 #(X2)
while not q&1: #(X3)
q >>= 1 #(X4)
i += 1 #(X5)
j = ((m+1)//2**i - 1)//2 #(X6)
return int(i),int(j) #(X7)
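# A quick illustration of the two pairing functions above (the values follow
# directly from the formulas):
#
#     godel(2,3)   =>  2**2 * (2*3 + 1) - 1  =  27
#     ungodel(27)  =>  (2, 3)
#
# Note that godel() grows exponentially in its first (row) argument, so the
# mapped integers become very large for arrays with many rows.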
#------------------------ Test Code Follows -----------------------
if __name__ == '__main__':
print("\nConstructing an empty 2D bit array:")
ba = BitArray2D( rows=0, columns=0 )
print(ba)
print("\nConstructing a bit array of size 10x10 with zero bits -- ba:")
ba = BitArray2D( rows = 10, columns = 10 )
print(ba)
print("\nConstructing a bit array from a bit string -- ba2:")
ba2 = BitArray2D( bitstring = "111\n110\n111" )
print(ba2)
print("\nPrint a specific bit in the array -- bit at 1,2 in ba2:")
print( ba2[ godel(1,2) ] )
print("\nSet a specific bit in the array --- set bit (0,1) of ba2:")
ba2[0,1] = 0
print(ba2)
print("\nExperiments in slice getting and setting:")
print("Printing an array -- ba3:")
ba3 = BitArray2D( bitstring = "111111\n110111\n111111\n111111\n111111\n111111" )
print(ba3)
ba4 = ba3[godel(2,3) : godel(4,5)]
print("Printing a slice of the larger array -- slice b4 of ba3:")
print(ba4)
ba5 = BitArray2D( rows = 5, columns = 5 )
print("\nPrinting an array for demonstrating slice setting:")
print(ba5)
ba5[godel(2, 2+ba2.rows) : godel(2,2+ba2.columns)] = ba2
print("\nSetting a slice of the array - setting slice of ba5 to ba2:")
print(ba5)
print("\nConstructing a deep copy of ba, will call it ba6:")
ba6 = ba.deep_copy()
ba6[ godel(3,3+ba2.rows) : godel(3,3+ba2.columns) ] = ba2
print("Setting a slice of the larger array -- set slice of ba6 to ba2:")
print(ba6)
print("\nExperiment in bitwise AND:")
ba5 = ba.deep_copy()
ba7 = ba5 & ba6
print("Displaying bitwise AND of ba5 and ba6 --- ba7:")
print(ba7)
print("\nExperiment in bitwise OR:")
ba7 = ba5 | ba6
print("Displaying bitwise OR of ba5 and ba6 --- ba7:")
print(ba7)
print("\nExperiment in bitwise XOR:")
ba7 = ba5 ^ ba6
print("Displaying bitwise XOR of ba5 and ba6 --- ba7:")
print(ba7)
print("\nExperiment in bitwise negation:")
ba7 = ~ba5
print("Displaying bitwise negation of ba5 --- ba7:")
print(ba7)
print("\nSanity check (A & ~A => all zeros):")
print(ba5 & ~ba5)
print("\nConstruct bit array from a char file with ASCII 1's and 0's:" )
ba8 = BitArray2D( filename = "Examples/data.txt" )
ba8.read_bit_array_from_char_file()
print("The bit array as read from the file -- ba8:")
print(ba8)
print("\nConstruct bit array from a packed binary file:")
ba9 = BitArray2D( filename = "Examples/data_binary.dat" )
ba9.read_bit_array_from_binary_file(rows = 5, columns = 8)
print("The bit array as read from the file -- ba9:")
print("size of ba9: " + str(ba9.size()))
print(ba9)
print("\nTest the equality and inequality operators:")
ba10 = BitArray2D( bitstring = "111\n110" )
ba11 = BitArray2D( bitstring = "111\n110" )
print("ba10 is equal to ba11 is: " + str(ba10 == ba11))
ba12 = BitArray2D( bitstring = "111\n111" )
print("ba10 is equal to ba12 is: " + str(ba10 == ba12))
print("\nTest shifting a bit array:")
print("printing ba13:")
ba13 = ba9.deep_copy()
print(ba13)
ba13.shift(rowshift=-2, colshift=2)
print("The shifted version of ba9:")
print(ba13)
print("\nTest dilation:")
ba14 = BitArray2D( filename = "Examples/data2.txt" )
ba14.read_bit_array_from_char_file()
print("Array before dilation:")
print(ba14)
ba15= ba14.dilate(1)
print("Array after dilation:")
print(ba15)
print("\nTest erosion:")
ba16 = BitArray2D( filename = "Examples/data2.txt" )
ba16.read_bit_array_from_char_file()
print("Array before erosion:")
print(ba16)
ba17= ba16.erode(1)
print("Array after erosion:")
print(ba17)
print("\nExperiments with writing array to char file:")
ba17.write_bit_array_to_char_file("out1.txt")
print("\nExperiments with writing array to packed binary file:")
ba9.write_bit_array_to_packed_binary_file("out2.dat")
/HNSC-classifier-1.0.tar.gz/HNSC-classifier-1.0/LICENSE.md

GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
# /CatKit-0.5.4-py3-none-any.whl/catkit/hub/cathubsqlite.py
import sys
from ase.db.sqlite import SQLite3Database
import sqlite3
import json
from past.utils import PY2
from tabulate import tabulate
init_commands = [
""" CREATE TABLE publication (
id INTEGER PRIMARY KEY AUTOINCREMENT,
pub_id text UNIQUE,
title text,
authors text,
journal text,
volume text,
number text,
pages text,
year integer,
publisher text,
doi text,
tags text
);""",
"""CREATE TABLE publication_system (
ase_id text REFERENCES systems(unique_id),
pub_id text REFERENCES publication(pub_id),
PRIMARY KEY (pub_id, ase_id)
);""",
"""CREATE TABLE reaction (
id INTEGER PRIMARY KEY AUTOINCREMENT,
chemical_composition text,
surface_composition text,
facet text,
sites text,
coverages text,
reactants text,
products text,
reaction_energy real,
activation_energy real,
dft_code text,
dft_functional text,
username text,
pub_id text,
FOREIGN KEY (pub_id) REFERENCES publication(pub_id)
);""",
""" CREATE TABLE reaction_system (
name text,
energy_correction real,
ase_id text,
id integer,
FOREIGN KEY (ase_id) REFERENCES systems(unique_id),
FOREIGN KEY (id) REFERENCES reaction(id)
);"""]
class CathubSQLite:
"""Class for managing SQLite3 database for reaction energies,
publications and atomic structures. Builds on top of the ASE database for
    atomic structures https://wiki.fysik.dtu.dk/ase/ase/db/db.html with four
additional tables:
publication: publication info
publication_system: one-to-many mapping between publication table and
systems table in ASE database
reaction: reaction energies for surfaces
    reaction_system: many-to-many mapping between reaction table and
systems table in ASE database
Connect to a database object:
db = CathubSQLite('yourdbfile.db')
Set up a connection for several manipulations:
    with CathubSQLite('yourdbfile.db') as db:
Do your work...
Parameters
----------
filename : str
name of database file
"""
def __init__(self, filename, stdin=sys.stdin, stdout=sys.stdout):
assert filename.endswith('.db'), 'filename should have .db extension'
self.filename = filename
self.initialized = False
self.default = 'NULL'
self.connection = None
self.stdin = stdin
self.stdout = stdout
def _connect(self):
return sqlite3.connect(self.filename, timeout=600)
def __enter__(self):
"""Set connection upon entry using with statement"""
assert self.connection is None
self.connection = self._connect()
return self
def __exit__(self, exc_type, exc_value, tb):
"""Commit changes upon exit"""
if exc_type is None:
self.connection.commit()
else:
self.connection.rollback()
self.connection.close()
self.connection = None
def _initialize(self, con):
"""Set up tables in SQL"""
if self.initialized:
return
SQLite3Database()._initialize(con) # ASE db initialization
cur = con.execute(
'SELECT COUNT(*) FROM sqlite_master WHERE name="reaction"')
if cur.fetchone()[0] == 0: # no reaction table
for init_command in init_commands:
con.execute(init_command) # Create tables
con.commit()
self.initialized = True
def read(self, id, table='reaction'):
""" Return an entire row of a table
Parameters
---------
id: int
row integer
table: str
'reaction', 'publication', 'publication_system', 'reaction_system'
"""
con = self.connection or self._connect()
self._initialize(con)
cur = con.cursor()
cur.execute('SELECT * FROM \n {} \n WHERE \n {}.id={}'.format(
table, table, id))
row = cur.fetchall()
if len(row) == 14: # Old schema
row = row.insert(5, 'None')
return row
def write_publication(self, values):
"""
Write publication info to db
Parameters
----------
values: dict with entries
{'pub_id': str (short name for publication),
'authors': list of str ()
'journal': str,
'volume': str,
'number': str,
             'pages': str,
'year': int,
'publisher': str,
'doi': str,
'tags': list of str}
"""
con = self.connection or self._connect()
self._initialize(con)
cur = con.cursor()
values = (values['pub_id'],
values['title'],
json.dumps(values['authors']),
values['journal'],
values['volume'],
values['number'],
values['pages'],
values['year'],
values['publisher'],
values['doi'],
json.dumps(values['tags']))
q = self.default + ',' + ', '.join('?' * len(values))
cur.execute('INSERT OR IGNORE INTO publication VALUES ({})'.format(q),
values)
pid = self.get_last_id(cur, table='publication')
if self.connection is None:
con.commit()
con.close()
return pid
def write(self, values, data=None):
"""
Write reaction info to db file
Parameters
----------
values: dict
The values dict can include:
            {'chemical_composition': str (chemical composition on empty slab),
'surface_composition': str (reduced chemical composition or
shortname),
'facet': str
'sites': dict
adsorption sites of species.
f.ex: {'OH': 'ontop', 'O': 'hollow'}
'coverages': dict
coverage of adsorbates relative to the unit cell
f.ex. {'OH': 0.25, 'O': 0.5})
'reactants'/ 'products': dict
                keys with name of chemical species followed by phase (gas, *)
                values are the prefactor in the reaction.
                For the reaction H2Ogas -> 2Hstar + Ostar you would write:
                'reactants': {'H2Ogas': 1}
                'products': {'Hstar': 2, 'Ostar': 1}
'reaction_energy': float
'activation_energy': float
'dft_code': str
'dft_functional': str
'username': str
'pub_id': str
Should match the pub_id of the corresponding publications
}
"""
con = self.connection or self._connect()
self._initialize(con)
cur = con.cursor()
pub_id = values['pub_id']
ase_ids = values['ase_ids']
energy_corrections = values['energy_corrections']
if ase_ids is not None:
check_ase_ids(values, ase_ids)
else:
ase_ids = {}
values = (values['chemical_composition'],
values['surface_composition'],
values['facet'],
json.dumps(values['sites']),
json.dumps(values['coverages']),
json.dumps(values['reactants']),
json.dumps(values['products']),
values['reaction_energy'],
values['activation_energy'],
values['dft_code'],
values['dft_functional'],
values['username'],
values['pub_id']
)
""" Write to reaction table"""
q = self.default + ',' + ', '.join('?' * len(values))
cur.execute('INSERT INTO reaction VALUES ({})'.format(q),
values)
id = self.get_last_id(cur)
reaction_structure_values = []
""" Write to publication_system and reaction_system tables"""
for name, ase_id in ase_ids.items():
if name in energy_corrections:
energy_correction = energy_corrections[name]
else:
energy_correction = 0
reaction_structure_values.append([name, energy_correction,
ase_id, id])
insert_statement = """INSERT OR IGNORE INTO
publication_system(ase_id, pub_id) VALUES (?, ?)"""
cur.execute(insert_statement, [ase_id, pub_id])
cur.executemany('INSERT INTO reaction_system VALUES (?, ?, ?, ?)',
reaction_structure_values)
if self.connection is None:
con.commit()
con.close()
return id
def update(self, id, values, key_names='all'):
"""
Update reaction info for a selected row
Parameters
----------
id: int
row integer
values: dict
See write() method for details
key_names: list or 'all'
            list with name of columns to update. Should match the key-value
pairs in values.
default is 'all'
"""
con = self.connection or self._connect()
self._initialize(con)
cur = con.cursor()
pub_id = values['pub_id']
ase_ids = values['ase_ids']
energy_corrections = values['energy_corrections']
if ase_ids is not None:
check_ase_ids(values, ase_ids)
else:
ase_ids = {}
key_list, value_list = get_key_value_list(key_names, values)
N_keys = len(key_list)
value_strlist = get_value_strlist(value_list)
execute_str = ', '.join('{}={}'.format(key_list[i], value_strlist[i])
for i in range(N_keys))
update_command = 'UPDATE reaction SET {} WHERE id = {};'\
.format(execute_str, id)
cur.execute(update_command)
delete_command = 'DELETE from reaction_system WHERE id = {}'.format(id)
cur.execute(delete_command)
reaction_structure_values = []
for name, ase_id in ase_ids.items():
reaction_structure_values.append([name,
                                              energy_corrections.get(name, 0),  # default to 0, matching write()
ase_id, id])
insert_statement = """INSERT OR IGNORE INTO
publication_system(ase_id, pub_id) VALUES (?, ?)"""
cur.execute(insert_statement, [ase_id, pub_id])
cur.executemany('INSERT INTO reaction_system VALUES (?, ?, ?, ?)',
reaction_structure_values)
if self.connection is None:
con.commit()
con.close()
return id
def get_last_id(self, cur, table='reaction'):
"""
Get the id of the last written row in table
Parameters
----------
cur: database connection().cursor() object
table: str
'reaction', 'publication', 'publication_system', 'reaction_system'
Returns: id
"""
cur.execute("SELECT seq FROM sqlite_sequence WHERE name='{0}'"
.format(table))
result = cur.fetchone()
if result is not None:
id = result[0]
else:
id = 0
return id
def check(self, chemical_composition, reaction_energy):
"""
        Check if entry with same surface and energy is already written
to database file
Parameters
----------
        chemical_composition: str
        reaction_energy: float
Returns id or None
"""
con = self.connection or self._connect()
self._initialize(con)
cur = con.cursor()
statement = """SELECT reaction.id FROM reaction WHERE
reaction.chemical_composition=? and reaction.reaction_energy=?"""
argument = [chemical_composition, reaction_energy]
cur.execute(statement, argument)
rows = cur.fetchall()
if len(rows) > 0:
id = rows[0][0]
else:
id = None
return id
def check_reaction_on_surface(self, chemical_composition, reactants,
products):
"""
        Check if entry with same surface and reaction is already written
to database file
Parameters
----------
        chemical_composition: str
reactants: dict
products: dict
Returns id or None
"""
con = self.connection or self._connect()
self._initialize(con)
cur = con.cursor()
statement = """SELECT reaction.id FROM reaction WHERE
reaction.chemical_composition='{}' and reaction.reactants='{}'
and reaction.products='{}';""".format(chemical_composition,
json.dumps(reactants),
json.dumps(products))
cur.execute(statement)
rows = cur.fetchall()
if len(rows) > 0:
id = rows[0][0]
else:
id = None
return id
def check_publication(self, pub_id):
con = self.connection or self._connect()
self._initialize(con)
cur = con.cursor()
statement = """
SELECT id FROM publication WHERE publication.pub_id=?"""
argument = [pub_id]
cur.execute(statement, argument)
rows = cur.fetchall()
if len(rows) > 0:
id = rows[0][0]
else:
id = None
return id
def check_publication_structure(self, pub_id, ase_id):
con = self.connection or self._connect()
self._initialize(con)
cur = con.cursor()
statement = """
SELECT id FROM publication_system WHERE
publication.pub_id=? and publication.ase_id=?"""
argument = [pub_id, ase_id]
cur.execute(statement, argument)
rows = cur.fetchall()
if len(rows) > 0:
id = rows[0][0]
else:
id = None
return id
def print_summary(self):
self.stdout.write('------------------------------------------------\n')
self.stdout.write('Reaction Summary: \n')
self.stdout.write('------------------------------------------------\n')
con = self.connection or self._connect()
self._initialize(con)
cur = con.cursor()
cur.execute("""
SELECT
surface_composition, reactants, products, reaction_energy,
activation_energy, sites
FROM
reaction;""")
rows = cur.fetchall()
table = []
for row in rows:
equation = get_equation(json.loads(row[1]), json.loads(row[2]))
table += [[row[0], equation, row[3], row[4], row[5]]]
headers = ['Surface Composition', 'Equation', 'Reaction Energy',
'Activation Energy', 'Sites']
self.stdout.write(tabulate(table, headers) + '\n')
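# Usage sketch (illustrative only; 'example.db' and every dict value below are
# made-up placeholders, and the publication dict must carry all of the keys
# that write_publication() expects):
#
#     with CathubSQLite('example.db') as db:
#         db.write_publication({'pub_id': 'doe2018', 'title': 'A title',
#                               'authors': ['Doe J.'], 'journal': 'J. Chem.',
#                               'volume': '1', 'number': '1', 'pages': '1-10',
#                               'year': 2018, 'publisher': 'X', 'doi': '...',
#                               'tags': []})
#         db.print_summary()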
def check_ase_ids(values, ase_ids):
ase_values = ase_ids.values()
assert len(set(ase_values)) == len(ase_values), 'Duplicate ASE ids!'
reaction_species = set(list(values['reactants'].keys()) +
list(values['products'].keys()))
n_split = 0
for spec in ase_ids.keys():
if '_' in spec:
n_split += 1
assert len(reaction_species) <= len(ase_values) + n_split, \
'ASE ids missing!'
return
def get_key_value_list(key_list, values, table='reaction'):
total_keys = {'reaction': ['chemical_composition', 'surface_composition',
'facet', 'sites', 'coverages', 'reactants',
'products', 'reaction_energy',
'activation_energy', 'dft_code',
'dft_functional', 'username', 'pub_id'],
'publication': ['pub_id', 'title', 'authors', 'journal',
'volume', 'number', 'pages', 'year',
'publisher', 'doi', 'tags'],
'reaction_system': ['name', 'energy_correction',
'ase_id', 'id'],
                  'publication_system': ['ase_id', 'pub_id']}
total_key_list = total_keys[table]
if key_list == 'all':
key_list = total_key_list
else:
for key in key_list:
assert key in total_key_list
value_list = [values[key] for key in key_list]
return key_list, value_list
def get_value_strlist(value_list):
value_strlist = []
for v in value_list:
if PY2: # python 2
if isinstance(v, unicode):
v = v.encode('ascii', 'ignore')
if isinstance(v, dict):
v = json.dumps(v)
value_strlist.append("'{}'".format(v))
elif isinstance(v, str):
value_strlist.append("'{}'".format(v))
elif v is None or v == '':
value_strlist.append("{}".format('NULL'))
else:
value_strlist.append("{}".format(v))
return value_strlist
def get_equation(reactants, products):
equation = ''
arrow = 0
for column in (reactants, products):
if arrow == 1:
equation += ' -> '
arrow += 1
i = 0
for key in sorted(column, key=len, reverse=True):
prefactor = column[key]
if 'gas' in key:
key = key.replace('gas', '(g)')
if 'star' in key:
key = key.replace('star', '*')
if not i == 0:
if prefactor > 0:
equation += ' + '
else:
equation += ' - '
prefactor *= -1
if prefactor == 1:
prefactor = ''
equation += str(prefactor) + key
i += 1
    return equation
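# Example (a sketch; for species names of equal length the sort is stable, so
# dict insertion order is kept):
#     get_equation({'H2Ogas': 1}, {'Hstar': 2, 'Ostar': 1})
#     -> 'H2O(g) -> 2H* + O*'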
# /Klampt-0.9.0-cp36-cp36m-win_amd64.whl/klampt/math/symbolic_io.py
from .symbolic import *
from .symbolic import _infix_operators,_prefix_operators,_builtin_functions
from ..io import loader
import json
from json import encoder
import weakref
import sys
import warnings
VAR_PREFIX = ''
USER_DATA_PREFIX = '$'
NAMED_EXPRESSION_TAG = '#'
NAMED_EXPRESSION_PREFIX = '@'
_operator_precedence = {'pow':1,
'mul':2,'div':2.5,
'add':3,'sum':3,'sub':3.5,
'neg':4,
'not':5,
'and':6,'or':6,
'ge':7,'le':7,'eq':7,'ne':7}
#just a helper class to do some duck-typing
class _Object(object):
pass
if sys.version_info[0] == 2:
def byteify(input):
"""Helpful for converting unicode values in JSON loaded objects to strings"""
if isinstance(input, dict):
return {byteify(key): byteify(value)
for key, value in input.items()}
elif isinstance(input, list):
return [byteify(element) for element in input]
        elif isinstance(input, unicode):
return input.encode('utf-8')
else:
return input
else:
def byteify(input):
return input
class _TaggedExpression(Expression):
def __init__(self,name):
self.name = name
Expression.__init__(self)
def indent(s,spaces):
if spaces <= 0: return s
return s.replace('\n','\n'+' '*spaces)
def _prettyPrintExpr(expr,astr,parseCompatible):
"""Returns a string representing this expression, where astr is a list of strings
representing each argument"""
if not isinstance(expr,OperatorExpression):
return exprToStr(expr,parseCompatible)
if len(expr.functionInfo.printers) > 0:
if parseCompatible and 'parse' in expr.functionInfo.printers:
return expr.functionInfo.printers['parse'](expr,astr)
if not parseCompatible and 'str' in expr.functionInfo.printers:
return expr.functionInfo.printers['str'](expr,astr)
if expr.functionInfo.name in _prefix_operators:
prefix = _prefix_operators[expr.functionInfo.name]
assert len(expr.args) == 1,"Weird, prefix operator %s has %d arguments? %s"%(expr.functionInfo.name,len(astr),",".join(astr))
return prefix + astr[0]
if expr.functionInfo.name in _infix_operators:
assert len(expr.args) == 2,"Weird, infix operator %s has %d arguments? %s"%(expr.functionInfo.name,len(astr),",".join(astr))
return astr[0] + _infix_operators[expr.functionInfo.name] + astr[1]
if expr.functionInfo.name == 'setitem':
vconst = to_const(expr.args[0])
iconst = to_const(expr.args[1])
if vconst is not None and iconst is not None:
if hasattr(iconst,'__iter__'):
indexset = set(iconst)
if parseCompatible:
astr[0] = '[' + ','.join(['0' if i in indexset else str(v) for i,v in enumerate(vconst)])+']'
else:
astr[0] = '[' + ','.join(['*' if i in indexset else str(v) for i,v in enumerate(vconst)])+']'
if expr.functionInfo.name == 'getitem':
if isinstance(expr.args[0],OperatorExpression) and astr[0][0] != '(' and expr.args[0].functionInfo.name in _infix_operators:
astr[0] = '(' + astr[0] + ')'
#if expr.functionInfo.name == 'getslice':
# if len(astr) <= 2:
# astr.append('')
# if len(astr) <= 3:
# astr.append('')
# return astr[0] + '[%s:%s:%s]'%(astr[1],astr[2],astr[3])
if isinstance(expr.args[1],slice):
start,stop,step = expr.args[1].start,expr.args[1].stop,expr.args[1].step
astr[1] = "%s:%s%s"%(("" if start is None else str(start)),
("" if (stop is None or stop > 900000000000) else str(stop)),
("" if step is None else ":"+str(step)))
return astr[0] + '[' +astr[1] + ']'
#default
if len(astr) > 1 and sum(len(a) for a in astr) > 80-2-len(expr.functionInfo.name):
res = expr.functionInfo.name + "("
res += ',\n '.join([indent(a,2) for a in astr]) + ')'
else:
res = expr.functionInfo.name + "("
res += ','.join(astr) + ')'
return res
def _make_tagged(expr,prefix="SubExp"):
"""Creates a copy of expr where each reference to a common subexpression is
replaced with a TaggedExpression. If there are no common subexpressions,
expr is returned."""
def _refspre(node):
if 'refs' in node._cache:
node._cache['refs'] += 1
return (False,True,node._cache['refs'])
node._cache['refs'] = 1
return (True,True,None)
expr._traverse(pre=_refspre,cache=False)
#all the cache values are now the number of references to a subexpression
def _hassubexpr_pre(node):
if node._cache['refs'] > 1:
#print "Node",node.functionInfo.name,"is a repeated subexpression"
node._cache['hassubexpr'] = True
return (False,True,True)
return (True,True,None)
def _hassubexpr_post(node,cvals):
if len(cvals) == 0:
return (True,False)
res = any(cvals)
#print "Child of",node.functionInfo.name,"has repeated subexpression"
node._cache['hassubexpr'] = res
if res: return (True,True)
return (True,False)
if not expr._traverse(pre=_hassubexpr_pre,post=_hassubexpr_post,cache=False):
#print "***Expression has no subexpressions***"
expr._clearCache('refs')
expr._clearCache('hassubexpr')
return expr
assert expr._cache.get('hassubexpr',False) == True
expr._clearCache('refs')
#print "***Expression has subexpressions***"
subexprs = dict()
def replace(node):
if not node._cache.get('hassubexpr',False): return node
if 'refs' in node._cache:
if 'id' not in node._cache:
#new detected subexpression, not added yet
tag = prefix+str(len(subexprs)+1)
node._cache['id'] = tag
subexprs[tag] = _TaggedExpression(tag)
node._cache['refs'] += 1
#print "Reference",node._cache['refs'],"to",node.functionInfo.name
return subexprs[node._cache['id']]
node._cache['refs'] = 1
if node._children is None:
return node
else:
assert isinstance(node,OperatorExpression)
#print "New reference to",node.functionInfo.name
creps = [replace(c) for c in node._children]
if any(cr is not c for (cr,c) in zip(creps,node._children)):
return OperatorExpression(node.functionInfo,creps,node.op)
else:
return node
repl = replace(expr)
expr._clearCache('refs')
expr._clearCache('hassubexpr')
#NEED TO CLEAR 'id' from cache after repl is used
return repl
def _to_jsonobj(val):
if isinstance(val,(bool,int,float)):
return val
elif isinstance(val,(np.ndarray,np.float64)):
return val.tolist()
elif isinstance(val,(list,tuple)):
return [_to_jsonobj(x) for x in val]
else:
try:
return loader.toJson(val)
except:
raise ValueError("Unable to convert object "+repr(val)+" to JSON object")
return None
def _json_complex(jsonval):
if isinstance(jsonval,dict):
return (len(jsonval) > 0)
elif isinstance(jsonval,(list,tuple)):
return any(_json_complex(v) for v in jsonval)
else:
return False
def _json_depth(jsonval):
if isinstance(jsonval,dict):
return 1 + max(_json_depth(v) for v in jsonval.values())
elif isinstance(jsonval,(list,tuple)):
return 1 + max(_json_depth(v) for v in jsonval)
else:
return 1
def exprToStr(expr,parseCompatible=True,expandSubexprs='auto'):
"""Converts an Expression to a printable or parseable string.
Args:
expr (Expression): the Expression to convert
parseCompatible (bool, optional): if True, the result is readable via exprFromStr()
expandSubexprs (str or bool, optional): whether to expand subexpressions. Can be:
* 'auto': if parseCompatible, equivalent to False.
if parseCompatible=False, equivalent to True.
* True: expands all common subexpressions
* False: does not expand common subexpressions.
* 'show': Internally used.
Returns:
(str): a printable or parsable string representing expr.
"""
if isinstance(expr,ConstantExpression):
if isinstance(expr.value,slice):
start,stop,step = expr.value.start,expr.value.stop,expr.value.step
return "%s:%s%s"%(("" if start is None else str(start)),
("" if (stop is None or stop > 900000000000) else str(stop)),
("" if step is None else ":"+str(step)))
try:
jsonval = _to_jsonobj(expr.value)
except:
return str(expr.value)
if parseCompatible:
return json.dumps(jsonval)
else:
            #Note: DOESN'T WORK IN Python 3
#original_float_repr = encoder.FLOAT_REPR
encoder.FLOAT_REPR = lambda o:format(o,'.14g')
try:
if _json_complex(jsonval):
res = json.dumps(jsonval,sort_keys=True, indent=4, separators=(',', ': '))
else:
res = json.dumps(jsonval,sort_keys=True)
except Exception:
print("Unable to dump constant expression",expr.value,"of type",expr.value.__class__.__name__)
def print_recursive(v,indent=0):
if hasattr(v,'__iter__'):
print(indent*' ',"Sub objects have type",[a.__class__.__name__ for a in v])
for a in v:
print_recursive(a,indent+2)
print_recursive(expr.value)
return "___JSON_ENCODE_ERROR___"
#encoder.FLOAT_REPR = original_float_repr
return res
elif isinstance(expr,VariableExpression):
if parseCompatible:
return VAR_PREFIX+expr.var.name
else:
return str(expr.var)
elif isinstance(expr,UserDataExpression):
return USER_DATA_PREFIX+expr.name
elif isinstance(expr,OperatorExpression):
if expandSubexprs == 'auto':
expandSubexprs = not parseCompatible
if expandSubexprs:
astr = []
for i,a in enumerate(expr.args):
a._parent = (weakref.ref(expr),i)
astr.append(exprToStr(a,parseCompatible,expandSubexprs))
if not isinstance(a,OperatorExpression) and expandSubexprs == 'show' and ('id' in a._cache or 'name' in a._cache):
                    #tagged subexprs need parentheses
if astr[-1][-1] != ')':
astr[-1] = '('+astr[-1]+')'
astr[-1] = astr[-1] + NAMED_EXPRESSION_TAG + a._cache.get('id',a._cache.get('name'))
a._parent = None
res = _prettyPrintExpr(expr,astr,parseCompatible)
if expandSubexprs == 'show' and ('id' in expr._cache or 'name' in expr._cache):
                #tagged subexprs need parentheses
if res[-1] != ')':
res = '('+res+')'
return res + NAMED_EXPRESSION_TAG + expr._cache.get('id',expr._cache.get('name'))
oldparent = expr._parent
iscomplex = expr.depth() >= 0 and (expr.functionInfo.name in _operator_precedence)
expr._parent = oldparent
if iscomplex and (expr._parent is not None and not isinstance(expr._parent,str)):
if parseCompatible:
return '(' + res + ')'
else:
parent = expr._parent[0]()
if parent.functionInfo.name in _operator_precedence:
expr_precedence = _operator_precedence[expr.functionInfo.name]
parent_precedence = _operator_precedence[parent.functionInfo.name]
#if - is the first in a summation, don't parenthesize it
if expr._parent[1] == 0 and expr.functionInfo.name == 'neg' and parent.functionInfo.name in ['sum','add','sub']:
return res
if expr_precedence > parent_precedence:
return '(' + res + ')'
if expr_precedence == parent_precedence:
if expr.functionInfo is parent.functionInfo and expr.functionInfo.properties.get('associative',False):
return res
else:
return '(' + res + ')'
return res
else:
if not parseCompatible:
taggedexpr = _make_tagged(expr,"")
else:
taggedexpr = _make_tagged(expr)
res = exprToStr(taggedexpr,parseCompatible,'show')
if taggedexpr is not expr:
expr._clearCache('id',deep=True)
return res
elif isinstance(expr,_TaggedExpression):
return NAMED_EXPRESSION_PREFIX+expr.name
elif is_const(expr):
return str(expr)
else:
raise ValueError("Unknown type "+expr.__class__.__name__)
def exprFromStr(context,string,fmt=None,add=False):
"""Returns an Expression from a string. In auto mode, this reads in constants in klampt.loader JSON-
compatible format, standard variables in the form "x", user data in the form of strings prepended with $
(e.g., "$x"), and named expression references in the form of strings prepended with @.
Args:
context (Context): the context containing possible functions in string
string (str): the string to parse.
fmt (str, optional): specifies a format for the string. Can be None (auto), 'auto', or 'json'
add (bool, optional): if true, adds all variables referenced in the string to the context.
Otherwise, undefined variables are referred to as user data.
An exception is raised on parsing failure.
(Parsing is a little slow, so try not to use it in tight inner loops)
Returns:
(Expression): the expression represented by str.
"""
if len(string) == 0:
raise ValueError("Empty string provided")
    if fmt is None:
if string[0] == '{':
fmt = 'json'
else:
fmt = 'auto'
if fmt == 'auto':
import re,ast
USERDATA_MARKER = '___'
EXPR_MARKER = '____'
TAGLIST_NAME = '__tagexprlist__'
taglist = context.expressions.copy()
def __settag__(self,tagname,taglist):
assert isinstance(tagname,ConstantExpression) and isinstance(tagname.value,str)
taglist[tagname.value] = self
return self
def __gettag__(tagname,taglist):
assert isinstance(tagname,ConstantExpression) and isinstance(tagname.value,str)
return taglist[tagname.value]
Expression.__settag__ = __settag__
x = re.sub(r"\$(\w+)", r"___\1",string)
x = re.sub(r"\#(\w+)", r'.__settag__("\1",__tagexprlist__)',x)
x = re.sub(r"\@(\w+)", r'__gettag__("\1",__tagexprlist__)',x)
#print "Substituted string",x
tree = ast.parse(x,mode='eval')
missing_functions = []
missing_names = []
userdata = {}
#hack to easily access functions with the class.attribute syntax
allFunctions = _builtin_functions.copy()
for name,func in context.customFunctions.items():
path = name.split('.')
if len(path) == 1:
allFunctions[name] = func
else:
if path[0] not in allFunctions:
allFunctions[path[0]] = _Object()
root = allFunctions[path[0]]
for n in path[1:-1]:
if not hasattr(root,n):
setattr(root,n,_Object())
root = getattr(root,n)
setattr(root,path[-1],func)
allFunctions[TAGLIST_NAME] = taglist
allFunctions['__gettag__'] = __gettag__
class RewriteVarNames(ast.NodeTransformer):
def __init__(self):
self.infunc = False
def visit_Call(self,node):
self.infunc = True
self.generic_visit(node)
return node
def visit_Name(self, node):
if self.infunc:
self.infunc = False
if node.id not in allFunctions:
missing_functions.append(node.id)
return node
if node.id.startswith(USERDATA_MARKER):
basename = node.id[len(USERDATA_MARKER):]
userdata[node.id] = expr(basename)
else:
if node.id in context.variableDict:
userdata[node.id] = expr(context.variableDict[node.id])
elif add:
userdata[node.id] = expr(context.addVar(node.id,'N'))
elif node.id == TAGLIST_NAME:
pass
else:
missing_names.append(node.id)
userdata[node.id] = expr(node.id)
return node
def visit_Num(self, node):
return ast.copy_location(ast.Call(func=ast.copy_location(ast.Name(id="_const",ctx=ast.Load()),node),args=[node],keywords=[]),node)
def visit_Str(self, node):
return ast.copy_location(ast.Call(func=ast.copy_location(ast.Name(id="_const",ctx=ast.Load()),node),args=[node],keywords=[]),node)
def visit_List(self, node):
args = []
for idx, item in enumerate(node.elts):
args.append(self.visit(item))
return ast.copy_location(ast.Call(func=ast.copy_location(ast.Name(id="_convert_list",ctx=ast.Load()),node),args=args,keywords=[]),node)
def visit_Tuple(self, node):
args = []
for idx, item in enumerate(node.elts):
args.append(self.visit(item))
return ast.copy_location(ast.Call(func=ast.copy_location(ast.Name(id="_convert_list",ctx=ast.Load()),node),args=args,keywords=[]),node)
#print "old tree:",ast.dump(tree)
newtree = RewriteVarNames().visit(tree)
#print "new tree:",ast.dump(newtree)
if len(missing_functions) > 0:
raise ValueError("Undefined functions "+','.join(missing_functions))
if len(missing_names) > 0:
raise ValueError("Undefined variable "+','.join(missing_names))
allFunctions['_const'] = const
allFunctions['_convert_list'] = lambda *args:array(*args)
ctree = compile(newtree, filename="<ast>", mode="eval")
res = eval(ctree,allFunctions,userdata)
delattr(Expression,'__settag__')
return res
elif fmt == 'json':
import json
        obj = json.loads(string)
return exprFromJson(context,obj)
else:
raise ValueError("Invalid format "+fmt)
def exprToJson(expr):
if isinstance(expr,ConstantExpression):
return _to_jsonobj(expr.value)
elif isinstance(expr,UserDataExpression):
return USER_DATA_PREFIX+expr.name
elif isinstance(expr,VariableExpression):
return VAR_PREFIX+expr.var.name
elif isinstance(expr,OperatorExpression):
def _tojson(node,childvals):
if isinstance(node,OperatorExpression):
res = {"type":expr.functionInfo.name}
res["args"] = childvals
if 'id' in node._cache:
res['id'] = node._cache['id']
return True,res
else:
return True,exprToJson(node)
taggedexpr = _make_tagged(expr)
res = taggedexpr._traverse(post=_tojson,cacheas='json')
if taggedexpr is not expr:
expr._clearCache('id',deep=True)
return res
elif isinstance(expr,_TaggedExpression):
return NAMED_EXPRESSION_PREFIX+expr.name
def exprFromJson(context,jsonObj,taggedExpressions=None):
"""Creates an Expression from a JSON object previously saved by expr.toJson()"""
#print "exprFromJson:",jsonObj
name = str(jsonObj['type'])
args = jsonObj['args']
parsedArgs = []
if taggedExpressions is None:
taggedExpressions = dict()
for a in args:
if isinstance(a,str):
if a.startswith(USER_DATA_PREFIX):
#user data reference
plen = len(USER_DATA_PREFIX)
parsedArgs.append(context.userData[a[plen:]])
elif a.startswith(NAMED_EXPRESSION_PREFIX):
plen = len(NAMED_EXPRESSION_PREFIX)
a = a[plen:]
if a in taggedExpressions:
parsedArgs.append(taggedExpressions[a])
elif a in context.expressions:
parsedArgs.append(context.expressions[a])
else:
print("exprFromJson(): Valid tags:",list(taggedExpressions.keys()),"(tags)",list(context.expressions.keys()),"(expressions)")
raise RuntimeError("Invalid expression tag "+NAMED_EXPRESSION_PREFIX+a)
else:
#variable reference
if a not in context.variableDict:
raise RuntimeError("Invalid variable reference "+a)
parsedArgs.append(context.variableDict[a])
elif isinstance(a,dict):
#assume it's an expression or a loader object
try:
parsedArgs.append(exprFromJson(context,a,taggedExpressions))
except KeyError:
try:
parsedArgs.append(loader.fromJson(a))
except Exception:
raise ValueError("Error parsing JSON object %s into expression or loader object"%(str(a),))
if 'id' in a:
assert a['id'] not in taggedExpressions,"Multiply defined tag "+str(a['id'])
taggedExpressions[a['id']] = parsedArgs[-1]
else:
parsedArgs.append(a)
if name in _builtin_functions:
return _builtin_functions[name](*parsedArgs)
if name in context.customFunctions:
return context.customFunctions[name](*parsedArgs)
raise RuntimeError("Invalid expression type "+name)
def typeToJson(type):
res = { 'char':type.char }
if type.size is not None:
res['size'] = type.size
if type.subtype is not None:
if isinstance(type.subtype,str):
res['subtype'] = type.subtype
elif isinstance(type.subtype,list):
res['subtype'] = [typeToJson(st) for st in type.subtype]
else:
res['subtype'] = typeToJson(type.subtype)
return res
def typeFromJson(jsonObj):
assert 'char' in jsonObj
st = None
if 'subtype' in jsonObj:
subtypeobj = jsonObj['subtype']
if isinstance(subtypeobj,list):
st = [typeFromJson(stobj) for stobj in subtypeobj]
elif isinstance(subtypeobj,str):
st = subtypeobj
elif isinstance(subtypeobj,dict):
st = typeFromJson(subtypeobj)
else:
raise ValueError("Invalid JSON object specifying Type subtype")
return Type(byteify(jsonObj['char']),jsonObj.get('size',None),st)
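# Sketch: typeToJson(Type('V', 3, None)) gives {'char': 'V', 'size': 3}, and
# typeFromJson({'char': 'V', 'size': 3}) reconstructs an equivalent Type.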
def contextToJson(ctx,saveFunctions=False):
"""Produces a JSON object from a context. Only the names for userData and customFunctions are saved.
If saveFunctions=False, customFunctions are not saved"""
res = {}
if len(ctx.variables) > 0:
varjson = []
for v in ctx.variables:
varjson.append({'name':v.name,'type':typeToJson(v.type)})
res['variables'] = varjson
if len(ctx.expressions) > 0:
exprjson = {}
for n,e in ctx.expressions.items():
exprjson[n] = exprToJson(e)
res['expressions'] = exprjson
if saveFunctions and len(ctx.customFunctions) > 0:
res['customFunctions'] = list(ctx.customFunctions.keys())
if len(ctx.userData) > 0:
res['userData'] = list(ctx.userData.keys())
return res
def contextFromJson(context,jsonObj):
"""Creates a context from a JSON object previously saved by context.toJson().
userData is not restored and customFunctions are not restored, but rather,
userData and customFunctions are assumed to have been set up with exactly the same keys
as when toJson was called.
Modifies context in-place.
"""
if 'userData' in jsonObj:
for d in jsonObj['userData']:
if d not in context.userData:
warnings.warn("Context.fromJson(): item {} is not yet in userData".format(d))
if 'customFunctions' in jsonObj:
for d in jsonObj['customFunctions']:
if d not in context.customFunctions:
warnings.warn("Context.fromJson(): item {} is not yet in customFunctions".format(d))
context.variables = []
context.variableDict = dict()
context.expressions = dict()
if 'variables' in jsonObj:
for v in jsonObj['variables']:
context.addVar(v['name'],typeFromJson(v['type']))
if 'expressions' in jsonObj:
for n,v in jsonObj['expressions'].items():
context.expressions[n] = exprFromJson(context,v)
return context
def toStr(obj,parseCompatible=True):
if not isinstance(obj,Expression):
raise ValueError("Can only convert Expressions to strings")
return exprToStr(obj,parseCompatible)
def toJson(obj):
if isinstance(obj,Expression):
return exprToJson(obj)
elif isinstance(obj,Context):
return contextToJson(obj)
else:
raise ValueError("Argument needs to be an Expression or Context")
def latex(expr):
"""Returns LaTeX code for the Expression expr. Requires Sympy."""
try:
import sympy
from . import symbolic_sympy
    except ImportError:
        raise RuntimeError("Sympy is required for conversion to latex")
return sympy.latex(symbolic_sympy.exprToSympy(expr))
def pprint(expr):
"""Pretty-prints the Expression expr. If Sympy is installed it will use the sympy
pretty-printer."""
try:
import sympy
from . import symbolic_sympy
sympy.pprint(symbolic_sympy.exprToSympy(expr),use_unicode=False)
    except (ImportError, TypeError, ValueError):
        print(exprToStr(expr,parseCompatible=False))
def codegen(name_expr,language=None,**options):
"""Similar to sympy.codegen. Generates one or more expressions in the target language.
Requires Sympy.
Args:
name_expr: multiple interpretations:
* A single (name, Expression) tuple: generates code for one function
* A list of (name, Expression) tuples: generates code for multiple functions
* one or more Function objects: generates code for one or more functions, as long
as the Function is defined via symbolic Expressions rather than Python functions.
* A list of (Variable == Expression) expressions. Generates a function with multiple return values
language (str, optional): any language accepted by Sympy.
options: other options for sympy.codegen.
See the `Sympy codegen documentation <http://docs.sympy.org/latest/modules/utilities/codegen.html>`_
for more information.
"""
try:
import sympy
from . import symbolic_sympy
from sympy.utilities.codegen import codegen
except ImportError:
raise RuntimeError("Sympy is required for codegen")
def convert(ne):
if isinstance(ne,Function):
if not isinstance(ne.func,Expression):
raise ValueError("Can't do code generation for plain Python function %s"%(ne.name,))
sexpr = symbolic_sympy.exprToSympy(ne.expr)
return (ne.name,sexpr)
if not isinstance(ne,(list,tuple)) or len(ne)!=2 or not isinstance(ne[0],str):
raise ValueError("Input must be a (str,Expression) pair.")
name,expr_or_exprs = ne
sexpr = None
if not isinstance(expr_or_exprs,Expression):
if not isinstance(expr_or_exprs,(list,tuple)):
raise ValueError("Input must consist of one or more (str,Expression) pairs.")
sexpr = []
for expr in expr_or_exprs:
sexpr.append(symbolic_sympy.exprToSympy(expr))
else:
sexpr = symbolic_sympy.exprToSympy(expr_or_exprs)
return name,sexpr
if hasattr(name_expr,'__iter__') and isinstance(name_expr[0],(list,tuple)):
s_name_expr = [convert(x) for x in name_expr]
else:
s_name_expr = convert(name_expr)
    return codegen(s_name_expr,language=language,**options)
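# Sketch (requires sympy; 'myfunc' and e are illustrative):
#     (c_name, c_code), (h_name, h_header) = codegen(('myfunc', e), language='C')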
# /Fom-0.9.10.zip/Fom-0.9.10/fom/db.py
import types
import urllib
import requests
try:
import json
except ImportError:
try:
import simplejson as json
except ImportError:
# For Google AppEngine
from django.utils import simplejson as json
from errors import raise_error
from utils import fom_request_sent, fom_response_received
from version import version
BASE_URL = 'https://fluiddb.fluidinfo.com'
NO_CONTENT = object()
PRIMITIVE_CONTENT_TYPE = 'application/vnd.fluiddb.value+json'
DESERIALIZABLE_CONTENT_TYPES = set(
(PRIMITIVE_CONTENT_TYPE, 'application/json'))
ITERABLE_TYPES = set((list, tuple))
SERIALIZABLE_TYPES = set((types.NoneType, bool, int, float, str, unicode,
list, tuple))
def _generate_endpoint_url(base, path, urlargs):
path_parts = [base]
for part in path:
if isinstance(part, unicode):
part = part.encode('utf-8')
path_parts.append(urllib.quote(part, safe=''))
url = '/'.join(path_parts)
if urlargs:
if isinstance(urlargs, dict):
# convert the dict to tuple pairs
urlargs = tuple(urlargs.items())
# make sure we handle unicode characters as possible values
# NOTE: only use UTF-8 unicode for urlargs values. Anything else will
# break.
clean_urlargs = []
for (tag, value) in urlargs:
if isinstance(value, unicode):
clean_urlargs.append((tag, value.encode('utf-8')))
else:
clean_urlargs.append((tag, value))
urlargs = tuple(clean_urlargs)
url = '?'.join([url, urllib.urlencode(urlargs)])
return url
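# e.g. (a sketch):
#     _generate_endpoint_url(BASE_URL, ['about', 'foo bar'], {'lang': 'en'})
#     -> 'https://fluiddb.fluidinfo.com/about/foo%20bar?lang=en'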
def _get_body_and_type(payload, content_type):
if content_type:
if content_type == 'application/json':
return json.dumps(payload), content_type
return payload, content_type
if payload is NO_CONTENT:
return None, None
if isinstance(payload, dict):
return json.dumps(payload), 'application/json'
pt = type(payload)
if pt in SERIALIZABLE_TYPES:
if pt in ITERABLE_TYPES:
if not all(isinstance(x, basestring) for x in payload):
raise ValueError('Non-string in list payload %r.' % (payload,))
return json.dumps(payload), PRIMITIVE_CONTENT_TYPE
raise ValueError("Can't handle payload %r of type %s" % (payload, pt))
class FluidResponse(object):
"""A response to a FluidDB request.
These are generally created by the API, and returned, and there is little
use to create them manually.
    :param response: A requests Response instance (providing ``headers``
        and ``status_code``).
:param content: The body of the HTTP response.
:param is_value: A boolean flag to indicate whether the response is from a
*value* request. Value requests are not deserialized unless they are
of the primitive content type: `application/vnd.fluiddb.value+json`
even if they are of a deserializable content type such as
`application/json`
.. attribute:: content_type
The content type of the response.
.. attribute:: value
The deserialized value of the response body, if it is appropriate for
deserialization.
.. attribute:: content
The raw content of the response body.
.. attribute:: request_id
The request id of the response. This is only available during errors.
.. attribute:: error
The error from the response. This is only available during errors.
"""
def __init__(self, response, content, is_value):
self.content_type = response.headers['content-type']
if ((is_value and self.content_type == PRIMITIVE_CONTENT_TYPE) or
(self.content_type in DESERIALIZABLE_CONTENT_TYPES)):
try:
self.value = json.loads(content)
except ValueError:
self.value = content
else:
self.value = content
self.status = response.status_code
self.response = response
self.content = content
self.request_id = self.response.headers.get('x-fluiddb-request-id')
self.error = self.response.headers.get('x-fluiddb-error-class')
if self.status >= 400:
raise_error(self)
def __repr__(self):
return '<FluidResponse (%s, %r, %r, %r)>' % (self.status,
self.content_type, self.error, self.value)
__str__ = __repr__
# XXX Backwards Compatibility layer
def __iter__(self):
        print 'Deprecated use of status, response'
yield self.status
yield self.value
class FluidDB(object):
"""HTTP client.
Could/Should be swapped out for other implementations. Although is
generally synchronous.
:param base_url: The base FluidDB url to use. Currently, this can only be
either the main FluidDB instance, or the sandbox instance.
"""
def __init__(self, base_url=BASE_URL):
if base_url.endswith('/'):
raise ValueError('The domain for FluidDB must *not* end with'\
' "/". Correct example:'\
' https://fluiddb.fluidinfo.com')
self.base_url = base_url
self.headers = {
'User-agent': 'fom/%s' % version,
}
self.session = requests.session(headers=self.headers)
# XXX Backwards compat
self.client = self
def __call__(self, method, path, payload=NO_CONTENT, urlargs=None,
content_type=None, is_value=False):
"""Make a request and return a response.
>>> db = FluidDB()
>>> r = db('GET', '/users/aliafshar')
>>> print r.value
{u'name': u'aliafshar', u'id': u'11e00b96-e346-44e7-af7f-e1a3575ff43e'}
:param method: The HTTP method
:param path: The path to make the request to
:param payload: The body of the request
:param urlargs: URL arguments to be applied to the request
:param content_type: The content type of the payload
:param is_value: A boolean flag to indicate whether the response is
from a *value* request. Value requests are not deserialized unless
they are of the primitive content type:
`application/vnd.fluiddb.value+json` even if they are of a
deserializable content type such as `application/json`
"""
payload, content_type = _get_body_and_type(payload, content_type)
urlargs = urlargs or {}
headers = self._get_headers(content_type)
url = self._get_url(path, urlargs)
fom_request_sent.send(self, request=(url, method, payload, headers))
response = self.session.request(method, url, data=payload,
headers=headers)
fom_response_received.send(self, response=(response.status_code,
response.text, None))
return FluidResponse(response, response.text, is_value)
def _get_headers(self, content_type):
headers = self.headers.copy()
if content_type:
headers['content-type'] = content_type
return headers
def _get_url(self, path, urlargs=None):
return _generate_endpoint_url(self.base_url, path, urlargs)
def login(self, username, password):
"""Log in to this instance of FluidDB
:param username: The username to log in with.
:param password: The password to log in with.
"""
userpass = username + ':' + password
auth = 'Basic ' + userpass.encode('base64').strip()
self.headers['Authorization'] = auth
def login_oauth2(self, token):
"""Prepare to make OAuth2 calls to Fluidinfo.
:param token: The OAuth token to pass in calls to Fluidinfo.
"""
self.headers['Authorization'] = 'oauth2'
self.headers['X-FluidDB-Access-Token'] = token
def logout(self):
"""Log out of this FluidDB instance
"""
# Use pop here, to avoid catching KeyError if login was never called.
self.headers.pop('Authorization', None)
        self.headers.pop('X-FluidDB-Access-Token', None)
# /Nuitka_fixed-1.1.2-cp310-cp310-win_amd64.whl/nuitka/build/inline_copy/yaml_27/yaml/emitter.py
__all__ = ['Emitter', 'EmitterError']
from error import YAMLError
from events import *
class EmitterError(YAMLError):
pass
class ScalarAnalysis(object):
def __init__(self, scalar, empty, multiline,
allow_flow_plain, allow_block_plain,
allow_single_quoted, allow_double_quoted,
allow_block):
self.scalar = scalar
self.empty = empty
self.multiline = multiline
self.allow_flow_plain = allow_flow_plain
self.allow_block_plain = allow_block_plain
self.allow_single_quoted = allow_single_quoted
self.allow_double_quoted = allow_double_quoted
self.allow_block = allow_block
class Emitter(object):
DEFAULT_TAG_PREFIXES = {
u'!' : u'!',
u'tag:yaml.org,2002:' : u'!!',
}
def __init__(self, stream, canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None):
# The stream should have the methods `write` and possibly `flush`.
self.stream = stream
        # Encoding can be overridden by STREAM-START.
self.encoding = None
# Emitter is a state machine with a stack of states to handle nested
# structures.
self.states = []
self.state = self.expect_stream_start
# Current event and the event queue.
self.events = []
self.event = None
# The current indentation level and the stack of previous indents.
self.indents = []
self.indent = None
# Flow level.
self.flow_level = 0
# Contexts.
self.root_context = False
self.sequence_context = False
self.mapping_context = False
self.simple_key_context = False
# Characteristics of the last emitted character:
# - current position.
# - is it a whitespace?
# - is it an indention character
# (indentation space, '-', '?', or ':')?
self.line = 0
self.column = 0
self.whitespace = True
self.indention = True
# Whether the document requires an explicit document indicator
self.open_ended = False
# Formatting details.
self.canonical = canonical
self.allow_unicode = allow_unicode
self.best_indent = 2
if indent and 1 < indent < 10:
self.best_indent = indent
self.best_width = 80
if width and width > self.best_indent*2:
self.best_width = width
self.best_line_break = u'\n'
if line_break in [u'\r', u'\n', u'\r\n']:
self.best_line_break = line_break
# Tag prefixes.
self.tag_prefixes = None
# Prepared anchor and tag.
self.prepared_anchor = None
self.prepared_tag = None
# Scalar analysis and style.
self.analysis = None
self.style = None
def dispose(self):
# Reset the state attributes (to clear self-references)
self.states = []
self.state = None
def emit(self, event):
self.events.append(event)
while not self.need_more_events():
self.event = self.events.pop(0)
self.state()
self.event = None
# In some cases, we wait for a few next events before emitting.
def need_more_events(self):
if not self.events:
return True
event = self.events[0]
if isinstance(event, DocumentStartEvent):
return self.need_events(1)
elif isinstance(event, SequenceStartEvent):
return self.need_events(2)
elif isinstance(event, MappingStartEvent):
return self.need_events(3)
else:
return False
def need_events(self, count):
level = 0
for event in self.events[1:]:
if isinstance(event, (DocumentStartEvent, CollectionStartEvent)):
level += 1
elif isinstance(event, (DocumentEndEvent, CollectionEndEvent)):
level -= 1
elif isinstance(event, StreamEndEvent):
level = -1
if level < 0:
return False
return (len(self.events) < count+1)
def increase_indent(self, flow=False, indentless=False):
self.indents.append(self.indent)
if self.indent is None:
if flow:
self.indent = self.best_indent
else:
self.indent = 0
elif not indentless:
self.indent += self.best_indent
# States.
# Stream handlers.
def expect_stream_start(self):
if isinstance(self.event, StreamStartEvent):
if self.event.encoding and not getattr(self.stream, 'encoding', None):
self.encoding = self.event.encoding
self.write_stream_start()
self.state = self.expect_first_document_start
else:
raise EmitterError("expected StreamStartEvent, but got %s"
% self.event)
def expect_nothing(self):
raise EmitterError("expected nothing, but got %s" % self.event)
# Document handlers.
def expect_first_document_start(self):
return self.expect_document_start(first=True)
def expect_document_start(self, first=False):
if isinstance(self.event, DocumentStartEvent):
if (self.event.version or self.event.tags) and self.open_ended:
self.write_indicator(u'...', True)
self.write_indent()
if self.event.version:
version_text = self.prepare_version(self.event.version)
self.write_version_directive(version_text)
self.tag_prefixes = self.DEFAULT_TAG_PREFIXES.copy()
if self.event.tags:
handles = self.event.tags.keys()
handles.sort()
for handle in handles:
prefix = self.event.tags[handle]
self.tag_prefixes[prefix] = handle
handle_text = self.prepare_tag_handle(handle)
prefix_text = self.prepare_tag_prefix(prefix)
self.write_tag_directive(handle_text, prefix_text)
implicit = (first and not self.event.explicit and not self.canonical
and not self.event.version and not self.event.tags
and not self.check_empty_document())
if not implicit:
self.write_indent()
self.write_indicator(u'---', True)
if self.canonical:
self.write_indent()
self.state = self.expect_document_root
elif isinstance(self.event, StreamEndEvent):
if self.open_ended:
self.write_indicator(u'...', True)
self.write_indent()
self.write_stream_end()
self.state = self.expect_nothing
else:
raise EmitterError("expected DocumentStartEvent, but got %s"
% self.event)
def expect_document_end(self):
if isinstance(self.event, DocumentEndEvent):
self.write_indent()
if self.event.explicit:
self.write_indicator(u'...', True)
self.write_indent()
self.flush_stream()
self.state = self.expect_document_start
else:
raise EmitterError("expected DocumentEndEvent, but got %s"
% self.event)
def expect_document_root(self):
self.states.append(self.expect_document_end)
self.expect_node(root=True)
# Node handlers.
def expect_node(self, root=False, sequence=False, mapping=False,
simple_key=False):
self.root_context = root
self.sequence_context = sequence
self.mapping_context = mapping
self.simple_key_context = simple_key
if isinstance(self.event, AliasEvent):
self.expect_alias()
elif isinstance(self.event, (ScalarEvent, CollectionStartEvent)):
self.process_anchor(u'&')
self.process_tag()
if isinstance(self.event, ScalarEvent):
self.expect_scalar()
elif isinstance(self.event, SequenceStartEvent):
if self.flow_level or self.canonical or self.event.flow_style \
or self.check_empty_sequence():
self.expect_flow_sequence()
else:
self.expect_block_sequence()
elif isinstance(self.event, MappingStartEvent):
if self.flow_level or self.canonical or self.event.flow_style \
or self.check_empty_mapping():
self.expect_flow_mapping()
else:
self.expect_block_mapping()
else:
raise EmitterError("expected NodeEvent, but got %s" % self.event)
def expect_alias(self):
if self.event.anchor is None:
raise EmitterError("anchor is not specified for alias")
self.process_anchor(u'*')
self.state = self.states.pop()
def expect_scalar(self):
self.increase_indent(flow=True)
self.process_scalar()
self.indent = self.indents.pop()
self.state = self.states.pop()
# Flow sequence handlers.
def expect_flow_sequence(self):
self.write_indicator(u'[', True, whitespace=True)
self.flow_level += 1
self.increase_indent(flow=True)
self.state = self.expect_first_flow_sequence_item
def expect_first_flow_sequence_item(self):
if isinstance(self.event, SequenceEndEvent):
self.indent = self.indents.pop()
self.flow_level -= 1
self.write_indicator(u']', False)
self.state = self.states.pop()
else:
if self.canonical or self.column > self.best_width:
self.write_indent()
self.states.append(self.expect_flow_sequence_item)
self.expect_node(sequence=True)
def expect_flow_sequence_item(self):
if isinstance(self.event, SequenceEndEvent):
self.indent = self.indents.pop()
self.flow_level -= 1
if self.canonical:
self.write_indicator(u',', False)
self.write_indent()
self.write_indicator(u']', False)
self.state = self.states.pop()
else:
self.write_indicator(u',', False)
if self.canonical or self.column > self.best_width:
self.write_indent()
self.states.append(self.expect_flow_sequence_item)
self.expect_node(sequence=True)
# Flow mapping handlers.
def expect_flow_mapping(self):
self.write_indicator(u'{', True, whitespace=True)
self.flow_level += 1
self.increase_indent(flow=True)
self.state = self.expect_first_flow_mapping_key
def expect_first_flow_mapping_key(self):
if isinstance(self.event, MappingEndEvent):
self.indent = self.indents.pop()
self.flow_level -= 1
self.write_indicator(u'}', False)
self.state = self.states.pop()
else:
if self.canonical or self.column > self.best_width:
self.write_indent()
if not self.canonical and self.check_simple_key():
self.states.append(self.expect_flow_mapping_simple_value)
self.expect_node(mapping=True, simple_key=True)
else:
self.write_indicator(u'?', True)
self.states.append(self.expect_flow_mapping_value)
self.expect_node(mapping=True)
def expect_flow_mapping_key(self):
if isinstance(self.event, MappingEndEvent):
self.indent = self.indents.pop()
self.flow_level -= 1
if self.canonical:
self.write_indicator(u',', False)
self.write_indent()
self.write_indicator(u'}', False)
self.state = self.states.pop()
else:
self.write_indicator(u',', False)
if self.canonical or self.column > self.best_width:
self.write_indent()
if not self.canonical and self.check_simple_key():
self.states.append(self.expect_flow_mapping_simple_value)
self.expect_node(mapping=True, simple_key=True)
else:
self.write_indicator(u'?', True)
self.states.append(self.expect_flow_mapping_value)
self.expect_node(mapping=True)
def expect_flow_mapping_simple_value(self):
self.write_indicator(u':', False)
self.states.append(self.expect_flow_mapping_key)
self.expect_node(mapping=True)
def expect_flow_mapping_value(self):
if self.canonical or self.column > self.best_width:
self.write_indent()
self.write_indicator(u':', True)
self.states.append(self.expect_flow_mapping_key)
self.expect_node(mapping=True)
# Block sequence handlers.
def expect_block_sequence(self):
indentless = (self.mapping_context and not self.indention)
self.increase_indent(flow=False, indentless=indentless)
self.state = self.expect_first_block_sequence_item
def expect_first_block_sequence_item(self):
return self.expect_block_sequence_item(first=True)
def expect_block_sequence_item(self, first=False):
if not first and isinstance(self.event, SequenceEndEvent):
self.indent = self.indents.pop()
self.state = self.states.pop()
else:
self.write_indent()
self.write_indicator(u'-', True, indention=True)
self.states.append(self.expect_block_sequence_item)
self.expect_node(sequence=True)
# Block mapping handlers.
def expect_block_mapping(self):
self.increase_indent(flow=False)
self.state = self.expect_first_block_mapping_key
def expect_first_block_mapping_key(self):
return self.expect_block_mapping_key(first=True)
def expect_block_mapping_key(self, first=False):
if not first and isinstance(self.event, MappingEndEvent):
self.indent = self.indents.pop()
self.state = self.states.pop()
else:
self.write_indent()
if self.check_simple_key():
self.states.append(self.expect_block_mapping_simple_value)
self.expect_node(mapping=True, simple_key=True)
else:
self.write_indicator(u'?', True, indention=True)
self.states.append(self.expect_block_mapping_value)
self.expect_node(mapping=True)
def expect_block_mapping_simple_value(self):
self.write_indicator(u':', False)
self.states.append(self.expect_block_mapping_key)
self.expect_node(mapping=True)
def expect_block_mapping_value(self):
self.write_indent()
self.write_indicator(u':', True, indention=True)
self.states.append(self.expect_block_mapping_key)
self.expect_node(mapping=True)
# Checkers.
def check_empty_sequence(self):
return (isinstance(self.event, SequenceStartEvent) and self.events
and isinstance(self.events[0], SequenceEndEvent))
def check_empty_mapping(self):
return (isinstance(self.event, MappingStartEvent) and self.events
and isinstance(self.events[0], MappingEndEvent))
def check_empty_document(self):
if not isinstance(self.event, DocumentStartEvent) or not self.events:
return False
event = self.events[0]
return (isinstance(event, ScalarEvent) and event.anchor is None
and event.tag is None and event.implicit and event.value == u'')
def check_simple_key(self):
length = 0
if isinstance(self.event, NodeEvent) and self.event.anchor is not None:
if self.prepared_anchor is None:
self.prepared_anchor = self.prepare_anchor(self.event.anchor)
length += len(self.prepared_anchor)
if isinstance(self.event, (ScalarEvent, CollectionStartEvent)) \
and self.event.tag is not None:
if self.prepared_tag is None:
self.prepared_tag = self.prepare_tag(self.event.tag)
length += len(self.prepared_tag)
if isinstance(self.event, ScalarEvent):
if self.analysis is None:
self.analysis = self.analyze_scalar(self.event.value)
length += len(self.analysis.scalar)
return (length < 128 and (isinstance(self.event, AliasEvent)
or (isinstance(self.event, ScalarEvent)
and not self.analysis.empty and not self.analysis.multiline)
or self.check_empty_sequence() or self.check_empty_mapping()))
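# Illustrative note (not part of PyYAML): the rule above means a key is
# emitted in the compact "key: value" form only when its serialized form is
# short (< 128 characters) and it is an alias, a non-empty single-line
# scalar, or an empty collection; anything longer or multiline falls back
# to the explicit "? key" form written by the mapping-key handlers above.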
# Anchor, Tag, and Scalar processors.
def process_anchor(self, indicator):
if self.event.anchor is None:
self.prepared_anchor = None
return
if self.prepared_anchor is None:
self.prepared_anchor = self.prepare_anchor(self.event.anchor)
if self.prepared_anchor:
self.write_indicator(indicator+self.prepared_anchor, True)
self.prepared_anchor = None
def process_tag(self):
tag = self.event.tag
if isinstance(self.event, ScalarEvent):
if self.style is None:
self.style = self.choose_scalar_style()
if ((not self.canonical or tag is None) and
((self.style == '' and self.event.implicit[0])
or (self.style != '' and self.event.implicit[1]))):
self.prepared_tag = None
return
if self.event.implicit[0] and tag is None:
tag = u'!'
self.prepared_tag = None
else:
if (not self.canonical or tag is None) and self.event.implicit:
self.prepared_tag = None
return
if tag is None:
raise EmitterError("tag is not specified")
if self.prepared_tag is None:
self.prepared_tag = self.prepare_tag(tag)
if self.prepared_tag:
self.write_indicator(self.prepared_tag, True)
self.prepared_tag = None
def choose_scalar_style(self):
if self.analysis is None:
self.analysis = self.analyze_scalar(self.event.value)
if self.event.style == '"' or self.canonical:
return '"'
if not self.event.style and self.event.implicit[0]:
if (not (self.simple_key_context and
(self.analysis.empty or self.analysis.multiline))
and (self.flow_level and self.analysis.allow_flow_plain
or (not self.flow_level and self.analysis.allow_block_plain))):
return ''
if self.event.style and self.event.style in '|>':
if (not self.flow_level and not self.simple_key_context
and self.analysis.allow_block):
return self.event.style
if not self.event.style or self.event.style == '\'':
if (self.analysis.allow_single_quoted and
not (self.simple_key_context and self.analysis.multiline)):
return '\''
return '"'
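# Illustrative effect of the cascade above, seen through the public API
# (assumes the full PyYAML package; outputs sketched, not authoritative):
#     yaml.dump('plain text')   -> plain style, no quotes needed
#     yaml.dump('has: colon')   -> single-quoted, since ": " forbids plain
#     yaml.dump('bell\x07')     -> double-quoted; special characters rule
#                                  out every other style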
def process_scalar(self):
if self.analysis is None:
self.analysis = self.analyze_scalar(self.event.value)
if self.style is None:
self.style = self.choose_scalar_style()
split = (not self.simple_key_context)
#if self.analysis.multiline and split \
# and (not self.style or self.style in '\'\"'):
# self.write_indent()
if self.style == '"':
self.write_double_quoted(self.analysis.scalar, split)
elif self.style == '\'':
self.write_single_quoted(self.analysis.scalar, split)
elif self.style == '>':
self.write_folded(self.analysis.scalar)
elif self.style == '|':
self.write_literal(self.analysis.scalar)
else:
self.write_plain(self.analysis.scalar, split)
self.analysis = None
self.style = None
# Analyzers.
def prepare_version(self, version):
major, minor = version
if major != 1:
raise EmitterError("unsupported YAML version: %d.%d" % (major, minor))
return u'%d.%d' % (major, minor)
def prepare_tag_handle(self, handle):
if not handle:
raise EmitterError("tag handle must not be empty")
if handle[0] != u'!' or handle[-1] != u'!':
raise EmitterError("tag handle must start and end with '!': %r"
% (handle.encode('utf-8')))
for ch in handle[1:-1]:
if not (u'0' <= ch <= u'9' or u'A' <= ch <= u'Z' or u'a' <= ch <= u'z' \
or ch in u'-_'):
raise EmitterError("invalid character %r in the tag handle: %r"
% (ch.encode('utf-8'), handle.encode('utf-8')))
return handle
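# Example (illustrative): prepare_tag_handle(u'!') and
# prepare_tag_handle(u'!my!') are returned unchanged, while u'my' or
# u'!a b!' raise EmitterError, since only alphanumerics, '-' and '_' may
# appear between the enclosing '!' characters.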
def prepare_tag_prefix(self, prefix):
if not prefix:
raise EmitterError("tag prefix must not be empty")
chunks = []
start = end = 0
if prefix[0] == u'!':
end = 1
while end < len(prefix):
ch = prefix[end]
if u'0' <= ch <= u'9' or u'A' <= ch <= u'Z' or u'a' <= ch <= u'z' \
or ch in u'-;/?!:@&=+$,_.~*\'()[]':
end += 1
else:
if start < end:
chunks.append(prefix[start:end])
start = end = end+1
data = ch.encode('utf-8')
for ch in data:
chunks.append(u'%%%02X' % ord(ch))
if start < end:
chunks.append(prefix[start:end])
return u''.join(chunks)
def prepare_tag(self, tag):
if not tag:
raise EmitterError("tag must not be empty")
if tag == u'!':
return tag
handle = None
suffix = tag
prefixes = self.tag_prefixes.keys()
prefixes.sort()
for prefix in prefixes:
if tag.startswith(prefix) \
and (prefix == u'!' or len(prefix) < len(tag)):
handle = self.tag_prefixes[prefix]
suffix = tag[len(prefix):]
chunks = []
start = end = 0
while end < len(suffix):
ch = suffix[end]
if u'0' <= ch <= u'9' or u'A' <= ch <= u'Z' or u'a' <= ch <= u'z' \
or ch in u'-;/?:@&=+$,_.~*\'()[]' \
or (ch == u'!' and handle != u'!'):
end += 1
else:
if start < end:
chunks.append(suffix[start:end])
start = end = end+1
data = ch.encode('utf-8')
for ch in data:
chunks.append(u'%%%02X' % ord(ch))
if start < end:
chunks.append(suffix[start:end])
suffix_text = u''.join(chunks)
if handle:
return u'%s%s' % (handle, suffix_text)
else:
return u'!<%s>' % suffix_text
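# Example (illustrative): with PyYAML's default tag prefixes
# {u'!': u'!', u'tag:yaml.org,2002:': u'!!'},
# prepare_tag(u'tag:yaml.org,2002:str') resolves to the shorthand u'!!str',
# while a tag matching no prefix is emitted verbatim, e.g.
# u'!<tag:example.com,2000:app/foo>'. Characters outside the allowed set are
# UTF-8 percent-encoded by the loop above, e.g. u'\xe9' becomes u'%C3%A9'.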
def prepare_anchor(self, anchor):
if not anchor:
raise EmitterError("anchor must not be empty")
for ch in anchor:
if not (u'0' <= ch <= u'9' or u'A' <= ch <= u'Z' or u'a' <= ch <= u'z' \
or ch in u'-_'):
raise EmitterError("invalid character %r in the anchor: %r"
% (ch.encode('utf-8'), anchor.encode('utf-8')))
return anchor
def analyze_scalar(self, scalar):
# Empty scalar is a special case.
if not scalar:
return ScalarAnalysis(scalar=scalar, empty=True, multiline=False,
allow_flow_plain=False, allow_block_plain=True,
allow_single_quoted=True, allow_double_quoted=True,
allow_block=False)
# Indicators and special characters.
block_indicators = False
flow_indicators = False
line_breaks = False
special_characters = False
# Important whitespace combinations.
leading_space = False
leading_break = False
trailing_space = False
trailing_break = False
break_space = False
space_break = False
# Check document indicators.
if scalar.startswith(u'---') or scalar.startswith(u'...'):
block_indicators = True
flow_indicators = True
# First character or preceded by a whitespace.
preceded_by_whitespace = True
# Last character or followed by a whitespace.
followed_by_whitespace = (len(scalar) == 1 or
scalar[1] in u'\0 \t\r\n\x85\u2028\u2029')
# The previous character is a space.
previous_space = False
# The previous character is a break.
previous_break = False
index = 0
while index < len(scalar):
ch = scalar[index]
# Check for indicators.
if index == 0:
# Leading indicators are special characters.
if ch in u'#,[]{}&*!|>\'\"%@`':
flow_indicators = True
block_indicators = True
if ch in u'?:':
flow_indicators = True
if followed_by_whitespace:
block_indicators = True
if ch == u'-' and followed_by_whitespace:
flow_indicators = True
block_indicators = True
else:
# Some indicators are also forbidden inside a scalar.
if ch in u',?[]{}':
flow_indicators = True
if ch == u':':
flow_indicators = True
if followed_by_whitespace:
block_indicators = True
if ch == u'#' and preceded_by_whitespace:
flow_indicators = True
block_indicators = True
# Check for line breaks, special, and unicode characters.
if ch in u'\n\x85\u2028\u2029':
line_breaks = True
if not (ch == u'\n' or u'\x20' <= ch <= u'\x7E'):
if (ch == u'\x85' or u'\xA0' <= ch <= u'\uD7FF'
or u'\uE000' <= ch <= u'\uFFFD') and ch != u'\uFEFF':
unicode_characters = True  # recorded but not otherwise used by this analyzer
if not self.allow_unicode:
special_characters = True
else:
special_characters = True
# Detect important whitespace combinations.
if ch == u' ':
if index == 0:
leading_space = True
if index == len(scalar)-1:
trailing_space = True
if previous_break:
break_space = True
previous_space = True
previous_break = False
elif ch in u'\n\x85\u2028\u2029':
if index == 0:
leading_break = True
if index == len(scalar)-1:
trailing_break = True
if previous_space:
space_break = True
previous_space = False
previous_break = True
else:
previous_space = False
previous_break = False
# Prepare for the next character.
index += 1
preceded_by_whitespace = (ch in u'\0 \t\r\n\x85\u2028\u2029')
followed_by_whitespace = (index+1 >= len(scalar) or
scalar[index+1] in u'\0 \t\r\n\x85\u2028\u2029')
# Let's decide what styles are allowed.
allow_flow_plain = True
allow_block_plain = True
allow_single_quoted = True
allow_double_quoted = True
allow_block = True
# Leading and trailing whitespaces are bad for plain scalars.
if (leading_space or leading_break
or trailing_space or trailing_break):
allow_flow_plain = allow_block_plain = False
# We do not permit trailing spaces for block scalars.
if trailing_space:
allow_block = False
# Spaces at the beginning of a new line are only acceptable for block
# and double-quoted scalars.
if break_space:
allow_flow_plain = allow_block_plain = allow_single_quoted = False
# Spaces followed by breaks, as well as special characters, are only
# allowed for double-quoted scalars.
if space_break or special_characters:
allow_flow_plain = allow_block_plain = \
allow_single_quoted = allow_block = False
# Although the plain scalar writer supports breaks, we never emit
# multiline plain scalars.
if line_breaks:
allow_flow_plain = allow_block_plain = False
# Flow indicators are forbidden for flow plain scalars.
if flow_indicators:
allow_flow_plain = False
# Block indicators are forbidden for block plain scalars.
if block_indicators:
allow_block_plain = False
return ScalarAnalysis(scalar=scalar,
empty=False, multiline=line_breaks,
allow_flow_plain=allow_flow_plain,
allow_block_plain=allow_block_plain,
allow_single_quoted=allow_single_quoted,
allow_double_quoted=allow_double_quoted,
allow_block=allow_block)
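# Illustrative outcomes of the analysis above: u'hello' allows every style;
# u' padded' (leading space) forbids both plain styles; u'a\nb' (line break)
# forbids the plain styles and is marked multiline, steering
# choose_scalar_style() toward quoted or block styles.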
# Writers.
def flush_stream(self):
if hasattr(self.stream, 'flush'):
self.stream.flush()
def write_stream_start(self):
# Write BOM if needed.
if self.encoding and self.encoding.startswith('utf-16'):
self.stream.write(u'\uFEFF'.encode(self.encoding))
def write_stream_end(self):
self.flush_stream()
def write_indicator(self, indicator, need_whitespace,
whitespace=False, indention=False):
if self.whitespace or not need_whitespace:
data = indicator
else:
data = u' '+indicator
self.whitespace = whitespace
self.indention = self.indention and indention
self.column += len(data)
self.open_ended = False
if self.encoding:
data = data.encode(self.encoding)
self.stream.write(data)
def write_indent(self):
indent = self.indent or 0
if not self.indention or self.column > indent \
or (self.column == indent and not self.whitespace):
self.write_line_break()
if self.column < indent:
self.whitespace = True
data = u' '*(indent-self.column)
self.column = indent
if self.encoding:
data = data.encode(self.encoding)
self.stream.write(data)
def write_line_break(self, data=None):
if data is None:
data = self.best_line_break
self.whitespace = True
self.indention = True
self.line += 1
self.column = 0
if self.encoding:
data = data.encode(self.encoding)
self.stream.write(data)
def write_version_directive(self, version_text):
data = u'%%YAML %s' % version_text
if self.encoding:
data = data.encode(self.encoding)
self.stream.write(data)
self.write_line_break()
def write_tag_directive(self, handle_text, prefix_text):
data = u'%%TAG %s %s' % (handle_text, prefix_text)
if self.encoding:
data = data.encode(self.encoding)
self.stream.write(data)
self.write_line_break()
# Scalar streams.
def write_single_quoted(self, text, split=True):
self.write_indicator(u'\'', True)
spaces = False
breaks = False
start = end = 0
while end <= len(text):
ch = None
if end < len(text):
ch = text[end]
if spaces:
if ch is None or ch != u' ':
if start+1 == end and self.column > self.best_width and split \
and start != 0 and end != len(text):
self.write_indent()
else:
data = text[start:end]
self.column += len(data)
if self.encoding:
data = data.encode(self.encoding)
self.stream.write(data)
start = end
elif breaks:
if ch is None or ch not in u'\n\x85\u2028\u2029':
if text[start] == u'\n':
self.write_line_break()
for br in text[start:end]:
if br == u'\n':
self.write_line_break()
else:
self.write_line_break(br)
self.write_indent()
start = end
else:
if ch is None or ch in u' \n\x85\u2028\u2029' or ch == u'\'':
if start < end:
data = text[start:end]
self.column += len(data)
if self.encoding:
data = data.encode(self.encoding)
self.stream.write(data)
start = end
if ch == u'\'':
data = u'\'\''
self.column += 2
if self.encoding:
data = data.encode(self.encoding)
self.stream.write(data)
start = end + 1
if ch is not None:
spaces = (ch == u' ')
breaks = (ch in u'\n\x85\u2028\u2029')
end += 1
self.write_indicator(u'\'', False)
ESCAPE_REPLACEMENTS = {
u'\0': u'0',
u'\x07': u'a',
u'\x08': u'b',
u'\x09': u't',
u'\x0A': u'n',
u'\x0B': u'v',
u'\x0C': u'f',
u'\x0D': u'r',
u'\x1B': u'e',
u'\"': u'\"',
u'\\': u'\\',
u'\x85': u'N',
u'\xA0': u'_',
u'\u2028': u'L',
u'\u2029': u'P',
}
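# Illustrative use of the mapping above: inside double-quoted scalars u'\t'
# is written as the two characters \t and u'\u2028' as \L, while characters
# with no named escape fall back to \xXX, \uXXXX or \UXXXXXXXX in
# write_double_quoted() below.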
def write_double_quoted(self, text, split=True):
self.write_indicator(u'"', True)
start = end = 0
while end <= len(text):
ch = None
if end < len(text):
ch = text[end]
if ch is None or ch in u'"\\\x85\u2028\u2029\uFEFF' \
or not (u'\x20' <= ch <= u'\x7E'
or (self.allow_unicode
and (u'\xA0' <= ch <= u'\uD7FF'
or u'\uE000' <= ch <= u'\uFFFD'))):
if start < end:
data = text[start:end]
self.column += len(data)
if self.encoding:
data = data.encode(self.encoding)
self.stream.write(data)
start = end
if ch is not None:
if ch in self.ESCAPE_REPLACEMENTS:
data = u'\\'+self.ESCAPE_REPLACEMENTS[ch]
elif ch <= u'\xFF':
data = u'\\x%02X' % ord(ch)
elif ch <= u'\uFFFF':
data = u'\\u%04X' % ord(ch)
else:
data = u'\\U%08X' % ord(ch)
self.column += len(data)
if self.encoding:
data = data.encode(self.encoding)
self.stream.write(data)
start = end+1
if 0 < end < len(text)-1 and (ch == u' ' or start >= end) \
and self.column+(end-start) > self.best_width and split:
data = text[start:end]+u'\\'
if start < end:
start = end
self.column += len(data)
if self.encoding:
data = data.encode(self.encoding)
self.stream.write(data)
self.write_indent()
self.whitespace = False
self.indention = False
if text[start] == u' ':
data = u'\\'
self.column += len(data)
if self.encoding:
data = data.encode(self.encoding)
self.stream.write(data)
end += 1
self.write_indicator(u'"', False)
def determine_block_hints(self, text):
hints = u''
if text:
if text[0] in u' \n\x85\u2028\u2029':
hints += unicode(self.best_indent)
if text[-1] not in u'\n\x85\u2028\u2029':
hints += u'-'
elif len(text) == 1 or text[-2] in u'\n\x85\u2028\u2029':
hints += u'+'
return hints
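# Examples (illustrative) of the block-scalar headers these hints produce:
#     u'text'          -> '|-'  (no trailing newline: strip chomping)
#     u'text\n'        -> '|'   (single trailing newline: clip, the default)
#     u'text\n\n'      -> '|+'  (extra trailing newlines: keep chomping)
#     u'  indented\n'  -> '|2'  (leading space needs an explicit indent
#                                digit, here the default best_indent of 2)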
def write_folded(self, text):
hints = self.determine_block_hints(text)
self.write_indicator(u'>'+hints, True)
if hints[-1:] == u'+':
self.open_ended = True
self.write_line_break()
leading_space = True
spaces = False
breaks = True
start = end = 0
while end <= len(text):
ch = None
if end < len(text):
ch = text[end]
if breaks:
if ch is None or ch not in u'\n\x85\u2028\u2029':
if not leading_space and ch is not None and ch != u' ' \
and text[start] == u'\n':
self.write_line_break()
leading_space = (ch == u' ')
for br in text[start:end]:
if br == u'\n':
self.write_line_break()
else:
self.write_line_break(br)
if ch is not None:
self.write_indent()
start = end
elif spaces:
if ch != u' ':
if start+1 == end and self.column > self.best_width:
self.write_indent()
else:
data = text[start:end]
self.column += len(data)
if self.encoding:
data = data.encode(self.encoding)
self.stream.write(data)
start = end
else:
if ch is None or ch in u' \n\x85\u2028\u2029':
data = text[start:end]
self.column += len(data)
if self.encoding:
data = data.encode(self.encoding)
self.stream.write(data)
if ch is None:
self.write_line_break()
start = end
if ch is not None:
breaks = (ch in u'\n\x85\u2028\u2029')
spaces = (ch == u' ')
end += 1
def write_literal(self, text):
hints = self.determine_block_hints(text)
self.write_indicator(u'|'+hints, True)
if hints[-1:] == u'+':
self.open_ended = True
self.write_line_break()
breaks = True
start = end = 0
while end <= len(text):
ch = None
if end < len(text):
ch = text[end]
if breaks:
if ch is None or ch not in u'\n\x85\u2028\u2029':
for br in text[start:end]:
if br == u'\n':
self.write_line_break()
else:
self.write_line_break(br)
if ch is not None:
self.write_indent()
start = end
else:
if ch is None or ch in u'\n\x85\u2028\u2029':
data = text[start:end]
if self.encoding:
data = data.encode(self.encoding)
self.stream.write(data)
if ch is None:
self.write_line_break()
start = end
if ch is not None:
breaks = (ch in u'\n\x85\u2028\u2029')
end += 1
def write_plain(self, text, split=True):
if self.root_context:
self.open_ended = True
if not text:
return
if not self.whitespace:
data = u' '
self.column += len(data)
if self.encoding:
data = data.encode(self.encoding)
self.stream.write(data)
self.whitespace = False
self.indention = False
spaces = False
breaks = False
start = end = 0
while end <= len(text):
ch = None
if end < len(text):
ch = text[end]
if spaces:
if ch != u' ':
if start+1 == end and self.column > self.best_width and split:
self.write_indent()
self.whitespace = False
self.indention = False
else:
data = text[start:end]
self.column += len(data)
if self.encoding:
data = data.encode(self.encoding)
self.stream.write(data)
start = end
elif breaks:
if ch not in u'\n\x85\u2028\u2029':
if text[start] == u'\n':
self.write_line_break()
for br in text[start:end]:
if br == u'\n':
self.write_line_break()
else:
self.write_line_break(br)
self.write_indent()
self.whitespace = False
self.indention = False
start = end
else:
if ch is None or ch in u' \n\x85\u2028\u2029':
data = text[start:end]
self.column += len(data)
if self.encoding:
data = data.encode(self.encoding)
self.stream.write(data)
start = end
if ch is not None:
spaces = (ch == u' ')
breaks = (ch in u'\n\x85\u2028\u2029')
end += 1
# File: ibats_huobi_feeder/backend/orm.py (from IBATS_Huobi_Feeder-0.1.3)
from sqlalchemy import Column, Integer, String, UniqueConstraint, TIMESTAMP, text
from sqlalchemy.dialects.mysql import DOUBLE
from sqlalchemy.ext.declarative import declarative_base
from ibats_common.utils.db import with_db_session
from ibats_huobi_feeder.backend import engine_md
from ibats_huobi_feeder.config import config
import logging
logger = logging.getLogger()
BaseModel = declarative_base()
class SymbolPair(BaseModel):
__tablename__ = 'symbol_pair_info'
id = Column(Integer, autoincrement=True, unique=True)
market = Column(String(10), primary_key=True)
base_currency = Column(String(10), primary_key=True)
quote_currency = Column(String(10), primary_key=True)
price_precision = Column(Integer)
amount_precision = Column(Integer)
symbol_partition = Column(String(12))
__table_args__ = (
UniqueConstraint('base_currency', 'quote_currency'),
)
class MDTick(BaseModel):
__tablename__ = 'md_min1_tick_bc'
id = Column(Integer, autoincrement=True, unique=True)
market = Column(String(10), primary_key=True)
symbol = Column(String(10), primary_key=True)
ts_start = Column(TIMESTAMP)
ts_curr = Column(TIMESTAMP, primary_key=True, server_default=text('CURRENT_TIMESTAMP'))
open = Column(DOUBLE)
high = Column(DOUBLE)
low = Column(DOUBLE)
close = Column(DOUBLE)
amount = Column(DOUBLE)
vol = Column(DOUBLE)
count = Column(DOUBLE)
class MDMin1(BaseModel):
__tablename__ = 'md_min1_bc'
market = Column(String(10), primary_key=True)
symbol = Column(String(10), primary_key=True)
ts_start = Column(TIMESTAMP, primary_key=True, server_default=text('CURRENT_TIMESTAMP'))
ts_curr = Column(TIMESTAMP)
open = Column(DOUBLE)
high = Column(DOUBLE)
low = Column(DOUBLE)
close = Column(DOUBLE)
amount = Column(DOUBLE)
vol = Column(DOUBLE)
count = Column(DOUBLE)
class MDMin1Temp(BaseModel):
__tablename__ = 'md_min1_bc_temp'
market = Column(String(10), primary_key=True)
symbol = Column(String(10), primary_key=True)
ts_start = Column(TIMESTAMP, primary_key=True, server_default=text('CURRENT_TIMESTAMP'))
ts_curr = Column(TIMESTAMP)
open = Column(DOUBLE)
high = Column(DOUBLE)
low = Column(DOUBLE)
close = Column(DOUBLE)
amount = Column(DOUBLE)
vol = Column(DOUBLE)
count = Column(DOUBLE)
class MDMin60(BaseModel):
__tablename__ = 'md_min60_bc'
market = Column(String(10), primary_key=True)
symbol = Column(String(10), primary_key=True)
ts_start = Column(TIMESTAMP, primary_key=True, server_default=text('CURRENT_TIMESTAMP'))
ts_curr = Column(TIMESTAMP)
open = Column(DOUBLE)
high = Column(DOUBLE)
low = Column(DOUBLE)
close = Column(DOUBLE)
amount = Column(DOUBLE)
vol = Column(DOUBLE)
count = Column(DOUBLE)
class MDMin60Temp(BaseModel):
__tablename__ = 'md_min60_bc_temp'
market = Column(String(10), primary_key=True)
symbol = Column(String(10), primary_key=True)
ts_start = Column(TIMESTAMP, primary_key=True, server_default=text('CURRENT_TIMESTAMP'))
ts_curr = Column(TIMESTAMP)
open = Column(DOUBLE)
high = Column(DOUBLE)
low = Column(DOUBLE)
close = Column(DOUBLE)
amount = Column(DOUBLE)
vol = Column(DOUBLE)
count = Column(DOUBLE)
class MDMinDaily(BaseModel):
__tablename__ = 'md_daily_bc'
market = Column(String(10), primary_key=True)
symbol = Column(String(10), primary_key=True)
ts_start = Column(TIMESTAMP, primary_key=True, server_default=text('CURRENT_TIMESTAMP'))
ts_curr = Column(TIMESTAMP)
open = Column(DOUBLE)
high = Column(DOUBLE)
low = Column(DOUBLE)
close = Column(DOUBLE)
amount = Column(DOUBLE)
vol = Column(DOUBLE)
count = Column(DOUBLE)
class MDMinDailyTemp(BaseModel):
__tablename__ = 'md_daily_bc_temp'
market = Column(String(10), primary_key=True)
symbol = Column(String(10), primary_key=True)
ts_start = Column(TIMESTAMP, primary_key=True, server_default=text('CURRENT_TIMESTAMP'))
ts_curr = Column(TIMESTAMP)
open = Column(DOUBLE)
high = Column(DOUBLE)
low = Column(DOUBLE)
close = Column(DOUBLE)
amount = Column(DOUBLE)
vol = Column(DOUBLE)
count = Column(DOUBLE)
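# Note (sketch only, not part of the original package): the six candle
# models above share an identical column set; a declarative mixin could
# state those columns once, since SQLAlchemy copies mixin columns onto
# each mapped subclass:
class CandleColumnsMixin(object):
    market = Column(String(10), primary_key=True)
    symbol = Column(String(10), primary_key=True)
    ts_start = Column(TIMESTAMP, primary_key=True, server_default=text('CURRENT_TIMESTAMP'))
    ts_curr = Column(TIMESTAMP)
    open = Column(DOUBLE)
    high = Column(DOUBLE)
    low = Column(DOUBLE)
    close = Column(DOUBLE)
    amount = Column(DOUBLE)
    vol = Column(DOUBLE)
    count = Column(DOUBLE)
# Usage sketch (hypothetical table):
#     class MDMin5(CandleColumnsMixin, BaseModel): __tablename__ = 'md_min5_bc'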
def init(alter_table=False):
BaseModel.metadata.create_all(engine_md)
if alter_table:
with with_db_session(engine=engine_md) as session:
show_status_sql_str = f"show table status from {config.DB_SCHEMA_MD} where name=:table_name"
for table_name, _ in BaseModel.metadata.tables.items():
row_data = session.execute(show_status_sql_str, params={'table_name': table_name}).first()
if row_data is None:
continue
if row_data[1].lower() == 'myisam':
continue
logger.info('Changing engine of table %s to MyISAM', table_name)
sql_str = "ALTER TABLE %s ENGINE = MyISAM" % table_name
session.execute(sql_str)
sql_str = f"""select table_name from information_schema.columns
where table_schema = :table_schema and column_name = 'ts_start' and extra <> ''"""
table_name_list = [row_data[0]
for row_data in session.execute(sql_str, params={'table_schema': config.DB_SCHEMA_MD})]
for table_name in table_name_list:
logger.info('Changing ts_start default on table %s, dropping the "on update" clause', table_name)
# TIMESTAMP columns are automatically given the default
# 'CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP'; the "on update
# CURRENT_TIMESTAMP" part has to be removed, otherwise updates may fail.
session.execute(f"ALTER TABLE {table_name} CHANGE COLUMN `ts_start` `ts_start` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP")
# This is an issue https://www.mail-archive.com/[email protected]/msg19744.html
session.execute(f"ALTER TABLE {SymbolPair.__tablename__} CHANGE COLUMN `id` `id` INT(11) NULL AUTO_INCREMENT")
session.commit()
# This is an issue https://www.mail-archive.com/[email protected]/msg19744.html
session.execute(f"ALTER TABLE {MDTick.__tablename__} CHANGE COLUMN `id` `id` INT(11) NULL AUTO_INCREMENT")
session.commit()
logger.info("All table structures have been created")
if __name__ == "__main__":
init()
# File: src/graphinate/tools/mutators.py (from graphinate-0.0.9)
import dataclasses
import inspect
from collections.abc import Iterable, Mapping
from typing import Any, Callable, Optional
def is_strict_iterable(obj: Any) -> bool:
return isinstance(obj, Iterable) and not isinstance(obj, (str, bytes, bytearray))
def _process_obj(obj: Any,
mapping_handler: Optional[Callable] = None,
iterable_handler: Optional[Callable] = None,
class_handler: Optional[Callable] = None,
*args,
**kwargs):
if callable(mapping_handler) and isinstance(obj, Mapping):
return mapping_handler(obj, *args, **kwargs)
if callable(iterable_handler) and is_strict_iterable(obj):
return iterable_handler(obj, *args, **kwargs)
if callable(class_handler) and inspect.isclass(obj):
return class_handler(obj, *args, **kwargs)
if (value_converter := kwargs.get('value_converter')) and callable(value_converter):
return value_converter(obj)
return obj
def dictify_mapping(obj: Mapping[Any, Any],
key_converter: Optional[Callable[[Any], str]] = None,
value_converter: Optional[Callable[[Any], Any]] = None) -> dict:
return {(key_converter(k) if key_converter else k): dictify(v, key_converter, value_converter)
for k, v in obj.items()}
def dictify_iterable(obj: Iterable,
key_converter: Optional[Callable[[Any], str]] = None,
value_converter: Optional[Callable[[Any], Any]] = None) -> list:
return [dictify(v, key_converter, value_converter) for v in obj]
def dictify_class(obj,
key_converter: Optional[Callable[[Any], str]] = None,
value_converter: Optional[Callable[[Any], Any]] = None):
if dataclasses.is_dataclass(obj):
return {key_converter(k) if key_converter else k: dictify(v, key_converter, value_converter)
for k, v in inspect.getmembers(obj) if not k.startswith("_")}
return str(obj)
def dictify(obj,
key_converter: Optional[Callable[[Any], str]] = None,
value_converter: Optional[Callable[[Any], Any]] = None):
return _process_obj(obj,
mapping_handler=dictify_mapping,
iterable_handler=dictify_iterable,
class_handler=dictify_class,
key_converter=key_converter,
value_converter=value_converter)
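# Example usage (illustrative, not part of the package): nested containers
# are walked recursively and leaves go through ``value_converter``:
def _demo_dictify():
    return dictify({'point': (1, 2)}, key_converter=str.upper,
                   value_converter=str)
# _demo_dictify() -> {'POINT': ['1', '2']}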
# File: bravo/plugins/beds.py (from Bravo-2.0)
from zope.interface import implements
from bravo.blocks import blocks, items
from bravo.ibravo import IPreBuildHook, IDigHook
# Metadata
HEAD_PART = 0x8
class Bed(object):
"""
Make placing/removing beds work correctly.
"""
implements(IPreBuildHook, IDigHook)
def __init__(self, factory):
self.factory = factory
def deltas(self, orientation):
return {'+x': (1, 0),
'+z': (0, 1),
'-x': (-1, 0),
'-z': (0, -1)}[orientation]
def pre_build_hook(self, player, builddata):
item, metadata, x, y, z, face = builddata
if item.slot != items["bed"].slot:
return True, builddata, False
# Can place only on top of other block
if face != "+y":
return False, builddata, True
# Offset coords for the second block; use facing direction to
# set correct bed blocks.
orientation = ('-x', '-z', '+x', '+z')[((int(player.location.yaw) \
- 45 + 360) % 360) / 90]
dx, dz = self.deltas(orientation)
metadata = blocks["bed"].orientation(orientation)
y += 1
# Check if there is enough space for the bed.
bl = self.factory.world.sync_get_block((x + dx, y, z + dz))
if bl and bl != blocks["snow"].slot:
return False, builddata, True
# Make sure we can remove it from the inventory.
if not player.inventory.consume((item.slot, 0), player.equipped):
return False, builddata, False
self.factory.world.set_block((x, y, z), blocks["bed-block"].slot)
self.factory.world.set_block((x + dx, y, z + dz),
blocks["bed-block"].slot)
self.factory.world.set_metadata((x, y, z), metadata)
self.factory.world.set_metadata((x + dx, y, z + dz),
metadata | HEAD_PART)
# XXX Since we perform all of the building actions manually, we cancel the
# build at this point. This is not ideal, but it is the best solution we
# have for now. Note that post-build hooks and automations will be skipped,
# as well as the default run_build() hook.
return False, builddata, True
def dig_hook(self, chunk, x, y, z, block):
if block.slot != blocks["bed-block"].slot:
return
# Calculate offset for the second block according to the direction.
metadata = chunk.get_metadata((x, y, z))
orientation = blocks["bed-block"].face(metadata & 0x3)
dx, dz = self.deltas(orientation)
# If the head of the bed was dug, look for the second block in
# the opposite direction.
if metadata & HEAD_PART:
dx *= -1
dz *= -1
# Block coordinates for the second block of the bed.
x = chunk.x * 16 + x
z = chunk.z * 16 + z
self.factory.world.destroy((x + dx, y, z + dz))
name = "bed"
before = () # plugins that come before this plugin
after = tuple()  # plugins that come after this plugin
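# Example (illustrative): the yaw bucketing in pre_build_hook() maps the
# player's facing angle to an orientation, e.g. yaw=0 -> '+z', yaw=90 -> '-x',
# yaw=180 -> '-z', yaw=270 -> '+x'; deltas() then gives the offset of the
# bed's head block, e.g. deltas('+x') -> (1, 0).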
// File: AppCenter/simpleui/static/admin/simpleui-x/elementui/link.js (from DjangoDjangoAppCenter-0.0.11)
module.exports =
/******/ (function (modules) { // webpackBootstrap
/******/ // The module cache
/******/
var installedModules = {};
/******/
/******/ // The require function
/******/
function __webpack_require__(moduleId) {
/******/
/******/ // Check if module is in cache
/******/
if (installedModules[moduleId]) {
/******/
return installedModules[moduleId].exports;
/******/
}
/******/ // Create a new module (and put it into the cache)
/******/
var module = installedModules[moduleId] = {
/******/ i: moduleId,
/******/ l: false,
/******/ exports: {}
/******/
};
/******/
/******/ // Execute the module function
/******/
modules[moduleId].call(module.exports, module, module.exports, __webpack_require__);
/******/
/******/ // Flag the module as loaded
/******/
module.l = true;
/******/
/******/ // Return the exports of the module
/******/
return module.exports;
/******/
}
/******/
/******/
/******/ // expose the modules object (__webpack_modules__)
/******/
__webpack_require__.m = modules;
/******/
/******/ // expose the module cache
/******/
__webpack_require__.c = installedModules;
/******/
/******/ // define getter function for harmony exports
/******/
__webpack_require__.d = function (exports, name, getter) {
/******/
if (!__webpack_require__.o(exports, name)) {
/******/
Object.defineProperty(exports, name, {enumerable: true, get: getter});
/******/
}
/******/
};
/******/
/******/ // define __esModule on exports
/******/
__webpack_require__.r = function (exports) {
/******/
if (typeof Symbol !== 'undefined' && Symbol.toStringTag) {
/******/
Object.defineProperty(exports, Symbol.toStringTag, {value: 'Module'});
/******/
}
/******/
Object.defineProperty(exports, '__esModule', {value: true});
/******/
};
/******/
/******/ // create a fake namespace object
/******/ // mode & 1: value is a module id, require it
/******/ // mode & 2: merge all properties of value into the ns
/******/ // mode & 4: return value when already ns object
/******/ // mode & 8|1: behave like require
/******/
__webpack_require__.t = function (value, mode) {
/******/
if (mode & 1) value = __webpack_require__(value);
/******/
if (mode & 8) return value;
/******/
if ((mode & 4) && typeof value === 'object' && value && value.__esModule) return value;
/******/
var ns = Object.create(null);
/******/
__webpack_require__.r(ns);
/******/
Object.defineProperty(ns, 'default', {enumerable: true, value: value});
/******/
if (mode & 2 && typeof value != 'string') for (var key in value) __webpack_require__.d(ns, key, function (key) {
return value[key];
}.bind(null, key));
/******/
return ns;
/******/
};
/******/
/******/ // getDefaultExport function for compatibility with non-harmony modules
/******/
__webpack_require__.n = function (module) {
/******/
var getter = module && module.__esModule ?
/******/ function getDefault() {
return module['default'];
} :
/******/ function getModuleExports() {
return module;
};
/******/
__webpack_require__.d(getter, 'a', getter);
/******/
return getter;
/******/
};
/******/
/******/ // Object.prototype.hasOwnProperty.call
/******/
__webpack_require__.o = function (object, property) {
return Object.prototype.hasOwnProperty.call(object, property);
};
/******/
/******/ // __webpack_public_path__
/******/
__webpack_require__.p = "/dist/";
/******/
/******/
/******/ // Load entry module and return exports
/******/
return __webpack_require__(__webpack_require__.s = 117);
/******/
})
/************************************************************************/
/******/({
/***/ 0:
/***/ (function (module, __webpack_exports__, __webpack_require__) {
"use strict";
/* harmony export (binding) */
__webpack_require__.d(__webpack_exports__, "a", function () {
return normalizeComponent;
});
/* globals __VUE_SSR_CONTEXT__ */
// IMPORTANT: Do NOT use ES2015 features in this file (except for modules).
// This module is a runtime utility for cleaner component module output and will
// be included in the final webpack user bundle.
function normalizeComponent(
scriptExports,
render,
staticRenderFns,
functionalTemplate,
injectStyles,
scopeId,
moduleIdentifier, /* server only */
shadowMode /* vue-cli only */
) {
// Vue.extend constructor export interop
var options = typeof scriptExports === 'function'
? scriptExports.options
: scriptExports
// render functions
if (render) {
options.render = render
options.staticRenderFns = staticRenderFns
options._compiled = true
}
// functional template
if (functionalTemplate) {
options.functional = true
}
// scopedId
if (scopeId) {
options._scopeId = 'data-v-' + scopeId
}
var hook
if (moduleIdentifier) { // server build
hook = function (context) {
// 2.3 injection
context =
context || // cached call
(this.$vnode && this.$vnode.ssrContext) || // stateful
(this.parent && this.parent.$vnode && this.parent.$vnode.ssrContext) // functional
// 2.2 with runInNewContext: true
if (!context && typeof __VUE_SSR_CONTEXT__ !== 'undefined') {
context = __VUE_SSR_CONTEXT__
}
// inject component styles
if (injectStyles) {
injectStyles.call(this, context)
}
// register component module identifier for async chunk inference
if (context && context._registeredComponents) {
context._registeredComponents.add(moduleIdentifier)
}
}
// used by ssr in case component is cached and beforeCreate
// never gets called
options._ssrRegister = hook
} else if (injectStyles) {
hook = shadowMode
? function () {
injectStyles.call(this, this.$root.$options.shadowRoot)
}
: injectStyles
}
if (hook) {
if (options.functional) {
// for template-only hot-reload because in that case the render fn doesn't
// go through the normalizer
options._injectStyles = hook
// register for functional component in vue file
var originalRender = options.render
options.render = function renderWithStyleInjection(h, context) {
hook.call(context)
return originalRender(h, context)
}
} else {
// inject component registration as beforeCreate hook
var existing = options.beforeCreate
options.beforeCreate = existing
? [].concat(existing, hook)
: [hook]
}
}
return {
exports: scriptExports,
options: options
}
}
/***/
}),
/***/ 117:
/***/ (function (module, __webpack_exports__, __webpack_require__) {
"use strict";
__webpack_require__.r(__webpack_exports__);
// CONCATENATED MODULE: ./node_modules/[email protected]@vue-loader/lib/loaders/templateLoader.js??vue-loader-options!./node_modules/[email protected]@vue-loader/lib??vue-loader-options!./packages/link/src/main.vue?vue&type=template&id=01cf3b65&
var render = function () {
var _vm = this
var _h = _vm.$createElement
var _c = _vm._self._c || _h
return _c(
"a",
_vm._b(
{
class: [
"el-link",
_vm.type ? "el-link--" + _vm.type : "",
_vm.disabled && "is-disabled",
_vm.underline && !_vm.disabled && "is-underline"
],
attrs: {href: _vm.disabled ? null : _vm.href},
on: {click: _vm.handleClick}
},
"a",
_vm.$attrs,
false
),
[
_vm.icon ? _c("i", {class: _vm.icon}) : _vm._e(),
_vm.$slots.default
? _c("span", {staticClass: "el-link--inner"}, [_vm._t("default")], 2)
: _vm._e(),
_vm.$slots.icon ? [_vm.$slots.icon ? _vm._t("icon") : _vm._e()] : _vm._e()
],
2
)
}
var staticRenderFns = []
render._withStripped = true
// CONCATENATED MODULE: ./packages/link/src/main.vue?vue&type=template&id=01cf3b65&
// CONCATENATED MODULE: ./node_modules/[email protected]@babel-loader/lib!./node_modules/[email protected]@vue-loader/lib??vue-loader-options!./packages/link/src/main.vue?vue&type=script&lang=js&
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
//
/* harmony default export */
var mainvue_type_script_lang_js_ = ({
name: 'ElLink',
props: {
type: {
type: String,
default: 'default'
},
underline: {
type: Boolean,
default: true
},
disabled: Boolean,
href: String,
icon: String
},
methods: {
handleClick: function handleClick(event) {
if (!this.disabled) {
if (!this.href) {
this.$emit('click', event);
}
}
}
}
});
// CONCATENATED MODULE: ./packages/link/src/main.vue?vue&type=script&lang=js&
/* harmony default export */
var src_mainvue_type_script_lang_js_ = (mainvue_type_script_lang_js_);
// EXTERNAL MODULE: ./node_modules/[email protected]@vue-loader/lib/runtime/componentNormalizer.js
var componentNormalizer = __webpack_require__(0);
// CONCATENATED MODULE: ./packages/link/src/main.vue
/* normalize component */
var component = Object(componentNormalizer["a" /* default */])(
src_mainvue_type_script_lang_js_,
render,
staticRenderFns,
false,
null,
null,
null
)
/* hot reload */
if (false) {
var api;
}
component.options.__file = "packages/link/src/main.vue"
/* harmony default export */
var main = (component.exports);
// CONCATENATED MODULE: ./packages/link/index.js
/* istanbul ignore next */
main.install = function (Vue) {
Vue.component(main.name, main);
};
/* harmony default export */
var packages_link = __webpack_exports__["default"] = (main);
/***/
})
/******/
}); | PypiClean |
#!/bin/bash
# File: ARGs_OAP/bin/bbmap/postfilter.sh (from ARGs_OAP-2.3.2)
usage(){
echo "
Written by Brian Bushnell
Last modified July 27, 2015
Description: Maps reads, then filters an assembly by contig coverage.
Intended to reduce misassembly rate of SPAdes by removing suspicious contigs.
Usage: postfilter.sh in=<reads> ref=<contigs> out=<filtered contigs>
Standard Parameters:
in=<file> File containing input reads.
in2=<file> Optional file containing read mates.
ref=<file> File containing input assembly.
cov=covstats.txt File to write coverage stats generated by pileup.
out=filtered.fa Destination of clean output assembly.
outdirty=<file> (outd) Destination of removed contigs; optional.
ow=f (overwrite) Overwrites files that already exist.
app=f (append) Append to files that already exist.
zl=4 (ziplevel) Set compression level, 1 (low) to 9 (max).
int=f (interleaved) Determines whether input reads are considered interleaved.
Filtering Parameters:
minc=2 (mincov) Discard contigs with lower average coverage.
minp=95 (minpercent) Discard contigs with a lower percentage of covered bases.
minr=6 (minreads) Discard contigs with fewer mapped reads.
minl=400 (minlength) Discard shorter contigs.
trim=0 (trimends) Trim the first and last X bases of each sequence.
Mapping Parameters (unlisted params will use BBMap defaults)
minhits=2
maxindel=0
tipsearch=0
bw=20
rescue=f
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an
out-of-memory exception occurs. Requires Java 8u92+.
-da Disable assertions.
Other parameters will be passed directly to BBMap.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx800m"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 800m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
function postfilter() {
local CMD="java $EA $EOOM $z -cp $CP assemble.Postfilter $@"
echo $CMD >&2
eval $CMD
}
postfilter "$@"
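# Example invocation (illustrative file names; parameters as documented in
# the usage text above):
#   postfilter.sh in=reads.fq.gz ref=contigs.fa cov=covstats.txt \
#       out=filtered.fa outd=removed.fa minc=2 minp=95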
# File: coalaGNUIndentBear/GNUIndentBear.py (from GNUIndentBear-0.10.0)
import platform
import shlex
from coalib.bearlib import deprecate_settings
from coalib.bearlib.abstractions.Linter import linter
from coalib.bearlib.spacing.SpacingHelper import SpacingHelper
from dependency_management.requirements.DistributionRequirement import (
DistributionRequirement)
@linter(executable='indent' if platform.system() != 'Darwin' else 'gindent',
use_stdin=True,
output_format='corrected',
result_message='Indentation can be improved.')
class GNUIndentBear:
"""
This bear checks and corrects spacing and indentation via the well known
Indent utility.
C++ support is considered experimental.
"""
LANGUAGES = {'C', 'C++'}
REQUIREMENTS = {DistributionRequirement(apt_get='indent')}
AUTHORS = {'The coala developers'}
AUTHORS_EMAILS = {'[email protected]'}
LICENSE = 'AGPL-3.0'
CAN_FIX = {'Formatting'}
@staticmethod
@deprecate_settings(indent_size='tab_width')
def create_arguments(filename, file, config_file,
max_line_length: int=79,
use_spaces: bool=True,
blank_lines_after_declarations: bool=False,
blank_lines_after_procedures: bool=False,
blank_lines_after_commas: bool=False,
braces_on_if_line: bool=False,
braces_on_func_def_line: bool=False,
cuddle_else: bool=False,
while_and_brace_on_same_line: bool=False,
case_indentation: int=0,
space_before_semicolon_after_empty_loop: bool=True,
delete_optional_blank_lines: bool=True,
declaration_indent: int=0,
brace_indent: int=2,
gnu_style: bool=False,
k_and_r_style: bool=False,
linux_style: bool=False,
indent_size: int=SpacingHelper.DEFAULT_TAB_WIDTH,
indent_cli_options: str=''):
"""
:param max_line_length:
Maximum number of characters for a line.
:param use_spaces:
True if spaces are to be used, else tabs.
:param blank_lines_after_declarations:
Forces blank lines after the declarations.
Example:
If ``blank_lines_after_declarations = True`` then
```
int a;
return ...;
```
changes to
```
int a;
return ...;
```
:param blank_lines_after_procedures:
Force blank lines after procedure bodies.
:param blank_lines_after_commas:
Forces newline after comma in declaration.
Example:
If ``blank_lines_after_commas = True`` then
```
int a, b;
```
changes to
```
int a,
b;
```
:param braces_on_if_line:
Puts the brace ``{`` on same line with if.
Example:
If ``braces_on_if_line = True`` then
```
if (x > 0)
{
```
changes to
```
if (x > 0) {
```
:param braces_on_func_def_line:
Puts the brace `{` on same line with the function declaration.
:param cuddle_else:
Cuddle else and preceding ``}``.
Example:
If ``cuddle_else = True`` then
```
if (...) {
....
}
else {
```
changes to
```
if (...) {
....
} else {
```
:param while_and_brace_on_same_line:
Cuddles while of ``do {} while``; and preceding ``}``.
:param case_indentation:
Specifies the number of spaces by which ``case`` in the ``switch``
are indented.
:param space_before_semicolon_after_empty_loop:
Forces a blank before the semicolon ``;`` on one-line ``for`` and
``while`` statements.
:param delete_optional_blank_lines:
Deletes blank lines that are not needed. An example of needed
blank line, is the blank line following a declaration when
``blank_line_after_declaration=True``.
:param declaration_indent:
Forces variables names to be aligned in column ``n`` with
``n = declaration_indent`` in declaration.
Example:
If ``declaration_indent = 8`` then,
```
int a;
float b;
```
changes to
```
int a;
float b;
```
:param brace_indent:
Specifies the number of spaces by which braces are indented. Its
default value is 2.
:param gnu_style:
Uses GNU coding style.
:param k_and_r_style:
Uses Kernighan & Ritchie coding style.
:param linux_style:
Uses Linux coding style.
:param indent_size:
Number of spaces per indentation level.
:param indent_cli_options:
Any command line options the indent binary understands. They
will be simply passed through.
"""
indent_options = ('--no-tabs' if use_spaces else '--use-tabs',
'--line-length', str(max_line_length),
'--indent-level', str(indent_size),
'--tab-size', str(indent_size), )
indent_options += (('--cuddle-do-while',)
if while_and_brace_on_same_line
else ('--dont-cuddle-do-while',))
indent_options += (('--swallow-optional-blank-lines',)
if delete_optional_blank_lines else ('-nsob',))
indent_options += (('--blank-lines-after-declarations',)
if blank_lines_after_declarations else ('-nbad',))
indent_options += (('--blank-lines-after-commas',)
if blank_lines_after_commas else ('-nbc',))
indent_options += (('--blank-lines-after-procedures',)
if blank_lines_after_procedures else ('-nbap',))
indent_options += (('-di' + str(declaration_indent),)
if declaration_indent != 0 else ())
indent_options += (('--case-indentation'+str(case_indentation),)
if case_indentation != 0 else ())
indent_options += (('--space-special-semicolon',)
if space_before_semicolon_after_empty_loop
else ('-nss',))
indent_options += ('--brace-indent'+str(brace_indent),)
indent_options += (('--braces-on-func-def-line',)
if braces_on_func_def_line else ('-blf',))
indent_options += ((('-ce',) if cuddle_else else ('-nce',)) +
('-br',)) if braces_on_if_line else ('-bl',)
indent_style_option = ()
indent_style_option += ('--gnu-style',) if gnu_style else ()
indent_style_option += (('--k-and-r-style',)
if k_and_r_style and indent_style_option == ()
else ())
indent_style_option += (('--linux-style',)
if linux_style and indent_style_option == ()
else ())
# If a style is chosen the other configs aren't passed to `indent`
return (indent_style_option if indent_style_option != ()
else indent_options) + tuple(shlex.split(indent_cli_options))
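# Illustrative sketch (not part of the bear): with the defaults above, the
# assembled ``indent`` command line looks roughly like
#   indent --no-tabs --line-length 79 --indent-level 4 --tab-size 4
#          --dont-cuddle-do-while --swallow-optional-blank-lines -nbad -nbc
#          -nbap --space-special-semicolon --brace-indent2 -blf -bl
# (4 assumes coala's default tab width).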
# File: mobilityhelpers/middleware/__init__.py (from Django-MobilityHelpers-0.3.0)
import re
from mobilityhelpers import detect_flavour, disable_helper
from django.utils.deprecation import MiddlewareMixin
class MobileDetectionMiddleware(MiddlewareMixin):
"""
Detects mobile clients from the request headers and annotates the request
with ``is_mobile`` and ``mobile_flavour``.
"""
def process_request(self, request):
is_mobile = False
if 'HTTP_USER_AGENT' in request.META:
user_agent = request.META['HTTP_USER_AGENT']
if 'Nexus 7' in user_agent:
setattr(request, 'mobile_flavour', 'tablet')
request.is_mobile = True
return
# Test common mobile values.
pattern = "(up.browser|up.link|mmp|symbian|smartphone|midp|wap|phone|windows ce|pda|mobile|mini|palm|netfront)"
prog = re.compile(pattern, re.IGNORECASE)
match = prog.search(user_agent)
if match:
is_mobile = True
else:
# Nokia like test for WAP browsers.
# http://www.developershome.com/wap/xhtmlmp/xhtml_mp_tutorial.asp?page=mimeTypesFileExtension
if 'HTTP_ACCEPT' in request.META:
http_accept = request.META['HTTP_ACCEPT']
pattern = r"application/vnd\.wap\.xhtml\+xml"
prog = re.compile(pattern, re.IGNORECASE)
match = prog.search(http_accept)
if match:
is_mobile = True
if not is_mobile:
# Now we test the user_agent from a big list.
user_agents_test = ("w3c ", "acs-", "alav", "alca", "amoi", "audi",
"avan", "benq", "bird", "blac", "blaz", "brew",
"cell", "cldc", "cmd-", "dang", "doco", "eric",
"hipt", "inno", "ipaq", "java", "jigs", "kddi",
"keji", "leno", "lg-c", "lg-d", "lg-g", "lge-",
"maui", "maxo", "midp", "mits", "mmef", "mobi",
"mot-", "moto", "mwbp", "nec-", "newt", "noki",
"xda", "palm", "pana", "pant", "phil", "play",
"port", "prox", "qwap", "sage", "sams", "sany",
"sch-", "sec-", "send", "seri", "sgh-", "shar",
"sie-", "siem", "smal", "smar", "sony", "sph-",
"symb", "t-mo", "teli", "tim-", "tosh", "tsm-",
"upg1", "upsi", "vk-v", "voda", "wap-", "wapa",
"wapi", "wapp", "wapr", "webc", "winw", "winw",
"xda-",)
test = user_agent[0:4].lower()
if test in user_agents_test:
is_mobile = True
if is_mobile:
if detect_flavour and 'ipad' in user_agent.lower():
setattr(request, 'mobile_flavour', 'ipad')
else:
setattr(request, 'mobile_flavour', 'mobile')
else:
setattr(request, 'mobile_flavour', None)
request.is_mobile = is_mobile
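# Example configuration (illustrative): enable the middleware in settings.py
#     MIDDLEWARE = [
#         ...,
#         'mobilityhelpers.middleware.MobileDetectionMiddleware',
#     ]
# after which every request carries ``request.is_mobile`` and
# ``request.mobile_flavour`` ('mobile', 'tablet', 'ipad' or None).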
# File: duHast/APISamples/RevitFamilyCategoryDataUtils.py (from DuHast-1.0.7)
from collections import namedtuple
import Utility as util
# tuples containing change family category data read from file
changeFamilyCategory = namedtuple('changeFamilyCategory', 'filePath newCategoryName')
# tuples containing change family subcategory data read from file
changeFamilySubCategory = namedtuple('changeFamilySubCategory', 'familyCategory oldSubCategoryName newSubCategoryName')
# tuples used to build category data
graphicPropertyRGB = namedtuple('graphicPropertyRGB', 'red green blue')
graphicPropertyLineWeight = namedtuple('graphicPropertyLineWeight', 'cut projection')
graphicPropertyMaterial = namedtuple('graphicPropertyMaterial', 'id name')
graphicPropertyThreeDCutProjection = namedtuple('graphicProperty', 'threeD cut projection')
# container for category properties
subCategoryPropertiesContainer = namedtuple('subCategoryProperties', 'rgb lineWeight material graphic')
# the actual subcategory representing single row in report
subCategory = namedtuple('subCategory', 'parentCategoryName subCategoryName subCategoryId usageCounter usedBy subCategoryProperties ')
# a root family
rootFamily = namedtuple('rootFamily', 'name category filePath parent child subcategories')
# a nested family
nestedFamily = namedtuple('nestedFamily', 'name category filePath rootPath categoryPath hostFamily subcategories')
# row structure of family change category directive file
_CATEGORY_CHANGE_DATA_LIST_INDEX_FAMILY_FILE_PATH = 0
_CATEGORY_CHANGE_DATA_LIST_INDEX_NEW_FAMILY_CATEGORY = 1
# row structure of family change subcategory directive file
_SUBCATEGORY_CHANGE_DATA_LIST_INDEX_FAMILY_CATEGORY = 0
_SUBCATEGORY_CHANGE_DATA_LIST_INDEX_OLD_SUBCATEGORY_NAME = 1
_SUBCATEGORY_CHANGE_DATA_LIST_INDEX_NEW_SUBCATEGORY_NAME = 2
# row structure of family category data file
_CATEGORY_DATA_LIST_INDEX_ROOT_PATH = 0
_CATEGORY_DATA_LIST_INDEX_ROOT_CATEGORY_PATH = 1
_CATEGORY_DATA_LIST_INDEX_FAMILY_NAME = 2
_CATEGORY_DATA_LIST_INDEX_FAMILY_FILE_PATH = 3
_CATEGORY_DATA_LIST_INDEX_USAGE_COUNTER = 4
_CATEGORY_DATA_LIST_INDEX_USED_BY = 5
_CATEGORY_DATA_LIST_INDEX_CATEGORY_NAME = 6
_CATEGORY_DATA_LIST_INDEX_SUBCATEGORY_NAME = 7
_CATEGORY_DATA_LIST_INDEX_SUBCATEGORY_ID = 8
_CATEGORY_DATA_LIST_INDEX_GRAPHIC_PROPERTY_3D = 9
_CATEGORY_DATA_LIST_INDEX_GRAPHIC_PROPERTY_CUT = 10
_CATEGORY_DATA_LIST_INDEX_GRAPHIC_PROPERTY_PROJECTION = 11
_CATEGORY_DATA_LIST_INDEX_GRAPHIC_PROPERTY_MATERIAL_NAME = 12
_CATEGORY_DATA_LIST_INDEX_GRAPHIC_PROPERTY_MATERIAL_ID = 13
_CATEGORY_DATA_LIST_INDEX_GRAPHIC_PROPERTY_LINE_WEIGHT_CUT = 14
_CATEGORY_DATA_LIST_INDEX_GRAPHIC_PROPERTY_LINE_WEIGHT_PROJECTION = 15
_CATEGORY_DATA_LIST_INDEX_GRAPHIC_PROPERTY_RGB_RED = 16
_CATEGORY_DATA_LIST_INDEX_GRAPHIC_PROPERTY_RGB_GREEN = 17
_CATEGORY_DATA_LIST_INDEX_GRAPHIC_PROPERTY_RGB_BLUE = 18
# file name identifiers for family base data
_FAMILY_CATEGORY_DATA_FILE_NAME_PREFIX = 'FamilyCategories'
_FAMILY_CATEGORY_DATA_FILE_EXTENSION = '.csv'
# file name identifiers for category change directives
_CATEGORY_CHANGE_DIRECTIVE_FILE_NAME_PREFIX = 'CategoryChangeDirective'
_CATEGORY_CHANGE_DIRECTIVE_FILE_EXTENSION = '.csv'
# file name identifiers for subcategory change directives
_SUBCATEGORY_CHANGE_DIRECTIVE_FILE_NAME_PREFIX = 'SubCategoryChangeDirective'
_SUBCATEGORY_CHANGE_DIRECTIVE_FILE_EXTENSION = '.csv'
# exceptions
_EXCEPTION_NO_FAMILY_CHANGE_DIRECTIVE_DATA_FILES = 'Families change directive list files do not exist.'
_EXCEPTION_EMPTY_CHANGE_DIRECTIVE_DATA_FILES = 'Empty Families change directive data file(s)!'
_EXCEPTION_NO_FAMILY_CATEGORY_DATA_FILES = 'Families category data list files do not exist.'
_EXCEPTION_EMPTY_FAMILY_CATEGORY_DATA_FILES = 'Empty Families category data list file!'
_EXCEPTION_NO_FAMILY_SUBCATEGORY_DATA_FILES = 'Families subcategory data list files do not exist.'
_EXCEPTION_EMPTY_FAMILY_SUBCATEGORY_DATA_FILES = 'Empty Families subcategory data list file!'
# -------------------------------- read category data set ----------------------------------------------------------------
# sample root and nested family tuple usage / set up
# set up subcategory properties
'''
dataRGB = graphicPropertyRGB(120,120,120)
dataLineWeight = graphicPropertyLineWeight(3,6)
dataMaterial = graphicPropertyMaterial(-1,'')
dataGraphic = graphicPropertyThreeDCutProjection(None,None,None)
# combine into a container
dataSubPropertiesContainer = subCategoryPropertiesContainer (dataRGB, dataLineWeight, dataMaterial, dataGraphic)
# set up the actual sub category ( single row in report )
dataSubCatSample = subCategory('Parent Cat Name', 'subCat name', 1234, 0, [], dataSubPropertiesContainer)
# set up a root family tuple
dataRootFam = rootFamily('root family name', 'root family category', 'file path', [], [], [dataSubCatSample])
# set up a nested family tuple
dataNestedFam = nestedFamily('nested family name', 'nested family category', 'file path', 'root Path', 'category Path','host Family data', [dataSubCatSample])
'''
#end samples
def _createRootFamilyFromData(dataRow):
'''
Sets up a root family tuple from data row past in.
:param dataRow: A row from the category report.
:type dataRow: [str]
:return: a rootFamily tuple
:rtype: named tuple :rootFamily
'''
# need to check if this is a category belonging to the current family or a new family??
fam = rootFamily(
dataRow[_CATEGORY_DATA_LIST_INDEX_FAMILY_NAME],
dataRow[_CATEGORY_DATA_LIST_INDEX_CATEGORY_NAME],
dataRow[_CATEGORY_DATA_LIST_INDEX_FAMILY_FILE_PATH],
[], # set up an empty list for parent families
[], # set up an empty list for child families
[] # set up empty list for sub-categories
)
return fam
def _createNestedFamilyFromData(dataRow):
'''
Sets up a nested family tuple from data row past in.
:param dataRow: A row from the category report.
:type dataRow: [str]
:return: a nested family tuple
:rtype: named tuple :nestedFamily
'''
# found a child family
fam = nestedFamily (
dataRow[_CATEGORY_DATA_LIST_INDEX_FAMILY_NAME],
dataRow[_CATEGORY_DATA_LIST_INDEX_CATEGORY_NAME],
dataRow[_CATEGORY_DATA_LIST_INDEX_FAMILY_FILE_PATH],
dataRow[_CATEGORY_DATA_LIST_INDEX_ROOT_PATH].split(' :: '), # split root path into list for ease of searching
dataRow[_CATEGORY_DATA_LIST_INDEX_ROOT_CATEGORY_PATH].split(' :: '), # split category path into list for ease of searching
[], # set up an empty list for host families
[] # set up empty list for sub-categories
)
return fam
def _setupFamilyFromData(dataRow):
'''
Creates a nested family or root family tuple from data row past in.
:param dataRow: A row from the category report.
:type dataRow: [str]
:return: A nested or root family tuple.
:rtype: named tuple
'''
fam = None
if( '::' not in dataRow[_CATEGORY_DATA_LIST_INDEX_ROOT_PATH]):
fam = _createRootFamilyFromData(dataRow)
else:
# found a child family
fam = _createNestedFamilyFromData(dataRow)
return fam
def _buildSubCategoryPropertiesFromData(dataRow):
'''
Generates a subcategory tuple based on data row past in.
:param dataRow: A row from the category report.
:type dataRow: [str]
:return: A subCategory tuple.
:rtype: named tuple
'''
# read category data first
# get colour RGB values
dataRGB = graphicPropertyRGB(
dataRow[_CATEGORY_DATA_LIST_INDEX_GRAPHIC_PROPERTY_RGB_RED],
dataRow[_CATEGORY_DATA_LIST_INDEX_GRAPHIC_PROPERTY_RGB_GREEN],
dataRow[_CATEGORY_DATA_LIST_INDEX_GRAPHIC_PROPERTY_RGB_BLUE]
)
# get line weight values
dataLineWeight = graphicPropertyLineWeight(
dataRow[_CATEGORY_DATA_LIST_INDEX_GRAPHIC_PROPERTY_LINE_WEIGHT_CUT],
dataRow[_CATEGORY_DATA_LIST_INDEX_GRAPHIC_PROPERTY_LINE_WEIGHT_PROJECTION]
)
# get material values
dataMaterial = graphicPropertyMaterial(
dataRow[_CATEGORY_DATA_LIST_INDEX_GRAPHIC_PROPERTY_MATERIAL_ID],
dataRow[_CATEGORY_DATA_LIST_INDEX_GRAPHIC_PROPERTY_MATERIAL_NAME]
)
# get graphic properties
dataGraphic = graphicPropertyThreeDCutProjection(
dataRow[_CATEGORY_DATA_LIST_INDEX_GRAPHIC_PROPERTY_3D],
dataRow[_CATEGORY_DATA_LIST_INDEX_GRAPHIC_PROPERTY_CUT],
dataRow[_CATEGORY_DATA_LIST_INDEX_GRAPHIC_PROPERTY_PROJECTION]
)
# put all of the above together
dataSubPropertiesContainer = subCategoryPropertiesContainer (
dataRGB,
dataLineWeight,
dataMaterial,
dataGraphic
)
# set up the actual sub category ( single row in report )
dataSubCatSample = subCategory(
dataRow[_CATEGORY_DATA_LIST_INDEX_CATEGORY_NAME],
dataRow[_CATEGORY_DATA_LIST_INDEX_SUBCATEGORY_NAME],
dataRow[_CATEGORY_DATA_LIST_INDEX_SUBCATEGORY_ID],
dataRow[_CATEGORY_DATA_LIST_INDEX_USAGE_COUNTER],
dataRow[_CATEGORY_DATA_LIST_INDEX_USED_BY],
dataSubPropertiesContainer
)
return dataSubCatSample
def _getCategoryDataFileName(directoryPath):
'''
Gets the first family base data file in provided directory or any of it's sub directories.
:param directoryPath: Fully qualified directory path.
:type directoryPath: str
:raises Exception: EXCEPTION_NO_FAMILY_BASE_DATA_FILES
:return: Fully qualified file path to family base data file.
:rtype: str
'''
# get all base data files in folder
files = util.GetFilesFromDirectoryWalkerWithFilters(
directoryPath,
_FAMILY_CATEGORY_DATA_FILE_NAME_PREFIX,
'',
_FAMILY_CATEGORY_DATA_FILE_EXTENSION
)
if( len(files) > 0):
return files[0]
else:
raise Exception(_EXCEPTION_NO_FAMILY_CATEGORY_DATA_FILES)
def ReadOverallFamilyDataList(filePath):
'''
Reads list of families from family category data report file into named tuples.
:param filePath: Fully qualified file path to family category data report file.
:type filePath: str
:raises Exception: "Families category data list files do not exist."
:raises Exception: "Empty Families category data list file!"
:return: Two lists: first list of named tuples contain family root data, second list contains family nested data.
:rtype: [rootFamily], [nestedFamily]
'''
rows = []
if(util.FileExist(filePath)):
rows = util.ReadCSVfile(filePath)
else:
raise Exception(_EXCEPTION_NO_FAMILY_CATEGORY_DATA_FILES)
if(len(rows) > 0):
pass
else:
raise Exception(_EXCEPTION_EMPTY_FAMILY_CATEGORY_DATA_FILES)
returnValueRootFamily = []
returnValueNestedFamily = []
# pointer to the current family
currentFam = None
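    # start at index 1 to skip the header row of the report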
for i in range(1, len(rows)):
# set up the actual sub category ( single row in report )
dataSubCatSample = _buildSubCategoryPropertiesFromData(rows[i])
# get name and category as unique identifier
famId = rows[i][_CATEGORY_DATA_LIST_INDEX_FAMILY_NAME] + rows[i][_CATEGORY_DATA_LIST_INDEX_CATEGORY_NAME]
# check if this is the current family ...
# this assumes family category data in report file is ordered by family!!!
        if(currentFam is None):
# and set up a new one:
currentFam = _setupFamilyFromData(rows[i])
# append category data to new family
currentFam.subcategories.append(dataSubCatSample)
elif (currentFam.name + currentFam.category == famId):
# append category data to existing family
currentFam.subcategories.append(dataSubCatSample)
else:
# if not:
# append family to one of the list to be returned
if(isinstance(currentFam,rootFamily)):
returnValueRootFamily.append(currentFam)
else:
returnValueNestedFamily.append(currentFam)
# and set up a new one:
currentFam = _setupFamilyFromData(rows[i])
# append category data to new family
currentFam.subcategories.append(dataSubCatSample)
    # append the final family processed by the loop above, otherwise the last family in the report would be dropped
    if(currentFam is not None):
        if(isinstance(currentFam,rootFamily)):
            returnValueRootFamily.append(currentFam)
        else:
            returnValueNestedFamily.append(currentFam)
    return returnValueRootFamily, returnValueNestedFamily
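# A minimal usage sketch for the reader above (the file path is illustrative;
# name, category and subcategories are the rootFamily fields used in the loop above):
#
#   rootFams, nestedFams = ReadOverallFamilyDataList(r'C:\temp\FamilyCategoriesCombinedReport.csv')
#   for fam in rootFams:
#       print(fam.name, fam.category, len(fam.subcategories))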
def ReadOverallFamilyCategoryDataFromDirectory(directoryPath):
    '''
    Reads the first family category data file it finds in a folder.
    Note: This method calls ReadOverallFamilyDataList(filePath), which will raise exceptions if files are empty or don't exist in the specified folder.
    :param directoryPath: A fully qualified directory path containing family category data file(s).
    :type directoryPath: str
    :return: Two lists: the first list of named tuples contains family root data, the second list contains family nested data.
    :rtype: [rootFamily], [nestedFamily]
    '''
fileName = _getCategoryDataFileName(directoryPath)
return ReadOverallFamilyDataList(fileName)
# -------------------------------- read family category change directives ----------------------------------------------------------------
def ReadOverallChangeCategoryDirectivesList(filePaths):
    '''
    Reads a list of family change category directives from files into named tuples.
    :param filePaths: List of fully qualified file paths to family change category directive files.
    :type filePaths: [str]
    :raises Exception: _EXCEPTION_NO_FAMILY_CHANGE_DIRECTIVE_DATA_FILES
    :raises Exception: _EXCEPTION_EMPTY_CHANGE_DIRECTIVE_DATA_FILES
    :return: List of named tuples containing family category change directives.
    :rtype: [changeFamilyCategory]
    '''
rows = []
matchAnyFile = False
for filePath in filePaths:
if(util.FileExist(filePath)):
# set flag that we at least found one file
matchAnyFile = True
rowsFile = util.ReadCSVfile(filePath)
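            # merge rows from this file into the running list, skipping duplicate rows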
result = list(rows)
result.extend(item for item in rowsFile
if item not in result)
rows = result
# check if any files found
    if(not matchAnyFile):
raise Exception(_EXCEPTION_NO_FAMILY_CHANGE_DIRECTIVE_DATA_FILES)
# check if files contained any data
if(len(rows) > 0):
# populate change directive tuples
returnValueChangeDirectives = []
for row in rows:
changeDirective = changeFamilyCategory(
row[_CATEGORY_CHANGE_DATA_LIST_INDEX_FAMILY_FILE_PATH],
row[_CATEGORY_CHANGE_DATA_LIST_INDEX_NEW_FAMILY_CATEGORY]
)
returnValueChangeDirectives.append(changeDirective)
else:
raise Exception(_EXCEPTION_EMPTY_CHANGE_DIRECTIVE_DATA_FILES)
return returnValueChangeDirectives
def _getCategoryChangeDirectiveFileNames(directoryPath):
    '''
    Gets change category directive files in the provided directory or any of its subdirectories.
    :param directoryPath: Fully qualified directory path.
    :type directoryPath: str
    :raises Exception: _EXCEPTION_NO_FAMILY_CHANGE_DIRECTIVE_DATA_FILES
    :return: List of fully qualified file paths to family change category directive files.
    :rtype: [str]
    '''
    # get all change directive files in folder
files = util.GetFilesFromDirectoryWalkerWithFilters(
directoryPath,
_CATEGORY_CHANGE_DIRECTIVE_FILE_NAME_PREFIX,
'',
_CATEGORY_CHANGE_DIRECTIVE_FILE_EXTENSION
)
if( len(files) > 0):
return files
else:
raise Exception(_EXCEPTION_NO_FAMILY_CHANGE_DIRECTIVE_DATA_FILES)
def ReadOverallFamilyCategoryChangeDirectivesFromDirectory(directoryPath):
    '''
    Reads all category change directive files it finds in a folder.
    Note: This method calls ReadOverallChangeCategoryDirectivesList(filePaths), which will raise exceptions if files are empty or don't exist in the specified folder.
    :param directoryPath: A fully qualified directory path containing family category change directive file(s).
    :type directoryPath: str
    :return: List of named tuples containing family category change directives.
    :rtype: [changeFamilyCategory]
    '''
fileNames = _getCategoryChangeDirectiveFileNames(directoryPath)
return ReadOverallChangeCategoryDirectivesList(fileNames)
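# A minimal usage sketch (the directory path is illustrative; the reader raises
# if no change directive files are found in the folder or its subdirectories):
#
#   directives = ReadOverallFamilyCategoryChangeDirectivesFromDirectory(r'C:\temp\reports')
#   for directive in directives:
#       print(directive)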
# -------------------------------- read family subcategory change directives ----------------------------------------------------------------
def ReadOverallFamilySubCategoryChangeDirectivesFromDirectory(directoryPath):
    '''
    Reads all subcategory change directive files it finds in a folder.
    Note: This method calls ReadOverallChangeSubCategoryDirectivesList(filePaths), which will raise exceptions if files are empty or don't exist in the specified folder.
    :param directoryPath: A fully qualified directory path containing family subcategory change directive file(s).
    :type directoryPath: str
    :return: List of named tuples containing family subcategory change directives.
    :rtype: [changeFamilySubCategory]
    '''
fileNames = _getSubCategoryChangeDirectiveFileNames(directoryPath)
return ReadOverallChangeSubCategoryDirectivesList(fileNames)
def _getSubCategoryChangeDirectiveFileNames(directoryPath):
    '''
    Gets change subcategory directive files in the provided directory or any of its subdirectories.
    :param directoryPath: Fully qualified directory path.
    :type directoryPath: str
    :raises Exception: _EXCEPTION_NO_FAMILY_CHANGE_DIRECTIVE_DATA_FILES
    :return: List of fully qualified file paths to family change subcategory directive files.
    :rtype: [str]
    '''
    # get all change directive files in folder
files = util.GetFilesFromDirectoryWalkerWithFilters(
directoryPath,
_SUBCATEGORY_CHANGE_DIRECTIVE_FILE_NAME_PREFIX,
'',
_SUBCATEGORY_CHANGE_DIRECTIVE_FILE_EXTENSION
)
if( len(files) > 0):
return files
else:
raise Exception(_EXCEPTION_NO_FAMILY_CHANGE_DIRECTIVE_DATA_FILES)
def ReadOverallChangeSubCategoryDirectivesList(filePaths):
    '''
    Reads a list of family change subcategory directives from files into named tuples.
    :param filePaths: List of fully qualified file paths to family change subcategory directive files.
    :type filePaths: [str]
    :raises Exception: _EXCEPTION_NO_FAMILY_SUBCATEGORY_DATA_FILES
    :raises Exception: _EXCEPTION_EMPTY_FAMILY_SUBCATEGORY_DATA_FILES
    :return: List of named tuples containing family subcategory change directives.
    :rtype: [changeFamilySubCategory]
    '''
rows = []
matchAnyFile = False
for filePath in filePaths:
if(util.FileExist(filePath)):
# set flag that we at least found one file
matchAnyFile = True
rowsFile = util.ReadCSVfile(filePath)
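            # merge rows from this file into the running list, skipping duplicate rows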
result = list(rows)
result.extend(item for item in rowsFile
if item not in result)
rows = result
# check if any files found
    if(not matchAnyFile):
raise Exception(_EXCEPTION_NO_FAMILY_SUBCATEGORY_DATA_FILES)
# check if files contained any data
if(len(rows) > 0):
# populate change directive tuples
returnValueChangeDirectives = []
for row in rows:
changeDirective = changeFamilySubCategory(
row[_SUBCATEGORY_CHANGE_DATA_LIST_INDEX_FAMILY_CATEGORY],
row[_SUBCATEGORY_CHANGE_DATA_LIST_INDEX_OLD_SUBCATEGORY_NAME],
row[_SUBCATEGORY_CHANGE_DATA_LIST_INDEX_NEW_SUBCATEGORY_NAME],
)
returnValueChangeDirectives.append(changeDirective)
else:
raise Exception(_EXCEPTION_EMPTY_FAMILY_SUBCATEGORY_DATA_FILES)
return returnValueChangeDirectives
def GetFamiliesRequiringSubCategoryChange(rootFamilies, subCatChangeDirectives):
    '''
    Returns a list of file paths of root families containing subcategories requiring a rename.
    Note: the list of file paths returned is unique: i.e. if a family matches multiple rename subcategory directives, it will still only appear once in the list.
    :param rootFamilies: List of tuples of root families.
    :type rootFamilies: [rootFamily]
    :param subCatChangeDirectives: List of subcategory change directives.
    :type subCatChangeDirectives: [changeFamilySubCategory]
    :return: List of family file paths.
    :rtype: [str]
    '''
'''
rootFamiliesNeedingChange = []
# check each root family
for rootFam in rootFamilies:
# against each subcategory change directive
for changeD in subCatChangeDirectives:
# check if match family category
if(rootFam.category == changeD.familyCategory):
# loop over subcategories in family and check if any of them needs renaming
for subCatRoot in rootFam.subcategories:
if(subCatRoot.subCategoryName == changeD.oldSubCategoryName):
# found a match
if (rootFam.filePath not in rootFamiliesNeedingChange):
rootFamiliesNeedingChange.append(rootFam.filePath)
break
    return rootFamiliesNeedingChange
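# A minimal end-to-end sketch tying the readers above together (the directory
# path is illustrative; both readers raise if no matching files are found):
#
#   rootFams, nestedFams = ReadOverallFamilyCategoryDataFromDirectory(r'C:\temp\reports')
#   subCatDirectives = ReadOverallFamilySubCategoryChangeDirectivesFromDirectory(r'C:\temp\reports')
#   for famPath in GetFamiliesRequiringSubCategoryChange(rootFams, subCatDirectives):
#       print(famPath)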
/Firefly%20III%20API%20Python%20Client-1.5.6.post2.tar.gz/Firefly III API Python Client-1.5.6.post2/firefly_iii_client/model/rule_trigger_type.py | import re # noqa: F401
import sys # noqa: F401
from firefly_iii_client.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
)
from ..model_utils import OpenApiModel
from firefly_iii_client.exceptions import ApiAttributeError
class RuleTriggerType(ModelSimple):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
validations (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
('value',): {
'STORE-JOURNAL': "store-journal",
'UPDATE-JOURNAL': "update-journal",
},
}
validations = {
}
additional_properties_type = None
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
return {
'value': (str,),
}
@cached_property
def discriminator():
return None
attribute_map = {}
read_only_vars = set()
_composed_schemas = None
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs):
"""RuleTriggerType - a model defined in OpenAPI
Note that value can be passed either in args or in kwargs, but not in both.
Args:
args[0] (str): Which action is necessary for the rule to fire? Use either store-journal or update-journal.., must be one of ["store-journal", "update-journal", ] # noqa: E501
Keyword Args:
value (str): Which action is necessary for the rule to fire? Use either store-journal or update-journal.., must be one of ["store-journal", "update-journal", ] # noqa: E501
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
"""
# required up here when default value is not given
_path_to_item = kwargs.pop('_path_to_item', ())
if 'value' in kwargs:
value = kwargs.pop('value')
elif args:
args = list(args)
value = args.pop(0)
else:
raise ApiTypeError(
"value is required, but not passed in args or kwargs and doesn't have default",
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.value = value
if kwargs:
raise ApiTypeError(
"Invalid named arguments=%s passed to %s. Remove those invalid named arguments." % (
kwargs,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs):
"""RuleTriggerType - a model defined in OpenAPI
Note that value can be passed either in args or in kwargs, but not in both.
Args:
args[0] (str): Which action is necessary for the rule to fire? Use either store-journal or update-journal.., must be one of ["store-journal", "update-journal", ] # noqa: E501
Keyword Args:
value (str): Which action is necessary for the rule to fire? Use either store-journal or update-journal.., must be one of ["store-journal", "update-journal", ] # noqa: E501
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
"""
# required up here when default value is not given
_path_to_item = kwargs.pop('_path_to_item', ())
self = super(OpenApiModel, cls).__new__(cls)
if 'value' in kwargs:
value = kwargs.pop('value')
elif args:
args = list(args)
value = args.pop(0)
else:
raise ApiTypeError(
"value is required, but not passed in args or kwargs and doesn't have default",
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
self.value = value
if kwargs:
raise ApiTypeError(
"Invalid named arguments=%s passed to %s. Remove those invalid named arguments." % (
kwargs,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
        return self
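# A minimal usage sketch for the generated model above ("store-journal" and
# "update-journal" are the only values permitted by allowed_values; anything
# else is rejected during validation):
#
#   trigger_type = RuleTriggerType("store-journal")
#   print(trigger_type.value)  # "store-journal"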
/BinTools-0.2.0.zip/BinTools-0.2.0/README.txt | =============================================
Python modules for Software Development Tools
=============================================

PyDevTools is a suite of Python modules for the rapid prototyping of software development tools.

Currently, the project includes modules for:

* ELF handling
* DWARF handling

Next, we are planning to add a GDB module containing implementations of the

* `RSP <http://sourceware.org/gdb/current/onlinedocs/gdb/Remote-Protocol.html>`_
* `MI <http://sourceware.org/gdb/current/onlinedocs/gdb/GDB_002fMI.html>`_

protocols; this will allow driving gdbserver and gdb from Python scripts.

Please report a new issue for bugs or desired features
(possibly attaching the target ELF file needed to reproduce the issue).

`E-mail <mailto:[email protected]>`_ the project owner for any feedback, or to join the project.
| PypiClean |
/KratosDelaunayMeshingApplication-9.1.3-1-cp37-cp37m-win_amd64.whl/KratosMultiphysics/DelaunayMeshingApplication/mesher.py | from __future__ import print_function, absolute_import, division # makes KratosMultiphysics backward compatible with python 2.6 and 2.7
#import kratos core and applications
import KratosMultiphysics
import KratosMultiphysics.DelaunayMeshingApplication as KratosDelaunay
def CreateMesher(main_model_part, meshing_parameters):
return Mesher(main_model_part, meshing_parameters)
class Mesher(object):
#
def __init__(self, main_model_part, meshing_parameters):
self.echo_level = 1
self.main_model_part = main_model_part
self.MeshingParameters = meshing_parameters
self.model_part = self.main_model_part
if( self.main_model_part.Name != self.MeshingParameters.GetSubModelPartName() ):
self.model_part = self.main_model_part.GetSubModelPart(self.MeshingParameters.GetSubModelPartName())
#
def Initialize(self, dimension):
self.dimension = dimension
# set mesher
if(self.dimension == 2):
self.mesher = KratosDelaunay.TriangularMesh2DMesher()
elif(self.dimension == 3):
self.mesher = KratosDelaunay.TetrahedralMesh3DMesher()
self.mesher.SetEchoLevel(self.echo_level)
self.mesher.SetMeshingParameters(self.MeshingParameters)
self.SetPreMeshingProcesses()
self.SetPostMeshingProcesses()
self.mesher.Initialize()
#
def InitializeMeshing(self):
self.MeshingParameters.InitializeMeshing()
# set execution flags: to set the options to be executed in methods and processes
execution_options = KratosMultiphysics.Flags()
execution_options.Set(KratosDelaunay.MesherUtilities.INITIALIZE_MESHER_INPUT, False)
execution_options.Set(KratosDelaunay.MesherUtilities.FINALIZE_MESHER_INPUT, False)
execution_options.Set(KratosDelaunay.MesherUtilities.TRANSFER_KRATOS_NODES_TO_MESHER, False)
execution_options.Set(KratosDelaunay.MesherUtilities.TRANSFER_KRATOS_ELEMENTS_TO_MESHER, False)
execution_options.Set(KratosDelaunay.MesherUtilities.TRANSFER_KRATOS_NEIGHBOURS_TO_MESHER, False)
execution_options.Set(KratosDelaunay.MesherUtilities.TRANSFER_KRATOS_FACES_TO_MESHER, False)
execution_options.Set(KratosDelaunay.MesherUtilities.SELECT_TESSELLATION_ELEMENTS, False)
execution_options.Set(KratosDelaunay.MesherUtilities.KEEP_ISOLATED_NODES, False)
self.MeshingParameters.SetExecutionOptions(execution_options)
# set mesher flags: to set options for the mesher (triangle 2D, tetgen 3D)
if( self.dimension == 2 ):
pass
#REFINE
#ADD NODES
#to add_nodes automatically and refine the mesh ("q"-quality mesh and "a"-area constraint switches)
# "YYJaqrn" "YJq1.4arn" "Jq1.4arn"
#refine
#mesher_flags = "YJq1.4arnQ"
#refine constrained
#mesher_flags = "pYJq1.4arnCQ"
#INSERT NODES
#to insert a set of given points and refine the mesh
# "rinYYJQ" "rinYYJQ" "rinJQ" "rinQ"
#refine
#mesher_flags = "rinJQ"
#refine constrained
#mesher_flags = "rinYYJQ"
#refine without adding nodes
#mesher_flags = "YJrnQ"
#RECONNECT
#to reconnect a set of points only
#mesher_flags = "nQP"
#constrained
#mesher_flags = "pnBYYQ"
#BOUNDARY SEARCH
#to get conectivities, boundaries and neighbours only
#mesher_flags = "ncEBQ"
if( self.dimension == 3 ):
#other flags
pass
#
def SetPreMeshingProcesses(self):
# The order set is the order of execution:
# process to refine elements / refine boundary
refine_mesh_elements = KratosDelaunay.RefineElementsOnThreshold(self.model_part, self.MeshingParameters, self.echo_level)
self.mesher.SetPreMeshingProcess(refine_mesh_elements)
# process to refine boundary / contact boundary
        refine_mesh_boundary = KratosDelaunay.RefineConditions(self.model_part, self.MeshingParameters, self.echo_level)
self.mesher.SetPreMeshingProcess(refine_mesh_boundary)
# process to remove nodes / remove boundary
remove_mesh_nodes = KratosDelaunay.RemoveNodes(self.model_part, self.MeshingParameters, self.echo_level)
self.mesher.SetPreMeshingProcess(remove_mesh_nodes)
#
def SetPostMeshingProcesses(self):
# The order set is the order of execution:
#generate new particles
generate_particles = KratosDelaunay.GenerateNewNodes(self.model_part, self.MeshingParameters, self.echo_level)
self.mesher.SetPostMeshingProcess(generate_particles)
#select mesh elements
select_mesh_elements = KratosDelaunay.SelectElements(self.model_part, self.MeshingParameters, self.echo_level)
self.mesher.SetPostMeshingProcess(select_mesh_elements)
#rebuild elements
rebuild_mesh_elements = KratosDelaunay.GenerateNewElements(self.model_part, self.MeshingParameters, self.echo_level)
self.mesher.SetPostMeshingProcess(rebuild_mesh_elements)
#rebuild boundary
rebuild_mesh_boundary = KratosDelaunay.GenerateNewConditions(self.model_part, self.MeshingParameters, self.echo_level)
self.mesher.SetPostMeshingProcess(rebuild_mesh_boundary)
#
def FinalizeMeshing(self):
# reset execution flags: to unset the options to be executed in methods and processes
execution_options = KratosMultiphysics.Flags()
# all flags
execution_options.Set(KratosDelaunay.MesherUtilities.INITIALIZE_MESHER_INPUT, False)
execution_options.Set(KratosDelaunay.MesherUtilities.FINALIZE_MESHER_INPUT, False)
execution_options.Set(KratosDelaunay.MesherUtilities.TRANSFER_KRATOS_NODES_TO_MESHER, False)
execution_options.Set(KratosDelaunay.MesherUtilities.TRANSFER_KRATOS_ELEMENTS_TO_MESHER, False)
execution_options.Set(KratosDelaunay.MesherUtilities.TRANSFER_KRATOS_NEIGHBOURS_TO_MESHER, False)
execution_options.Set(KratosDelaunay.MesherUtilities.TRANSFER_KRATOS_FACES_TO_MESHER, False)
execution_options.Set(KratosDelaunay.MesherUtilities.SELECT_TESSELLATION_ELEMENTS, False)
execution_options.Set(KratosDelaunay.MesherUtilities.KEEP_ISOLATED_NODES, False)
self.MeshingParameters.SetExecutionOptions(execution_options)
self.MeshingParameters.FinalizeMeshing()
#
def ExecuteMeshing(self):
self.InitializeMeshing() #set execution flags and mesher flags
self.mesher.ExecuteMeshing(self.model_part)
self.FinalizeMeshing() #set execution flags and mesher flags
#
def SetEchoLevel(self, echo_level):
self.echo_level = echo_level
#
@classmethod
    def _class_prefix(cls):
header = "::[-------Mesher------]::"
        return header
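# A minimal usage sketch (main_model_part and meshing_parameters are assumed to
# be configured elsewhere via the Kratos model and meshing parameters API):
#
#   mesher = CreateMesher(main_model_part, meshing_parameters)
#   mesher.SetEchoLevel(0)
#   mesher.Initialize(dimension=2)   # 2 -> triangular mesher, 3 -> tetrahedral mesher
#   mesher.ExecuteMeshing()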
/LeanSim-0.1.tar.gz/LeanSim-0.1/leansim/workflow.py | from __future__ import print_function
from time import sleep
import os
from .worker import Worker
import sys
class Workflow:
def __init__(self, workers):
self.workers = workers
self.work_done = 0
@property
def total_work(self):
return sum(w.wip + w.done for w in self.workers) - self.workers[-1].done
@property
def wip(self):
return self.total_work - self.workers[0].todo
def step(self):
for worker in self.workers[::-1]:
work_done = worker.work()
worker.push()
self.work_done += work_done
def process(self, work, verbose=False, sleep_time=0, start_delay=1.5):
"""Returns number of steps to process some piece of work."""
self.workers[0].todo = work
if verbose:
os.system('cls' if os.name == 'nt' else 'clear')
print(self)
sleep(start_delay)
steps = 0
while self.total_work:
steps += 1
self.step()
if verbose:
                os.system('cls' if os.name == 'nt' else 'clear')
print(self)
print('Steps: {}'.format(steps), end='\n\n')
sys.stdout.flush()
if sleep_time:
sleep(sleep_time)
return steps
def __repr__(self):
rep = '------------------------- LEAN SIM --------------------------------\n\n'
for attr in ['task_duration', 'capacity', 'batch_size', 'max_todo', '', 'todo', 'doing', 'done', '', 'wip']:
rep += '{:>15}:'.format(attr)
for w in self.workers:
if attr:
val = getattr(w, attr)
val = len(val) if hasattr(val, '__iter__') else val
rep += '\t {}'.format(val if val else ' ')
else:
rep += '\t---'
rep += '\n'
rep += '\n-------------------------------------------------------------------\n'
for attr in ['total_work', 'wip']:
rep += '{}: {} '.format(attr, getattr(self, attr))
return rep
@classmethod
def run_chained_process(cls, work=20, workers=4, verbose=False, sleep_time=0.2, bottleneck_worker=None, **worker_kwargs):
queue = [Worker(**worker_kwargs) for _ in range(workers)]
for w1, w2 in zip(queue[:-1], queue[1:]):
w1.target = w2
workflow = cls(workers=queue)
if bottleneck_worker:
workflow.workers[bottleneck_worker - 1].task_duration = workflow.workers[0].task_duration * 4
steps = workflow.process(work=work, verbose=verbose, sleep_time=sleep_time)
        return steps
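# A minimal usage sketch (the Worker keyword arguments shown are guesses based
# on the attributes __repr__ reads: task_duration, capacity, batch_size, max_todo):
#
#   steps = Workflow.run_chained_process(work=20, workers=4, verbose=True,
#                                        sleep_time=0.1, bottleneck_worker=2,
#                                        task_duration=1)
#   print('Processed all work in {} steps'.format(steps))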