# zlibrary
Update: Zlibrary is back to clearnet in Hydra mode, see #11.
### Install
`pip install zlibrary`
### Example
```python
import zlibrary
import asyncio


async def main():
    lib = zlibrary.AsyncZlib()
    # zlibrary requires a singlelogin account in order to access the website
    await lib.login(email, password)

    # count: 10 results per set
    paginator = await lib.search(q="biology", count=10)

    # fetching first result set (0 ... 10)
    first_set = await paginator.next()
    # fetching next result set (10 ... 20)
    next_set = await paginator.next()
    # get back to previous set (0 ... 10)
    prev_set = await paginator.prev()

    # create a paginator of computer science with max count of 50
    paginator = await lib.search(q="computer science", count=50)
    # fetching results (0 ... 50)
    next_set = await paginator.next()
    # calling next() again will fire up a request to fetch the next page
    next_set = await paginator.next()

    # get current result set
    current_set = paginator.result
    # current_set = [
    #     {
    #         'id': '123',
    #         'isbn': '123',
    #         'url': 'https://x.x/book/123',
    #         'cover': 'https://x.x/2f.jpg',
    #         'name': 'Numerical Python',
    #         'publisher': 'No Fun Allowed LLC',
    #         'publisher_url': 'https://x.x/s/?q=NoFunAllowedLLC',
    #         'authors': [
    #             {
    #                 'author': 'Ben Dover',
    #                 'author_url': 'https://x.x/g/Ben_Dover'
    #             }
    #         ],
    #         'year': '2019',
    #         'language': 'english',
    #         'extension': 'PDF',
    #         'size': ' 23.46 MB',
    #         'rating': '5.0/5.0'
    #     },
    #     { 'id': '234', ... },
    #     { 'id': '456', ... },
    #     { 'id': '678', ... },
    # ]

    # switch pages explicitly
    await paginator.next_page()
    # here, no requests are being made: results are cached
    await paginator.prev_page()
    await paginator.next_page()

    # retrieve specific book from list
    book = await paginator.result[0].fetch()
    # book = {
    #     'url': 'https://x.x/book/123',
    #     'name': 'Numerical Python',
    #     'cover': 'https://x.x/2f.jpg',
    #     'description': "Leverage the numerical and mathematical modules...",
    #     'year': '2019',
    #     'edition': '2',
    #     'publisher': 'No Fun Allowed LLC',
    #     'language': 'english',
    #     'categories': 'Computers - Computer Science',
    #     'categories_url': 'https://x.x/category/173/Computers-Computer-Science',
    #     'extension': 'PDF',
    #     'size': ' 23.46 MB',
    #     'rating': '5.0/5.0',
    #     'download_url': 'https://x.x/dl/123'
    # }

if __name__ == '__main__':
    asyncio.run(main())
```
### Search params
```python
from zlibrary import Language, Extension
await lib.search(q="Deleuze", from_year=1976, to_year=2005,
                 lang=[Language.ENGLISH, Language.RUSSIAN],
                 extensions=[Extension.PDF, Extension.EPUB])

await lib.full_text_search(
    q="The circuits of surveillance cameras are themselves part of the decor of simulacra",
    lang=[Language.ENGLISH], extensions=[Extension.PDF], phrase=True, exact=True)
```
### Onion example
You need to enable onion domains and set up a tor proxy before you can use the library.
```python
import zlibrary
import asyncio


async def main():
    lib = zlibrary.AsyncZlib(onion=True, proxy_list=['socks5://127.0.0.1:9050'])
    # 127.0.0.1:9050 is the default address:port of the tor service

    # the tor website cannot be accessed without login
    await lib.login(email, password)

    # now you can use it as usual
    paginator = await lib.search(q="biology", count=10)

if __name__ == '__main__':
    asyncio.run(main())
```
### Enable logging
Put anywhere in your code:
```python
import logging
logging.getLogger("zlibrary").addHandler(logging.StreamHandler())
logging.getLogger("zlibrary").setLevel(logging.DEBUG)
```
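To keep the debug output in a file rather than on stderr, the same logger can be pointed at a `FileHandler` (a stdlib-only sketch; the file name `zlibrary.log` and the format string are arbitrary choices, not part of the library):

```python
import logging

# Route zlibrary's debug output into a file instead of the console.
# "zlibrary.log" is an arbitrary file name.
log = logging.getLogger("zlibrary")
log.setLevel(logging.DEBUG)
handler = logging.FileHandler("zlibrary.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
log.addHandler(handler)

log.debug("logging configured")  # ends up in zlibrary.log
```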
### Proxy support
```python
# You can add multiple proxies in the chain:
# proxy_list=[
# "http://login:password@addr:port",
# "socks4://addr:port",
# "socks5://addr:port"
# ]
lib = zlibrary.AsyncZlib(proxy_list=["socks5://127.0.0.1:9050"])
await lib.login(email, password)
await lib.init()
```
### Download history
```python
await lib.login(email, password)
# get a paginator of download history
dhistory = await lib.profile.download_history()
# get current page
first_page = dhistory.result
# get next page (if any; returns [] if empty)
await dhistory.next_page()
# go back
await dhistory.prev_page()
# fetch a book
book = await dhistory.result[0].fetch()
```
### Download limits
```python
await lib.login(email, password)
limit = await lib.profile.get_limits()
# limit = {'books': {'daily': '0/10', 'total': '13'}, 'articles': {'daily': '0/30', 'total': '0'}}
```
### Booklists
```python
await lib.login(email, password)
# get booklists paginator
bpage = await lib.profile.search_public_booklists(q="philosophy", count=10, order=zlibrary.OrderOptions.POPULAR)
# get first 10 booklists
first_set = await bpage.next()
# get one booklist
booklist = first_set[0]
# get booklist data
booklist_data = await booklist.fetch()
# booklist_data = { 'name': 'VVV', 'url': 'YYY' }
# get first 10 books from the booklist
book_set = await booklist.next()
# fetch a book
book = await book_set[0].fetch()
# fetch personal booklists
bpage = await lib.profile.search_private_booklists(q="")
```
### Set up a tor service
`sudo apt install tor obfs4proxy` (Debian/Ubuntu) or `yay -S tor obfs4proxy` (Arch)
`sudo systemctl enable --now tor`
If tor is blocked in your country, you also need to edit /etc/tor/torrc and set up bridges for it to work properly.
**HOW TO REQUEST BRIDGES**
Using Gmail, send an email to `[email protected]` with the following content: `get transport obfs4`
Shortly after, you should receive a reply with bridges.
Edit /etc/tor/torrc to enable and add your bridges:
```
UseBridges 1
ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
<INSERT YOUR BRIDGES HERE>
```
Restart tor service:
`sudo systemctl restart tor`
| zlibrary | /zlibrary-0.2.5.tar.gz/zlibrary-0.2.5/README.md | README.md |
# Z-Library Terminal User Interface (zlibtui)
_zlibtui_ is a terminal UI for [Z-Library](https://b-ok.cc/)

# Demo

# Installation
_zlibtui_ is available on [PyPI](https://pypi.org/project/zlibtui/)
```
pip install zlibtui
```
# Controls
## Global
- `esc`: Quit
- `tab`: Switch focus
## Search
- `enter`: Search
## Browsing
- `enter`: More info
- `j`: Scroll down
- `k`: Scroll up
- `p`: Next page
- `P`: Previous page
- `l`: Open link
- `m`: Open menu
## Menu
- `m`: Close menu
- `j`: Scroll down
- `k`: Scroll up
- `h`: Change option (left)
- `l`: Change option (right)
# Settings
The browser used when opening links can be set via the environment variable `ZLIBTUI_BROWSER`. The default browser is Firefox.
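For example, to make links open in Chromium for the current shell session (`chromium` is just an assumed browser command here; use whatever is installed on your system):

```shell
# zlibtui reads ZLIBTUI_BROWSER when opening links; unset, it uses Firefox.
export ZLIBTUI_BROWSER=chromium
echo "$ZLIBTUI_BROWSER"   # → chromium
```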
| zlibtui | /zlibtui-1.1.2.tar.gz/zlibtui-1.1.2/README.md | README.md |
# zlink
A command line script for navigating and editing Zettelkasten files.
## Usage
```
usage: zlink [-h] [--addlink ADDLINK] [--nobacklink] [--defrag] [filename]

Peruse and maintain a collection of Zettelkasten files in the current
directory.

positional arguments:
  filename

optional arguments:
  -h, --help         show this help message and exit
  --addlink ADDLINK  add a link to ADDLINK to filename
  --nobacklink       when adding a link, don't create a backlink from filename
                     to ADDLINK
  --defrag           update the zettelkasten files to remove any gaps between
                     entries
```
| zlink | /zlink-0.0.6.tar.gz/zlink-0.0.6/README.md | README.md |
import json

import pika

from zhanlan_pkg.Utils import rabitmq_config


class RabbitClient:
    def __init__(self, queue_name):
        self.queue_name = queue_name

    def rabbit_conn(self):
        """
        Create the connection.
        :return:
        """
        user_pwd = pika.PlainCredentials(
            rabitmq_config.get("mq_username"),
            rabitmq_config.get("mq_pwd")
        )
        params = pika.ConnectionParameters(
            host=rabitmq_config.get("mq_ip"),
            port=rabitmq_config.get('mq_port'),
            virtual_host=rabitmq_config.get("mq_virtual_host"),
            credentials=user_pwd
        )
        self.conn = pika.BlockingConnection(parameters=params)
        self.col = self.conn.channel()
        self.col.queue_declare(
            queue=self.queue_name,
            durable=True
        )

    def push_rabbit(self, item):
        self.rabbit_conn()
        self.col.basic_publish(
            exchange='',
            routing_key=self.queue_name,
            body=json.dumps(item, ensure_ascii=False)
        )

    def get_rabbit(self, fun):
        self.rabbit_conn()
        self.col.queue_declare(self.queue_name, durable=True, passive=True)
        self.col.basic_consume(self.queue_name, fun)
        self.col.start_consuming()


if __name__ == '__main__':
    # # One-click shop migration
    # RabbitClient('TEST_SMT_COPY_PRODUCT').push_rabbit(
    #     {"request_type": "SMT_COPY_PRODUCT", "request_id": "2cac500ba49c4fb97d9a80eb3f9cb216",
    #      "secret_key": "_QXSYYXGJQUQS", "biz_id": "https:\\/\\/detail.1688.com\\/offer\\/614651326996.html",
    #      "send_at": 1629164414,
    #      "data": {"productUrl": "https:\\/\\/detail.1688.com\\/offer\\/614651326996.html", "type": 1}}
    # )
    # Product ranking lookup
    RabbitClient('TEST_SMT_PRODUCT_RANKING').push_rabbit(
        {"send_at": 1619520635,
         "data": {"keyword": "衣服", "shopName": "东莞市汇百商网络科技有限公司", "shopUrl": "shop085o885b77228.1688.com",
                  "productUrl": "", "type": "3", "startPage": "1", "endPage": "3", "requestId": 8703}}
    )
| zlkj | /zlkj-0.1.0-py3-none-any.whl/Utils/Rabbit_conn.py | Rabbit_conn.py |
import json
import time
import traceback
import logging

from pika.exceptions import ConnectionClosedByBroker, AMQPChannelError, AMQPConnectionError


class MonitorRabbit:
    def __init__(
            self, rabbit_conn, redis_coon,
            redis_key=None, callback=None
    ):
        """
        :param rabbit_conn: RabbitMQ connection
        :param redis_coon: Redis connection
        :param redis_key: key under which results are stored in Redis
        :param callback: handler object
        """
        self.rabbit_conn = rabbit_conn
        self.redis_coon = redis_coon
        self.redis_key = redis_key
        self._callback = callback

    def start_run(self):
        """
        Listen on the queue, reconnecting after errors.
        :return:
        """
        while True:
            try:
                self.rabbit_conn.get_rabbit(self.callback)
            except ConnectionClosedByBroker:
                logging.info(f'error [{ConnectionClosedByBroker}]')
                time.sleep(10)
                continue
            except AMQPChannelError:
                logging.info(f'error [{AMQPChannelError}]')
                time.sleep(10)
                continue
            except AMQPConnectionError:
                # traceback.print_exc()
                logging.info(f'error [{AMQPConnectionError}]')
                time.sleep(10)
                continue
            except:
                traceback.print_exc()
                logging.info(f'error [{"unknown error"}]')
                time.sleep(10)
                continue

    def callback(self, channel, method, properties, body):
        """
        Callback invoked for each delivered message.
        """
        try:
            req_body = body.decode('utf-8')
            logging.info(req_body)
            mes = {'result': json.loads(req_body)}
            if self._callback:
                self._callback.shop_start(json.dumps(mes))
            else:
                self.redis_coon.lpush(f'{self.redis_key}:start_urls', json.dumps(mes, ensure_ascii=False))
        except Exception as e:
            print(e)
        finally:
            channel.basic_ack(delivery_tag=method.delivery_tag)
| zlkj | /zlkj-0.1.0-py3-none-any.whl/Utils/monitor_rabbit.py | monitor_rabbit.py |
import json
import re
import time
import random
from datetime import datetime

import pytz
from faker import Faker

from zhanlan_pkg.Utils.Mongo_conn import MongoPerson

fake = Faker()
cntz = pytz.timezone("Asia/Shanghai")


class ReDict:
    @classmethod
    def string(
            cls,
            re_pattern: dict,
            string_: str,
    ):
        if string_:
            return {
                key: cls.compute_res(
                    re_pattern=re.compile(scale),
                    string_=string_.translate(
                        {
                            ord('\t'): '', ord('\f'): '',
                            ord('\r'): '', ord('\n'): '',
                            ord(' '): '',
                        })
                )
                for key, scale in re_pattern.items()
            }

    @classmethod
    def compute_res(
            cls,
            re_pattern: re,
            string_=None
    ):
        data = [
            result.groups()[0]
            for result in re_pattern.finditer(string_)
        ]
        if data:
            try:
                return json.loads(data[0])
            except:
                return data[0]
        else:
            return None


class Utils:
    @classmethod
    def time_cycle(
            cls,
            times,
            int_time=None
    ):
        """
        Normalize times for storage.
        :param times: string - time as a string
        :param int_time: True/False - return a Unix timestamp instead
        :return:
        """
        if int_time:
            return int(time.mktime(time.strptime(times, "%Y-%m-%d")))
        if type(times) is str:
            times = int(time.mktime(time.strptime(times, "%Y-%m-%d %H:%M:%S")))
        return str(datetime.fromtimestamp(times, tz=cntz))

    @classmethod
    def merge_dic(
            cls,
            dic: dict,
            lst: list
    ):
        """
        Merge several dicts into one.
        :param dic: dict - the main dict
        :param lst: list - the dicts to merge in, passed as a list
        :return:
        """
        for d in lst:
            for k, v in d.items():
                if v:
                    dic[k] = v
        return dic

    @classmethod
    def is_None(
            cls,
            dic: dict,
    ) -> dict:
        """
        :param dic: dict
        :return: the key/value pairs of the dict whose value is empty
        """
        return {
            k: v
            for k, v in dic.items()
            if not v
        }

    @classmethod
    def find(
            cls, target: str,
            dictData: dict,
    ) -> list:
        queue = [dictData]
        result = []
        while len(queue) > 0:
            data = queue.pop()
            for key, value in data.items():
                if key == target:
                    result.append(value)
                elif isinstance(value, dict):
                    queue.append(value)
        if result:
            return result[0]


class Headers:
    def headers(self, referer=None):
        while True:
            user_agent = fake.chrome(
                version_from=63, version_to=80, build_from=999, build_to=3500
            )
            if "Android" in user_agent or "CriOS" in user_agent:
                continue
            else:
                break
        if referer:
            return {
                "user-agent": user_agent,
                "referer": referer,
            }
        return {
            "user-agent": user_agent,
        }


class Cookies(object):
    def __init__(self, db_name):
        self.mongo_conn = MongoPerson(db_name, 'cookie').test()

    def cookie(self):
        return random.choice(list(self.mongo_conn.find()))
| zlkj | /zlkj-0.1.0-py3-none-any.whl/Utils/MyUtils.py | MyUtils.py |
from datetime import datetime

from pymongo import MongoClient

from zhanlan_pkg.Utils import mongo_config


class MongoConn(object):
    def __init__(self, db_name, config):
        """
        :param db_name:
        :param config: {
            "host": "192.168.20.211",
            # "host": "47.107.86.234",
            "port": 27017
        }
        """
        if config:
            self.client = MongoClient(**config, connect=True)
        else:
            self.client = MongoClient(**mongo_config, connect=True)
        self.db = self.client[db_name]


class DBBase(object):
    def __init__(self, collection, db_name, config):
        self.mg = MongoConn(db_name, config)
        self.collection = self.mg.db[collection]

    def exist_list(self, data, key, get_id: callable):
        lst = [get_id(obj) for obj in data]
        print('lst', len(lst))
        set_list = set([
            i.get(key)
            for i in list(
                self.collection.find({key: {"$in": lst}})
            )
        ])
        set_li = set(lst) - set_list
        with open("./ignore/null_field.txt", "rt", encoding="utf-8") as f:
            _ignore = [int(line.split(",")[0]) for line in f.readlines()]
        exist = list(set_li - set(_ignore))
        print(len(exist))
        for obj in data:
            if get_id(obj) in exist:
                yield obj

    def exist(self, dic):
        """
        Look up a single document.
        :param dic:
        :return: 1 or 0
        """
        return self.collection.find(dic).count()

    def update_one(self, dic, item=None):
        result = self.exist(dic)
        if item and result == 1:
            item['updateTime'] = datetime.strftime(datetime.now(), "%Y-%m-%d %H:%M:%S")
            self.collection.update(dic, {"$set": item})
        elif item:
            self.collection.update(dic, {"$set": item}, upsert=True)

    def insert_one(self, param):
        """
        :param param: a list of documents or a single dict
        :return:
        """
        self.collection.insert(param)

    def find_len(self, dic):
        return self.collection.find(dic).count()

    def find_one(self):
        self.collection.find().limit()

    def find_list(self, count, dic=None, page=None, ):
        """
        Query documents.
        :param count: number of documents to fetch
        :param dic: {'city': ''} - filter query
        :param page: paged query
        :return:
        """
        if dic:
            return list(self.collection.find(dic).limit(count))
        if page:
            return list(self.collection.find().skip(page * count - count).limit(count))

    def daochu(self):
        return list(self.collection.find({'$and': [
            {'$or': [{"transaction_medal": "A"}, {"transaction_medal": "AA"}]},
            {"tpServiceYear": {'$lte': 2}},
            {"overdue": {'$ne': "店铺已过期"}},
            {"province": "广东"}
        ]}))
        # return self.collection.find().skip(count).next()

    def test(self):
        return self.collection


class MongoPerson(DBBase):
    def __init__(self, table, db_name, config=None):
        super(MongoPerson, self).__init__(table, db_name, config)
| zlkj | /zlkj-0.1.0-py3-none-any.whl/Utils/Mongo_conn.py | Mongo_conn.py |
root_config = {
    "SPIDER": {
        "log_dir_path": "./log"
    },
    "MYSQL": {
        "HOST": "119.29.9.92",
        "PORT": 3306,
        "USER": "root",
        "PASSWD": "zl123456",
        "DBNAME": "taobao"
    },
    "MONGO": {
        "host": "192.168.20.211",
        # "host": "119.29.9.92",
        # "host": "47.107.86.234",
        "port": 27017
    },
    "REDIS": {
        "HOST": "119.29.9.92",
        # "HOST": "47.107.86.234",
        "PORT": 6379,
        "DB": 11
    },
    "TASK_REDIS": {
        "HOST": "119.29.9.92",
        "PORT": 6379,
    },
    "OPERATE_CONFIG": {
        "GET_SHOPS_URL": "http://localhost:8000/getAllShop",
        "SAVE_DATA_URL": "http://119.29.9.92/crm/saveCrmData",
        "GET_GOOD_COOKIE_URL": "http://localhost:8000/getRandomCookie",
        "DEL_COOKIE_URL": "http://localhost:8000/delExpiredCookie?id="
    },
    "MACHINE_CONFIG": {
        "MACHINE_NO": 0
    },
    "AUTO_LOGIN_CONFIG": {
        "IP": "http://localhost:8000"
    },
    "PROXY_KEY": ["shop_list"],
    "SYNC_CONFIG": {
        "shop_list": 0
    },
    "ZL_CONFIG": {
        "OFFICIAL_IP": "http://localhost:8000"
    },
    "RABITMQ_CONFIG": {
        "mq_ip": "121.89.219.152",
        "mq_port": 30002,
        "mq_virtual_host": "my_vhost",
        "mq_username": "dev",
        "mq_pwd": "zl123456",
        "prefix": ""
        # "prefix": "TEST_"
    },
    "GOOD_COUNT_IP": {
        'server': "https://erp.zlkj.com/"
    },
    "MOGU_PROXY": {
        "proxyServer": "secondtransfer.moguproxy.com:9001",
        "proxyAuth": "Basic d25ZU3YxekxZSk40Y3hydDozN01uN0lLTFViZkdUY2tK"
    }
}

# browser_config = root_config.get("BROWSER", {})
redis_config = root_config.get("REDIS", {})
mysql_config = root_config.get("MYSQL", {})
spider_config = root_config.get('SPIDER', {})
mongo_config = root_config.get('MONGO', {})
auto_login_config = root_config.get("AUTO_LOGIN_CONFIG")
dm_config = root_config.get("ZL_CONFIG", {})
task_redis_config = root_config.get("TASK_REDIS")
celery_redis_config = root_config.get("REDIS")
proxy_key_list = root_config.get("PROXY_KEY")
operate_config = root_config.get("OPERATE_CONFIG")
machine_config = root_config.get("MACHINE_CONFIG", {})
sync_config = root_config.get("SYNC_CONFIG", {})
rabitmq_config = root_config.get("RABITMQ_CONFIG", {})
# post_data_config = root_config.get("POSTDATAURL", {})
good_count_server = root_config.get("GOOD_COUNT_IP", {})
mogu_config = root_config.get("MOGU_PROXY", {})
Introduction to zLMDB
=====================
.. image:: https://img.shields.io/pypi/v/zlmdb.svg
:target: https://pypi.python.org/pypi/zlmdb
:alt: PyPI
.. image:: https://github.com/crossbario/zlmdb/workflows/main/badge.svg
:target: https://github.com/crossbario/zlmdb/actions?query=workflow%3Amain
:alt: Build
.. image:: https://readthedocs.org/projects/zlmdb/badge/?version=latest
:target: https://zlmdb.readthedocs.io/en/latest/?badge=latest
:alt: Documentation
.. image:: https://github.com/crossbario/zlmdb/workflows/deploy/badge.svg
:target: https://github.com/crossbario/zlmdb/actions?query=workflow%3Adeploy
:alt: Deploy
Object-relational in-memory database layer based on LMDB:
* High-performance (see below)
* Supports multiple serializers (JSON, CBOR, Pickle, Flatbuffers)
* Supports export/import from/to Apache Arrow
* Supports native NumPy arrays and Pandas data frames
* Automatic indexes
* Free software (MIT license)
| zlmdb | /zlmdb-23.1.1.tar.gz/zlmdb-23.1.1/README.rst | README.rst |
Introduction to zLMDB
=====================
.. image:: https://img.shields.io/pypi/v/zlmdb.svg
:target: https://pypi.python.org/pypi/zlmdb
:alt: PyPI
.. image:: https://github.com/crossbario/zlmdb/workflows/main/badge.svg
:target: https://github.com/crossbario/zlmdb/actions?query=workflow%3Amain
:alt: Build
.. image:: https://readthedocs.org/projects/zlmdb/badge/?version=latest
:target: https://zlmdb.readthedocs.io/en/latest/?badge=latest
:alt: Documentation
.. image:: https://github.com/crossbario/zlmdb/workflows/deploy/badge.svg
:target: https://github.com/crossbario/zlmdb/actions?query=workflow%3Adeploy
:alt: Deploy
Object-relational in-memory database layer based on LMDB:
* High-performance (see below)
* Supports multiple serializers (JSON, CBOR, Pickle, Flatbuffers)
* Supports export/import from/to Apache Arrow
* Supports native NumPy arrays and Pandas data frames
* Automatic indexes
* Free software (MIT license)
| zlmdb | /zlmdb-23.1.1.tar.gz/zlmdb-23.1.1/docs/index.rst | index.rst |
Reference
=========
.. contents:: :local:
-------------
Schema
------
.. autoclass:: zlmdb.Schema
:members:
Database
--------
.. autoclass:: zlmdb.Database
:members:
Transaction
-----------
* :class:`zlmdb.Transaction`
* :class:`zlmdb.TransactionStats`
-------
.. autoclass:: zlmdb.Transaction
:members:
.. autoclass:: zlmdb.TransactionStats
:members:
PersistentMap
-------------
* :class:`zlmdb._pmap.PersistentMap`
* :class:`zlmdb._pmap.PersistentMapIterator`
-------
.. autoclass:: zlmdb._pmap.PersistentMap
:members:
.. autoclass:: zlmdb._pmap.PersistentMapIterator
:members:
Typed PersistentMap
-------------------
* :class:`zlmdb.MapBytes16FlatBuffers`
* :class:`zlmdb.MapBytes16TimestampUuid`
* :class:`zlmdb.MapBytes16TimestampUuidFlatBuffers`
* :class:`zlmdb.MapBytes20Bytes16`
* :class:`zlmdb.MapBytes20Bytes20`
* :class:`zlmdb.MapBytes20Bytes20FlatBuffers`
* :class:`zlmdb.MapBytes20Bytes20Timestamp`
* :class:`zlmdb.MapBytes20FlatBuffers`
* :class:`zlmdb.MapBytes20StringFlatBuffers`
* :class:`zlmdb.MapBytes20TimestampBytes20`
* :class:`zlmdb.MapBytes20TimestampUuid`
* :class:`zlmdb.MapBytes20Uuid`
* :class:`zlmdb.MapBytes32Bytes32`
* :class:`zlmdb.MapBytes32Bytes32FlatBuffers`
* :class:`zlmdb.MapBytes32FlatBuffers`
* :class:`zlmdb.MapBytes32StringFlatBuffers`
* :class:`zlmdb.MapBytes32Timestamp`
* :class:`zlmdb.MapBytes32Uuid`
* :class:`zlmdb.MapBytes32UuidFlatBuffers`
* :class:`zlmdb.MapOid3FlatBuffers`
* :class:`zlmdb.MapOidCbor`
* :class:`zlmdb.MapOidFlatBuffers`
* :class:`zlmdb.MapOidJson`
* :class:`zlmdb.MapOidOid`
* :class:`zlmdb.MapOidOidFlatBuffers`
* :class:`zlmdb.MapOidOidOid`
* :class:`zlmdb.MapOidOidSet`
* :class:`zlmdb.MapOidPickle`
* :class:`zlmdb.MapOidString`
* :class:`zlmdb.MapOidStringOid`
* :class:`zlmdb.MapOidTimestampFlatBuffers`
* :class:`zlmdb.MapOidTimestampOid`
* :class:`zlmdb.MapOidTimestampStringOid`
* :class:`zlmdb.MapOidUuid`
* :class:`zlmdb.MapSlotUuidUuid`
* :class:`zlmdb.MapStringCbor`
* :class:`zlmdb.MapStringFlatBuffers`
* :class:`zlmdb.MapStringJson`
* :class:`zlmdb.MapStringOid`
* :class:`zlmdb.MapStringOidOid`
* :class:`zlmdb.MapStringPickle`
* :class:`zlmdb.MapStringString`
* :class:`zlmdb.MapStringStringStringUuid`
* :class:`zlmdb.MapStringStringUuid`
* :class:`zlmdb.MapStringTimestampCbor`
* :class:`zlmdb.MapStringUuid`
* :class:`zlmdb.MapTimestampBytes32FlatBuffers`
* :class:`zlmdb.MapTimestampFlatBuffers`
* :class:`zlmdb.MapTimestampStringCbor`
* :class:`zlmdb.MapTimestampStringFlatBuffers`
* :class:`zlmdb.MapTimestampUuidCbor`
* :class:`zlmdb.MapTimestampUuidFlatBuffers`
* :class:`zlmdb.MapTimestampUuidStringFlatBuffers`
* :class:`zlmdb.MapUint16UuidTimestampFlatBuffers`
* :class:`zlmdb.MapUuidBytes20Bytes20Uint8UuidFlatBuffers`
* :class:`zlmdb.MapUuidBytes20Uint8FlatBuffers`
* :class:`zlmdb.MapUuidBytes20Uint8UuidFlatBuffers`
* :class:`zlmdb.MapUuidBytes32FlatBuffers`
* :class:`zlmdb.MapUuidCbor`
* :class:`zlmdb.MapUuidFlatBuffers`
* :class:`zlmdb.MapUuidJson`
* :class:`zlmdb.MapUuidOid`
* :class:`zlmdb.MapUuidPickle`
* :class:`zlmdb.MapUuidString`
* :class:`zlmdb.MapUuidStringFlatBuffers`
* :class:`zlmdb.MapUuidStringOid`
* :class:`zlmdb.MapUuidStringUuid`
* :class:`zlmdb.MapUuidTimestampBytes32`
* :class:`zlmdb.MapUuidTimestampCbor`
* :class:`zlmdb.MapUuidTimestampFlatBuffers`
* :class:`zlmdb.MapUuidTimestampUuid`
* :class:`zlmdb.MapUuidTimestampUuidFlatBuffers`
* :class:`zlmdb.MapUuidUuid`
* :class:`zlmdb.MapUuidUuidCbor`
* :class:`zlmdb.MapUuidUuidFlatBuffers`
* :class:`zlmdb.MapUuidUuidSet`
* :class:`zlmdb.MapUuidUuidStringFlatBuffers`
* :class:`zlmdb.MapUuidUuidStringUuid`
* :class:`zlmdb.MapUuidUuidUuid`
* :class:`zlmdb.MapUuidUuidUuidStringUuid`
* :class:`zlmdb.MapUuidUuidUuidUuid`
* :class:`zlmdb.MapUuidUuidUuidUuidUuid`
------
.. autoclass:: zlmdb.MapBytes16FlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapBytes16TimestampUuid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapBytes16TimestampUuidFlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapBytes20Bytes16
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapBytes20Bytes20
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapBytes20Bytes20FlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapBytes20Bytes20Timestamp
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapBytes20FlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapBytes20StringFlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapBytes20TimestampBytes20
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapBytes20TimestampUuid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapBytes20Uuid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapBytes32Bytes32
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapBytes32Bytes32FlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapBytes32FlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapBytes32StringFlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapBytes32Timestamp
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapBytes32Uuid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapBytes32UuidFlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapOid3FlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapOidCbor
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapOidFlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapOidJson
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapOidOid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapOidOidFlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapOidOidOid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapOidOidSet
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapOidPickle
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapOidString
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapOidStringOid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapOidTimestampFlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapOidTimestampOid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapOidTimestampStringOid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapOidUuid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapSlotUuidUuid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapStringCbor
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapStringFlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapStringJson
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapStringOid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapStringOidOid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapStringPickle
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapStringString
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapStringStringStringUuid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapStringStringUuid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapStringTimestampCbor
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapStringUuid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapTimestampBytes32FlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapTimestampFlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapTimestampStringCbor
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapTimestampStringFlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapTimestampUuidCbor
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapTimestampUuidFlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapTimestampUuidStringFlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUint16UuidTimestampFlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidBytes20Bytes20Uint8UuidFlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidBytes20Uint8FlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidBytes20Uint8UuidFlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidBytes32FlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidCbor
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidFlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidJson
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidOid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidPickle
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidString
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidStringFlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidStringOid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidStringUuid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidTimestampBytes32
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidTimestampCbor
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidTimestampFlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidTimestampUuid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidTimestampUuidFlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidUuid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidUuidCbor
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidUuidFlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidUuidSet
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidUuidStringFlatBuffers
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidUuidStringUuid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidUuidUuid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidUuidUuidStringUuid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidUuidUuidUuid
:members:
:show-inheritance:
.. autoclass:: zlmdb.MapUuidUuidUuidUuidUuid
:members:
:show-inheritance:
Key Types
---------
* :class:`zlmdb._types._Bytes16KeysMixin`
* :class:`zlmdb._types._Bytes16TimestampKeysMixin`
* :class:`zlmdb._types._Bytes16TimestampUuidKeysMixin`
* :class:`zlmdb._types._Bytes20Bytes20KeysMixin`
* :class:`zlmdb._types._Bytes20KeysMixin`
* :class:`zlmdb._types._Bytes20StringKeysMixin`
* :class:`zlmdb._types._Bytes20TimestampKeysMixin`
* :class:`zlmdb._types._Bytes32Bytes32KeysMixin`
* :class:`zlmdb._types._Bytes32KeysMixin`
* :class:`zlmdb._types._Bytes32StringKeysMixin`
* :class:`zlmdb._types._Bytes32UuidKeysMixin`
* :class:`zlmdb._types._Oid3KeysMixin`
* :class:`zlmdb._types._OidKeysMixin`
* :class:`zlmdb._types._OidOidKeysMixin`
* :class:`zlmdb._types._OidStringKeysMixin`
* :class:`zlmdb._types._OidTimestampKeysMixin`
* :class:`zlmdb._types._OidTimestampStringKeysMixin`
* :class:`zlmdb._types._SlotUuidKeysMixin`
* :class:`zlmdb._types._StringKeysMixin`
* :class:`zlmdb._types._StringOidKeysMixin`
* :class:`zlmdb._types._StringStringKeysMixin`
* :class:`zlmdb._types._StringStringStringKeysMixin`
* :class:`zlmdb._types._StringTimestampKeysMixin`
* :class:`zlmdb._types._TimestampBytes32KeysMixin`
* :class:`zlmdb._types._TimestampKeysMixin`
* :class:`zlmdb._types._TimestampStringKeysMixin`
* :class:`zlmdb._types._TimestampUuidKeysMixin`
* :class:`zlmdb._types._TimestampUuidStringKeysMixin`
* :class:`zlmdb._types._Uint16UuidTimestampKeysMixin`
* :class:`zlmdb._types._UuidBytes20Bytes20Uint8UuidKeysMixin`
* :class:`zlmdb._types._UuidBytes20Uint8KeysMixin`
* :class:`zlmdb._types._UuidBytes20Uint8UuidKeysMixin`
* :class:`zlmdb._types._UuidBytes32KeysMixin`
* :class:`zlmdb._types._UuidKeysMixin`
* :class:`zlmdb._types._UuidStringKeysMixin`
* :class:`zlmdb._types._UuidTimestampKeysMixin`
* :class:`zlmdb._types._UuidTimestampUuidKeysMixin`
* :class:`zlmdb._types._UuidUuidKeysMixin`
* :class:`zlmdb._types._UuidUuidStringKeysMixin`
* :class:`zlmdb._types._UuidUuidUuidKeysMixin`
* :class:`zlmdb._types._UuidUuidUuidStringKeysMixin`
* :class:`zlmdb._types._UuidUuidUuidUuidKeysMixin`
------
.. autoclass:: zlmdb._types._Bytes16KeysMixin
:members:
.. autoclass:: zlmdb._types._Bytes16TimestampKeysMixin
:members:
.. autoclass:: zlmdb._types._Bytes16TimestampUuidKeysMixin
:members:
.. autoclass:: zlmdb._types._Bytes20Bytes20KeysMixin
:members:
.. autoclass:: zlmdb._types._Bytes20KeysMixin
:members:
.. autoclass:: zlmdb._types._Bytes20StringKeysMixin
:members:
.. autoclass:: zlmdb._types._Bytes20TimestampKeysMixin
:members:
.. autoclass:: zlmdb._types._Bytes32Bytes32KeysMixin
:members:
.. autoclass:: zlmdb._types._Bytes32KeysMixin
:members:
.. autoclass:: zlmdb._types._Bytes32StringKeysMixin
:members:
.. autoclass:: zlmdb._types._Bytes32UuidKeysMixin
:members:
.. autoclass:: zlmdb._types._Oid3KeysMixin
:members:
.. autoclass:: zlmdb._types._OidKeysMixin
:members:
.. autoclass:: zlmdb._types._OidOidKeysMixin
:members:
.. autoclass:: zlmdb._types._OidStringKeysMixin
:members:
.. autoclass:: zlmdb._types._OidTimestampKeysMixin
:members:
.. autoclass:: zlmdb._types._OidTimestampStringKeysMixin
:members:
.. autoclass:: zlmdb._types._SlotUuidKeysMixin
:members:
.. autoclass:: zlmdb._types._StringKeysMixin
:members:
.. autoclass:: zlmdb._types._StringOidKeysMixin
:members:
.. autoclass:: zlmdb._types._StringStringKeysMixin
:members:
.. autoclass:: zlmdb._types._StringStringStringKeysMixin
:members:
.. autoclass:: zlmdb._types._StringTimestampKeysMixin
:members:
.. autoclass:: zlmdb._types._TimestampBytes32KeysMixin
:members:
.. autoclass:: zlmdb._types._TimestampKeysMixin
:members:
.. autoclass:: zlmdb._types._TimestampStringKeysMixin
:members:
.. autoclass:: zlmdb._types._TimestampUuidKeysMixin
:members:
.. autoclass:: zlmdb._types._TimestampUuidStringKeysMixin
:members:
.. autoclass:: zlmdb._types._Uint16UuidTimestampKeysMixin
:members:
.. autoclass:: zlmdb._types._UuidBytes20Bytes20Uint8UuidKeysMixin
:members:
.. autoclass:: zlmdb._types._UuidBytes20Uint8KeysMixin
:members:
.. autoclass:: zlmdb._types._UuidBytes20Uint8UuidKeysMixin
:members:
.. autoclass:: zlmdb._types._UuidBytes32KeysMixin
:members:
.. autoclass:: zlmdb._types._UuidKeysMixin
:members:
.. autoclass:: zlmdb._types._UuidStringKeysMixin
:members:
.. autoclass:: zlmdb._types._UuidTimestampKeysMixin
:members:
.. autoclass:: zlmdb._types._UuidTimestampUuidKeysMixin
:members:
.. autoclass:: zlmdb._types._UuidUuidKeysMixin
:members:
.. autoclass:: zlmdb._types._UuidUuidStringKeysMixin
:members:
.. autoclass:: zlmdb._types._UuidUuidUuidKeysMixin
:members:
.. autoclass:: zlmdb._types._UuidUuidUuidStringKeysMixin
:members:
.. autoclass:: zlmdb._types._UuidUuidUuidUuidKeysMixin
:members:
Value Types
-----------
* :class:`zlmdb._types._Bytes16ValuesMixin`
* :class:`zlmdb._types._Bytes20TimestampValuesMixin`
* :class:`zlmdb._types._Bytes20ValuesMixin`
* :class:`zlmdb._types._Bytes32ValuesMixin`
* :class:`zlmdb._types._CborValuesMixin`
* :class:`zlmdb._types._FlatBuffersValuesMixin`
* :class:`zlmdb._types._JsonValuesMixin`
* :class:`zlmdb._types._OidSetValuesMixin`
* :class:`zlmdb._types._OidValuesMixin`
* :class:`zlmdb._types._Pickle5ValuesMixin`
* :class:`zlmdb._types._PickleValuesMixin`
* :class:`zlmdb._types._StringSetValuesMixin`
* :class:`zlmdb._types._StringValuesMixin`
* :class:`zlmdb._types._TimestampValuesMixin`
* :class:`zlmdb._types._UuidSetValuesMixin`
* :class:`zlmdb._types._UuidValuesMixin`
------
.. autoclass:: zlmdb._types._Bytes16ValuesMixin
:members:
.. autoclass:: zlmdb._types._Bytes20TimestampValuesMixin
:members:
.. autoclass:: zlmdb._types._Bytes20ValuesMixin
:members:
.. autoclass:: zlmdb._types._Bytes32ValuesMixin
:members:
.. autoclass:: zlmdb._types._CborValuesMixin
:members:
.. autoclass:: zlmdb._types._FlatBuffersValuesMixin
:members:
.. autoclass:: zlmdb._types._JsonValuesMixin
:members:
.. autoclass:: zlmdb._types._OidSetValuesMixin
:members:
.. autoclass:: zlmdb._types._OidValuesMixin
:members:
.. autoclass:: zlmdb._types._Pickle5ValuesMixin
:members:
.. autoclass:: zlmdb._types._PickleValuesMixin
:members:
.. autoclass:: zlmdb._types._StringSetValuesMixin
:members:
.. autoclass:: zlmdb._types._StringValuesMixin
:members:
.. autoclass:: zlmdb._types._TimestampValuesMixin
:members:
.. autoclass:: zlmdb._types._UuidSetValuesMixin
:members:
.. autoclass:: zlmdb._types._UuidValuesMixin
:members:
Performance
-----------
Read performance (with Flatbuffers serializer for object storage):
.. image:: _static/performance_test2.png
:width: 605px
Write performance with different serializers:
.. image:: _static/performance_test1.png
:width: 780px
* `zlmdb/tests/test_serialization.py <https://github.com/crossbario/zlmdb/blob/master/zlmdb/tests/test_serialization.py>`_
* `zlmdb/_pmap.py:_FlatBuffersValuesMixin <https://github.com/crossbario/zlmdb/blob/master/zlmdb/_pmap.py#L625>`_
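The core of these serializer benchmarks can be sketched as follows. This is a minimal standalone sketch, not the actual test code; the object shape, counts, and the ``bench`` helper name are illustrative assumptions:

.. code-block:: python

    import pickle
    import time

    def bench(serialize, objs):
        # Measure serializer throughput: objects/sec and total MB produced.
        total_bytes = 0
        started = time.perf_counter()
        for obj in objs:
            total_bytes += len(serialize(obj))
        duration = time.perf_counter() - started
        return len(objs) / duration, total_bytes / (1024 * 1024)

    objs = [{'oid': i, 'name': 'user%d' % i, 'tags': ['a', 'b']}
            for i in range(10000)]
    rate, mb = bench(pickle.dumps, objs)
    print('%.1f objects/sec  %.1f MB' % (rate, mb))

Swapping ``pickle.dumps`` for a CBOR, JSON, or FlatBuffers encoder gives the per-serializer comparison reported above.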
Test system
...........
The test was run on an Intel NUC with Ubuntu Bionic:
.. code-block:: console
(cpy370_1) oberstet@crossbar1:~/scm/crossbario/zlmdb$ uname -a
Linux crossbar1 4.15.0-34-generic #37-Ubuntu SMP Mon Aug 27 15:21:48 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
(cpy370_1) oberstet@crossbar1:~/scm/crossbario/zlmdb$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic
(cpy370_1) oberstet@crossbar1:~/scm/crossbario/zlmdb$ lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  2
Core(s) per socket:  4
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               94
Model name:          Intel(R) Core(TM) i7-6770HQ CPU @ 2.60GHz
Stepping:            3
CPU MHz:             900.102
CPU max MHz:         3500.0000
CPU min MHz:         800.0000
BogoMIPS:            5184.00
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            256K
L3 cache:            6144K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp flush_l1d
Results
.......
Fill & Read performance results for PyPy 3 (v6.0.0):
.. code-block:: console
zlmdb/tests/test_flatbuffers.py::test_pmap_flatbuffers_count Using temporary directory /tmp/tmpg38791il for database
Transaction ended: puts=10000 / dels=0 rows in 821 ms, 12166 rows/sec
Transaction ended: puts=10000 / dels=0 rows in 211 ms, 47390 rows/sec
Transaction ended: puts=10000 / dels=0 rows in 236 ms, 42372 rows/sec
Transaction ended: puts=10000 / dels=0 rows in 216 ms, 46112 rows/sec
Transaction ended: puts=10000 / dels=0 rows in 263 ms, 37881 rows/sec
Transaction ended: 1000000 rows read in 1349 ms, 740900 rows/sec
Transaction ended: 1000000 rows read in 1225 ms, 816188 rows/sec
Transaction ended: 1000000 rows read in 1230 ms, 812895 rows/sec
Transaction ended: 1000000 rows read in 1228 ms, 814307 rows/sec
Transaction ended: 1000000 rows read in 1228 ms, 814307 rows/sec
PASSED
and Write performance with different serializers:
.. code-block:: console
zlmdb/tests/test_serialization.py::test_json_serialization_speed running on:
3.5.3 (fdd60ed87e94, Apr 24 2018, 06:10:04)
[PyPy 6.0.0 with GCC 6.2.0 20160901]
uname_result(system='Linux', node='crossbar1', release='4.15.0-34-generic', version='#37-Ubuntu SMP Mon Aug 27 15:21:48 UTC 2018', machine='x86_64', processor='x86_64')
19384.7 objects/sec 8.5 MB
30204.7 objects/sec 17.0 MB
30075.6 objects/sec 25.4 MB
30390.1 objects/sec 33.9 MB
27105.8 objects/sec 42.4 MB
29900.0 objects/sec 50.9 MB
30598.2 objects/sec 59.3 MB
30044.7 objects/sec 67.8 MB
30140.4 objects/sec 76.3 MB
28741.3 objects/sec 84.8 MB
30598.2 objects/sec max, 84.8 MB bytes total, 847 Bytes bytes/obj
PASSED
zlmdb/tests/test_serialization.py::test_cbor_serialization_speed running on:
3.5.3 (fdd60ed87e94, Apr 24 2018, 06:10:04)
[PyPy 6.0.0 with GCC 6.2.0 20160901]
uname_result(system='Linux', node='crossbar1', release='4.15.0-34-generic', version='#37-Ubuntu SMP Mon Aug 27 15:21:48 UTC 2018', machine='x86_64', processor='x86_64')
24692.3 objects/sec 5.8 MB
32789.0 objects/sec 11.6 MB
34056.9 objects/sec 17.3 MB
32679.4 objects/sec 23.1 MB
33207.5 objects/sec 28.9 MB
33553.0 objects/sec 34.7 MB
27443.7 objects/sec 40.4 MB
31347.2 objects/sec 46.2 MB
33560.1 objects/sec 52.0 MB
33203.0 objects/sec 57.8 MB
34056.9 objects/sec max, 57.8 MB bytes total, 577 Bytes bytes/obj
PASSED
zlmdb/tests/test_serialization.py::test_pickle_serialization_speed running on:
3.5.3 (fdd60ed87e94, Apr 24 2018, 06:10:04)
[PyPy 6.0.0 with GCC 6.2.0 20160901]
uname_result(system='Linux', node='crossbar1', release='4.15.0-34-generic', version='#37-Ubuntu SMP Mon Aug 27 15:21:48 UTC 2018', machine='x86_64', processor='x86_64')
16280.2 objects/sec 8.5 MB
16985.4 objects/sec 17.0 MB
17206.1 objects/sec 25.5 MB
17056.9 objects/sec 34.0 MB
17406.6 objects/sec 42.4 MB
17474.5 objects/sec 50.9 MB
17509.5 objects/sec 59.4 MB
17450.8 objects/sec 67.9 MB
18063.3 objects/sec 76.4 MB
17343.1 objects/sec 84.9 MB
18063.3 objects/sec max, 84.9 MB bytes total, 848 Bytes bytes/obj
PASSED
zlmdb/tests/test_serialization.py::test_flatbuffer_serialization_speed running on:
3.5.3 (fdd60ed87e94, Apr 24 2018, 06:10:04)
[PyPy 6.0.0 with GCC 6.2.0 20160901]
uname_result(system='Linux', node='crossbar1', release='4.15.0-34-generic', version='#37-Ubuntu SMP Mon Aug 27 15:21:48 UTC 2018', machine='x86_64', processor='x86_64')
58094.0 objects/sec 1.6 MB
52665.7 objects/sec 3.2 MB
63701.7 objects/sec 4.8 MB
61753.9 objects/sec 6.4 MB
63488.8 objects/sec 8.0 MB
64583.2 objects/sec 9.6 MB
62175.9 objects/sec 11.2 MB
64443.8 objects/sec 12.8 MB
63375.5 objects/sec 14.4 MB
61808.2 objects/sec 16.0 MB
64583.2 objects/sec max, 16.0 MB bytes total, 159 Bytes bytes/obj
PASSED
Fill & Read performance results for CPython 3 (v3.7.0):
.. code-block:: console
zlmdb/tests/test_flatbuffers.py::test_pmap_flatbuffers_count Using temporary directory /tmp/tmpkxt44ayp for database
Transaction ended: puts=10000 / dels=0 rows in 1747 ms, 5721 rows/sec
Transaction ended: puts=10000 / dels=0 rows in 1716 ms, 5826 rows/sec
Transaction ended: puts=10000 / dels=0 rows in 1752 ms, 5705 rows/sec
Transaction ended: puts=10000 / dels=0 rows in 1742 ms, 5740 rows/sec
Transaction ended: puts=10000 / dels=0 rows in 1756 ms, 5692 rows/sec
Transaction ended: 1000000 rows read in 12931 ms, 77328 rows/sec
Transaction ended: 1000000 rows read in 12926 ms, 77361 rows/sec
Transaction ended: 1000000 rows read in 12956 ms, 77179 rows/sec
Transaction ended: 1000000 rows read in 12977 ms, 77056 rows/sec
Transaction ended: 1000000 rows read in 12860 ms, 77758 rows/sec
PASSED
and Write performance with different serializers:
.. code-block:: console
zlmdb/tests/test_serialization.py::test_json_serialization_speed running on:
3.7.0 (default, Sep 11 2018, 09:56:32)
[GCC 7.3.0]
uname_result(system='Linux', node='crossbar1', release='4.15.0-34-generic', version='#37-Ubuntu SMP Mon Aug 27 15:21:48 UTC 2018', machine='x86_64', processor='x86_64')
18612.4 objects/sec 8.5 MB
17952.2 objects/sec 17.0 MB
18716.1 objects/sec 25.4 MB
18239.6 objects/sec 33.9 MB
18900.9 objects/sec 42.4 MB
18328.9 objects/sec 50.9 MB
18454.4 objects/sec 59.3 MB
18544.6 objects/sec 67.8 MB
18553.5 objects/sec 76.3 MB
18304.3 objects/sec 84.8 MB
18900.9 objects/sec max, 84.8 MB bytes total, 847 Bytes bytes/obj
PASSED
zlmdb/tests/test_serialization.py::test_cbor_serialization_speed running on:
3.7.0 (default, Sep 11 2018, 09:56:32)
[GCC 7.3.0]
uname_result(system='Linux', node='crossbar1', release='4.15.0-34-generic', version='#37-Ubuntu SMP Mon Aug 27 15:21:48 UTC 2018', machine='x86_64', processor='x86_64')
9066.4 objects/sec 5.8 MB
9125.0 objects/sec 11.6 MB
9063.7 objects/sec 17.3 MB
9108.3 objects/sec 23.1 MB
8998.3 objects/sec 28.9 MB
8938.6 objects/sec 34.7 MB
9088.6 objects/sec 40.4 MB
9063.0 objects/sec 46.2 MB
9127.8 objects/sec 52.0 MB
9129.6 objects/sec 57.8 MB
9129.6 objects/sec max, 57.8 MB bytes total, 577 Bytes bytes/obj
PASSED
zlmdb/tests/test_serialization.py::test_pickle_serialization_speed running on:
3.7.0 (default, Sep 11 2018, 09:56:32)
[GCC 7.3.0]
uname_result(system='Linux', node='crossbar1', release='4.15.0-34-generic', version='#37-Ubuntu SMP Mon Aug 27 15:21:48 UTC 2018', machine='x86_64', processor='x86_64')
21894.9 objects/sec 5.8 MB
21725.4 objects/sec 11.6 MB
21793.6 objects/sec 17.4 MB
21755.0 objects/sec 23.2 MB
21873.5 objects/sec 28.9 MB
21651.3 objects/sec 34.7 MB
21620.2 objects/sec 40.5 MB
21810.5 objects/sec 46.3 MB
21956.2 objects/sec 52.1 MB
21133.8 objects/sec 57.9 MB
21956.2 objects/sec max, 57.9 MB bytes total, 578 Bytes bytes/obj
PASSED
zlmdb/tests/test_serialization.py::test_flatbuffer_serialization_speed running on:
3.7.0 (default, Sep 11 2018, 09:56:32)
[GCC 7.3.0]
uname_result(system='Linux', node='crossbar1', release='4.15.0-34-generic', version='#37-Ubuntu SMP Mon Aug 27 15:21:48 UTC 2018', machine='x86_64', processor='x86_64')
6127.6 objects/sec 1.6 MB
6176.0 objects/sec 3.2 MB
6171.0 objects/sec 4.8 MB
6194.4 objects/sec 6.4 MB
6191.5 objects/sec 8.0 MB
6225.2 objects/sec 9.6 MB
6144.9 objects/sec 11.2 MB
6175.1 objects/sec 12.8 MB
6118.0 objects/sec 14.4 MB
6119.6 objects/sec 16.0 MB
6225.2 objects/sec max, 16.0 MB bytes total, 159 Bytes bytes/obj
PASSED
# pylint: disable=invalid-name
# TODO(dkovalev): Add type hints everywhere, so tools like pytypes could work.
import array
import contextlib
import enum
import struct
__all__ = ('Type', 'Builder', 'GetRoot', 'Dumps', 'Loads')
class BitWidth(enum.IntEnum):
"""Supported bit widths of value types.
These are used in the lower 2 bits of a type field to determine the size of
the elements (and or size field) of the item pointed to (e.g. vector).
"""
W8 = 0 # 2^0 = 1 byte
W16 = 1 # 2^1 = 2 bytes
W32 = 2 # 2^2 = 4 bytes
W64 = 3 # 2^3 = 8 bytes
@staticmethod
def U(value):
"""Returns the minimum `BitWidth` to encode unsigned integer value."""
assert value >= 0
if value < (1 << 8):
return BitWidth.W8
elif value < (1 << 16):
return BitWidth.W16
elif value < (1 << 32):
return BitWidth.W32
elif value < (1 << 64):
return BitWidth.W64
else:
raise ValueError('value is too big to encode: %s' % value)
@staticmethod
def I(value):
"""Returns the minimum `BitWidth` to encode signed integer value."""
# -2^(n-1) <= value < 2^(n-1)
# -2^n <= 2 * value < 2^n
# 2 * value < 2^n, when value >= 0 or 2 * (-value) <= 2^n, when value < 0
# 2 * value < 2^n, when value >= 0 or 2 * (-value) - 1 < 2^n, when value < 0
#
# if value >= 0:
# return BitWidth.U(2 * value)
# else:
# return BitWidth.U(2 * (-value) - 1) # ~x = -x - 1
value *= 2
return BitWidth.U(value if value >= 0 else ~value)
@staticmethod
def F(value):
"""Returns the `BitWidth` to encode floating point value."""
if struct.unpack('f', struct.pack('f', value))[0] == value:
return BitWidth.W32
return BitWidth.W64
@staticmethod
def B(byte_width):
return {
1: BitWidth.W8,
2: BitWidth.W16,
4: BitWidth.W32,
8: BitWidth.W64
}[byte_width]
I = {1: 'b', 2: 'h', 4: 'i', 8: 'q'} # Integer formats
U = {1: 'B', 2: 'H', 4: 'I', 8: 'Q'} # Unsigned integer formats
F = {4: 'f', 8: 'd'} # Floating point formats
def _Unpack(fmt, buf):
return struct.unpack(fmt[len(buf)], buf)[0]
def _UnpackVector(fmt, buf, length):
byte_width = len(buf) // length
return struct.unpack('%d%s' % (length, fmt[byte_width]), buf)
def _Pack(fmt, value, byte_width):
return struct.pack(fmt[byte_width], value)
def _PackVector(fmt, values, byte_width):
return struct.pack('%d%s' % (len(values), fmt[byte_width]), *values)
def _Mutate(fmt, buf, value, byte_width, value_bit_width):
if (1 << value_bit_width) <= byte_width:
buf[:byte_width] = _Pack(fmt, value, byte_width)
return True
return False
# Computes how many bytes you'd have to pad to be able to write a
# "scalar_size" scalar if the buffer had grown to "buf_size"
# ("scalar_size" is a power of two).
def _PaddingBytes(buf_size, scalar_size):
# ((buf_size + (scalar_size - 1)) // scalar_size) * scalar_size - buf_size
return -buf_size & (scalar_size - 1)
def _ShiftSlice(s, offset, length):
start = offset + (0 if s.start is None else s.start)
stop = offset + (length if s.stop is None else s.stop)
return slice(start, stop, s.step)
# https://en.cppreference.com/w/cpp/algorithm/lower_bound
def _LowerBound(values, value, pred):
"""Implementation of C++ std::lower_bound() algorithm."""
first, last = 0, len(values)
count = last - first
while count > 0:
i = first
step = count // 2
i += step
if pred(values[i], value):
i += 1
first = i
count -= step + 1
else:
count = step
return first
# https://en.cppreference.com/w/cpp/algorithm/binary_search
def _BinarySearch(values, value, pred=lambda x, y: x < y):
"""Implementation of C++ std::binary_search() algorithm."""
index = _LowerBound(values, value, pred)
if index != len(values) and not pred(value, values[index]):
return index
return -1
class Type(enum.IntEnum):
"""Supported types of encoded data.
These are used as the upper 6 bits of a type field to indicate the actual
type.
"""
NULL = 0
INT = 1
UINT = 2
FLOAT = 3
# Types above stored inline, types below store an offset.
KEY = 4
STRING = 5
INDIRECT_INT = 6
INDIRECT_UINT = 7
INDIRECT_FLOAT = 8
MAP = 9
VECTOR = 10 # Untyped.
VECTOR_INT = 11 # Typed any size (stores no type table).
VECTOR_UINT = 12
VECTOR_FLOAT = 13
VECTOR_KEY = 14
# DEPRECATED, use VECTOR or VECTOR_KEY instead.
# Read test.cpp/FlexBuffersDeprecatedTest() for details on why.
VECTOR_STRING_DEPRECATED = 15
VECTOR_INT2 = 16 # Typed tuple (no type table, no size field).
VECTOR_UINT2 = 17
VECTOR_FLOAT2 = 18
VECTOR_INT3 = 19 # Typed triple (no type table, no size field).
VECTOR_UINT3 = 20
VECTOR_FLOAT3 = 21
VECTOR_INT4 = 22 # Typed quad (no type table, no size field).
VECTOR_UINT4 = 23
VECTOR_FLOAT4 = 24
BLOB = 25
BOOL = 26
VECTOR_BOOL = 36 # To do the same type of conversion of type to vector type
@staticmethod
def Pack(type_, bit_width):
return (int(type_) << 2) | bit_width
@staticmethod
def Unpack(packed_type):
return 1 << (packed_type & 0b11), Type(packed_type >> 2)
@staticmethod
def IsInline(type_):
return type_ <= Type.FLOAT or type_ == Type.BOOL
@staticmethod
def IsTypedVector(type_):
return Type.VECTOR_INT <= type_ <= Type.VECTOR_STRING_DEPRECATED or \
type_ == Type.VECTOR_BOOL
@staticmethod
def IsTypedVectorElementType(type_):
return Type.INT <= type_ <= Type.STRING or type_ == Type.BOOL
@staticmethod
def ToTypedVectorElementType(type_):
if not Type.IsTypedVector(type_):
raise ValueError('must be typed vector type')
return Type(type_ - Type.VECTOR_INT + Type.INT)
@staticmethod
def IsFixedTypedVector(type_):
return Type.VECTOR_INT2 <= type_ <= Type.VECTOR_FLOAT4
@staticmethod
def IsFixedTypedVectorElementType(type_):
return Type.INT <= type_ <= Type.FLOAT
@staticmethod
def ToFixedTypedVectorElementType(type_):
if not Type.IsFixedTypedVector(type_):
raise ValueError('must be fixed typed vector type')
# 3 types each, starting from length 2.
fixed_type = type_ - Type.VECTOR_INT2
return Type(fixed_type % 3 + Type.INT), fixed_type // 3 + 2
@staticmethod
def ToTypedVector(element_type, fixed_len=0):
"""Converts element type to corresponding vector type.
Args:
element_type: vector element type
fixed_len: number of elements: 0 for typed vector; 2, 3, or 4 for fixed
typed vector.
Returns:
Typed vector type or fixed typed vector type.
"""
if fixed_len == 0:
if not Type.IsTypedVectorElementType(element_type):
raise ValueError('must be typed vector element type')
else:
if not Type.IsFixedTypedVectorElementType(element_type):
raise ValueError('must be fixed typed vector element type')
offset = element_type - Type.INT
if fixed_len == 0:
return Type(offset + Type.VECTOR_INT) # TypedVector
elif fixed_len == 2:
return Type(offset + Type.VECTOR_INT2) # FixedTypedVector
elif fixed_len == 3:
return Type(offset + Type.VECTOR_INT3) # FixedTypedVector
elif fixed_len == 4:
return Type(offset + Type.VECTOR_INT4) # FixedTypedVector
else:
raise ValueError('unsupported fixed_len: %s' % fixed_len)
class Buf:
"""Class to access underlying buffer object starting from the given offset."""
def __init__(self, buf, offset):
self._buf = buf
self._offset = offset if offset >= 0 else len(buf) + offset
self._length = len(buf) - self._offset
def __getitem__(self, key):
if isinstance(key, slice):
return self._buf[_ShiftSlice(key, self._offset, self._length)]
elif isinstance(key, int):
return self._buf[self._offset + key]
else:
raise TypeError('invalid key type')
def __setitem__(self, key, value):
if isinstance(key, slice):
self._buf[_ShiftSlice(key, self._offset, self._length)] = value
elif isinstance(key, int):
      self._buf[self._offset + key] = value
else:
raise TypeError('invalid key type')
def __repr__(self):
return 'buf[%d:]' % self._offset
def Find(self, sub):
"""Returns the lowest index where the sub subsequence is found."""
return self._buf[self._offset:].find(sub)
def Slice(self, offset):
"""Returns new `Buf` which starts from the given offset."""
return Buf(self._buf, self._offset + offset)
def Indirect(self, offset, byte_width):
"""Return new `Buf` based on the encoded offset (indirect encoding)."""
return self.Slice(offset - _Unpack(U, self[offset:offset + byte_width]))
class Object:
"""Base class for all non-trivial data accessors."""
__slots__ = '_buf', '_byte_width'
def __init__(self, buf, byte_width):
self._buf = buf
self._byte_width = byte_width
@property
def ByteWidth(self):
return self._byte_width
class Sized(Object):
"""Base class for all data accessors which need to read encoded size."""
__slots__ = '_size',
def __init__(self, buf, byte_width, size=0):
super().__init__(buf, byte_width)
if size == 0:
self._size = _Unpack(U, self.SizeBytes)
else:
self._size = size
@property
def SizeBytes(self):
return self._buf[-self._byte_width:0]
def __len__(self):
return self._size
class Blob(Sized):
"""Data accessor for the encoded blob bytes."""
__slots__ = ()
@property
def Bytes(self):
return self._buf[0:len(self)]
def __repr__(self):
return 'Blob(%s, size=%d)' % (self._buf, len(self))
class String(Sized):
"""Data accessor for the encoded string bytes."""
__slots__ = ()
@property
def Bytes(self):
return self._buf[0:len(self)]
def Mutate(self, value):
"""Mutates underlying string bytes in place.
Args:
value: New string to replace the existing one. New string must have less
or equal UTF-8-encoded bytes than the existing one to successfully
mutate underlying byte buffer.
Returns:
Whether the value was mutated or not.
"""
encoded = value.encode('utf-8')
n = len(encoded)
if n <= len(self):
self._buf[-self._byte_width:0] = _Pack(U, n, self._byte_width)
self._buf[0:n] = encoded
self._buf[n:len(self)] = bytearray(len(self) - n)
return True
return False
def __str__(self):
return self.Bytes.decode('utf-8')
def __repr__(self):
return 'String(%s, size=%d)' % (self._buf, len(self))
class Key(Object):
"""Data accessor for the encoded key bytes."""
__slots__ = ()
def __init__(self, buf, byte_width):
assert byte_width == 1
super().__init__(buf, byte_width)
@property
def Bytes(self):
return self._buf[0:len(self)]
def __len__(self):
return self._buf.Find(0)
def __str__(self):
return self.Bytes.decode('ascii')
def __repr__(self):
return 'Key(%s, size=%d)' % (self._buf, len(self))
class Vector(Sized):
"""Data accessor for the encoded vector bytes."""
__slots__ = ()
def __getitem__(self, index):
if index < 0 or index >= len(self):
raise IndexError('vector index %s is out of [0, %d) range' % \
(index, len(self)))
packed_type = self._buf[len(self) * self._byte_width + index]
buf = self._buf.Slice(index * self._byte_width)
return Ref.PackedType(buf, self._byte_width, packed_type)
@property
def Value(self):
"""Returns the underlying encoded data as a list object."""
return [e.Value for e in self]
def __repr__(self):
return 'Vector(%s, byte_width=%d, size=%d)' % \
(self._buf, self._byte_width, self._size)
class TypedVector(Sized):
"""Data accessor for the encoded typed vector or fixed typed vector bytes."""
__slots__ = '_element_type', '_size'
def __init__(self, buf, byte_width, element_type, size=0):
super().__init__(buf, byte_width, size)
if element_type == Type.STRING:
# These can't be accessed as strings, since we don't know the bit-width
# of the size field, see the declaration of
# FBT_VECTOR_STRING_DEPRECATED above for details.
# We change the type here to be keys, which are a subtype of strings,
# and will ignore the size field. This will truncate strings with
# embedded nulls.
element_type = Type.KEY
self._element_type = element_type
@property
def Bytes(self):
return self._buf[:self._byte_width * len(self)]
@property
def ElementType(self):
return self._element_type
def __getitem__(self, index):
if index < 0 or index >= len(self):
raise IndexError('vector index %s is out of [0, %d) range' % \
(index, len(self)))
buf = self._buf.Slice(index * self._byte_width)
return Ref(buf, self._byte_width, 1, self._element_type)
@property
def Value(self):
"""Returns underlying data as list object."""
if not self:
return []
if self._element_type is Type.BOOL:
return [bool(e) for e in _UnpackVector(U, self.Bytes, len(self))]
elif self._element_type is Type.INT:
return list(_UnpackVector(I, self.Bytes, len(self)))
elif self._element_type is Type.UINT:
return list(_UnpackVector(U, self.Bytes, len(self)))
elif self._element_type is Type.FLOAT:
return list(_UnpackVector(F, self.Bytes, len(self)))
elif self._element_type is Type.KEY:
return [e.AsKey for e in self]
elif self._element_type is Type.STRING:
return [e.AsString for e in self]
else:
raise TypeError('unsupported element_type: %s' % self._element_type)
def __repr__(self):
return 'TypedVector(%s, byte_width=%d, element_type=%s, size=%d)' % \
(self._buf, self._byte_width, self._element_type, self._size)
class Map(Vector):
"""Data accessor for the encoded map bytes."""
@staticmethod
def CompareKeys(a, b):
if isinstance(a, Ref):
a = a.AsKeyBytes
if isinstance(b, Ref):
b = b.AsKeyBytes
return a < b
def __getitem__(self, key):
if isinstance(key, int):
return super().__getitem__(key)
index = _BinarySearch(self.Keys, key.encode('ascii'), self.CompareKeys)
if index != -1:
return super().__getitem__(index)
raise KeyError(key)
@property
def Keys(self):
byte_width = _Unpack(U, self._buf[-2 * self._byte_width:-self._byte_width])
buf = self._buf.Indirect(-3 * self._byte_width, self._byte_width)
return TypedVector(buf, byte_width, Type.KEY)
@property
def Values(self):
return Vector(self._buf, self._byte_width)
@property
def Value(self):
return {k.Value: v.Value for k, v in zip(self.Keys, self.Values)}
def __repr__(self):
return 'Map(%s, size=%d)' % (self._buf, len(self))
class Ref:
"""Data accessor for the encoded data bytes."""
__slots__ = '_buf', '_parent_width', '_byte_width', '_type'
@staticmethod
def PackedType(buf, parent_width, packed_type):
byte_width, type_ = Type.Unpack(packed_type)
return Ref(buf, parent_width, byte_width, type_)
def __init__(self, buf, parent_width, byte_width, type_):
self._buf = buf
self._parent_width = parent_width
self._byte_width = byte_width
self._type = type_
def __repr__(self):
return 'Ref(%s, parent_width=%d, byte_width=%d, type_=%s)' % \
(self._buf, self._parent_width, self._byte_width, self._type)
@property
def _Bytes(self):
return self._buf[:self._parent_width]
def _ConvertError(self, target_type):
raise TypeError('cannot convert %s to %s' % (self._type, target_type))
def _Indirect(self):
return self._buf.Indirect(0, self._parent_width)
@property
def IsNull(self):
return self._type is Type.NULL
@property
def IsBool(self):
return self._type is Type.BOOL
@property
def AsBool(self):
if self._type is Type.BOOL:
return bool(_Unpack(U, self._Bytes))
else:
return self.AsInt != 0
def MutateBool(self, value):
"""Mutates underlying boolean value bytes in place.
Args:
value: New boolean value.
Returns:
Whether the value was mutated or not.
"""
return self.IsBool and \
_Mutate(U, self._buf, value, self._parent_width, BitWidth.W8)
@property
def IsNumeric(self):
return self.IsInt or self.IsFloat
@property
def IsInt(self):
return self._type in (Type.INT, Type.INDIRECT_INT, Type.UINT,
Type.INDIRECT_UINT)
@property
def AsInt(self):
"""Returns current reference as integer value."""
if self.IsNull:
return 0
elif self.IsBool:
return int(self.AsBool)
elif self._type is Type.INT:
return _Unpack(I, self._Bytes)
elif self._type is Type.INDIRECT_INT:
return _Unpack(I, self._Indirect()[:self._byte_width])
    elif self._type is Type.UINT:
return _Unpack(U, self._Bytes)
elif self._type is Type.INDIRECT_UINT:
return _Unpack(U, self._Indirect()[:self._byte_width])
elif self.IsString:
return len(self.AsString)
elif self.IsKey:
return len(self.AsKey)
elif self.IsBlob:
return len(self.AsBlob)
elif self.IsVector:
return len(self.AsVector)
elif self.IsTypedVector:
return len(self.AsTypedVector)
elif self.IsFixedTypedVector:
return len(self.AsFixedTypedVector)
else:
raise self._ConvertError(Type.INT)
def MutateInt(self, value):
"""Mutates underlying integer value bytes in place.
Args:
value: New integer value. It must fit to the byte size of the existing
encoded value.
Returns:
Whether the value was mutated or not.
"""
if self._type is Type.INT:
return _Mutate(I, self._buf, value, self._parent_width, BitWidth.I(value))
elif self._type is Type.INDIRECT_INT:
return _Mutate(I, self._Indirect(), value, self._byte_width,
BitWidth.I(value))
elif self._type is Type.UINT:
return _Mutate(U, self._buf, value, self._parent_width, BitWidth.U(value))
elif self._type is Type.INDIRECT_UINT:
return _Mutate(U, self._Indirect(), value, self._byte_width,
BitWidth.U(value))
else:
return False
@property
def IsFloat(self):
return self._type in (Type.FLOAT, Type.INDIRECT_FLOAT)
@property
def AsFloat(self):
"""Returns current reference as floating point value."""
if self.IsNull:
return 0.0
elif self.IsBool:
return float(self.AsBool)
elif self.IsInt:
return float(self.AsInt)
elif self._type is Type.FLOAT:
return _Unpack(F, self._Bytes)
elif self._type is Type.INDIRECT_FLOAT:
return _Unpack(F, self._Indirect()[:self._byte_width])
elif self.IsString:
return float(self.AsString)
elif self.IsVector:
return float(len(self.AsVector))
    elif self.IsTypedVector:
      return float(len(self.AsTypedVector))
    elif self.IsFixedTypedVector:
      return float(len(self.AsFixedTypedVector))
else:
raise self._ConvertError(Type.FLOAT)
def MutateFloat(self, value):
"""Mutates underlying floating point value bytes in place.
Args:
value: New float value. It must fit to the byte size of the existing
encoded value.
Returns:
Whether the value was mutated or not.
"""
if self._type is Type.FLOAT:
return _Mutate(F, self._buf, value, self._parent_width,
BitWidth.B(self._parent_width))
elif self._type is Type.INDIRECT_FLOAT:
return _Mutate(F, self._Indirect(), value, self._byte_width,
BitWidth.B(self._byte_width))
else:
return False
@property
def IsKey(self):
return self._type is Type.KEY
@property
def AsKeyBytes(self):
if self.IsKey:
return Key(self._Indirect(), self._byte_width).Bytes
else:
raise self._ConvertError(Type.KEY)
@property
def AsKey(self):
if self.IsKey:
return str(Key(self._Indirect(), self._byte_width))
else:
raise self._ConvertError(Type.KEY)
@property
def IsString(self):
return self._type is Type.STRING
@property
def AsString(self):
if self.IsString:
return str(String(self._Indirect(), self._byte_width))
elif self.IsKey:
return self.AsKey
else:
raise self._ConvertError(Type.STRING)
def MutateString(self, value):
return String(self._Indirect(), self._byte_width).Mutate(value)
@property
def IsBlob(self):
return self._type is Type.BLOB
@property
def AsBlob(self):
if self.IsBlob:
return Blob(self._Indirect(), self._byte_width).Bytes
else:
raise self._ConvertError(Type.BLOB)
@property
def IsAnyVector(self):
    return self.IsVector or self.IsTypedVector or self.IsFixedTypedVector
@property
def IsVector(self):
return self._type in (Type.VECTOR, Type.MAP)
@property
def AsVector(self):
if self.IsVector:
return Vector(self._Indirect(), self._byte_width)
else:
raise self._ConvertError(Type.VECTOR)
@property
def IsTypedVector(self):
return Type.IsTypedVector(self._type)
@property
def AsTypedVector(self):
if self.IsTypedVector:
return TypedVector(self._Indirect(), self._byte_width,
Type.ToTypedVectorElementType(self._type))
else:
raise self._ConvertError('TYPED_VECTOR')
@property
def IsFixedTypedVector(self):
return Type.IsFixedTypedVector(self._type)
@property
def AsFixedTypedVector(self):
if self.IsFixedTypedVector:
element_type, size = Type.ToFixedTypedVectorElementType(self._type)
return TypedVector(self._Indirect(), self._byte_width, element_type, size)
else:
raise self._ConvertError('FIXED_TYPED_VECTOR')
@property
def IsMap(self):
return self._type is Type.MAP
@property
def AsMap(self):
if self.IsMap:
return Map(self._Indirect(), self._byte_width)
else:
raise self._ConvertError(Type.MAP)
@property
def Value(self):
"""Converts current reference to value of corresponding type.
This is equivalent to calling `AsInt` for integer values, `AsFloat` for
floating point values, etc.
Returns:
Value of corresponding type.
"""
if self.IsNull:
return None
elif self.IsBool:
return self.AsBool
elif self.IsInt:
return self.AsInt
elif self.IsFloat:
return self.AsFloat
elif self.IsString:
return self.AsString
elif self.IsKey:
return self.AsKey
elif self.IsBlob:
return self.AsBlob
elif self.IsMap:
return self.AsMap.Value
elif self.IsVector:
return self.AsVector.Value
elif self.IsTypedVector:
return self.AsTypedVector.Value
elif self.IsFixedTypedVector:
return self.AsFixedTypedVector.Value
else:
raise TypeError('cannot convert %r to value' % self)
def _IsIterable(obj):
try:
iter(obj)
return True
except TypeError:
return False
class Value:
"""Class to represent given value during the encoding process."""
@staticmethod
def Null():
return Value(0, Type.NULL, BitWidth.W8)
@staticmethod
def Bool(value):
return Value(value, Type.BOOL, BitWidth.W8)
@staticmethod
def Int(value, bit_width):
return Value(value, Type.INT, bit_width)
@staticmethod
def UInt(value, bit_width):
return Value(value, Type.UINT, bit_width)
@staticmethod
def Float(value, bit_width):
return Value(value, Type.FLOAT, bit_width)
@staticmethod
def Key(offset):
return Value(offset, Type.KEY, BitWidth.W8)
def __init__(self, value, type_, min_bit_width):
self._value = value
self._type = type_
# For scalars: of itself, for vector: of its elements, for string: length.
self._min_bit_width = min_bit_width
@property
def Value(self):
return self._value
@property
def Type(self):
return self._type
@property
def MinBitWidth(self):
return self._min_bit_width
def StoredPackedType(self, parent_bit_width=BitWidth.W8):
return Type.Pack(self._type, self.StoredWidth(parent_bit_width))
# We have an absolute offset, but want to store a relative offset
# elem_index elements beyond the current buffer end. Since whether
# the relative offset fits in a certain byte_width depends on
# the size of the elements before it (and their alignment), we have
# to test for each size in turn.
def ElemWidth(self, buf_size, elem_index=0):
if Type.IsInline(self._type):
return self._min_bit_width
for byte_width in 1, 2, 4, 8:
offset_loc = buf_size + _PaddingBytes(buf_size, byte_width) + \
elem_index * byte_width
bit_width = BitWidth.U(offset_loc - self._value)
if byte_width == (1 << bit_width):
return bit_width
raise ValueError('relative offset is too big')
def StoredWidth(self, parent_bit_width=BitWidth.W8):
if Type.IsInline(self._type):
return max(self._min_bit_width, parent_bit_width)
return self._min_bit_width
def __repr__(self):
return 'Value(%s, %s, %s)' % (self._value, self._type, self._min_bit_width)
def __str__(self):
return str(self._value)
def InMap(func):
def wrapper(self, *args, **kwargs):
if isinstance(args[0], str):
self.Key(args[0])
func(self, *args[1:], **kwargs)
else:
func(self, *args, **kwargs)
return wrapper
def InMapForString(func):
def wrapper(self, *args):
if len(args) == 1:
func(self, args[0])
elif len(args) == 2:
self.Key(args[0])
func(self, args[1])
else:
raise ValueError('invalid number of arguments')
return wrapper
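# Example (illustrative): the two decorators above let a single method serve
# both vector and map contexts. Inside a map, a leading string argument is
# consumed as the key before the value is encoded:
#
#   fbb = Builder()
#   with fbb.Map():
#     fbb.Int('answer', 42)      # Key('answer') then Int(42), via @InMap
#     fbb.String('name', 'Ada')  # Key('name') then String('Ada'), via @InMapForString
#   buf = fbb.Finish()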
class Pool:
"""Collection of (data, offset) pairs sorted by data for quick access."""
def __init__(self):
self._pool = [] # sorted list of (data, offset) tuples
def FindOrInsert(self, data, offset):
do = data, offset
index = _BinarySearch(self._pool, do, lambda a, b: a[0] < b[0])
if index != -1:
_, offset = self._pool[index]
return offset
self._pool.insert(index, do)
return None
def Clear(self):
self._pool = []
@property
def Elements(self):
return [data for data, _ in self._pool]
class Builder:
"""Helper class to encode structural data into flexbuffers format."""
def __init__(self,
share_strings=False,
share_keys=True,
force_min_bit_width=BitWidth.W8):
self._share_strings = share_strings
self._share_keys = share_keys
self._force_min_bit_width = force_min_bit_width
self._string_pool = Pool()
self._key_pool = Pool()
self._finished = False
self._buf = bytearray()
self._stack = []
def __len__(self):
return len(self._buf)
@property
def StringPool(self):
return self._string_pool
@property
def KeyPool(self):
return self._key_pool
def Clear(self):
self._string_pool.Clear()
self._key_pool.Clear()
self._finished = False
self._buf = bytearray()
self._stack = []
def Finish(self):
"""Finishes encoding process and returns underlying buffer."""
if self._finished:
raise RuntimeError('builder has been already finished')
# If you hit this exception, you likely have objects that were never
# included in a parent. You need to have exactly one root to finish a
# buffer. Check your Start/End calls are matched, and all objects are inside
# some other object.
if len(self._stack) != 1:
raise RuntimeError('internal stack size must be one')
value = self._stack[0]
byte_width = self._Align(value.ElemWidth(len(self._buf)))
self._WriteAny(value, byte_width=byte_width) # Root value
self._Write(U, value.StoredPackedType(), byte_width=1) # Root type
self._Write(U, byte_width, byte_width=1) # Root size
    self._finished = True
return self._buf
def _ReadKey(self, offset):
key = self._buf[offset:]
return key[:key.find(0)]
def _Align(self, alignment):
byte_width = 1 << alignment
self._buf.extend(b'\x00' * _PaddingBytes(len(self._buf), byte_width))
return byte_width
def _Write(self, fmt, value, byte_width):
self._buf.extend(_Pack(fmt, value, byte_width))
def _WriteVector(self, fmt, values, byte_width):
self._buf.extend(_PackVector(fmt, values, byte_width))
def _WriteOffset(self, offset, byte_width):
relative_offset = len(self._buf) - offset
assert byte_width == 8 or relative_offset < (1 << (8 * byte_width))
self._Write(U, relative_offset, byte_width)
def _WriteAny(self, value, byte_width):
fmt = {
Type.NULL: U, Type.BOOL: U, Type.INT: I, Type.UINT: U, Type.FLOAT: F
}.get(value.Type)
if fmt:
self._Write(fmt, value.Value, byte_width)
else:
self._WriteOffset(value.Value, byte_width)
def _WriteBlob(self, data, append_zero, type_):
bit_width = BitWidth.U(len(data))
byte_width = self._Align(bit_width)
self._Write(U, len(data), byte_width)
loc = len(self._buf)
self._buf.extend(data)
if append_zero:
self._buf.append(0)
self._stack.append(Value(loc, type_, bit_width))
return loc
def _WriteScalarVector(self, element_type, byte_width, elements, fixed):
"""Writes scalar vector elements to the underlying buffer."""
bit_width = BitWidth.B(byte_width)
# If you get this exception, you're trying to write a vector with a size
# field that is bigger than the scalars you're trying to write (e.g. a
# byte vector > 255 elements). For such types, write a "blob" instead.
if BitWidth.U(len(elements)) > bit_width:
raise ValueError('too many elements for the given byte_width')
self._Align(bit_width)
if not fixed:
self._Write(U, len(elements), byte_width)
loc = len(self._buf)
fmt = {Type.INT: I, Type.UINT: U, Type.FLOAT: F}.get(element_type)
if not fmt:
raise TypeError('unsupported element_type')
self._WriteVector(fmt, elements, byte_width)
type_ = Type.ToTypedVector(element_type, len(elements) if fixed else 0)
self._stack.append(Value(loc, type_, bit_width))
return loc
def _CreateVector(self, elements, typed, fixed, keys=None):
"""Writes vector elements to the underlying buffer."""
length = len(elements)
if fixed and not typed:
raise ValueError('fixed vector must be typed')
# Figure out smallest bit width we can store this vector with.
bit_width = max(self._force_min_bit_width, BitWidth.U(length))
prefix_elems = 1 # Vector size
if keys:
bit_width = max(bit_width, keys.ElemWidth(len(self._buf)))
prefix_elems += 2 # Offset to the keys vector and its byte width.
vector_type = Type.KEY
# Check bit widths and types for all elements.
for i, e in enumerate(elements):
bit_width = max(bit_width, e.ElemWidth(len(self._buf), prefix_elems + i))
if typed:
if i == 0:
vector_type = e.Type
else:
if vector_type != e.Type:
raise RuntimeError('typed vector elements must be of the same type')
if fixed and not Type.IsFixedTypedVectorElementType(vector_type):
raise RuntimeError('must be fixed typed vector element type')
byte_width = self._Align(bit_width)
# Write vector. First the keys width/offset if available, and size.
if keys:
self._WriteOffset(keys.Value, byte_width)
self._Write(U, 1 << keys.MinBitWidth, byte_width)
if not fixed:
self._Write(U, length, byte_width)
# Then the actual data.
loc = len(self._buf)
for e in elements:
self._WriteAny(e, byte_width)
# Then the types.
if not typed:
for e in elements:
self._buf.append(e.StoredPackedType(bit_width))
if keys:
type_ = Type.MAP
else:
if typed:
type_ = Type.ToTypedVector(vector_type, length if fixed else 0)
else:
type_ = Type.VECTOR
return Value(loc, type_, bit_width)
def _PushIndirect(self, value, type_, bit_width):
byte_width = self._Align(bit_width)
loc = len(self._buf)
fmt = {
Type.INDIRECT_INT: I,
Type.INDIRECT_UINT: U,
Type.INDIRECT_FLOAT: F
}[type_]
self._Write(fmt, value, byte_width)
self._stack.append(Value(loc, type_, bit_width))
@InMapForString
def String(self, value):
"""Encodes string value."""
reset_to = len(self._buf)
encoded = value.encode('utf-8')
loc = self._WriteBlob(encoded, append_zero=True, type_=Type.STRING)
if self._share_strings:
prev_loc = self._string_pool.FindOrInsert(encoded, loc)
if prev_loc is not None:
del self._buf[reset_to:]
self._stack[-1]._value = loc = prev_loc # pylint: disable=protected-access
return loc
@InMap
def Blob(self, value):
"""Encodes binary blob value.
Args:
value: A byte/bytearray value to encode
Returns:
Offset of the encoded value in underlying the byte buffer.
"""
return self._WriteBlob(value, append_zero=False, type_=Type.BLOB)
def Key(self, value):
"""Encodes key value.
Args:
value: A byte/bytearray/str value to encode. Byte object must not contain
zero bytes. String object must be convertible to ASCII.
Returns:
Offset of the encoded value in the underlying byte buffer.
"""
if isinstance(value, (bytes, bytearray)):
encoded = value
else:
encoded = value.encode('ascii')
if 0 in encoded:
raise ValueError('key contains zero byte')
loc = len(self._buf)
self._buf.extend(encoded)
self._buf.append(0)
if self._share_keys:
prev_loc = self._key_pool.FindOrInsert(encoded, loc)
if prev_loc is not None:
del self._buf[loc:]
loc = prev_loc
self._stack.append(Value.Key(loc))
return loc
def Null(self, key=None):
"""Encodes None value."""
if key:
self.Key(key)
self._stack.append(Value.Null())
@InMap
def Bool(self, value):
"""Encodes boolean value.
Args:
value: A boolean value.
"""
self._stack.append(Value.Bool(value))
@InMap
def Int(self, value, byte_width=0):
"""Encodes signed integer value.
Args:
value: A signed integer value.
byte_width: Number of bytes to use: 1, 2, 4, or 8.
"""
bit_width = BitWidth.I(value) if byte_width == 0 else BitWidth.B(byte_width)
self._stack.append(Value.Int(value, bit_width))
@InMap
def IndirectInt(self, value, byte_width=0):
"""Encodes signed integer value indirectly.
Args:
value: A signed integer value.
byte_width: Number of bytes to use: 1, 2, 4, or 8.
"""
bit_width = BitWidth.I(value) if byte_width == 0 else BitWidth.B(byte_width)
self._PushIndirect(value, Type.INDIRECT_INT, bit_width)
@InMap
def UInt(self, value, byte_width=0):
"""Encodes unsigned integer value.
Args:
value: An unsigned integer value.
byte_width: Number of bytes to use: 1, 2, 4, or 8.
"""
bit_width = BitWidth.U(value) if byte_width == 0 else BitWidth.B(byte_width)
self._stack.append(Value.UInt(value, bit_width))
@InMap
def IndirectUInt(self, value, byte_width=0):
"""Encodes unsigned integer value indirectly.
Args:
value: An unsigned integer value.
byte_width: Number of bytes to use: 1, 2, 4, or 8.
"""
bit_width = BitWidth.U(value) if byte_width == 0 else BitWidth.B(byte_width)
self._PushIndirect(value, Type.INDIRECT_UINT, bit_width)
@InMap
def Float(self, value, byte_width=0):
"""Encodes floating point value.
Args:
value: A floating point value.
byte_width: Number of bytes to use: 4 or 8.
"""
bit_width = BitWidth.F(value) if byte_width == 0 else BitWidth.B(byte_width)
self._stack.append(Value.Float(value, bit_width))
@InMap
def IndirectFloat(self, value, byte_width=0):
"""Encodes floating point value indirectly.
Args:
value: A floating point value.
byte_width: Number of bytes to use: 4 or 8.
"""
bit_width = BitWidth.F(value) if byte_width == 0 else BitWidth.B(byte_width)
self._PushIndirect(value, Type.INDIRECT_FLOAT, bit_width)
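  # Example (illustrative): an indirect scalar stores its payload out of line
  # and pushes only an offset onto the stack, so a single wide value does not
  # force every element of an enclosing vector to the wider byte width:
  #
  #   fbb = Builder()
  #   with fbb.Vector():
  #     fbb.Int(1)
  #     fbb.Int(2)
  #     fbb.IndirectInt(2 ** 40)  # offset inline, 8-byte payload out of line
  #   buf = fbb.Finish()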
def _StartVector(self):
"""Starts vector construction."""
return len(self._stack)
def _EndVector(self, start, typed, fixed):
"""Finishes vector construction by encodung its elements."""
vec = self._CreateVector(self._stack[start:], typed, fixed)
del self._stack[start:]
self._stack.append(vec)
return vec.Value
@contextlib.contextmanager
def Vector(self, key=None):
if key:
self.Key(key)
try:
start = self._StartVector()
yield self
finally:
self._EndVector(start, typed=False, fixed=False)
@InMap
def VectorFromElements(self, elements):
"""Encodes sequence of any elements as a vector.
Args:
elements: sequence of elements, they may have different types.
"""
with self.Vector():
for e in elements:
self.Add(e)
@contextlib.contextmanager
def TypedVector(self, key=None):
if key:
self.Key(key)
try:
start = self._StartVector()
yield self
finally:
self._EndVector(start, typed=True, fixed=False)
@InMap
def TypedVectorFromElements(self, elements, element_type=None):
"""Encodes sequence of elements of the same type as typed vector.
Args:
elements: Sequence of elements, they must be of the same type.
element_type: Suggested element type. Setting it to None means determining
correct value automatically based on the given elements.
"""
if isinstance(elements, array.array):
if elements.typecode == 'f':
self._WriteScalarVector(Type.FLOAT, 4, elements, fixed=False)
elif elements.typecode == 'd':
self._WriteScalarVector(Type.FLOAT, 8, elements, fixed=False)
elif elements.typecode in ('b', 'h', 'i', 'l', 'q'):
self._WriteScalarVector(
Type.INT, elements.itemsize, elements, fixed=False)
elif elements.typecode in ('B', 'H', 'I', 'L', 'Q'):
self._WriteScalarVector(
Type.UINT, elements.itemsize, elements, fixed=False)
else:
raise ValueError('unsupported array typecode: %s' % elements.typecode)
else:
add = self.Add if element_type is None else self.Adder(element_type)
with self.TypedVector():
for e in elements:
add(e)
@InMap
def FixedTypedVectorFromElements(self,
elements,
element_type=None,
byte_width=0):
"""Encodes sequence of elements of the same type as fixed typed vector.
Args:
elements: Sequence of elements, they must be of the same type. Allowed
types are `Type.INT`, `Type.UINT`, `Type.FLOAT`. Allowed number of
elements are 2, 3, or 4.
element_type: Suggested element type. Setting it to None means determining
correct value automatically based on the given elements.
byte_width: Number of bytes to use per element. For `Type.INT` and
`Type.UINT`: 1, 2, 4, or 8. For `Type.FLOAT`: 4 or 8. Setting it to 0
means determining correct value automatically based on the given
elements.
"""
if not 2 <= len(elements) <= 4:
raise ValueError('only 2, 3, or 4 elements are supported')
types = {type(e) for e in elements}
if len(types) != 1:
raise TypeError('all elements must be of the same type')
type_, = types
if element_type is None:
element_type = {int: Type.INT, float: Type.FLOAT}.get(type_)
if not element_type:
raise TypeError('unsupported element_type: %s' % type_)
if byte_width == 0:
width = {
Type.UINT: BitWidth.U,
Type.INT: BitWidth.I,
Type.FLOAT: BitWidth.F
}[element_type]
byte_width = 1 << max(width(e) for e in elements)
self._WriteScalarVector(element_type, byte_width, elements, fixed=True)
def _StartMap(self):
"""Starts map construction."""
return len(self._stack)
def _EndMap(self, start):
"""Finishes map construction by encodung its elements."""
# Interleaved keys and values on the stack.
stack = self._stack[start:]
if len(stack) % 2 != 0:
raise RuntimeError('must be even number of keys and values')
for key in stack[::2]:
if key.Type is not Type.KEY:
raise RuntimeError('all map keys must be of %s type' % Type.KEY)
pairs = zip(stack[::2], stack[1::2]) # [(key, value), ...]
pairs = sorted(pairs, key=lambda pair: self._ReadKey(pair[0].Value))
del self._stack[start:]
for pair in pairs:
self._stack.extend(pair)
keys = self._CreateVector(self._stack[start::2], typed=True, fixed=False)
values = self._CreateVector(
self._stack[start + 1::2], typed=False, fixed=False, keys=keys)
del self._stack[start:]
self._stack.append(values)
return values.Value
@contextlib.contextmanager
def Map(self, key=None):
if key:
self.Key(key)
try:
start = self._StartMap()
yield self
finally:
self._EndMap(start)
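  # Example (illustrative): the context managers make nesting explicit and
  # guarantee the matching _EndMap/_EndVector call even if the body raises:
  #
  #   fbb = Builder()
  #   with fbb.Map():
  #     fbb.Key('point')
  #     with fbb.Vector():
  #       fbb.Int(1)
  #       fbb.Int(2)
  #   data = fbb.Finish()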
def MapFromElements(self, elements):
start = self._StartMap()
for k, v in elements.items():
self.Key(k)
self.Add(v)
self._EndMap(start)
def Adder(self, type_):
return {
Type.BOOL: self.Bool,
Type.INT: self.Int,
Type.INDIRECT_INT: self.IndirectInt,
Type.UINT: self.UInt,
Type.INDIRECT_UINT: self.IndirectUInt,
Type.FLOAT: self.Float,
Type.INDIRECT_FLOAT: self.IndirectFloat,
Type.KEY: self.Key,
Type.BLOB: self.Blob,
Type.STRING: self.String,
}[type_]
@InMapForString
def Add(self, value):
"""Encodes value of any supported type."""
if value is None:
self.Null()
elif isinstance(value, bool):
self.Bool(value)
elif isinstance(value, int):
self.Int(value)
elif isinstance(value, float):
self.Float(value)
elif isinstance(value, str):
self.String(value)
elif isinstance(value, (bytes, bytearray)):
self.Blob(value)
elif isinstance(value, dict):
with self.Map():
for k, v in value.items():
self.Key(k)
self.Add(v)
elif isinstance(value, array.array):
self.TypedVectorFromElements(value)
elif _IsIterable(value):
self.VectorFromElements(value)
else:
raise TypeError('unsupported python type: %s' % type(value))
@property
def LastValue(self):
return self._stack[-1]
@InMap
def ReuseValue(self, value):
self._stack.append(value)
def GetRoot(buf):
"""Returns root `Ref` object for the given buffer."""
if len(buf) < 3:
raise ValueError('buffer is too small')
byte_width = buf[-1]
return Ref.PackedType(
Buf(buf, -(2 + byte_width)), byte_width, packed_type=buf[-2])
def Dumps(obj):
"""Returns bytearray with the encoded python object."""
fbb = Builder()
fbb.Add(obj)
return fbb.Finish()
def Loads(buf):
"""Returns python object decoded from the buffer."""
  return GetRoot(buf).Value
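# Example (illustrative) round trip with the module-level helpers:
#
#   data = Dumps({'name': 'Ada', 'scores': [1, 2, 3]})
#   root = GetRoot(data)  # decode lazily through Ref accessors
#   assert root.IsMap
#   assert Loads(data) == {'name': 'Ada', 'scores': [1, 2, 3]}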
from . import encode
from . import number_types as N
class Table(object):
"""Table wraps a byte slice and provides read access to its data.
The variable `Pos` indicates the root of the FlatBuffers object therein."""
__slots__ = ("Bytes", "Pos")
def __init__(self, buf, pos):
N.enforce_number(pos, N.UOffsetTFlags)
self.Bytes = buf
self.Pos = pos
def Offset(self, vtableOffset):
"""Offset provides access into the Table's vtable.
Deprecated fields are ignored by checking the vtable's length."""
vtable = self.Pos - self.Get(N.SOffsetTFlags, self.Pos)
vtableEnd = self.Get(N.VOffsetTFlags, vtable)
if vtableOffset < vtableEnd:
return self.Get(N.VOffsetTFlags, vtable + vtableOffset)
return 0
def Indirect(self, off):
"""Indirect retrieves the relative offset stored at `offset`."""
N.enforce_number(off, N.UOffsetTFlags)
return off + encode.Get(N.UOffsetTFlags.packer_type, self.Bytes, off)
def String(self, off):
"""String gets a string from data stored inside the flatbuffer."""
N.enforce_number(off, N.UOffsetTFlags)
off += encode.Get(N.UOffsetTFlags.packer_type, self.Bytes, off)
start = off + N.UOffsetTFlags.bytewidth
length = encode.Get(N.UOffsetTFlags.packer_type, self.Bytes, off)
return bytes(self.Bytes[start:start+length])
def VectorLen(self, off):
"""VectorLen retrieves the length of the vector whose offset is stored
at "off" in this object."""
N.enforce_number(off, N.UOffsetTFlags)
off += self.Pos
off += encode.Get(N.UOffsetTFlags.packer_type, self.Bytes, off)
ret = encode.Get(N.UOffsetTFlags.packer_type, self.Bytes, off)
return ret
def Vector(self, off):
"""Vector retrieves the start of data of the vector whose offset is
stored at "off" in this object."""
N.enforce_number(off, N.UOffsetTFlags)
off += self.Pos
x = off + self.Get(N.UOffsetTFlags, off)
# data starts after metadata containing the vector length
x += N.UOffsetTFlags.bytewidth
return x
def Union(self, t2, off):
"""Union initializes any Table-derived type to point to the union at
the given offset."""
assert type(t2) is Table
N.enforce_number(off, N.UOffsetTFlags)
off += self.Pos
t2.Pos = off + self.Get(N.UOffsetTFlags, off)
t2.Bytes = self.Bytes
def Get(self, flags, off):
"""
Get retrieves a value of the type specified by `flags` at the
given offset.
"""
N.enforce_number(off, N.UOffsetTFlags)
return flags.py_type(encode.Get(flags.packer_type, self.Bytes, off))
def GetSlot(self, slot, d, validator_flags):
N.enforce_number(slot, N.VOffsetTFlags)
if validator_flags is not None:
N.enforce_number(d, validator_flags)
off = self.Offset(slot)
if off == 0:
return d
return self.Get(validator_flags, self.Pos + off)
def GetVectorAsNumpy(self, flags, off):
"""
GetVectorAsNumpy returns the vector that starts at `Vector(off)`
as a numpy array with the type specified by `flags`. The array is
a `view` into Bytes, so modifying the returned array will
modify Bytes in place.
"""
offset = self.Vector(off)
length = self.VectorLen(off) # TODO: length accounts for bytewidth, right?
numpy_dtype = N.to_numpy_type(flags)
return encode.GetVectorAsNumpy(numpy_dtype, self.Bytes, length, offset)
def GetVOffsetTSlot(self, slot, d):
"""
GetVOffsetTSlot retrieves the VOffsetT that the given vtable location
points to. If the vtable value is zero, the default value `d`
will be returned.
"""
N.enforce_number(slot, N.VOffsetTFlags)
N.enforce_number(d, N.VOffsetTFlags)
off = self.Offset(slot)
if off == 0:
return d
        return off
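# Example (illustrative): code generated by `flatc` typically drives Table
# like this to read an optional int32 field whose vtable slot is 4; a zero
# vtable entry means the field was not written, so the default is returned:
#
#   t = Table(buf, pos)
#   o = t.Offset(4)
#   value = t.Get(N.Int32Flags, t.Pos + o) if o != 0 else default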
from . import number_types as N
from .number_types import (UOffsetTFlags, SOffsetTFlags, VOffsetTFlags)
from . import encode
from . import packer
from . import compat
from .compat import range_func
from .compat import memoryview_type
from .compat import import_numpy, NumpyRequiredForThisFeature
np = import_numpy()
## @file
## @addtogroup flatbuffers_python_api
## @{
## @cond FLATBUFFERS_INTERNAL
class OffsetArithmeticError(RuntimeError):
"""
Error caused by an Offset arithmetic error. Probably caused by bad
writing of fields. This is considered an unreachable situation in
normal circumstances.
"""
pass
class IsNotNestedError(RuntimeError):
"""
Error caused by using a Builder to write Object data when not inside
an Object.
"""
pass
class IsNestedError(RuntimeError):
"""
Error caused by using a Builder to begin an Object when an Object is
already being built.
"""
pass
class StructIsNotInlineError(RuntimeError):
"""
Error caused by using a Builder to write a Struct at a location that
is not the current Offset.
"""
pass
class BuilderSizeError(RuntimeError):
"""
Error caused by causing a Builder to exceed the hardcoded limit of 2
gigabytes.
"""
pass
class BuilderNotFinishedError(RuntimeError):
"""
Error caused by not calling `Finish` before calling `Output`.
"""
pass
# VtableMetadataFields is the count of metadata fields in each vtable.
VtableMetadataFields = 2
## @endcond
class Builder(object):
""" A Builder is used to construct one or more FlatBuffers.
Typically, Builder objects will be used from code generated by the `flatc`
compiler.
A Builder constructs byte buffers in a last-first manner for simplicity and
performance during reading.
Internally, a Builder is a state machine for creating FlatBuffer objects.
It holds the following internal state:
- Bytes: an array of bytes.
- current_vtable: a list of integers.
- vtables: a hash of vtable entries.
Attributes:
Bytes: The internal `bytearray` for the Builder.
finished: A boolean determining if the Builder has been finalized.
"""
    ## @cond FLATBUFFERS_INTERNAL
__slots__ = ("Bytes", "current_vtable", "head", "minalign", "objectEnd",
"vtables", "nested", "forceDefaults", "finished", "vectorNumElems")
"""Maximum buffer size constant, in bytes.
    Builder will never allow its buffer to grow over this size.
Currently equals 2Gb.
"""
MAX_BUFFER_SIZE = 2**31
## @endcond
def __init__(self, initialSize=1024):
"""Initializes a Builder of size `initial_size`.
The internal buffer is grown as needed.
"""
if not (0 <= initialSize <= Builder.MAX_BUFFER_SIZE):
msg = "flatbuffers: Cannot create Builder larger than 2 gigabytes."
raise BuilderSizeError(msg)
self.Bytes = bytearray(initialSize)
## @cond FLATBUFFERS_INTERNAL
self.current_vtable = None
self.head = UOffsetTFlags.py_type(initialSize)
self.minalign = 1
self.objectEnd = None
self.vtables = {}
self.nested = False
self.forceDefaults = False
## @endcond
self.finished = False
def Output(self):
"""Return the portion of the buffer that has been used for writing data.
This is the typical way to access the FlatBuffer data inside the
builder. If you try to access `Builder.Bytes` directly, you would need
to manually index it with `Head()`, since the buffer is constructed
backwards.
It raises BuilderNotFinishedError if the buffer has not been finished
with `Finish`.
"""
if not self.finished:
raise BuilderNotFinishedError()
return self.Bytes[self.Head():]
## @cond FLATBUFFERS_INTERNAL
def StartObject(self, numfields):
"""StartObject initializes bookkeeping for writing a new object."""
self.assertNotNested()
# use 32-bit offsets so that arithmetic doesn't overflow.
self.current_vtable = [0 for _ in range_func(numfields)]
self.objectEnd = self.Offset()
self.nested = True
def WriteVtable(self):
"""
WriteVtable serializes the vtable for the current object, if needed.
Before writing out the vtable, this checks pre-existing vtables for
equality to this one. If an equal vtable is found, point the object to
the existing vtable and return.
Because vtable values are sensitive to alignment of object data, not
all logically-equal vtables will be deduplicated.
A vtable has the following format:
<VOffsetT: size of the vtable in bytes, including this value>
<VOffsetT: size of the object in bytes, including the vtable offset>
<VOffsetT: offset for a field> * N, where N is the number of fields
in the schema for this type. Includes deprecated fields.
Thus, a vtable is made of 2 + N elements, each VOffsetT bytes wide.
An object has the following format:
<SOffsetT: offset to this object's vtable (may be negative)>
<byte: data>+
"""
# Prepend a zero scalar to the object. Later in this function we'll
# write an offset here that points to the object's vtable:
self.PrependSOffsetTRelative(0)
objectOffset = self.Offset()
vtKey = []
trim = True
for elem in reversed(self.current_vtable):
if elem == 0:
if trim:
continue
else:
elem = objectOffset - elem
trim = False
vtKey.append(elem)
vtKey = tuple(vtKey)
vt2Offset = self.vtables.get(vtKey)
if vt2Offset is None:
# Did not find a vtable, so write this one to the buffer.
            # Write out the current vtable in reverse, because
# serialization occurs in last-first order:
i = len(self.current_vtable) - 1
trailing = 0
trim = True
while i >= 0:
off = 0
elem = self.current_vtable[i]
i -= 1
if elem == 0:
if trim:
trailing += 1
continue
else:
# Forward reference to field;
# use 32bit number to ensure no overflow:
off = objectOffset - elem
trim = False
self.PrependVOffsetT(off)
# The two metadata fields are written last.
# First, store the object bytesize:
objectSize = UOffsetTFlags.py_type(objectOffset - self.objectEnd)
self.PrependVOffsetT(VOffsetTFlags.py_type(objectSize))
# Second, store the vtable bytesize:
vBytes = len(self.current_vtable) - trailing + VtableMetadataFields
vBytes *= N.VOffsetTFlags.bytewidth
self.PrependVOffsetT(VOffsetTFlags.py_type(vBytes))
# Next, write the offset to the new vtable in the
# already-allocated SOffsetT at the beginning of this object:
objectStart = SOffsetTFlags.py_type(len(self.Bytes) - objectOffset)
encode.Write(packer.soffset, self.Bytes, objectStart,
SOffsetTFlags.py_type(self.Offset() - objectOffset))
# Finally, store this vtable in memory for future
# deduplication:
self.vtables[vtKey] = self.Offset()
else:
# Found a duplicate vtable.
objectStart = SOffsetTFlags.py_type(len(self.Bytes) - objectOffset)
self.head = UOffsetTFlags.py_type(objectStart)
# Write the offset to the found vtable in the
# already-allocated SOffsetT at the beginning of this object:
encode.Write(packer.soffset, self.Bytes, self.Head(),
SOffsetTFlags.py_type(vt2Offset - objectOffset))
self.current_vtable = None
return objectOffset
def EndObject(self):
"""EndObject writes data necessary to finish object construction."""
self.assertNested()
self.nested = False
return self.WriteVtable()
def growByteBuffer(self):
"""Doubles the size of the byteslice, and copies the old data towards
the end of the new buffer (since we build the buffer backwards)."""
if len(self.Bytes) == Builder.MAX_BUFFER_SIZE:
msg = "flatbuffers: cannot grow buffer beyond 2 gigabytes"
raise BuilderSizeError(msg)
newSize = min(len(self.Bytes) * 2, Builder.MAX_BUFFER_SIZE)
if newSize == 0:
newSize = 1
bytes2 = bytearray(newSize)
bytes2[newSize-len(self.Bytes):] = self.Bytes
self.Bytes = bytes2
## @endcond
def Head(self):
"""Get the start of useful data in the underlying byte buffer.
Note: unlike other functions, this value is interpreted as from the
left.
"""
## @cond FLATBUFFERS_INTERNAL
return self.head
## @endcond
## @cond FLATBUFFERS_INTERNAL
def Offset(self):
"""Offset relative to the end of the buffer."""
return UOffsetTFlags.py_type(len(self.Bytes) - self.Head())
def Pad(self, n):
"""Pad places zeros at the current offset."""
for i in range_func(n):
self.Place(0, N.Uint8Flags)
def Prep(self, size, additionalBytes):
"""
Prep prepares to write an element of `size` after `additional_bytes`
have been written, e.g. if you write a string, you need to align
        such that the int length field is aligned to SizeInt32, and the string
data follows it directly.
If all you need to do is align, `additionalBytes` will be 0.
"""
# Track the biggest thing we've ever aligned to.
if size > self.minalign:
self.minalign = size
# Find the amount of alignment needed such that `size` is properly
# aligned after `additionalBytes`:
alignSize = (~(len(self.Bytes) - self.Head() + additionalBytes)) + 1
alignSize &= (size - 1)
# Reallocate the buffer if needed:
while self.Head() < alignSize+size+additionalBytes:
oldBufSize = len(self.Bytes)
self.growByteBuffer()
updated_head = self.head + len(self.Bytes) - oldBufSize
self.head = UOffsetTFlags.py_type(updated_head)
self.Pad(alignSize)
def PrependSOffsetTRelative(self, off):
"""
PrependSOffsetTRelative prepends an SOffsetT, relative to where it
will be written.
"""
# Ensure alignment is already done:
self.Prep(N.SOffsetTFlags.bytewidth, 0)
if not (off <= self.Offset()):
msg = "flatbuffers: Offset arithmetic error."
raise OffsetArithmeticError(msg)
off2 = self.Offset() - off + N.SOffsetTFlags.bytewidth
self.PlaceSOffsetT(off2)
## @endcond
def PrependUOffsetTRelative(self, off):
"""Prepends an unsigned offset into vector data, relative to where it
will be written.
"""
# Ensure alignment is already done:
self.Prep(N.UOffsetTFlags.bytewidth, 0)
if not (off <= self.Offset()):
msg = "flatbuffers: Offset arithmetic error."
raise OffsetArithmeticError(msg)
off2 = self.Offset() - off + N.UOffsetTFlags.bytewidth
self.PlaceUOffsetT(off2)
## @cond FLATBUFFERS_INTERNAL
def StartVector(self, elemSize, numElems, alignment):
"""
StartVector initializes bookkeeping for writing a new vector.
A vector has the following format:
- <UOffsetT: number of elements in this vector>
- <T: data>+, where T is the type of elements of this vector.
"""
self.assertNotNested()
self.nested = True
self.vectorNumElems = numElems
self.Prep(N.Uint32Flags.bytewidth, elemSize*numElems)
self.Prep(alignment, elemSize*numElems) # In case alignment > int.
return self.Offset()
## @endcond
def EndVector(self):
"""EndVector writes data necessary to finish vector construction."""
self.assertNested()
## @cond FLATBUFFERS_INTERNAL
self.nested = False
## @endcond
# we already made space for this, so write without PrependUint32
self.PlaceUOffsetT(self.vectorNumElems)
self.vectorNumElems = None
return self.Offset()
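The wire layout that `StartVector`/`EndVector` produce — a little-endian uint32 element count followed by the element data — can be hand-packed with the standard library (the values here are arbitrary):

```python
import struct

elems = (7, 8, 9)
# <UOffsetT: number of elements> followed by <T: data>+ (uint16 elements here)
vec = struct.pack("<I", len(elems)) + struct.pack("<3H", *elems)

(count,) = struct.unpack_from("<I", vec, 0)
assert count == 3
assert struct.unpack_from("<3H", vec, 4) == (7, 8, 9)
```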
def CreateString(self, s, encoding='utf-8', errors='strict'):
"""CreateString writes a null-terminated byte string as a vector."""
self.assertNotNested()
## @cond FLATBUFFERS_INTERNAL
self.nested = True
## @endcond
if isinstance(s, compat.string_types):
x = s.encode(encoding, errors)
elif isinstance(s, compat.binary_types):
x = s
else:
raise TypeError("non-string passed to CreateString")
self.Prep(N.UOffsetTFlags.bytewidth, (len(x)+1)*N.Uint8Flags.bytewidth)
self.Place(0, N.Uint8Flags)
l = UOffsetTFlags.py_type(len(x))  # byte length of the encoded string, not len(s)
## @cond FLATBUFFERS_INTERNAL
self.head = UOffsetTFlags.py_type(self.Head() - l)
## @endcond
self.Bytes[self.Head():self.Head()+l] = x
self.vectorNumElems = len(x)
return self.EndVector()
def CreateByteVector(self, x):
"""CreateString writes a byte vector."""
self.assertNotNested()
## @cond FLATBUFFERS_INTERNAL
self.nested = True
## @endcond
if not isinstance(x, compat.binary_types):
raise TypeError("non-byte vector passed to CreateByteVector")
self.Prep(N.UOffsetTFlags.bytewidth, len(x)*N.Uint8Flags.bytewidth)
l = UOffsetTFlags.py_type(len(x))
## @cond FLATBUFFERS_INTERNAL
self.head = UOffsetTFlags.py_type(self.Head() - l)
## @endcond
self.Bytes[self.Head():self.Head()+l] = x
self.vectorNumElems = len(x)
return self.EndVector()
def CreateNumpyVector(self, x):
"""CreateNumpyVector writes a numpy array into the buffer."""
if np is None:
# Numpy is required for this feature
raise NumpyRequiredForThisFeature("Numpy was not found.")
if not isinstance(x, np.ndarray):
raise TypeError("non-numpy-ndarray passed to CreateNumpyVector")
if x.dtype.kind not in ['b', 'i', 'u', 'f']:
raise TypeError("numpy-ndarray holds elements of unsupported datatype")
if x.ndim > 1:
raise TypeError("multidimensional-ndarray passed to CreateNumpyVector")
self.StartVector(x.itemsize, x.size, x.dtype.alignment)
# Ensure little endian byte ordering
if x.dtype.str[0] == "<":
x_lend = x
else:
x_lend = x.byteswap(inplace=False)
# Calculate total length
l = UOffsetTFlags.py_type(x_lend.itemsize * x_lend.size)
## @cond FLATBUFFERS_INTERNAL
self.head = UOffsetTFlags.py_type(self.Head() - l)
## @endcond
# tobytes ensures c_contiguous ordering
self.Bytes[self.Head():self.Head()+l] = x_lend.tobytes(order='C')
self.vectorNumElems = x.size
return self.EndVector()
## @cond FLATBUFFERS_INTERNAL
def assertNested(self):
"""
Check that we are in the process of building an object.
"""
if not self.nested:
raise IsNotNestedError()
def assertNotNested(self):
"""
Check that no other objects are being built while making this
object. If another object is being built, raise an exception.
"""
if self.nested:
raise IsNestedError()
def assertStructIsInline(self, obj):
"""
Structs are always stored inline, so they need to be created right
where they are used. You'll get this error if you created it
elsewhere.
"""
N.enforce_number(obj, N.UOffsetTFlags)
if obj != self.Offset():
msg = ("flatbuffers: Tried to write a Struct at an Offset that "
"is different from the current Offset of the Builder.")
raise StructIsNotInlineError(msg)
def Slot(self, slotnum):
"""
Slot sets the vtable key `voffset` to the current location in the
buffer.
"""
self.assertNested()
self.current_vtable[slotnum] = self.Offset()
## @endcond
def __Finish(self, rootTable, sizePrefix, file_identifier=None):
"""Finish finalizes a buffer, pointing to the given `rootTable`."""
N.enforce_number(rootTable, N.UOffsetTFlags)
prepSize = N.UOffsetTFlags.bytewidth
if file_identifier is not None:
prepSize += N.Int32Flags.bytewidth
if sizePrefix:
prepSize += N.Int32Flags.bytewidth
self.Prep(self.minalign, prepSize)
if file_identifier is not None:
self.Prep(N.UOffsetTFlags.bytewidth, encode.FILE_IDENTIFIER_LENGTH)
# Convert bytes object file_identifier to an array of 4 8-bit integers,
# and use big-endian to enforce size compliance.
# https://docs.python.org/2/library/struct.html#format-characters
file_identifier = N.struct.unpack(">BBBB", file_identifier)
for i in range(encode.FILE_IDENTIFIER_LENGTH-1, -1, -1):
# Place the bytes of the file_identifier in reverse order:
self.Place(file_identifier[i], N.Uint8Flags)
self.PrependUOffsetTRelative(rootTable)
if sizePrefix:
size = len(self.Bytes) - self.Head()
N.enforce_number(size, N.Int32Flags)
self.PrependInt32(size)
self.finished = True
return self.Head()
def Finish(self, rootTable, file_identifier=None):
"""Finish finalizes a buffer, pointing to the given `rootTable`."""
return self.__Finish(rootTable, False, file_identifier=file_identifier)
def FinishSizePrefixed(self, rootTable, file_identifier=None):
"""
FinishSizePrefixed finalizes a buffer, pointing to the given `rootTable`,
with the size prefixed.
"""
return self.__Finish(rootTable, True, file_identifier=file_identifier)
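What `FinishSizePrefixed` adds can be seen by decoding by hand: the buffer starts with a little-endian int32 holding the byte count of everything after the prefix (the payload below is a dummy stand-in for a finished FlatBuffer):

```python
import struct

payload = b"\x00" * 12                      # stand-in for a finished FlatBuffer
prefixed = struct.pack("<i", len(payload)) + payload

(size,) = struct.unpack_from("<i", prefixed, 0)
assert size == 12
assert len(prefixed) - 4 == size            # prefix counts the bytes after it
```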
## @cond FLATBUFFERS_INTERNAL
def Prepend(self, flags, off):
self.Prep(flags.bytewidth, 0)
self.Place(off, flags)
def PrependSlot(self, flags, o, x, d):
N.enforce_number(x, flags)
N.enforce_number(d, flags)
if x != d or self.forceDefaults:
self.Prepend(flags, x)
self.Slot(o)
def PrependBoolSlot(self, *args): self.PrependSlot(N.BoolFlags, *args)
def PrependByteSlot(self, *args): self.PrependSlot(N.Uint8Flags, *args)
def PrependUint8Slot(self, *args): self.PrependSlot(N.Uint8Flags, *args)
def PrependUint16Slot(self, *args): self.PrependSlot(N.Uint16Flags, *args)
def PrependUint32Slot(self, *args): self.PrependSlot(N.Uint32Flags, *args)
def PrependUint64Slot(self, *args): self.PrependSlot(N.Uint64Flags, *args)
def PrependInt8Slot(self, *args): self.PrependSlot(N.Int8Flags, *args)
def PrependInt16Slot(self, *args): self.PrependSlot(N.Int16Flags, *args)
def PrependInt32Slot(self, *args): self.PrependSlot(N.Int32Flags, *args)
def PrependInt64Slot(self, *args): self.PrependSlot(N.Int64Flags, *args)
def PrependFloat32Slot(self, *args): self.PrependSlot(N.Float32Flags,
*args)
def PrependFloat64Slot(self, *args): self.PrependSlot(N.Float64Flags,
*args)
def PrependUOffsetTRelativeSlot(self, o, x, d):
"""
PrependUOffsetTRelativeSlot prepends a UOffsetT onto the object at
vtable slot `o`. If value `x` equals default `d`, then the slot will
be set to zero and no other data will be written.
"""
if x != d or self.forceDefaults:
self.PrependUOffsetTRelative(x)
self.Slot(o)
def PrependStructSlot(self, v, x, d):
"""
PrependStructSlot prepends a struct onto the object at vtable slot `v`.
Structs are stored inline, so nothing additional is being added.
In generated code, `d` is always 0.
"""
N.enforce_number(d, N.UOffsetTFlags)
if x != d:
self.assertStructIsInline(x)
self.Slot(v)
## @endcond
def PrependBool(self, x):
"""Prepend a `bool` to the Builder buffer.
Note: aligns and checks for space.
"""
self.Prepend(N.BoolFlags, x)
def PrependByte(self, x):
"""Prepend a `byte` to the Builder buffer.
Note: aligns and checks for space.
"""
self.Prepend(N.Uint8Flags, x)
def PrependUint8(self, x):
"""Prepend an `uint8` to the Builder buffer.
Note: aligns and checks for space.
"""
self.Prepend(N.Uint8Flags, x)
def PrependUint16(self, x):
"""Prepend an `uint16` to the Builder buffer.
Note: aligns and checks for space.
"""
self.Prepend(N.Uint16Flags, x)
def PrependUint32(self, x):
"""Prepend an `uint32` to the Builder buffer.
Note: aligns and checks for space.
"""
self.Prepend(N.Uint32Flags, x)
def PrependUint64(self, x):
"""Prepend an `uint64` to the Builder buffer.
Note: aligns and checks for space.
"""
self.Prepend(N.Uint64Flags, x)
def PrependInt8(self, x):
"""Prepend an `int8` to the Builder buffer.
Note: aligns and checks for space.
"""
self.Prepend(N.Int8Flags, x)
def PrependInt16(self, x):
"""Prepend an `int16` to the Builder buffer.
Note: aligns and checks for space.
"""
self.Prepend(N.Int16Flags, x)
def PrependInt32(self, x):
"""Prepend an `int32` to the Builder buffer.
Note: aligns and checks for space.
"""
self.Prepend(N.Int32Flags, x)
def PrependInt64(self, x):
"""Prepend an `int64` to the Builder buffer.
Note: aligns and checks for space.
"""
self.Prepend(N.Int64Flags, x)
def PrependFloat32(self, x):
"""Prepend a `float32` to the Builder buffer.
Note: aligns and checks for space.
"""
self.Prepend(N.Float32Flags, x)
def PrependFloat64(self, x):
"""Prepend a `float64` to the Builder buffer.
Note: aligns and checks for space.
"""
self.Prepend(N.Float64Flags, x)
def ForceDefaults(self, forceDefaults):
"""
In order to save space, fields that are set to their default value
don't get serialized into the buffer. Forcing defaults provides a
way to manually disable this optimization. When set to `True`, will
always serialize default values.
"""
self.forceDefaults = forceDefaults
##############################################################
## @cond FLATBUFFERS_INTERNAL
def PrependVOffsetT(self, x): self.Prepend(N.VOffsetTFlags, x)
def Place(self, x, flags):
"""
Place prepends a value specified by `flags` to the Builder,
without checking for available space.
"""
N.enforce_number(x, flags)
self.head = self.head - flags.bytewidth
encode.Write(flags.packer_type, self.Bytes, self.Head(), x)
def PlaceVOffsetT(self, x):
"""PlaceVOffsetT prepends a VOffsetT to the Builder, without checking
for space.
"""
N.enforce_number(x, N.VOffsetTFlags)
self.head = self.head - N.VOffsetTFlags.bytewidth
encode.Write(packer.voffset, self.Bytes, self.Head(), x)
def PlaceSOffsetT(self, x):
"""PlaceSOffsetT prepends a SOffsetT to the Builder, without checking
for space.
"""
N.enforce_number(x, N.SOffsetTFlags)
self.head = self.head - N.SOffsetTFlags.bytewidth
encode.Write(packer.soffset, self.Bytes, self.Head(), x)
def PlaceUOffsetT(self, x):
"""PlaceUOffsetT prepends a UOffsetT to the Builder, without checking
for space.
"""
N.enforce_number(x, N.UOffsetTFlags)
self.head = self.head - N.UOffsetTFlags.bytewidth
encode.Write(packer.uoffset, self.Bytes, self.Head(), x)
## @endcond
## @cond FLATBUFFERS_INTERNAL
def vtableEqual(a, objectStart, b):
"""vtableEqual compares an unwritten vtable to a written vtable."""
N.enforce_number(objectStart, N.UOffsetTFlags)
if len(a) * N.VOffsetTFlags.bytewidth != len(b):
return False
for i, elem in enumerate(a):
x = encode.Get(packer.voffset, b, i * N.VOffsetTFlags.bytewidth)
# Skip vtable entries that indicate a default value.
if x == 0 and elem == 0:
pass
else:
y = objectStart - elem
if x != y:
return False
return True
## @endcond
## @} | zlmdb | /zlmdb-23.1.1.tar.gz/zlmdb-23.1.1/flatbuffers/builder.py | builder.py |
import collections
import struct
from . import packer
from .compat import import_numpy, NumpyRequiredForThisFeature
np = import_numpy()
# For reference, see:
# https://docs.python.org/2/library/ctypes.html#ctypes-fundamental-data-types-2
# These classes could be collections.namedtuple instances, but those are new
# in 2.6 and we want to work towards 2.5 compatibility.
class BoolFlags(object):
bytewidth = 1
min_val = False
max_val = True
py_type = bool
name = "bool"
packer_type = packer.boolean
class Uint8Flags(object):
bytewidth = 1
min_val = 0
max_val = (2**8) - 1
py_type = int
name = "uint8"
packer_type = packer.uint8
class Uint16Flags(object):
bytewidth = 2
min_val = 0
max_val = (2**16) - 1
py_type = int
name = "uint16"
packer_type = packer.uint16
class Uint32Flags(object):
bytewidth = 4
min_val = 0
max_val = (2**32) - 1
py_type = int
name = "uint32"
packer_type = packer.uint32
class Uint64Flags(object):
bytewidth = 8
min_val = 0
max_val = (2**64) - 1
py_type = int
name = "uint64"
packer_type = packer.uint64
class Int8Flags(object):
bytewidth = 1
min_val = -(2**7)
max_val = (2**7) - 1
py_type = int
name = "int8"
packer_type = packer.int8
class Int16Flags(object):
bytewidth = 2
min_val = -(2**15)
max_val = (2**15) - 1
py_type = int
name = "int16"
packer_type = packer.int16
class Int32Flags(object):
bytewidth = 4
min_val = -(2**31)
max_val = (2**31) - 1
py_type = int
name = "int32"
packer_type = packer.int32
class Int64Flags(object):
bytewidth = 8
min_val = -(2**63)
max_val = (2**63) - 1
py_type = int
name = "int64"
packer_type = packer.int64
class Float32Flags(object):
bytewidth = 4
min_val = None
max_val = None
py_type = float
name = "float32"
packer_type = packer.float32
class Float64Flags(object):
bytewidth = 8
min_val = None
max_val = None
py_type = float
name = "float64"
packer_type = packer.float64
class SOffsetTFlags(Int32Flags):
pass
class UOffsetTFlags(Uint32Flags):
pass
class VOffsetTFlags(Uint16Flags):
pass
def valid_number(n, flags):
if flags.min_val is None and flags.max_val is None:
return True
return flags.min_val <= n <= flags.max_val
def enforce_number(n, flags):
if flags.min_val is None and flags.max_val is None:
return
if not flags.min_val <= n <= flags.max_val:
raise TypeError("bad number %s for type %s" % (str(n), flags.name))
def float32_to_uint32(n):
packed = struct.pack("<1f", n)
(converted,) = struct.unpack("<1L", packed)
return converted
def uint32_to_float32(n):
packed = struct.pack("<1L", n)
(unpacked,) = struct.unpack("<1f", packed)
return unpacked
def float64_to_uint64(n):
packed = struct.pack("<1d", n)
(converted,) = struct.unpack("<1Q", packed)
return converted
def uint64_to_float64(n):
packed = struct.pack("<1Q", n)
(unpacked,) = struct.unpack("<1d", packed)
return unpacked
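These conversion helpers are plain bit reinterpretations; a quick IEEE-754 sanity check using the same `struct` format characters:

```python
import struct

# float32_to_uint32: reinterpret the float's little-endian bits as an integer.
(bits,) = struct.unpack("<1L", struct.pack("<1f", 1.0))
assert bits == 0x3F800000                  # IEEE-754 single-precision 1.0

# uint32_to_float32 reverses the cast exactly.
(value,) = struct.unpack("<1f", struct.pack("<1L", bits))
assert value == 1.0
```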
def to_numpy_type(number_type):
if np is not None:
return np.dtype(number_type.name).newbyteorder('<')
else:
raise NumpyRequiredForThisFeature('Numpy was not found.') | zlmdb | /zlmdb-23.1.1.tar.gz/zlmdb-23.1.1/flatbuffers/number_types.py | number_types.py |
The Ultimate Hosts Blacklist central repository updater
=======================================================
This is the branch that contains the tool we use to update our `central repository`_.
Installation
------------
::
$ pip3 install --user ultimate-hosts-blacklist-central-repo-updater
Usage
-----
The script can be called as :code:`uhb-central-repo-updater`, :code:`uhb_central_repo_updater`, or :code:`ultimate-hosts-blacklist-central-repo-updater`.
::
usage: uhb_central_repo_updater [-h] [-m] [-p PROCESSES]
The tool to update the central repository of the Ultimate-Hosts-Blacklist
project.
optional arguments:
-h, --help show this help message and exit
-m, --multiprocessing
Activate the usage of the multiprocessing.
-p PROCESSES, --processes PROCESSES
The number of simultaneous processes to create and
use.
Crafted with ♥ by Nissar Chababy (Funilrys)
.. _central repository: https://github.com/mitchellkrogza/Ultimate.Hosts.Blacklist
License
-------
::
MIT License
Copyright (c) 2019 Ultimate-Hosts-Blacklist
Copyright (c) 2019 Nissar Chababy
Copyright (c) 2019 Mitchell Krog
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | zlocated-central-repo-updater | /zlocated-central-repo-updater-1.4.1.tar.gz/zlocated-central-repo-updater-1.4.1/README.rst | README.rst |
from os import path
from requests import get
from ultimate_hosts_blacklist.central_repo_updater.configuration import (
GitHub,
Infrastructure,
Output,
)
from ultimate_hosts_blacklist.helpers import Dict, File, Regex
class Repositories: # pylint: disable=too-few-public-methods
"""
Provide the list of repositories we are going to work with.
"""
regex_next_url = r"(?:.*\<(.*?)\>\;\s?rel\=\"next\")"
def __init__(self):
self.first_url_to_get = "{0}/repos?sort=created&direction=desc".format(
GitHub.complete_api_orgs_url
)
self.headers = {
"Accept": "application/vnd.github.v3+json",
"Authorization": "token %s",
}
if path.isfile(Output.etags_file):
self.etags = Dict.from_json(File(Output.etags_file).read())
else:
self.etags = {}
if GitHub.api_token:
self.headers["Authorization"] %= GitHub.api_token
else:
del self.headers["Authorization"]
def get(self, url_to_get=None): # pylint: disable=too-many-branches
"""
Return the data from the API or the local file
if nothing changes.
:param url_to_get: The url to get next.
:type url_to_get: str
"""
next_url = None
if not url_to_get:
url_to_get = self.first_url_to_get
if self.etags and url_to_get in self.etags:
self.headers["If-None-Match"] = self.etags[url_to_get]
req = get(url_to_get, headers=self.headers)
if req.status_code == 200:
data = req.json()
repos = []
if "Etag" in req.headers:
self.etags[url_to_get] = req.headers["Etag"]
Dict(self.etags).to_json(Output.etags_file)
if isinstance(data, list):
repos.extend(data)
else:
raise NotImplementedError(
"Unable to understand GitHub API reponse for {0}".format(
repr(url_to_get)
)
)
if "Link" in req.headers:
next_url = Regex(
req.headers["Link"], self.regex_next_url, group=1, return_data=True
).match()
if next_url:
for element in self.get(url_to_get=next_url):
if element["name"] not in Infrastructure.repositories_to_ignore:
yield element
else:
continue
if repos:
for element in repos:
if element["name"] not in Infrastructure.repositories_to_ignore:
yield element
else:
continue
elif req.status_code == 304:
data = Dict.from_json(File(Output.repos_file).read())
for element in data:
if element["name"] not in Infrastructure.repositories_to_ignore:
yield element
else:
continue
elif req.status_code == 401:
raise Exception("Bad GitHub credentials.")
else:
raise NotImplementedError(
"Something went wrong while communicating with {0}".format(
repr(url_to_get)
)
) | zlocated-central-repo-updater | /zlocated-central-repo-updater-1.4.1.tar.gz/zlocated-central-repo-updater-1.4.1/update_blacklist/central_repo_updater/repositories.py | repositories.py |
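The `regex_next_url` pattern above pulls the follow-up page out of GitHub's `Link` response header; a standalone check (the header value is a made-up sample):

```python
import re

regex_next_url = r"(?:.*\<(.*?)\>\;\s?rel\=\"next\")"
link = ('<https://api.github.com/orgs/example/repos?page=2>; rel="next", '
        '<https://api.github.com/orgs/example/repos?page=5>; rel="last"')

match = re.match(regex_next_url, link)
assert match.group(1) == "https://api.github.com/orgs/example/repos?page=2"
```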
from os import cpu_count
from time import sleep
from PyFunceble import ipv4_syntax_check, syntax_check
from requests import get
from ultimate_hosts_blacklist.central_repo_updater import logging
from ultimate_hosts_blacklist.central_repo_updater.clean import Clean
from ultimate_hosts_blacklist.central_repo_updater.configuration import GitHub, Output
from ultimate_hosts_blacklist.central_repo_updater.deploy import Deploy
from ultimate_hosts_blacklist.central_repo_updater.generate import Generate
from ultimate_hosts_blacklist.central_repo_updater.repositories import Repositories
from ultimate_hosts_blacklist.helpers import Dict, List, TravisCI
from ultimate_hosts_blacklist.whitelist.core import Core as WhitelistCore
class Core:
"""
Brain of the program.
:param multiprocessing: Allow us to use multiprocessing.
:type multiprocessing: bool
"""
# Will save what we write into repos.json
repos = []
def __init__(self, multiprocessing=True, processes=None):
TravisCI().configure_git_repo()
TravisCI().fix_permissions()
self.multiprocessing = multiprocessing
if self.multiprocessing:
logging.info("multiprocessing activated.")
self.repositories = list(Repositories().get())
if not processes:
cpu_numbers = cpu_count()
if cpu_numbers is not None:
self.processes = cpu_numbers
else:
self.processes = len(self.repositories) // 2 % 10
else:
self.processes = processes
logging.info(
"Using {0} simultaneous processes.".format(repr(self.processes))
)
else:
self.repositories = Repositories().get()
if self.multiprocessing:
self.whitelisting_core = WhitelistCore(
multiprocessing=True, processes=self.processes // 2
)
else:
self.whitelisting_core = WhitelistCore()
@classmethod
def __separate_domains_from_ip(cls, cleaned_list):
"""
Given a cleaned list, we separate domains from IP.
"""
logging.info("Getting the list of domains.")
domains = [x for x in set(cleaned_list) if x and syntax_check(x)]
temp = set(cleaned_list) - set(domains)
logging.info("Getting the list of IPs.")
return (domains, [x for x in temp if x and ipv4_syntax_check(x)])
def get_list(self, repository_info):
"""
Get the list from the input source.
"""
logging.info(
"Trying to get domains and ips from {0} input source.".format(
repr(repository_info["name"])
)
)
url_base = GitHub.partial_raw_link % repository_info["name"]
clean_url = "{0}clean.list".format(url_base)
non_clean_url = "{0}domains.list".format(url_base)
req = get(clean_url)
if req.status_code == 200:
logging.info(
"Could get `clean.list` of {0}.".format(repr(repository_info["name"]))
)
logging.info(
"Starting whitelisting of {0}.".format(repr(repository_info["name"]))
)
result = self.whitelisting_core.filter(
string=req.text, already_formatted=True
)
logging.info(
"Finished whitelisting of {0}.".format(repr(repository_info["name"]))
)
else:
req = get(non_clean_url)
if req.status_code == 200:
logging.info(
"Could get `domains.list` of {0}.".format(
repr(repository_info["name"])
)
)
logging.info(
"Starting whitelisting of {0}.".format(
repr(repository_info["name"])
)
)
result = self.whitelisting_core.filter(
string=req.text, already_formatted=True
)
logging.info(
"Finished whitelisting of {0}.".format(
repr(repository_info["name"])
)
)
else:
raise Exception(
"Unable to get a list from {0}.".format(
repr(repository_info["name"])
)
)
return result
def process_simple(self):
"""
Process the repository update in a simple way.
"""
all_domains = []
all_ips = []
repos = []
for data in self.repositories:
logging.debug(data)
domains, ips = self.__separate_domains_from_ip(self.get_list(data))
all_domains.extend(domains)
all_ips.extend(ips)
repos.append(data)
logging.info("Saving the list of repositories.")
Dict(repos).to_json(Output.repos_file)
return (
List(all_domains).format(delete_empty=True),
List(all_ips).format(delete_empty=True),
)
def process(self):
"""
Process the repository update.
"""
domains, ips = self.process_simple()
Generate.dotted(domains)
Generate.plain_text_domain(domains)
Generate.plain_text_ip(ips)
Generate.readme_md(len(domains), len(ips))
Clean()
Deploy().github() | zlocated-central-repo-updater | /zlocated-central-repo-updater-1.4.1.tar.gz/zlocated-central-repo-updater-1.4.1/update_blacklist/central_repo_updater/core.py | core.py |
from os import makedirs, path, walk
from ultimate_hosts_blacklist.central_repo_updater import logging
from ultimate_hosts_blacklist.central_repo_updater.configuration import (
Infrastructure,
Output,
Templates,
directory_separator,
)
from ultimate_hosts_blacklist.helpers import File, Regex
class Generate:
"""
Provide the interface for file generation.
"""
@classmethod
def _next_file(
cls,
directory_path,
filename,
format_to_apply,
subject,
template=None,
endline=None,
): # pylint: disable= too-many-arguments, too-many-locals
"""
Generate the next file.
:param directory_path: The directory we are writing to.
:type directory_path: str
:param filename: The name of the file we are writing to.
:type filename: str
:param format_to_apply: The format to apply to a line.
:type format_to_apply: str
:param subject: The list of element to write/format.
:type subject: list
:param template: The template to write before generating each lines.
:type template: str
:param endline: The last line to write.
:type endline: str
"""
if path.isdir(directory_path):
for root, _, files in walk(directory_path):
for file in files:
File("{0}{1}{2}".format(root, directory_separator, file)).delete()
else:
makedirs(directory_path)
i = 0
last_element_index = 0
while True:
broken = False
destination = "{0}{1}".format(directory_path, filename.format(i))
logging.info("Generation of {0}".format(destination))
with open(destination, "w", encoding="utf-8") as file:
if not i and template:
logging.debug("Writting template:\n {0}".format(template))
file.write(template)
for element in subject[last_element_index:]:
file.write("{0}\n".format(format_to_apply.format(element)))
last_element_index += 1
if file.tell() >= Output.max_file_size_in_bytes:
logging.info("Reached the desired size. Closing file.")
i += 1
broken = True
break
if broken:
continue
if endline:
logging.debug("Writting last line: \n {0}".format(endline))
file.write(endline)
break
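The loop above caps each output file at `max_file_size_in_bytes` and rolls over to the next numbered file; the same rollover idea in miniature (a toy 16-byte cap and in-memory chunks instead of files):

```python
def split_by_size(items, line_format, max_bytes=16):
    # Accumulate formatted lines; start a new chunk once the byte budget is hit.
    chunks, current = [], ""
    for item in items:
        current += line_format.format(item) + "\n"
        if len(current.encode("utf-8")) >= max_bytes:
            chunks.append(current)
            current = ""
    if current:
        chunks.append(current)
    return chunks

chunks = split_by_size(["example.com", "a.net", "b.org"], "{0}")
assert len(chunks) == 2                           # budget reached after two lines
assert sum(c.count("\n") for c in chunks) == 3    # no lines lost
```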
@classmethod
def dotted(cls, subject):
"""
Generate the dotted formatted file.
:param subject: The subject we are working with.
:type subject: list
"""
cls._next_file(
Output.dotted_directory, Output.incomplete_dotted_filename, ".{0}", subject
)
@classmethod
def plain_text_domain(cls, subject):
"""
Generate the plain text domain formatted file.
:param subject: The subject we are working with.
:type subject: list
"""
cls._next_file(
Output.plain_text_domains_directory,
Output.incomplete_plain_text_domains_filename,
"{0}",
subject,
)
@classmethod
def plain_text_ip(cls, subject):
"""
Generate the plain text ip formatted file.
:param subject: The subject we are working with.
:type subject: list
"""
cls._next_file(
Output.plain_text_ips_directory,
Output.incomplete_plain_text_ips_filename,
"{0}",
subject,
)
@classmethod
def hosts_deny(cls, subject):
"""
Generate the hosts deny file.
:param subject: The subject we are working with.
:type subject: list
"""
possible_template_file_location = "{0}hostsdeny.template".format(
Output.current_directory
)
if Output.templates_dir and path.isfile(possible_template_file_location):
template_base = File(possible_template_file_location).read()
else:
template_base = Templates.hosts_deny
template = Regex(template_base, r"%%version%%").replace_with(
Infrastructure.version
)
template = Regex(template, r"%%lenIP%%").replace_with(
format(len(subject), ",d")
)
cls._next_file(
Output.hosts_deny_directory,
Output.incomplete_hosts_deny_filename,
"ALL: {0}",
subject,
template=template,
endline="# ##### END hosts.deny Block List # DO NOT EDIT #####",
)
@classmethod
def superhosts_deny(cls, subject):
"""
Generate the super hosts deny file.
:param subject: The subject we are working with.
:type subject: list
"""
possible_template_file_location = "{0}superhostsdeny.template".format(
Output.current_directory
)
if Output.templates_dir and path.isfile(possible_template_file_location):
template_base = File(possible_template_file_location).read()
else:
template_base = Templates.superhosts_deny
template = Regex(template_base, r"%%version%%").replace_with(
Infrastructure.version
)
template = Regex(template, r"%%lenIPHosts%%").replace_with(
format(len(subject), ",d")
)
cls._next_file(
Output.superhosts_deny_directory,
Output.incomplete_superhosts_deny_filename,
"ALL: {0}",
subject,
template=template,
endline="# ##### END Super hosts.deny Block List # DO NOT EDIT #####",
)
@classmethod
def unix_hosts(cls, subject):
"""
Generate the unix hosts file.
:param subject: The subject we are working with.
:type subject: list
"""
possible_template_file_location = "{0}hosts.template".format(
Output.current_directory
)
if Output.templates_dir and path.isfile(possible_template_file_location):
template_base = File(possible_template_file_location).read()
else:
template_base = Templates.unix_hosts
template = Regex(template_base, r"%%version%%").replace_with(
Infrastructure.version
)
template = Regex(template, r"%%lenHosts%%").replace_with(
format(len(subject), ",d")
)
cls._next_file(
Output.unix_hosts_directory,
Output.incomplete_unix_hosts_filename,
"0.0.0.0 {0}",
subject,
template=template,
endline="# END HOSTS LIST ### DO NOT EDIT THIS LINE AT ALL ###",
)
@classmethod
def windows_hosts(cls, subject):
"""
Generate the windows hosts file.
:param subject: The subject we are working with.
:type subject: list
"""
possible_template_file_location = "{0}hosts.windows.template".format(
Output.current_directory
)
if Output.templates_dir and path.isfile(possible_template_file_location):
template_base = File(possible_template_file_location).read()
else:
template_base = Templates.windows_hosts
template = Regex(template_base, r"%%version%%").replace_with(
Infrastructure.version
)
template = Regex(template, r"%%lenHosts%%").replace_with(
format(len(subject), ",d")
)
cls._next_file(
Output.windows_hosts_directory,
Output.incomplete_windows_hosts_filename,
"127.0.0.1 {0}",
subject,
template=template,
endline="# END HOSTS LIST ### DO NOT EDIT THIS LINE AT ALL ###",
)
@classmethod
def readme_md(cls, len_domains, len_ips):
"""
Generate the README.md file.
:param len_domains: The number of domains.
:type len_domains: int
:param len_ips: The number of IPs.
:type len_ips: int
"""
logging.info("Generation of {0}".format(repr(Output.readme_file)))
possible_template_file_location = "{0}README_template.md".format(
Output.current_directory
)
if Output.templates_dir and path.isfile(possible_template_file_location):
template_base = File(possible_template_file_location).read()
else:
template_base = Templates.readme_md
template = Regex(template_base, r"%%version%%").replace_with(
Infrastructure.version
)
template = Regex(template, r"%%lenHosts%%").replace_with(
format(len_domains, ",d")
)
template = Regex(template, r"%%lenIPs%%").replace_with(format(len_ips, ",d"))
template = Regex(template, r"%%lenHostsIPs%%").replace_with(
format(len_ips + len_domains, ",d")
)
File(Output.readme_file).write(template, overwrite=True) | zlocated-central-repo-updater | /zlocated-central-repo-updater-1.4.1.tar.gz/zlocated-central-repo-updater-1.4.1/update_blacklist/central_repo_updater/generate.py | generate.py |
from os import environ, getcwd, path
from os import sep as directory_separator
from time import strftime, time
class GitHub: # pylint: disable=too-few-public-methods
"""
Provide the configuration related to the GitHub communication.
"""
# This is the username we are going to use when communicating with the
# GitHub API.
username = "zlocated"
try:
# DO NOT edit this line.
api_token = environ["GH_TOKEN"]
except KeyError:
# You can edit this line.
api_token = ""
# Set the GitHub repository slug.
org_slug = "Ultimate-Hosts-Blacklist"
# Set the list of URL we are working with.
# Note: Every URL should ends with /.
urls = {
"api": "https://api.github.com/",
"raw": "https://raw.githubusercontent.com/",
}
# We partially construct the RAW link.
partial_raw_link = "{0}{1}/%s/master/".format(urls["raw"], org_slug)
# We construct the complete link to the ORGS api page.
complete_api_orgs_url = "{0}orgs/{1}".format(urls["api"], org_slug)
class Infrastructure: # pylint: disable=too-few-public-methods
"""
Provide the configuration related to our infrastructure.
"""
# Set the list of repository we are going to ignore.
repositories_to_ignore = [
"cleaning",
"dev-center",
"repository-structure",
"whitelist",
]
try:
# We construct the version.
version = "V1.%s.%s.%s.%s" % (
environ["TRAVIS_BUILD_NUMBER"],
strftime("%Y"),
strftime("%m"),
strftime("%d"),
)
except KeyError:
version = str(int(time()))
class Output: # pylint: disable=too-few-public-methods
"""
Provide the configuration related to everything we are going to create.
"""
current_directory = getcwd() + directory_separator
max_file_size_in_bytes = 5_242_880
template_dir = "templates"
if path.isdir("{0}{1}".format(current_directory, template_dir)):
templates_dir = "{0}{1}".format(current_directory, template_dir)
else:
templates_dir = None
etags_file = "{0}etags.json".format(current_directory)
repos_file = "{0}repos.json".format(current_directory)
readme_file = "{0}README.md".format(current_directory)
dotted_directory = "{0}domains-dotted-format{1}".format(
current_directory, directory_separator
)
incomplete_dotted_filename = "domains-dotted-format{}.list"
plain_text_domains_directory = "{0}domains{1}".format(
current_directory, directory_separator
)
incomplete_plain_text_domains_filename = "domains{0}.list"
plain_text_ips_directory = "{0}ips{1}".format(
current_directory, directory_separator
)
incomplete_plain_text_ips_filename = "ips{0}.list"
class Templates: # pylint: disable=too-few-public-methods
"""
Provide the different templates
"""
    # The UNIX hosts template.
# The windows hosts template.
# The hosts.deny template.
# The superhosts.deny template.
# The README template.
readme_md = """# DNSMasq BlackList - personal dnsmasq blacklist generator (fork of Ultimate Hosts File blacklist)
| Updated | Fueled By |
| :-----: | :------: |
| Daily :heavy_check_mark: | [<img src="https://github.com/mitchellkrogza/Ultimate.Hosts.Blacklist/blob/master/.assets/ultimate-hosts-org-small.png" alt="Hosts File - Ultimate Hosts Blacklist"/>](https://github.com/Ultimate-Hosts-Blacklist) |
| [](https://travis-ci.org/mitchellkrogza/Ultimate.Hosts.Blacklist) | [](https://github.com/mitchellkrogza/Ultimate.Hosts.Blacklist/blob/master/LICENSE.md) |
---
- Version: **%%version%%**
- Total Bad Hosts in hosts file: **%%lenHosts%%**
- Total Bad IP's in hosts.deny file: **%%lenIPs%%**
- Total Bad Hosts and IP's in superhosts.deny file: **%%lenHostsIPs%%**
:exclamation: **Yes you did indeed read those numbers correctly** :exclamation:
---
""" | zlocated-central-repo-updater | /zlocated-central-repo-updater-1.4.1.tar.gz/zlocated-central-repo-updater-1.4.1/update_blacklist/central_repo_updater/configuration.py | configuration.py |
# zlodziej-crawler

<!-- TABLE OF CONTENTS -->
## Table of Contents
* [About the Project](#about-the-project)
* [Built With](#built-with)
* [Getting Started](#getting-started)
* [Prerequisites](#prerequisites)
* [Installation](#installation)
* [Usage](#usage)
* [Extending Project](#extending-project)
<!-- ABOUT THE PROJECT -->
## About The Project
A small web scraper for scraping and processing offers from the website [olx.pl](http://olx.pl).
### Built With
* [Poetry](https://github.com/python-poetry/poetry)
* [Pydantic](https://github.com/samuelcolvin/pydantic)
* [bs4](https://pypi.org/project/beautifulsoup4/)
<!-- GETTING STARTED -->
## Getting Started
### Prerequisites
`Poetry` is used for managing project dependencies; you can install it with:
```sh
pip install poetry
```
### Installation
* Clone the repo
```sh
git clone https://gitlab.com/mwozniak11121/zlodziej-crawler-public.git
```
* Spawn poetry shell
```sh
poetry shell
```
* Install dependencies and package
```sh
poetry install
```
Or if you want to install package through `pip`
```sh
pip install zlodziej-crawler
```
<!-- USAGE EXAMPLES -->
## Usage
The only script made available is `steal`, which prompts for a `url` with an offer category, e.g.
`olx.pl/nieruchomosci/mieszkania/wynajem/wroclaw/`
and then scrapes, processes and saves the offers it finds.
(Results are saved in dir: `cwd / results`)
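For example, a non-interactive run could pass the category url directly (the `--url` option is defined in `zlodziej_crawler/olx/get_offers.py`; defaults and exact behaviour may differ across versions):
```sh
steal --url "olx.pl/nieruchomosci/mieszkania/wynajem/wroclaw/"
```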
Example output for `RentOffer` looks like this:

## Extending Project
Project is meant to be easily extendable by adding new Pydantic models to `zlodziej_crawler/models.py`.
`BaseOffer` serves as a generic offer for all offer types that are not specifically processed.
`RentOffer` and its parent class `BaseOffer` look like this:
```python
class BaseOffer(BaseModel):
url: HttpUrl
offer_name: str
description: str
id: PositiveInt
time_offer_added: datetime
views: PositiveInt
location: str
price: Union[PositiveInt, str]
website: Optional[Website] = None
unused_data: Optional[Dict] = None
class RentOffer(BaseOffer):
rent: PositiveInt
area: float
number_of_rooms: Optional[str] = None
offer_type: Optional[OfferType] = OfferType.UNKNOWN
floor: Optional[str] = None
building_type: Optional[BuildingType] = BuildingType.UNKNOWN
furnished: Optional[bool] = None
total_price: Optional[int] = None
price_per_m: Optional[PositiveFloat] = None
total_price_per_m: Optional[PositiveFloat] = None
```
The project can be simply extended by adding matching classes for other categories at [olx.pl](http://olx.pl).
Adding a new offer type requires:
* Parsing functions in `zlodziej_crawler/olx/offers_extraction/NEW_OFFER.py`
* Factory function in `OLXParserFactory` (`zlodziej_crawler/olx/parser_factory.py`)
* Matching offer category url in `OLXParserFactory.get_parser` (`zlodziej_crawler/olx/parser_factory.py`)
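As an illustration of that factory pattern (the names `parse_sale`, `OFFER_PARSERS` and `get_parser` below are hypothetical, not part of the project's actual API), a dictionary-based registration could look like:

```python
# Illustrative sketch of the registration pattern described above.
# The names parse_sale / OFFER_PARSERS / get_parser are assumptions
# for demonstration, not the project's actual API.
from typing import Callable, Dict

def parse_sale(raw: Dict) -> Dict:
    # Parsing function: turn scraped fields into model kwargs.
    return {"price": int(raw["cena"]), "area": float(raw["powierzchnia"])}

# Factory mapping: offer category url -> parsing function.
OFFER_PARSERS: Dict[str, Callable[[Dict], Dict]] = {
    "nieruchomosci/mieszkania/sprzedaz": parse_sale,
}

def get_parser(category_url: str) -> Callable[[Dict], Dict]:
    # Fall back to a pass-through parser for categories without a
    # dedicated implementation (the role BaseOffer plays above).
    return OFFER_PARSERS.get(category_url, lambda raw: raw)

print(get_parser("nieruchomosci/mieszkania/sprzedaz")({"cena": "2500", "powierzchnia": "47.5"}))
# {'price': 2500, 'area': 47.5}
```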
Currently, any information found by the scraper in the `titlebox-details` section and not otherwise processed is saved as `unused_data`.
 | zlodziej-crawler | /zlodziej-crawler-0.1.1.tar.gz/zlodziej-crawler-0.1.1/README.md | README.md |
from enum import Enum
from typing import Union
import bs4
import requests
from unidecode import unidecode
from zlodziej_crawler.cache_helper import cache_get
def translate_to_enum(value: Union[str, Enum], enum_class):
enum_value = unidecode(value).lower().strip().replace(" ", "_")
return enum_class(enum_value)
def translate_city(city: str) -> str:
EXCEPTIONS = {"zielona gora": "zielonagora"}
translated = unidecode(city).lower()
try:
return EXCEPTIONS[translated]
except KeyError:
return translated.replace(" ", "-")
def translate_months(content: str) -> str:
mapping = {
"styczen": "january",
"stycznia": "january",
"luty": "february",
"lutego": "february",
"marzec": "march",
"marca": "march",
"kwiecien": "april",
"kwietnia": "april",
"maj": "may",
"maja": "may",
"czerwiec": "june",
"czerwca": "june",
"lipiec": "july",
"lipca": "july",
"sierpien": "august",
"sierpnia": "august",
"wrzesien": "september",
"wrzesnia": "september",
"pazdziernik": "october",
"pazdziernika": "october",
"listopad": "november",
"listopada": "november",
"grudzien": "december",
"grudnia": "december",
}
key = unidecode(content).lower().strip()
return mapping[key]
def create_soup_from_url(url: str, use_cache: bool = True) -> bs4.BeautifulSoup:
if use_cache:
content = cache_get(url)
else:
content = requests.get(url).content
return bs4.BeautifulSoup(content, "lxml")
def extract_category_url(url: str) -> str:
try:
category_url = url.split(".pl")[1]
except IndexError:
raise ValueError("Entered url isn't valid")
category_url = category_url.split("?")[0]
if category_url.startswith("/"):
category_url = category_url[1:]
if category_url.endswith("/"):
category_url = category_url[:-1]
    return category_url.lower()
from datetime import datetime
from enum import Enum
from typing import Optional, Dict, Union
from pydantic import (
BaseModel,
HttpUrl,
validator,
Extra,
PositiveInt,
PositiveFloat,
)
class Website(str, Enum):
UNKNOWN = "unknown"
OLX = "olx.pl"
class OfferType(str, Enum):
UNKNOWN = "unknown"
PRIVATE = "private"
DEVELOPER = "developer"
class BuildingType(str, Enum):
UNKNOWN = "unknown"
BLOK = "blok"
KAMIENICA = "kamienica"
    DOM_WOLNOSTOJACY = "dom_wolnostojacy"
SZEREGOWIEC = "szeregowiec"
APARTAMENTOWIEC = "apartamentowiec"
LOFT = "loft"
POZOSTALE = "pozostale"
class BaseOffer(BaseModel):
url: HttpUrl
offer_name: str
description: str
id: PositiveInt
time_offer_added: datetime
views: PositiveInt
location: str
price: Union[PositiveInt, str]
website: Optional[Website] = None
unused_data: Optional[Dict] = None
class Config:
extra = Extra.forbid
class RentOffer(BaseOffer):
rent: PositiveInt
area: float
number_of_rooms: Optional[str] = None
offer_type: Optional[OfferType] = OfferType.UNKNOWN
floor: Optional[str] = None
building_type: Optional[BuildingType] = BuildingType.UNKNOWN
furnished: Optional[bool] = None
total_price: Optional[int] = None
price_per_m: Optional[PositiveFloat] = None
total_price_per_m: Optional[PositiveFloat] = None
@validator("total_price", always=True)
def calculate_total_price(cls, v, values):
return values["price"] + values["rent"]
@validator("price_per_m", always=True)
def calculate_price_per_m(cls, v, values):
try:
return round(values["price"] / values["area"], 2)
        except (TypeError, ZeroDivisionError):
return None
@validator("total_price_per_m", always=True)
def calculate_total_price_per_m(cls, v, values):
try:
return round(values["total_price"] / values["area"], 2)
        except (TypeError, ZeroDivisionError):
            return None
from typing import List, Dict
import bs4
from unidecode import unidecode
def _extract_offer_details(soup: bs4.BeautifulSoup) -> Dict:
details = soup.find_all("ul", {"class": "offer-details"}).pop()
tags = details.find_all("li", {"class": "offer-details__item"})
results = dict()
for tag in tags:
key, value = _extract_tag(tag)
results[key] = value.strip()
offer_description = _extract_tag(details.find_next("div", {"id": "textContent"}))
results["opis"] = " ".join(offer_description).strip()
return results
def _extract_titlebox_details(soup: bs4.BeautifulSoup) -> Dict:
titlebox = soup.find_all("div", {"class": "offer-titlebox"}).pop()
offer_name = _extract_tag(titlebox.find_next("h1")).pop()
try:
price_label = _extract_tag(
titlebox.find_next("div", {"class": "pricelabel"})
).pop(0)
except IndexError:
price_label = ""
result = {"cena": unidecode(price_label), "nazwa oferty": unidecode(offer_name)}
return result
def _extract_location(soup: bs4.BeautifulSoup) -> Dict:
location_bar = soup.find_all("div", {"class": "offer-user__location"}).pop()
location = _extract_tag(location_bar.find_next("address")).pop()
result = {"lokacja": unidecode(location)}
return result
def _extract_bottombar(soup: bs4.BeautifulSoup) -> Dict:
bottom_bar = soup.find_all("div", {"id": "offerbottombar"}).pop()
tags = [_extract_tag(tag) for tag in bottom_bar.find_all("li")]
result = {
"czas dodania": _concat_strings(tags[0]),
"wyswietlenia": _concat_strings(tags[1]),
"id ogloszenia": _concat_strings(tags[2]),
}
return result
def _concat_strings(contents: List[str]) -> str:
return " ".join(contents).strip()
def _extract_tag(tag: bs4.Tag) -> List[str]:
if not tag:
return []
return [_clean_up_field(field) for field in tag.get_text().split("\n") if field]
def _clean_up_field(value: str) -> str:
return value.strip().lower()
functions_to_execute = [
_extract_offer_details,
_extract_titlebox_details,
_extract_location,
_extract_bottombar,
]
import datetime
from pprint import pformat
from typing import List
import click
from pygments import highlight
from pygments.formatters import get_formatter_by_name
from pygments.lexers import get_lexer_by_name
from tqdm import tqdm
from zlodziej_crawler.models import Website, BaseOffer
from zlodziej_crawler.olx.crawler import OLXCrawler
from zlodziej_crawler.olx.parser_factory import OLXParsersFactory
from zlodziej_crawler.tools.export import export_offers
from zlodziej_crawler.tools.utilities import (
filter_out_urls,
filter_out_seen_urls,
)
from zlodziej_crawler.utilities import extract_category_url
WEBSITE = Website.OLX
def process_offers(category_url: str) -> List[BaseOffer]:
crawler = OLXCrawler()
parser = OLXParsersFactory.get_parser(category_url)
page_url = WEBSITE.value
valid_offers, invalid_offers = filter_out_urls(
crawler.get_offers(category_url), page_url
)
if not valid_offers:
raise ValueError("Extracted list of urls is empty!")
new_urls = filter_out_seen_urls(valid_offers)
additional_data = {"strona": page_url}
offers: List[BaseOffer] = []
processed_ids: List[int] = []
progress_bar = tqdm(new_urls)
for url in progress_bar:
progress_bar.set_description(
f"Currently scraping {page_url}/{category_url}", refresh=True
)
offer = parser.process_url(url, additional_data)
if offer.id not in processed_ids:
offers.append(offer)
processed_ids.append(offer.id)
return offers
@click.command()
@click.option(
"--url",
default="olx.pl/nieruchomosci/mieszkania/wynajem/wroclaw/",
help="Enter category url to scrap",
prompt=True,
)
def main(url: str):
start = datetime.datetime.now()
category_url = extract_category_url(url)
offers = process_offers(category_url)
export_offers(offers, category_url)
json_output = [offer.dict() for offer in offers[-10:]]
print(
highlight(
pformat(json_output),
get_lexer_by_name("python"),
get_formatter_by_name("terminal"),
)
)
end = datetime.datetime.now()
print(
f"Scraped {len(offers)} new offers for {url} in {(end - start).total_seconds()} seconds"
)
if __name__ == "__main__":
    main()
__author__ = 'Zhang Fan'
import os
import logging
from enum import Enum
# Default log format
_default_log_format = '%(asctime)s %(levelname)s [%(filename)s->%(module)s->%(funcName)s:%(lineno)d] %(message)s'
class LoggerLevel(Enum):
notset = 'NOTSET'
debug = 'DEBUG'
info = 'INFO'
warning = 'WARNING'
warn = 'WARNING'
error = 'ERROR'
critical = 'CRITICAL'
fatal = 'CRITICAL'
class _logger():
_instance = None
def __init__(self, name, write_stream=True, write_file=False, file_dir='.', level=LoggerLevel.debug,
interval=1, backup_count=2, append_pid=False, log_format='', time_format=''):
"""
构建日志对象
:param name: 日志名
:param write_stream: 是否输出日志到流(终端)
:param write_file: 是否输出日志到文件
:param file_dir: 日志文件的目录
:param level: 日志等级
:param interval: 间隔多少天重新创建一个日志文件
:param backup_count: 保留历史日志文件数量
:param append_pid: 是否在日志文件名后附加进程号
:param log_format: 日志格式
:param time_format: 时间格式
"""
self.name = name.lower()
self.log_path = os.path.abspath(file_dir)
self.pid = os.getpid()
self.append_pid = append_pid
self.write_stream = write_stream
self.write_file = write_file
self.level = level
self.interval = interval
self.backupCount = backup_count
self.log_format = log_format or _default_log_format
self.time_format = time_format
        self.level_getter = lambda x: getattr(logging, x.upper())
self.logger = logging.getLogger(self.name)
self.logger.setLevel(self.level_getter(self.level.value))
self._set_logger()
        self.logger.propagate = False  # without this, records may also be handled by ancestor loggers and printed twice
def _set_logger(self):
if self.write_stream:
sh = self._get_stream_handler()
self.logger.addHandler(sh)
if self.write_file:
fh = self._get_file_handler()
self.logger.addHandler(fh)
def _get_stream_handler(self):
sh = logging.StreamHandler()
sh.setLevel(self.level_getter(self.level.value))
fmt = logging.Formatter(fmt=self.log_format, datefmt=self.time_format)
sh.setFormatter(fmt)
return sh
def _get_file_handler(self):
filename = f'{self.name}_{self.pid}.log' if self.append_pid else f'{self.name}.log'
filename = os.path.abspath(os.path.join(self.log_path, filename))
from logging.handlers import TimedRotatingFileHandler
fh = TimedRotatingFileHandler(filename, 'D', self.interval, backupCount=self.backupCount,
encoding='utf-8')
fh.setLevel(self.level_getter(self.level.value))
fmt = logging.Formatter(fmt=self.log_format, datefmt=self.time_format)
fh.setFormatter(fmt)
return fh
def logger(name, **kwargs):
return _logger(name, **kwargs).logger
def logger_singleton(name, **kwargs):
if _logger._instance is None:
_logger._instance = _logger(name, **kwargs).logger
return _logger._instance
if __name__ == '__main__':
log = logger('test')
    log.info('123')
================================================
ZLogging - Bro/Zeek logging framework for Python
================================================
Online documentation is available at https://zlogging.readthedocs.io/
The ``ZLogging`` module provides an easy-to-use bridge between the logging
framework of the well-known Bro/Zeek Network Security Monitor (IDS).
As of version 3.0, the ``Bro`` project has been officially renamed to
``Zeek``. [1]_
It was originally developed and derived from the |BroAPT|_ project, which is an
APT detection framework based on the Bro/Zeek IDS and extended with highly
customised and customisable Python wrappers.
.. _BroAPT: https://github.com/JarryShaw/BroAPT
.. |BroAPT| replace:: ``BroAPT``
------------
Installation
------------
.. note::
    ``ZLogging`` supports all Python versions **3.6** and above.
.. code:: shell
pip install zlogging
-----
Usage
-----
Currently ``ZLogging`` supports the two built-in formats of the Bro/Zeek
logging framework, i.e. ASCII and JSON.
A typical ASCII log file would look like this::
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path http
#open 2020-02-09-18-54-09
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth method host uri referrer version user_agent origin request_body_len response_body_len status_code status_msg info_code info_msg tags username password proxied orig_fuids orig_filenames orig_mime_types resp_fuids resp_filenames resp_mime_types
#types time string addr port addr port count string string string string string string string count count count string count string set[enum] string string set[string] vector[string] vector[string] vector[string] vector[string] vector[string] vector[string]
1581245648.761106 CSksID3S6ZxplpvmXg 192.168.2.108 56475 151.139.128.14 80 1 GET ocsp.sectigo.com /MFYwVKADAgEAME0wSzBJMAkGBSsOAwIaBQAEFEML0g5PE3oabJGPJOXafjJNRzPIBBSNjF7EVK2K4Xfpm/mbBeG4AY1h4QIQfdsAWJ+CXcbhDVFyNWosjQ== - 1.1 com.apple.trustd/2.0 - 0 471 200 OK - - (empty) - - - - - - FPtlyEAhcf8orBPu7 - application/ocsp-response
1581245651.379048 CuvUnl4HyhQbCs4tXe 192.168.2.108 56483 23.59.247.10 80 1 GET isrg.trustid.ocsp.identrust.com /MFYwVKADAgEAME0wSzBJMAkGBSsOAwIaBQAEFG/0aE1DEtJIYoGcwCs9Rywdii+mBBTEp7Gkeyxx+tvhS5B1/8QVYIWJEAIQCgFBQgAAAVOFc2oLheynCA== - 1.1 com.apple.trustd/2.0 - 0 1398 200 OK - - (empty) - - - - - - FRfFoq3hSZkdCNDf9l - application/ocsp-response
1581245654.396334 CWo4pd1z97XLB2o0h2 192.168.2.108 56486 23.59.247.122 80 1 GET isrg.trustid.ocsp.identrust.com /MFYwVKADAgEAME0wSzBJMAkGBSsOAwIaBQAEFG/0aE1DEtJIYoGcwCs9Rywdii+mBBTEp7Gkeyxx+tvhS5B1/8QVYIWJEAIQCgFBQgAAAVOFc2oLheynCA== - 1.1 com.apple.trustd/2.0 - 0 1398 200 OK - - (empty) - - - - - - FvQehf1pRsGmwDUzJe - application/ocsp-response
1581245692.728840 CxFQzh2ePtsnQhFNX3 192.168.2.108 56527 23.59.247.10 80 1 GET isrg.trustid.ocsp.identrust.com /MFYwVKADAgEAME0wSzBJMAkGBSsOAwIaBQAEFG/0aE1DEtJIYoGcwCs9Rywdii+mBBTEp7Gkeyxx+tvhS5B1/8QVYIWJEAIQCgFBQgAAAVOFc2oLheynCA== - 1.1 com.apple.trustd/2.0 - 0 1398 200 OK - - (empty) - - - - - - FIeFj8WWNyhA1psGg - application/ocsp-response
1581245701.693971 CPZSNk1Y6kDvAN0KZ8 192.168.2.108 56534 23.59.247.122 80 1 GET isrg.trustid.ocsp.identrust.com /MFYwVKADAgEAME0wSzBJMAkGBSsOAwIaBQAEFG/0aE1DEtJIYoGcwCs9Rywdii+mBBTEp7Gkeyxx+tvhS5B1/8QVYIWJEAIQCgFBQgAAAVOFc2oLheynCA== - 1.1 com.apple.trustd/2.0 - 0 1398 200 OK - - (empty) - - - - - - F0fGHe4RPuNBhYWNv6 - application/ocsp-response
1581245707.848088 Cnab6CHFOprdppKi5 192.168.2.108 56542 23.59.247.122 80 1 GET isrg.trustid.ocsp.identrust.com /MFYwVKADAgEAME0wSzBJMAkGBSsOAwIaBQAEFG/0aE1DEtJIYoGcwCs9Rywdii+mBBTEp7Gkeyxx+tvhS5B1/8QVYIWJEAIQCgFBQgAAAVOFc2oLheynCA== - 1.1 com.apple.trustd/2.0 - 0 1398 200 OK - - (empty) - - - - - - FgDBep1h7EPHC8qQB6 - application/ocsp-response
1581245952.784242 CPNd6t3ofePpdNjErl 192.168.2.108 56821 176.31.225.118 80 1 GET tracker.trackerfix.com /announce?info_hash=y\x82es"\x1dV\xde|m\xbe"\xe5\xef\xbe\x04\xb3\x1fW\xfc&peer_id=-qB4210-0ZOn5Ifyl*WF&port=63108&uploaded=0&downloaded=0&left=3225455594&corrupt=0&key=6B23B036&event=started&numwant=200&compact=1&no_peer_id=1&supportcrypto=1&redundant=0 - 1.1 - - 0 0 307 Temporary Redirect - - (empty) - - - - - - - - -
1581245960.123295 CfAkwf2CFI13b24gqf 192.168.2.108 56889 176.31.225.118 80 1 GET tracker.trackerfix.com /announce?info_hash=!u7\xdad\x94x\xecS\x80\x89\x04\x9c\x13#\x84M\x1b\xcd\x1a&peer_id=-qB4210-i36iloGe*QT9&port=63108&uploaded=0&downloaded=0&left=1637966572&corrupt=0&key=ECE6637E&event=started&numwant=200&compact=1&no_peer_id=1&supportcrypto=1&redundant=0 - 1.1 - - 0 0 307 Temporary Redirect - - (empty) - - - - - - - - -
#close 2020-02-09-19-01-40
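The ``#``-prefixed directives are what make the ASCII format self-describing:
``#separator`` declares the field delimiter used by every following line, while
``#fields`` and ``#types`` declare the column names and their Bro/Zeek types.
As a rough sketch of the idea in plain Python (an illustration only, not
``ZLogging``'s actual parser):

.. code:: python

    # Illustrative only: interpret the "#separator" and "#fields"
    # directives of a Bro/Zeek ASCII log header.
    header = (b'#separator \\x09\n'
              b'#set_separator\t,\n'
              b'#fields\tts\tuid\tid.orig_h\n')

    lines = header.splitlines()
    # The separator directive is space-separated and escape-encoded.
    separator = lines[0].split(b' ', 1)[1].decode('unicode_escape')
    # All other directives (and records) use the declared separator.
    fields = lines[2].decode().split(separator)[1:]
    print(fields)  # ['ts', 'uid', 'id.orig_h']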
Its corresponding JSON log file would look like this::
{"ts": 1581245648.761106, "uid": "CSksID3S6ZxplpvmXg", "id.orig_h": "192.168.2.108", "id.orig_p": 56475, "id.resp_h": "151.139.128.14", "id.resp_p": 80, "trans_depth": 1, "method": "GET", "host": "ocsp.sectigo.com", "uri": "/MFYwVKADAgEAME0wSzBJMAkGBSsOAwIaBQAEFEML0g5PE3oabJGPJOXafjJNRzPIBBSNjF7EVK2K4Xfpm/mbBeG4AY1h4QIQfdsAWJ+CXcbhDVFyNWosjQ==", "referrer": "-", "version": "1.1", "user_agent": "com.apple.trustd/2.0", "origin": "-", "request_body_len": 0, "response_body_len": 471, "status_code": 200, "status_msg": "OK", "info_code": null, "info_msg": "-", "tags": [], "username": "-", "password": "-", "proxied": null, "orig_fuids": null, "orig_filenames": null, "orig_mime_types": null, "resp_fuids": ["FPtlyEAhcf8orBPu7"], "resp_filenames": null, "resp_mime_types": ["application/ocsp-response"]}
{"ts": 1581245651.379048, "uid": "CuvUnl4HyhQbCs4tXe", "id.orig_h": "192.168.2.108", "id.orig_p": 56483, "id.resp_h": "23.59.247.10", "id.resp_p": 80, "trans_depth": 1, "method": "GET", "host": "isrg.trustid.ocsp.identrust.com", "uri": "/MFYwVKADAgEAME0wSzBJMAkGBSsOAwIaBQAEFG/0aE1DEtJIYoGcwCs9Rywdii+mBBTEp7Gkeyxx+tvhS5B1/8QVYIWJEAIQCgFBQgAAAVOFc2oLheynCA==", "referrer": "-", "version": "1.1", "user_agent": "com.apple.trustd/2.0", "origin": "-", "request_body_len": 0, "response_body_len": 1398, "status_code": 200, "status_msg": "OK", "info_code": null, "info_msg": "-", "tags": [], "username": "-", "password": "-", "proxied": null, "orig_fuids": null, "orig_filenames": null, "orig_mime_types": null, "resp_fuids": ["FRfFoq3hSZkdCNDf9l"], "resp_filenames": null, "resp_mime_types": ["application/ocsp-response"]}
{"ts": 1581245654.396334, "uid": "CWo4pd1z97XLB2o0h2", "id.orig_h": "192.168.2.108", "id.orig_p": 56486, "id.resp_h": "23.59.247.122", "id.resp_p": 80, "trans_depth": 1, "method": "GET", "host": "isrg.trustid.ocsp.identrust.com", "uri": "/MFYwVKADAgEAME0wSzBJMAkGBSsOAwIaBQAEFG/0aE1DEtJIYoGcwCs9Rywdii+mBBTEp7Gkeyxx+tvhS5B1/8QVYIWJEAIQCgFBQgAAAVOFc2oLheynCA==", "referrer": "-", "version": "1.1", "user_agent": "com.apple.trustd/2.0", "origin": "-", "request_body_len": 0, "response_body_len": 1398, "status_code": 200, "status_msg": "OK", "info_code": null, "info_msg": "-", "tags": [], "username": "-", "password": "-", "proxied": null, "orig_fuids": null, "orig_filenames": null, "orig_mime_types": null, "resp_fuids": ["FvQehf1pRsGmwDUzJe"], "resp_filenames": null, "resp_mime_types": ["application/ocsp-response"]}
{"ts": 1581245692.72884, "uid": "CxFQzh2ePtsnQhFNX3", "id.orig_h": "192.168.2.108", "id.orig_p": 56527, "id.resp_h": "23.59.247.10", "id.resp_p": 80, "trans_depth": 1, "method": "GET", "host": "isrg.trustid.ocsp.identrust.com", "uri": "/MFYwVKADAgEAME0wSzBJMAkGBSsOAwIaBQAEFG/0aE1DEtJIYoGcwCs9Rywdii+mBBTEp7Gkeyxx+tvhS5B1/8QVYIWJEAIQCgFBQgAAAVOFc2oLheynCA==", "referrer": "-", "version": "1.1", "user_agent": "com.apple.trustd/2.0", "origin": "-", "request_body_len": 0, "response_body_len": 1398, "status_code": 200, "status_msg": "OK", "info_code": null, "info_msg": "-", "tags": [], "username": "-", "password": "-", "proxied": null, "orig_fuids": null, "orig_filenames": null, "orig_mime_types": null, "resp_fuids": ["FIeFj8WWNyhA1psGg"], "resp_filenames": null, "resp_mime_types": ["application/ocsp-response"]}
{"ts": 1581245701.693971, "uid": "CPZSNk1Y6kDvAN0KZ8", "id.orig_h": "192.168.2.108", "id.orig_p": 56534, "id.resp_h": "23.59.247.122", "id.resp_p": 80, "trans_depth": 1, "method": "GET", "host": "isrg.trustid.ocsp.identrust.com", "uri": "/MFYwVKADAgEAME0wSzBJMAkGBSsOAwIaBQAEFG/0aE1DEtJIYoGcwCs9Rywdii+mBBTEp7Gkeyxx+tvhS5B1/8QVYIWJEAIQCgFBQgAAAVOFc2oLheynCA==", "referrer": "-", "version": "1.1", "user_agent": "com.apple.trustd/2.0", "origin": "-", "request_body_len": 0, "response_body_len": 1398, "status_code": 200, "status_msg": "OK", "info_code": null, "info_msg": "-", "tags": [], "username": "-", "password": "-", "proxied": null, "orig_fuids": null, "orig_filenames": null, "orig_mime_types": null, "resp_fuids": ["F0fGHe4RPuNBhYWNv6"], "resp_filenames": null, "resp_mime_types": ["application/ocsp-response"]}
{"ts": 1581245707.848088, "uid": "Cnab6CHFOprdppKi5", "id.orig_h": "192.168.2.108", "id.orig_p": 56542, "id.resp_h": "23.59.247.122", "id.resp_p": 80, "trans_depth": 1, "method": "GET", "host": "isrg.trustid.ocsp.identrust.com", "uri": "/MFYwVKADAgEAME0wSzBJMAkGBSsOAwIaBQAEFG/0aE1DEtJIYoGcwCs9Rywdii+mBBTEp7Gkeyxx+tvhS5B1/8QVYIWJEAIQCgFBQgAAAVOFc2oLheynCA==", "referrer": "-", "version": "1.1", "user_agent": "com.apple.trustd/2.0", "origin": "-", "request_body_len": 0, "response_body_len": 1398, "status_code": 200, "status_msg": "OK", "info_code": null, "info_msg": "-", "tags": [], "username": "-", "password": "-", "proxied": null, "orig_fuids": null, "orig_filenames": null, "orig_mime_types": null, "resp_fuids": ["FgDBep1h7EPHC8qQB6"], "resp_filenames": null, "resp_mime_types": ["application/ocsp-response"]}
{"ts": 1581245952.784242, "uid": "CPNd6t3ofePpdNjErl", "id.orig_h": "192.168.2.108", "id.orig_p": 56821, "id.resp_h": "176.31.225.118", "id.resp_p": 80, "trans_depth": 1, "method": "GET", "host": "tracker.trackerfix.com", "uri": "/announce?info_hash=y\\x82es\"\\x1dV\\xde|m\\xbe\"\\xe5\\xef\\xbe\\x04\\xb3\\x1fW\\xfc&peer_id=-qB4210-0ZOn5Ifyl*WF&port=63108&uploaded=0&downloaded=0&left=3225455594&corrupt=0&key=6B23B036&event=started&numwant=200&compact=1&no_peer_id=1&supportcrypto=1&redundant=0", "referrer": "-", "version": "1.1", "user_agent": "-", "origin": "-", "request_body_len": 0, "response_body_len": 0, "status_code": 307, "status_msg": "Temporary Redirect", "info_code": null, "info_msg": "-", "tags": [], "username": "-", "password": "-", "proxied": null, "orig_fuids": null, "orig_filenames": null, "orig_mime_types": null, "resp_fuids": null, "resp_filenames": null, "resp_mime_types": null}
{"ts": 1581245960.123295, "uid": "CfAkwf2CFI13b24gqf", "id.orig_h": "192.168.2.108", "id.orig_p": 56889, "id.resp_h": "176.31.225.118", "id.resp_p": 80, "trans_depth": 1, "method": "GET", "host": "tracker.trackerfix.com", "uri": "/announce?info_hash=!u7\\xdad\\x94x\\xecS\\x80\\x89\\x04\\x9c\\x13#\\x84M\\x1b\\xcd\\x1a&peer_id=-qB4210-i36iloGe*QT9&port=63108&uploaded=0&downloaded=0&left=1637966572&corrupt=0&key=ECE6637E&event=started&numwant=200&compact=1&no_peer_id=1&supportcrypto=1&redundant=0", "referrer": "-", "version": "1.1", "user_agent": "-", "origin": "-", "request_body_len": 0, "response_body_len": 0, "status_code": 307, "status_msg": "Temporary Redirect", "info_code": null, "info_msg": "-", "tags": [], "username": "-", "password": "-", "proxied": null, "orig_fuids": null, "orig_filenames": null, "orig_mime_types": null, "resp_fuids": null, "resp_filenames": null, "resp_mime_types": null}
How to Load/Parse a Log File?
-----------------------------
To load (parse) a log file generically, i.e. when you don't know which format
the log file is in, you can simply call the ``zlogging.loader.parse``,
``zlogging.loader.load`` and ``zlogging.loader.loads`` functions::
# to parse log at filename
>>> parse('path/to/log')
# to load log from a file object
>>> with open('path/to/log', 'rb') as file:
... load(file)
# to load log from a string
>>> with open('/path/to/log', 'rb') as file:
... loads(file.read())
.. note::

    When using ``zlogging.loader.load``, the file object must be opened
    in binary mode.

    When using ``zlogging.loader.loads``, if the ``data`` supplied is an
    encoded string (``str``), the function will first try to encode it into a
    bytestring (``bytes``) with ``'ascii'`` encoding.
If you do know the format, you may call the format-specific functions directly,
e.g. ``zlogging.loader.parse_ascii`` and ``zlogging.loader.parse_json``.
If you would like to customise your own parser, just subclass
``zlogging.loader.BaseParser`` and implement your own ideas.
How to Dump/Write a Log File?
-----------------------------
Before dumping (writing) a log file, you need to create a log **data model**
first. Just as in the Bro/Zeek scripting language, when customising logging, you
need to register a new log stream with the logging framework. Here, in
``ZLogging``, we introduced the **data model** for the same purpose.
A **data model** is a subclass of ``zlogging.model.Model`` with fields
and data types declared. A typical **data model** can be as follows::
class MyLog(Model):
field_one = StringType()
field_two = SetType(element_type=PortType)
where ``field_one`` is ``string`` type, i.e. ``zlogging.types.StringType``;
and ``field_two`` is ``set[port]`` type, i.e. ``zlogging.types.SetType``
of ``zlogging.types.PortType``.
Or you may use type annotations, as introduced in `PEP 484`_, when declaring **data models**.
All available type hints can be found in ``zlogging.typing``::
class MyLog(Model):
field_one: zeek_string
field_two: zeek_set[zeek_port]
After declaring your **data model**, you can then dump (write) your log
file with the corresponding functions.
If you would like to customise your own writer, just subclass
``zlogging.loader.BaseWriter`` and implement your own ideas.
.. _PEP 484:
https://www.python.org/dev/peps/pep-0484/
.. [1] https://blog.zeek.org/2018/10/renaming-bro-project_11.html
import collections
import keyword
import os
import re
import shutil
import subprocess # nosec: B404
import sys
import textwrap
import bs4
import html2text
ROOT = os.path.dirname(os.path.abspath(__file__))
PATH = os.path.abspath(os.path.join(ROOT, '..', 'zlogging', 'enum'))
os.makedirs(PATH, exist_ok=True)
shutil.rmtree(PATH)
os.makedirs(PATH, exist_ok=True)
# regular expression
REGEX_ENUM = re.compile(r'((?P<namespace>[_a-z]+[_a-z0-9]*)::)?(?P<enum>[_a-z]+[_a-z0-9]*)', re.IGNORECASE)
REGEX_LINK = re.compile(r'\[(?P<name>.*?)\]\(.*?\)', re.IGNORECASE)
# file template
TEMPLATE_ENUM = '''\
# -*- coding: utf-8 -*-
# pylint: disable=line-too-long
"""Namespace: ``{namespace}``."""
from zlogging._compat import enum
'''
TEMPLATE_INIT = '''\
# -*- coding: utf-8 -*-
# pylint: disable=ungrouped-imports,duplicate-key
"""Bro/Zeek enum namespace."""
import builtins
import warnings
from typing import TYPE_CHECKING
from zlogging._exc import BroDeprecationWarning
'''
TEMPLATE_FUNC = '''\
def globals(*namespaces: 'str', bare: 'bool' = False) -> 'dict[str, Enum]': # pylint: disable=redefined-builtin
"""Generate Bro/Zeek ``enum`` namespace.
Args:
*namespaces: Namespaces to be loaded.
bare: If ``True``, do not load ``zeek`` namespace by default.
Returns:
Global enum namespace.
Warns:
BroDeprecationWarning: If ``bro`` namespace used.
Raises:
:exc:`ValueError`: If ``namespace`` is not defined.
Note:
For back-port compatibility, the ``bro`` namespace is an alias of the
``zeek`` namespace.
"""
if bare:
enum_data = {} # type: dict[str, Enum]
else:
enum_data = _enum_zeek.copy()
for namespace in namespaces:
if namespace == 'bro':
warnings.warn("Use of 'bro' is deprecated. "
"Please use 'zeek' instead.", BroDeprecationWarning)
namespace = 'zeek'
enum_dict = builtins.globals().get('_enum_%s' % namespace) # pylint: disable=consider-using-f-string
if enum_dict is None:
raise ValueError('undefined namespace: %s' % namespace) # pylint: disable=consider-using-f-string
enum_data.update(enum_dict)
return enum_data
'''
file_list = [] # type: list[str]
for dirpath, _, filenames in os.walk(os.path.join(ROOT, 'sources')):
file_list.extend(map(
lambda name: os.path.join(ROOT, 'sources', dirpath, name), # pylint: disable=cell-var-from-loop
filter(lambda name: os.path.splitext(name)[1] == '.html', filenames)
))
# namespace, enum, name
enum_records = [] # type: list[tuple[str, str, str, str]]
# postpone checks
dest_list = []
for html_file in sorted(file_list):
print(f'+ {html_file}')
with open(html_file, encoding='utf-8') as file:
html = file.read()
soup = bs4.BeautifulSoup(html, 'html5lib')
for tag in soup.select('dl.type'):
descname = tag.select('dt code.descname')
if not descname:
continue
name = descname[0].text.strip()
print(f'++ {name}')
selected = tag.select('dd td p.first span.pre')
if not selected:
continue
type = selected[0].text.strip() # pylint: disable=redefined-builtin
if type != 'enum':
continue
enum_list = []
for dl in tag.select('dd td dl.enum'):
enum_name = dl.select('dt code.descname')[0].text.strip()
enum_docs = dl.select('dd')[0].text.strip()
enum_list.append((enum_name, enum_docs))
docs_list = []
for p in tag.select('dd')[0].children:
if p.name != 'p':
continue
docs = '\n '.join(
textwrap.wrap(
REGEX_LINK.sub(
r'\g<name>',
html2text.html2text(
str(p).replace('\n', ' ')
).replace('\n', ' ')
).replace('`', '``').strip(),
100, break_on_hyphens=False,
)
)
if not docs.endswith('.'):
docs += '.'
docs_list.append(docs)
match = REGEX_ENUM.fullmatch(name)
if match is None:
raise ValueError(name)
namespace = match.group('namespace')
if namespace is None:
namespace = 'zeek'
enum_name = match.group('enum')
dest = os.path.join(PATH, f'{namespace}.py')
if not os.path.isfile(dest):
with open(dest, 'w', encoding='utf-8') as file:
file.write(TEMPLATE_ENUM.format(namespace=namespace))
docs_list.insert(0, f'Enum: ``{name}``.')
html_path = os.path.splitext(os.path.relpath(html_file, os.path.join(ROOT, 'sources')))[0]
docs_list.append(f'See Also:\n `{html_path} <https://docs.zeek.org/en/stable/scripts/{html_path}.html#type-{name}>`__\n\n ') # pylint: disable=line-too-long
enum_docs = '\n\n '.join(docs_list)
with open(dest, 'a', encoding='utf-8') as file:
print('', file=file)
print('', file=file)
print('@enum.unique', file=file)
print(f'class {enum_name}(enum.IntFlag):', file=file)
if enum_docs:
print(f' """{enum_docs}"""', file=file)
print('', file=file)
print(f" _ignore_ = '{enum_name} _'", file=file)
print(f' {enum_name} = vars()', file=file)
print('', file=file)
length = len(enum_list)
for index, (enum, docs) in enumerate(enum_list, start=1):
safe_docs = docs.replace('\n', '\n #: ').replace('_', r'\_')
safe_enum = re.sub(f'{namespace}::', '', enum)
if '::' in safe_enum:
safe_docs = f'{enum}\n #: ' + safe_docs
safe_enum = safe_enum.replace('::', '_')
if safe_docs:
print(f' #: {safe_docs}', file=file)
if keyword.iskeyword(safe_enum):
print(f' {enum_name}[{safe_enum!r}] = enum.auto()', file=file)
else:
print(f' {safe_enum} = enum.auto()', file=file)
if index != length:
print('', file=file)
enum_records.append((namespace, enum_name, enum, safe_enum))
dest_list.append(dest)
imported = []
enum_line = collections.defaultdict(list)
with open(os.path.join(PATH, '__init__.py'), 'w', encoding='utf-8') as file:
file.write(TEMPLATE_INIT)
for namespace, enum, name, enum_name in sorted(enum_records):
if (namespace, enum) not in imported:
print(f'from zlogging.enum.{namespace} import {enum} as {namespace}_{enum}', file=file)
imported.append((namespace, enum))
enum_line[namespace].append(f' {enum!r}: {namespace}_{enum},')
match = REGEX_ENUM.fullmatch(name)
if match is None:
raise ValueError(name)
safe_namespace = match.group('namespace')
if safe_namespace is None:
safe_namespace = namespace
safe_name = match.group('enum')
if keyword.iskeyword(enum_name):
enum_line[safe_namespace].append(f' {safe_name!r}: {namespace}_{enum}[{enum_name!r}], # type: ignore[misc]') # pylint: disable=line-too-long
else:
enum_line[safe_namespace].append(f' {enum_name!r}: {namespace}_{enum}.{enum_name},')
print('', file=file)
print("__all__ = ['globals']", file=file)
print('', file=file)
print('if TYPE_CHECKING:', file=file)
print(' from enum import Enum', file=file)
print('', file=file)
for namespace in sorted(enum_line):
print(f'_enum_{namespace} = {{', file=file)
for line in sorted(enum_line[namespace]):
print(line, file=file)
print('}', file=file)
print('', file=file)
print('', file=file)
file.write(TEMPLATE_FUNC)
subprocess.check_call([sys.executable, os.path.join(PATH, '__init__.py')]) # nosec: B603
for dest in dest_list:
subprocess.check_call([sys.executable, dest]) # nosec: B603
import abc
import binascii
import builtins
import contextlib
import ctypes
import dataclasses
import datetime
import enum
import functools
import ipaddress
import json
import multiprocessing
import os
import textwrap
import time
import typing
import warnings
__all__ = [
# Bro types
'bro_addr', 'bro_bool', 'bro_count', 'bro_double', 'bro_enum', 'bro_int',
'bro_interval', 'bro_vector', 'bro_port', 'bro_set', 'bro_string',
'bro_subnet', 'bro_time',
# logging fields
'AddrField', 'BoolField', 'CountField', 'DoubleField', 'EnumField',
'IntField', 'IntervalField', 'PortField', 'StringField', 'SubnetField',
'TimeField',
# logging model
'Model',
# logging writers
'Logger', 'JSONLogger', 'TEXTLogger',
]
LOGS_PATH = '/var/log'
###############################################################################
# warnings & exceptions
class LogWarning(Warning):
pass
class IntWarning(LogWarning):
pass
class CountWarning(LogWarning):
pass
class BoolWarning(LogWarning):
pass
class LogError(Exception):
pass
class FieldError(LogError, TypeError):
pass
class TypingError(LogError, TypeError):
pass
class ModelError(LogError, ValueError):
pass
###############################################################################
# Bro logging fields
def _type_check(func):
@functools.wraps(func)
def check(self, value):
if self.predicate(value):
return func(self, self.cast(value))
raise FieldError(f'Bro {self.type} is required (got type {type(value).__name__!r})')
return check
class _Field(metaclass=abc.ABCMeta):
###########################################################################
# APIs for overload
__type__ = NotImplemented
def jsonify(self, value): # pylint: disable=no-self-use
return json.dumps(value)
def textify(self, value): # pylint: disable=no-self-use
return str(value)
def predicate(self, value): # pylint: disable=unused-argument, no-self-use
return True
def cast(self, value): # pylint: disable=no-self-use
return value
###########################################################################
__use_json__ = False
__seperator__ = '\x09'
__set_seperator__ = ','
__empty_field__ = '(empty)'
__unset_field__ = '-'
@property
def type(self):
return self.__type__
@property
def json(self):
return self.__use_json__
@property
def seperator(self):
return self.__seperator__
@property
def set_seperator(self):
return self.__set_seperator__
@property
def empty_field(self):
return self.__empty_field__
@property
def unset_field(self):
return self.__unset_field__
@classmethod
def set_attributes(cls, *,
use_json=False,
seperator='\x09',
set_seperator=',',
empty_field='(empty)',
unset_field='-'):
cls.__use_json__ = use_json
cls.__seperator__ = seperator
cls.__set_seperator__ = set_seperator
cls.__empty_field__ = empty_field
cls.__unset_field__ = unset_field
return cls
def __new__(cls, *args, **kwargs): # pylint: disable=unused-argument
if cls.__type__ is NotImplemented:
raise NotImplementedError
return super().__new__(cls)
def __call__(self, value):
if self.json:
return self._to_json(value)
if value is None:
return self.unset_field
return self._to_text(value)
@classmethod
def __repr__(cls):
return cls.__type__
@_type_check
def _to_json(self, value):
return self.jsonify(value)
@_type_check
def _to_text(self, value):
return self.textify(value) or self.empty_field
class _SimpleField(_Field):
pass
class StringField(_SimpleField):
__type__ = 'string'
def cast(self, value): # pylint: disable=no-self-use
return str(value).encode('unicode-escape').decode()
class PortField(_SimpleField):
__type__ = 'port'
def predicate(self, value): # pylint: disable=no-self-use
if isinstance(value, int):
if 0 <= value <= 65535:
return True
return False
if isinstance(value, ctypes.c_uint16):
return True
return False
def cast(self, value): # pylint: disable=no-self-use
if isinstance(value, int):
return ctypes.c_uint16(value).value
return value.value
class EnumField(_SimpleField):
__type__ = 'enum'
def predicate(self, value): # pylint: disable=no-self-use
return isinstance(value, enum.Enum)
def cast(self, value): # pylint: disable=no-self-use
if isinstance(value, enum.Enum):
return value.name
return value
class IntervalField(_SimpleField):
__type__ = 'interval'
def predicate(self, value): # pylint: disable=no-self-use
if isinstance(value, datetime.timedelta):
return True
try:
float(value)
except (TypeError, ValueError):
return False
return True
def cast(self, value): # pylint: disable=no-self-use
if isinstance(value, datetime.timedelta):
return '%.6f' % value.total_seconds()
return '%.6f' % float(value)
def jsonify(self, value): # pylint: disable=no-self-use
return '%s%s' % (value[:-5], value[-5:].rstrip('0'))
def textify(self, value): # pylint: disable=no-self-use
return '%s%s' % (value[:-5], value[-5:].rstrip('0'))
class AddrField(_SimpleField):
__type__ = 'addr'
def predicate(self, value): # pylint: disable=no-self-use
try:
ipaddress.ip_address(value)
except (TypeError, ValueError):
return False
return True
def cast(self, value): # pylint: disable=no-self-use
return str(ipaddress.ip_address(value))
class SubnetField(_SimpleField):
__type__ = 'subnet'
def predicate(self, value): # pylint: disable=no-self-use
try:
ipaddress.ip_network(value)
except (TypeError, ValueError):
return False
return True
def cast(self, value): # pylint: disable=no-self-use
return str(ipaddress.ip_network(value))
class IntField(_SimpleField):
__type__ = 'int'
def predicate(self, value): # pylint: disable=no-self-use
if isinstance(value, int):
if int.bit_length(value) > 64:
warnings.warn(f'{value} exceeds maximum value', IntWarning)
return True
if isinstance(value, ctypes.c_int64):
return True
return False
def cast(self, value): # pylint: disable=no-self-use
if isinstance(value, int):
return ctypes.c_int64(value).value
return value.value
class CountField(_SimpleField):
__type__ = 'count'
def predicate(self, value): # pylint: disable=no-self-use
if isinstance(value, int):
if int.bit_length(value) > 64:
warnings.warn(f'{value} exceeds maximum value', CountWarning)
if value < 0:
warnings.warn(f'negative value {value} casts to unsigned', CountWarning)
return True
if isinstance(value, ctypes.c_uint64):
return True
return False
def cast(self, value): # pylint: disable=no-self-use
if isinstance(value, int):
return ctypes.c_uint64(value).value
return value.value
class TimeField(_SimpleField):
__type__ = 'time'
def predicate(self, value): # pylint: disable=no-self-use
if isinstance(value, datetime.datetime):
return True
if isinstance(value, time.struct_time):
return True
try:
float(value)
except (TypeError, ValueError):
return False
return True
def cast(self, value): # pylint: disable=no-self-use
if isinstance(value, datetime.datetime):
return value.timestamp()
if isinstance(value, time.struct_time):
return time.mktime(value)
return float(value)
class DoubleField(_SimpleField):
__type__ = 'double'
def predicate(self, value): # pylint: disable=no-self-use
try:
float(value)
except (TypeError, ValueError):
return False
return True
def cast(self, value): # pylint: disable=no-self-use
return '%.6f' % float(value)
def jsonify(self, value): # pylint: disable=no-self-use
return '%s%s' % (value[:-5], value[-5:].rstrip('0'))
def textify(self, value): # pylint: disable=no-self-use
return '%s%s' % (value[:-5], value[-5:].rstrip('0'))
class BoolField(_SimpleField):
__type__ = 'bool'
def predicate(self, value): # pylint: disable=no-self-use
if not isinstance(value, bool):
warnings.warn(f'cast {type(value).__name__!r} type to bool value', BoolWarning)
return True
def jsonify(self, value): # pylint: disable=no-self-use
return 'true' if bool(value) else 'false'
def textify(self, value): # pylint: disable=no-self-use
return 'T' if bool(value) else 'F'
class _GenericField(_Field):
pass
class RecordField(_GenericField):
__type__ = '~record'
@property
def type(self):
return self.seperator.join(self.__field_type__)
def __init__(self, value=None, **kwargs):
if value is None:
_kw = dict()
elif dataclasses.is_dataclass(value):
_kw = dict()
for field in dataclasses.fields(value):
if field.default is not dataclasses.MISSING:
_kw[field.name] = field.default
elif field.default_factory is not dataclasses.MISSING:
_kw[field.name] = field.default_factory()
else:
_kw[field.name] = field.type
else:
_kw = dict(value)
_kw.update(kwargs)
self.__field_type__ = list()
self.__field_factory__ = dict()
for key, val in _kw.items():
if isinstance(val, typing.TypeVar):
val = val.__bound__
if isinstance(val, _Field):
field_val = val
elif isinstance(val, type) and issubclass(val, _SimpleField):
field_val = val()
else:
raise FieldError(f'invalid Bro record field: {val}')
self.__field_type__.append(field_val.type)
self.__field_factory__[key] = field_val
def predicate(self, value): # pylint: disable=no-self-use
if dataclasses.is_dataclass(value):
return True
try:
dict(value)
except (TypeError, ValueError):
return False
return True
def cast(self, value): # pylint: disable=no-self-use
if dataclasses.is_dataclass(value):
value_dict = dataclasses.asdict(value)
else:
value_dict = dict(value)
_value = dict()
for key, val in self.__field_factory__.items():
if key not in value_dict:
raise FieldError(f'missing field {key!r} in Bro record')
_value[key] = val(value_dict[key])
return _value
def jsonify(self, value):
return '{%s}' % ', '.join(f'{json.dumps(key)}: {val}' for key, val in value.items())
def textify(self, value): # pylint: disable=no-self-use
if value:
return self.seperator.join(value.values())
return self.empty_field
class _SequenceField(_GenericField):
@property
def type(self):
return '%s[%s]' % (self.__type__, self.__field_type__)
def __init__(self, value):
if isinstance(value, typing.TypeVar):
value = value.__bound__
if isinstance(value, _Field):
if not self.json and isinstance(value, _GenericField):
raise FieldError(f'invalid recursive field in ASCII mode: {self.__type__}[{value.__type__}]')
field_value = value
elif isinstance(value, type) and issubclass(value, _SimpleField):
field_value = value()
else:
raise FieldError(f'invalid Bro {self.__type__} field')
self.__field_type__ = field_value.type
self.__field_factory__ = field_value
def jsonify(self, value):
return '[%s]' % ', '.join(value)
def textify(self, value):
if value:
return self.set_seperator.join(value)
return self.empty_field
class SetField(_SequenceField):
__type__ = 'set'
def cast(self, value):
return set(self.__field_factory__(element) for element in value)
class VectorField(_SequenceField):
__type__ = 'vector'
def cast(self, value):
return list(self.__field_factory__(element) for element in value)
###############################################################################
# Bro logging types
# internal typings
_bro_string = typing.TypeVar('bro_string', bound=StringField) # _bro_string.__bound__ == StringField
_bro_port = typing.TypeVar('bro_port', bound=PortField)
_bro_enum = typing.TypeVar('bro_enum', bound=EnumField)
_bro_interval = typing.TypeVar('bro_interval', bound=IntervalField)
_bro_addr = typing.TypeVar('bro_addr', bound=AddrField)
_bro_subnet = typing.TypeVar('bro_subnet', bound=SubnetField)
_bro_int = typing.TypeVar('bro_int', bound=IntField)
_bro_count = typing.TypeVar('bro_count', bound=CountField)
_bro_time = typing.TypeVar('bro_time', bound=TimeField)
_bro_double = typing.TypeVar('bro_double', bound=DoubleField)
_bro_bool = typing.TypeVar('bro_bool', bound=BoolField)
_bro_record = typing.TypeVar('bro_record', bound=RecordField)
_bro_set = typing.TypeVar('bro_set', bound=SetField)
_bro_vector = typing.TypeVar('bro_vector', bound=VectorField)
_bro_type = typing.TypeVar('bro_type', # _bro_type.__constraints__ == (...)
_bro_string,
_bro_port,
_bro_enum,
_bro_interval,
_bro_addr,
_bro_subnet,
_bro_int,
_bro_count,
_bro_time,
_bro_double,
_bro_bool,
_bro_record,
_bro_set,
_bro_vector)
class _bro_record(typing._SpecialForm, _root=True): # pylint: disable=protected-access
def __repr__(self):
return 'bro_record'
def __init__(self, name, doc): # pylint: disable=unused-argument
super().__init__('bro_record', '')
@typing._tp_cache # pylint: disable=protected-access
def __getitem__(self, parameters):
if parameters == ():
raise TypingError('cannot take a Bro record of no types.')
if not isinstance(parameters, tuple):
parameters = (parameters,)
parameters = typing._remove_dups_flatten(parameters) # pylint: disable=protected-access
if len(parameters) == 1:
return parameters[0]
return typing._GenericAlias(self, parameters) # pylint: disable=protected-access
class _bro_set(typing.Generic[_bro_type]):
pass
class _bro_vector(typing.Generic[_bro_type]):
pass
# basic Bro types
bro_string = _bro_string
bro_port = _bro_port
bro_enum = _bro_enum
bro_interval = _bro_interval
bro_addr = _bro_addr
bro_subnet = _bro_subnet
bro_int = _bro_int
bro_count = _bro_count
bro_time = _bro_time
bro_double = _bro_double
bro_bool = _bro_bool
bro_set = _bro_set
bro_vector = _bro_vector
bro_record = _bro_record()
###############################################################################
# Bro logging data model
class Model(RecordField):
###########################################################################
# APIs for overload
# def __post_init_prefix__(self):
# pass
def __post_init_suffix__(self):
pass
###########################################################################
###########################################################################
# APIs for overload
def default(self, field_typing): # pylint: disable=unused-argument, no-self-use
return False
def fallback(self, field_typing): # pylint: disable=no-self-use
raise ModelError(f'unknown field type: {field_typing.__name__}')
###########################################################################
__dataclass_init__ = True
__dataclass_repr__ = True
__dataclass_eq__ = True
__dataclass_order__ = True
__dataclass_unsafe_hash__ = True
__dataclass_frozen__ = True
@classmethod
def set_dataclass(cls, *,
init=True,
repr=True, # pylint: disable=redefined-builtin
eq=True,
order=True,
unsafe_hash=True,
frozen=True):
cls.__dataclass_init__ = init
cls.__dataclass_repr__ = repr
cls.__dataclass_eq__ = eq
cls.__dataclass_order__ = order
cls.__dataclass_unsafe_hash__ = unsafe_hash
cls.__dataclass_frozen__ = frozen
return cls
def __new__(cls, *args, **kwargs): # pylint: disable=unused-argument
if dataclasses.is_dataclass(cls):
cls = dataclasses.make_dataclass(cls.__name__, # pylint: disable=self-cls-assignment
[(field.name, field.type, field) for field in dataclasses.fields(cls)],
bases=cls.__bases__,
namespace=cls.__dict__,
init=cls.__dataclass_init__,
repr=cls.__dataclass_repr__,
eq=cls.__dataclass_eq__,
order=cls.__dataclass_order__,
unsafe_hash=cls.__dataclass_unsafe_hash__,
frozen=cls.__dataclass_frozen__)
else:
cls = dataclasses._process_class(cls, # pylint: disable=protected-access, self-cls-assignment
init=cls.__dataclass_init__,
repr=cls.__dataclass_repr__,
eq=cls.__dataclass_eq__,
order=cls.__dataclass_order__,
unsafe_hash=cls.__dataclass_unsafe_hash__,
frozen=cls.__dataclass_frozen__)
return super().__new__(cls)
def __post_init__(self):
orig_flag = hasattr(self, '__foo')
if orig_flag:
orig = getattr(self, '__foo')
try:
setattr(self, '__foo', 'foo')
except dataclasses.FrozenInstanceError as error:
raise ModelError(f'frozen model: {error}').with_traceback(error.__traceback__) from None
except Exception: # pylint: disable=try-except-raise
raise
if orig_flag:
setattr(self, '__foo', orig)
else:
delattr(self, '__foo')
self.__post_init_prefix__()
for fn in dataclasses._frozen_get_del_attr(self, dataclasses.fields(self)): # pylint: disable=protected-access
if dataclasses._set_new_attribute(self, fn.__name__, fn): # pylint: disable=protected-access
raise ModelError(f'cannot overwrite attribute {fn.__name__} in class {type(self).__name__}')
self.__post_init_suffix__()
def __call__(self, value=None, **kwargs): # pylint: disable=arguments-differ
if value is None:
value_new = dict()
elif dataclasses.is_dataclass(value):
value_new = dataclasses.asdict(value)
else:
value_new = dict(value)
value_new.update(kwargs)
return super().__call__(value_new)
def __post_init_prefix__(self):
for field in dataclasses.fields(self):
self._typing_check(field.type)
value = getattr(self, field.name)
factory = self._get_factory(field.type)
setattr(self, field.name, factory(value))
def _typing_check(self, field_typing): # pylint: disable=inconsistent-return-statements
if self.default(field_typing):
return
if isinstance(field_typing, type):
if issubclass(field_typing, _Field):
return
raise FieldError(f'unknown Bro type: {field_typing.__name__}')
if isinstance(field_typing, _Field):
return
if field_typing in (bro_vector, bro_set):
raise FieldError('container Bro type not initialised')
if hasattr(field_typing, '__supertype__'):
if field_typing in _bro_type.__constraints__: # pylint: disable=no-member
return
raise FieldError(f'unknown Bro type: {field_typing.__name__}')
if hasattr(field_typing, '__origin__'):
if field_typing.__origin__ not in (bro_vector, bro_set):
raise FieldError(f'unknown Bro type: {field_typing.__name__}')
__args__ = field_typing.__args__
if len(__args__) < 1: # pylint: disable=len-as-condition
raise FieldError('too few types for Bro container type')
if len(__args__) > 1:
raise FieldError('too many types for Bro container type')
return self._typing_check(__args__[0])
raise FieldError(f'unknown Bro type: {field_typing.__name__}')
def _get_factory(self, field_typing):
if isinstance(field_typing, type) and issubclass(field_typing, _Field):
return field_typing.set_attributes(use_json=self.json, seperator=self.__seperator__)
if isinstance(field_typing, _Field):
return type(field_typing).set_attributes(use_json=self.json, seperator=self.__seperator__)
if hasattr(field_typing, '__supertype__'):
return field_typing.__supertype__.set_attributes(use_json=self.json, seperator=self.__seperator__)
if hasattr(field_typing, '__origin__'):
if field_typing.__origin__ is bro_set:
factory = self._get_factory(field_typing.__args__[0])
return lambda iterable: set(factory(element) for element in iterable)
if field_typing.__origin__ is bro_vector:
factory = self._get_factory(field_typing.__args__[0])
return lambda iterable: list(factory(element) for element in iterable)
return self.fallback(field_typing)
###############################################################################
# Bro logging writers
class Logger(metaclass=abc.ABCMeta):
###########################################################################
# APIs for overload
@property
@abc.abstractmethod
def format(self):
pass
@property
def json(self):
return False
def init(self, file):
pass
def exit(self):
pass
@abc.abstractmethod
def log(self, model):
pass
def fallback(self, field_typing): # pylint: disable=no-self-use
raise ModelError(f'unknown field type: {field_typing.__name__}')
def __pre_init__(self, path, model, *, log_suffix=None, async_write=True, **kwargs):
pass
###########################################################################
__seperator__ = '\x09'
@property
def path(self):
return self._path
def __init__(self, path, model, *, log_suffix=None, async_write=True, **kwargs): # pylint: disable=unused-argument
if not issubclass(model, Model):
raise ModelError(f'type {model.__name__!r} is not a valid model')
self.__pre_init__(path, model, log_suffix=log_suffix, async_write=async_write, **kwargs)
if log_suffix is None:
log_suffix = os.getenv('BROAPT_LOG_SUFFIX', '.log')
self._model = model.set_attributes(seperator=self.__seperator__)
self._field = self._init_field(model)
self._path = path
self._file = os.path.join(LOGS_PATH, f'{path}{log_suffix}')
parents = os.path.split(self._file)[0]
os.makedirs(parents, exist_ok=True)
if async_write:
self._lock = multiprocessing.Lock()
self.closed = multiprocessing.Value('B', False)
else:
self._lock = contextlib.nullcontext()
self.closed = dataclasses.make_dataclass('closed', [('value', bool, False)])()
self.open()
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.close()
def _get_name(self, field_typing):
if isinstance(field_typing, type) and issubclass(field_typing, _Field):
return (field_typing.__type__, None)
if isinstance(field_typing, _Field):
return (field_typing.__type__, None)
if hasattr(field_typing, '__supertype__'):
return (field_typing.__supertype__.__type__, None)
if hasattr(field_typing, '__origin__'):
if field_typing.__origin__ is bro_set:
return ('set', self._get_name(field_typing.__args__[0])[0])
if field_typing.__origin__ is bro_vector:
return ('vector', self._get_name(field_typing.__args__[0])[0])
return self.fallback(field_typing)
def _init_field(self, model):
fields = dict()
for field in dataclasses.fields(model):
fields[field.name] = (field.type, self._get_name(field.type))
return fields
def _init_model(self, *args, **kwargs):
if args and isinstance(args[0], self._model):
dataclass = args[0]
else:
dataclass = self._model(*args, **kwargs)
return dataclasses.asdict(dataclass)
def open(self):
with open(self._file, 'w') as file:
self.init(file)
def close(self):
if not self.closed.value:
with self._lock:
self.exit()
self.closed.value = True
def write(self, *args, **kwargs):
try:
model = self._init_model(*args, **kwargs)
except FieldError:
raise
except Exception as error:
raise ModelError(f'invalid model: {error}').with_traceback(error.__traceback__) from None
context = self.log(model)
with self._lock:
with open(self._file, 'a') as file:
print(context, file=file)
class JSONLogger(Logger):
@property
def format(self):
return 'json'
@property
def json(self):
return True
def log(self, model):
return json.dumps(model, default=str)
class TEXTLogger(Logger):
@property
def format(self):
return 'text'
@property
def seperator(self):
return self._seperator
@property
def set_seperator(self):
return self._set_seperator
@property
def empty_field(self):
return self._empty_field
@property
def unset_field(self):
return self._unset_field
def __pre_init__(self, path, model, *, log_suffix=None, async_write=True, # pylint: disable=unused-argument, arguments-differ
seperator='\x09', set_seperator=',', empty_field='(empty)', unset_field='-'):
self.__seperator__ = seperator
self._seperator = seperator
self._set_seperator = set_seperator
self._empty_field = empty_field
self._unset_field = unset_field
@staticmethod
def _hexlify(string):
hex_string = binascii.hexlify(string.encode()).decode()
return ''.join(map(lambda s: f'\\x{s}', textwrap.wrap(hex_string, 2)))
def _expand_field_names(self):
fields = list()
for key, val in self._field.items():
record = ((val[0] is bro_record) or \
(isinstance(val[0], type) and issubclass(val[0], RecordField)) or \
isinstance(val[0], RecordField))
if record:
fields.extend(f'{key}.{field}' for field in val[1].keys())
else:
fields.append(key)
return fields
def _expand_field_types(self):
fields = list()
for key, val in self._field.items():
record = ((val[0] is bro_record) or \
(isinstance(val[0], type) and issubclass(val[0], RecordField)) or \
isinstance(val[0], RecordField))
if record:
fields.extend(field for field in val[1].values())
else:
fields.append(val[1][0] if val[1][1] is None else '%s[%s]' % val[1])
return fields
def init(self, file):
print(f'#separator {self._hexlify(self.seperator)}', file=file)
print(f'#set_separator{self.seperator}{self.set_seperator}', file=file)
print(f'#empty_field{self.seperator}{self.empty_field}', file=file)
print(f'#unset_field{self.seperator}{self.unset_field}', file=file)
print(f'#path{self.seperator}{self.path}', file=file)
print(f'#open{self.seperator}{time.strftime("%Y-%m-%d-%H-%M-%S")}', file=file)
print(f'#fields{self.seperator}{self.seperator.join(self._expand_field_names())}', file=file)
print(f'#types{self.seperator}{self.seperator.join(self._expand_field_types())}', file=file)
def exit(self):
with open(self._file, 'a') as file:
print(f'#close{self.seperator}{time.strftime("%Y-%m-%d-%H-%M-%S")}', file=file)
def log(self, model):
return self.seperator.join(str(model[field]) for field in self._field)
import collections
import ctypes
import dataclasses
import decimal
import datetime
import enum
import ipaddress
import json
import pprint
import re
import sys
import urllib.parse
import pandas
###############################################################################
# data classes
@dataclasses.dataclass
class TEXTInfo:
format = 'text'
path: str
open: datetime.datetime
close: datetime.datetime
context: pandas.DataFrame
exit_with_error: bool
@dataclasses.dataclass
class JSONInfo:
format = 'json'
context: pandas.DataFrame
###############################################################################
# parsers & macros
set_separator = None
empty_field = None
unset_field = None
def set_parser(s, t):
return set(t(e) for e in s.split(set_separator))
def vector_parser(s, t):
return list(t(e) for e in s.split(set_separator))
def str_parser(s):
if s == empty_field:
return str()
if s == unset_field:
return None
return s.encode().decode('unicode_escape')
def port_parser(s):
if s == unset_field:
return None
return ctypes.c_uint16(int(s)).value
def int_parser(s):
if s == unset_field:
return None
return ctypes.c_int64(int(s)).value
def count_parser(s):
if s == unset_field:
return None
return ctypes.c_uint64(int(s)).value
def addr_parser(s):
if s == unset_field:
return None
return ipaddress.ip_address(s)
def subnet_parser(s):
if s == unset_field:
return None
return ipaddress.ip_network(s)
def time_parser(s):
if s == unset_field:
return None
return datetime.datetime.fromtimestamp(float(s))
def float_parser(s):
if s == unset_field:
return None
with decimal.localcontext() as ctx:
ctx.prec = 6
value = +decimal.Decimal(s) # unary plus applies the 6-digit context rounding
return value
def interval_parser(s):
if s == unset_field:
return None
return datetime.timedelta(seconds=float(s))
def enum_parser(s):
if s == unset_field:
return None
return enum.Enum('<unknown>', [(s, 0)])[s]
def bool_parser(s):
if s == unset_field:
return None
if s == 'T':
return True
if s == 'F':
return False
raise ValueError
type_parser = collections.defaultdict(lambda: str_parser, dict(
string=str_parser,
port=port_parser,
enum=enum_parser,
interval=interval_parser,
addr=addr_parser,
subnet=subnet_parser,
int=int_parser,
count=count_parser,
time=time_parser,
double=float_parser,
bool=bool_parser,
))
###############################################################################
def parse_text(file, line, hook=None):
global set_separator, empty_field, unset_field
if hook is not None:
type_parser.update(hook)
temp = line.strip().split(' ', maxsplit=1)[1]
separator = urllib.parse.unquote(temp.replace('\\x', '%'))
set_separator = file.readline().strip().split(separator)[1]
empty_field = file.readline().strip().split(separator)[1]
unset_field = file.readline().strip().split(separator)[1]
path = file.readline().strip().split(separator)[1]
open_time = datetime.datetime.strptime(file.readline().strip().split(separator)[1], r'%Y-%m-%d-%H-%M-%S')
fields = file.readline().strip().split(separator)[1:]
types = file.readline().strip().split(separator)[1:]
field_parser = list()
for (field, type_) in zip(fields, types):
match_set = re.match(r'set\[(?P<type>.+)\]', type_)
if match_set is not None:
set_type = match_set.group('type')
field_parser.append((field, lambda s, t=set_type: set_parser(s, type_parser[t])))
continue
match_vector = re.match(r'vector\[(?P<type>.+)\]', type_)
if match_vector is not None:
vector_type = match_vector.group('type')
field_parser.append((field, lambda s, t=vector_type: vector_parser(s, type_parser[t])))
continue
field_parser.append((field, type_parser[type_]))
exit_with_error = True
loglist = list()
for line in file: # pylint: disable = redefined-argument-from-local
if line.startswith('#'):
exit_with_error = False
break
logline = dict()
for i, s in enumerate(line.strip().split(separator)):
field_name, field_type = field_parser[i]
logline[field_name] = field_type(s)
loglist.append(logline)
if exit_with_error:
close_time = datetime.datetime.now()
else:
close_time = datetime.datetime.strptime(line.strip().split(separator)[1], r'%Y-%m-%d-%H-%M-%S')
loginfo = TEXTInfo(
# format='text',
path=path,
open=open_time,
close=close_time,
context=pandas.DataFrame(loglist),
exit_with_error=exit_with_error,
)
return loginfo
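# A quick sketch of the header-decoding trick used in parse_text above: Zeek
# writes '#separator \x09' with the separator as a hex escape, and the parser
# recovers the literal character by rewriting '\x' to '%' and URL-unquoting.
# (decode_separator is an illustrative helper, not part of this module's API.)

```python
import urllib.parse

def decode_separator(header_line):
    """Recover the literal separator from a '#separator \\x09' header line."""
    token = header_line.strip().split(' ', maxsplit=1)[1]   # e.g. '\\x09'
    return urllib.parse.unquote(token.replace('\\x', '%'))  # '%09' -> '\t'
```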
def parse_json(file, line):
    """Parse a Zeek JSON-lines log stream; ``line`` is its first line, already read by the caller."""
    loglist = [json.loads(line)]
for line in file: # pylint: disable = redefined-argument-from-local
loglist.append(json.loads(line))
loginfo = JSONInfo(
# format='json',
context=pandas.DataFrame(loglist),
)
return loginfo
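# parse_json above reads newline-delimited JSON (one object per line).  A
# minimal standalone sketch of that pattern, without the pandas DataFrame
# wrapper (read_jsonlines is illustrative, not part of this module's API):

```python
import io
import json

def read_jsonlines(stream):
    # one JSON object per line -> list of dicts; skip blank lines
    return [json.loads(line) for line in stream if line.strip()]

rows = read_jsonlines(io.StringIO('{"ts": 1}\n{"ts": 2}\n'))
```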
def parse(filename, hook=None):
    """Parse a Zeek log file, dispatching on the leading '#' to the text or JSON parser."""
    with open(filename) as file:
line = file.readline()
if line.startswith('#'):
loginfo = parse_text(file, line, hook)
else:
loginfo = parse_json(file, line)
return loginfo
def main():
for logfile in sys.argv[1:]:
loginfo = parse(logfile)
pprint.pprint(loginfo)
if __name__ == '__main__':
sys.exit(main()) | zlogging | /zlogging-0.1.2.post1.tar.gz/zlogging-0.1.2.post1/archive/logparser.py | logparser.py |
# Zerodha Login, Automated

This package reads the access token stored in DynamoDB and provides KiteConnect / KiteTicker instances.
It can only be used in conjunction with a DynamoDB access store that keeps the access token.
## Context
This repository contains a Python package for automated login to Zerodha. The redirect URL is not handled here: separate repositories provide containers that run as serverless functions and save the access token to DynamoDB.
Check out zlogin-puppeteer (headless-Chrome-based login to Zerodha, hostable for free on Google Cloud Functions)
and zerodha-dynamodb-save-accesstoken (an AWS Lambda container) to save the access token to DynamoDB from the redirect URL.
## How to productionize
- Get a Zerodha (kite.trade) account for API-based access to the platform.
- Save the API key as the environment variable ZAPI.
- Check out the zlogin-puppeteer repository, which automates the login process using headless Chrome.
- Check out the zerodha-dynamodb-save-accesstoken repository to set up a Lambda function that handles the Zerodha redirect URL (it saves the access token in the save_access table in DynamoDB).
- Install the zlogin package from PyPI: `pip install zlogin`
```python
import zlogin
access_token = zlogin.fetch_access_token()
```
## Related repositories
This repo is a sister repo of zerodha-dynamodb-save-accesstoken and zlogin-puppeteer: the former contains the Lambda container code that stores the access token in DynamoDB, and the latter contains the Puppeteer-based Google Cloud Function code that handles the login process and redirects to the Lambda.
| zlogin | /zlogin-0.0.8.tar.gz/zlogin-0.0.8/README.md | README.md |
from random import randint
from random import shuffle
from string import ascii_lowercase as lower
from string import ascii_uppercase as upper
from string import punctuation as punct
# Zloi (and Evil) Password Generator v. 1.0
# Can generate high-, medium- and low-level passwords of a user-chosen length.
# For correct display use print("".join(passLow(USER_ENTERED_LEN))).
def passHigh(length):
    '''Generate a high-level password.

    Includes punctuation, digits (0-9), lower- and upper-case characters.
    Based on the user's desired length.
    Example: 3$lbn*B<-IMr7B0
    For correct display use print("".join(passHigh(USER_ENTERED_LEN))).'''
    lis = []
    i = 0
    while i < length:
        lis.append(lower[randint(0, 25)])
        i += 1
        if i == length:
            break
        # draw from the full punctuation set, not just its first 26 characters
        lis.append(punct[randint(0, len(punct) - 1)])
        i += 1
        if i == length:
            break
        lis.append(upper[randint(0, 25)])
        i += 1
        if i == length:
            break
        lis.append(str(randint(0, 9)))
        i += 1
        if i == length:
            break
    shuffle(lis)
    return lis
def passMed(length):
    '''Generate a medium-level password.

    Includes only digits (0-9), lower- and upper-case characters.
    Based on the user's desired length.
    Example: scc5Qf2W2K70cYD
    For correct display use print("".join(passMed(USER_ENTERED_LEN))).'''
    lis = []
    i = 0
    while i < length:
        lis.append(lower[randint(0, 25)])
        i += 1
        if i == length:
            break
        lis.append(upper[randint(0, 25)])
        i += 1
        if i == length:
            break
        lis.append(str(randint(0, 9)))
        i += 1
        if i == length:
            break
    shuffle(lis)
    return lis
def passLow(length):
    '''Generate a low-level password.

    Includes only digits (0-9).
    Based on the user's desired length.
    Example: 800875488517709
    For correct display use print("".join(passLow(USER_ENTERED_LEN))).'''
    lis = []
    for _ in range(length):
        lis.append(str(randint(0, 9)))
    return lis
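# Usage sketch: each generator returns a list of characters, so join it for
# display, e.g. print("".join(passHigh(15))).  For security-sensitive
# passwords, the same pool-cycling idea can be written more compactly with
# the stdlib secrets module, which uses a cryptographically strong RNG
# (pass_secure below is an illustration, not part of the package API):

```python
import secrets
import string

def pass_secure(length):
    # cycle the lower/punct/upper/digit pools like passHigh, but with
    # cryptographically strong choices; shuffle so pool order leaks nothing
    pools = (string.ascii_lowercase, string.punctuation,
             string.ascii_uppercase, string.digits)
    chars = [secrets.choice(pools[i % len(pools)]) for i in range(length)]
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)
```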
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
📦 setup.py (for humans)
=======================
This repo exists to provide [an example setup.py] file, that can be used
to bootstrap your next Python project. It includes some advanced
patterns and best practices for `setup.py`, as well as some
commented-out nice-to-haves.
For example, this `setup.py` provides a `$ python setup.py upload`
command, which creates a *universal wheel* (and *sdist*) and uploads
your package to [PyPi] using [Twine], without the need for an annoying
`setup.cfg` file. It also creates/uploads a new git tag, automatically.
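A quick way to see the build half of what `upload` automates, without publishing anything, is to run the sdist step on a scratch project (the `demo` package and `0.1.0` version below are made up for illustration):

```bash
set -e
workdir="$(mktemp -d)"
cd "$workdir"
# a minimal setup.py -- just enough metadata for setuptools to build an sdist
printf 'from setuptools import setup\nsetup(name="demo", version="0.1.0")\n' > setup.py
# `setup.py upload` runs a build like this before handing the files to Twine
python3 setup.py --quiet sdist
ls dist/
```

The tarball that appears under `dist/` is what Twine would push to PyPI.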
In short, `setup.py` files can be daunting to approach, when first
starting out — even Guido has been heard saying, "everyone cargo cults
them". It's true — so, I want this repo to be the best place to
copy-paste from :)
[Check out the example!][an example setup.py]
Installation
-----
```bash
cd your_project
# Download the setup.py file:
# download with wget
wget https://raw.githubusercontent.com/navdeep-G/setup.py/master/setup.py -O setup.py
# download with curl
curl -O https://raw.githubusercontent.com/navdeep-G/setup.py/master/setup.py
```
To Do
-----
- Tests via `$ setup.py test` (if it's concise).
Pull requests are encouraged!
More Resources
--------------
- [What is setup.py?] on Stack Overflow
- [Official Python Packaging User Guide](https://packaging.python.org)
- [The Hitchhiker's Guide to Packaging]
- [Cookiecutter template for a Python package]
License
-------
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any means.
[an example setup.py]: https://github.com/navdeep-G/setup.py/blob/master/setup.py
[PyPi]: https://docs.python.org/3/distutils/packageindex.html
[Twine]: https://pypi.python.org/pypi/twine
[What is setup.py?]: https://stackoverflow.com/questions/1471994/what-is-setup-py
[The Hitchhiker's Guide to Packaging]: https://the-hitchhikers-guide-to-packaging.readthedocs.io/en/latest/creation.html
[Cookiecutter template for a Python package]: https://github.com/audreyr/cookiecutter-pypackage
import os
import redis
basedir = os.path.abspath(os.path.dirname(__file__))
# USE = 'wr'
USE = 'zly'
API_BASE_URL='http://192.168.0.58:8001/'
class Config:
SECRET_KEY = "FDHUFHSIFHSOIAFJSIOAJDShuhdh242424"
    # Database
SQLALCHEMY_TRACK_MODIFICATIONS = True
# redis
# REDIS_HOST = '127.0.0.1'
# REDIS_PORT = 6379
# CACHE_REDIS_PASSWORD='qt@demo123'
    # flask_session configuration
# SESSION_TYPE = "redis"
# SESSION_REDIS = redis.StrictRedis(host=REDIS_HOST, port=REDIS_PORT)
    # SESSION_USE_SIGNER = True  # sign the session_id stored in the cookie
    # PERMANENT_SESSION_LIFETIME = 86400  # session data lifetime in seconds
# CACHE_TYPE: 'redis'
# CACHE_REDIS_HOST: '127.0.0.1'
# CACHE_REDIS_PORT: 6379
# CACHE_REDIS_DB: ''
# CACHE_REDIS_PASSWORD: ''
    # CACHE_REDIS_PASSWORD: 'qt@demo123'  # server
class DevelopmentConfig(Config):
    # Development environment
SQLALCHEMY_DATABASE_URI = "mysql+pymysql://root:[email protected]:3306/star_library?charset=utf8"
# SQLALCHEMY_DATABASE_URI = "mysql+pymysql://root:Qt@[email protected]:3306/zlytest?charset=utf8"
    # Local connection
# SQLALCHEMY_DATABASE_URI = "mysql+pymysql://root:[email protected]/star_library?charset=utf8"
SQLALCHEMY_BINDS = {
        # Production deployment
# 'wr': 'mysql+pymysql://root:Qt@[email protected]:3306/mcn?charset=utf8&',
# 'zly': 'mysql+pymysql://root:Qt@[email protected]:3306/zlytest?charset=utf8&'
        # local connections
'wr': 'mysql+pymysql://root:[email protected]:3306/bpm?charset=utf8&',
'zly': 'mysql+pymysql://root:[email protected]:3306/star_library?charset=utf8&',
}
DEBUG = True
@classmethod
def init_app(cls, app):
pass
class TestingConfig(Config):
    # Test environment
SQLALCHEMY_DATABASE_URI = "mysql+pymysql://root:Qt@[email protected]/star_library?charset=utf8"
# SQLALCHEMY_DATABASE_URI = "mysql+pymysql://root:[email protected]/star_dev?charset=utf8"
TESTING = True
@classmethod
def init_app(cls, app):
pass
class ProductionConfig(Config):
@classmethod
def init_app(cls, app):
pass
configs = {
'development': DevelopmentConfig,
'testing': TestingConfig,
'production': ProductionConfig,
'default': DevelopmentConfig
}
import os
COV = None
if os.environ.get('FLASK_COVERAGE'):
    # enable coverage for tests
import coverage
COV = coverage.coverage(branch=True, include='app/*')
COV.start()
if os.path.exists('.env'):
print('Importing environment from .env...')
for line in open('.env'):
var = line.strip().split('=')
if len(var) == 2:
os.environ[var[0]] = var[1]
from aishow_model_app import create_app
from aishow_model_app.apis import *
from aishow_model_app.models import *
from flask_script import Manager, Shell
from flask_migrate import Migrate, MigrateCommand
app = create_app(os.getenv('FLASK_CONFIG') or 'default')
# manager = Manager(app)
# migrate = Migrate(app, db)
@app.shell_context_processor
def make_shell_context():
return dict(app=app, db=db)
# manager.add_command("shell", Shell(make_context=make_shell_context))
# manager.add_command('db', MigrateCommand)
#
# @manager.command
# def test(coverage=False):
# """Run the unit tests."""
# if coverage and not os.environ.get('FLASK_COVERAGE'):
# import sys
# os.environ['FLASK_COVERAGE'] = '1'
# os.execvp(sys.executable, [sys.executable] + sys.argv)
# import unittest
# tests = unittest.TestLoader().discover('tests')
# unittest.TextTestRunner(verbosity=2).run(tests)
# if COV:
# COV.stop()
# COV.save()
# print('Coverage Summary:')
# COV.report()
# basedir = os.path.abspath(os.path.dirname(__file__))
# covdir = os.path.join(basedir, 'tmp/coverage')
# COV.html_report(directory=covdir)
# print('HTML version: file://%s/index.html' % covdir)
# COV.erase()
# @manager.command
# def profile(length=25, profile_dir=None):
# """Start the application under the code profiler."""
# from werkzeug.contrib.profiler import ProfilerMiddleware
# app.wsgi_app = ProfilerMiddleware(app.wsgi_app, restrictions=[length],
# profile_dir=profile_dir)
# app.run()
# @manager.command
# def deploy():
# """Run deployment tasks."""
# from flask_migrate import upgrade
# from app.models import Role, User
#
# # migrate database to latest revision
# # upgrade()
#
# # create user roles
# Role.insert_roles()
#
# # create self-follows for all users
# User.add_self_follows()
if __name__ == '__main__':
# manager.run()
    app.run(host='0.0.0.0',port=8001)
#######################
DingTalk SDK for Python
#######################
.. image:: https://travis-ci.org/007gzs/dingtalk-sdk.svg?branch=master
:target: https://travis-ci.org/007gzs/dingtalk-sdk
.. image:: https://img.shields.io/pypi/v/dingtalk-sdk.svg
:target: https://pypi.org/project/dingtalk-sdk
A third-party Python SDK for the DingTalk Open Platform.
`Read the documentation <http://dingtalk-sdk.readthedocs.io/zh_CN/latest/>`_.
********
Features
********
+ APIs for in-house enterprise development
+ APIs for application service providers (ISV)
************
Installation
************
dingtalk-sdk currently supports Python 2.7, 3.4, 3.5, 3.6 and pypy.
For message encryption and decryption, dingtalk-sdk is compatible with both cryptography and PyCrypto, preferring cryptography when available.
You can install either cryptography or PyCrypto yourself first::

    # cryptography is a Python encryption library
    # install cryptography
    pip install cryptography>=0.8.2
    # or install PyCrypto
    pip install pycrypto>=2.6.1
To simplify installation, using pip is recommended
.. code-block:: bash

    pip install dingtalk-sdk
    # with cryptography
    pip install dingtalk-sdk[cryptography]
    # with pycrypto
    pip install dingtalk-sdk[pycrypto]
Upgrade dingtalk-sdk to a new version::

    pip install -U dingtalk-sdk
****************
Usage Examples
****************
Django example: https://github.com/007gzs/dingtalk-django-example
from __future__ import absolute_import, unicode_literals
import inspect
import json
import logging
import requests
import six
from six.moves.urllib.parse import urljoin
"""
# >>>from urllib.parse import urljoin
# >>> urljoin("http://www.chachabei.com/folder/currentpage.html", "anotherpage.html")
# 'http://www.chachabei.com/folder/anotherpage.html'
# >>> urljoin("http://www.chachabei.com/folder/currentpage.html", "/anotherpage.html")
# 'http://www.chachabei.com/anotherpage.html'
# >>> urljoin("http://www.chachabei.com/folder/currentpage.html", "folder2/anotherpage.html")
# 'http://www.chachabei.com/folder/folder2/anotherpage.html'
# >>> urljoin("http://www.chachabei.com/folder/currentpage.html", "/folder2/anotherpage.html")
# 'http://www.chachabei.com/folder2/anotherpage.html'
# >>> urljoin("http://www.chachabei.com/abc/folder/currentpage.html", "/folder2/anotherpage.html")
# 'http://www.chachabei.com/folder2/anotherpage.html'
# >>> urljoin("http://www.chachabei.com/abc/folder/currentpage.html", "../anotherpage.html")
# 'http://www.chachabei.com/abc/anotherpage.html'
"""
from aishow_model_app.apis.base import BaseAPI
from aishow_model_app.core.exceptions import DingTalkClientException
from aishow_model_app.core.utils import json_loads
from aishow_model_app.storage.memorystorage import MemoryStorage
logger = logging.getLogger(__name__)
# Check whether obj is an instance of BaseAPI
def _is_api_endpoint(obj):
return isinstance(obj, BaseAPI)
class BaseClient(object):
_http = requests.Session()
API_BASE_URL = 'http://192.168.0.58:8001/'
def __new__(cls, *args, **kwargs):
self = super(BaseClient, cls).__new__(cls)
api_endpoints = inspect.getmembers(self, _is_api_endpoint)
for name, api in api_endpoints:
api_cls = type(api)
api = api_cls(self)
setattr(self, name, api)
return self
def __init__(self, storage=None, timeout=None, auto_retry=True):
self.storage = storage or MemoryStorage()
self.timeout = timeout
self.auto_retry = auto_retry #自动重播
    # Thin wrapper around requests
def _request(self, method, url_or_endpoint, **kwargs):
        # Join the base URL and the endpoint into a full URL
if not url_or_endpoint.startswith(('http://', 'https://')):
api_base_url = kwargs.pop('api_base_url', self.API_BASE_URL)
url = urljoin(api_base_url, url_or_endpoint)
else:
url = url_or_endpoint
        # Ensure the params kwarg exists
if 'params' not in kwargs:
kwargs['params'] = {}
        # Encode dict data as JSON bytes
if isinstance(kwargs.get('data', ''), dict):
body = json.dumps(kwargs['data'], ensure_ascii=False)
body = body.encode('utf-8')
kwargs['data'] = body
        # Ensure the headers kwarg exists
if 'headers' not in kwargs:
kwargs['headers'] = {}
kwargs['headers']['Content-Type'] = 'application/json'
kwargs['timeout'] = kwargs.get('timeout', self.timeout)
        result_processor = kwargs.pop('result_processor', None)  # optional callable to post-process the result
        top_response_key = kwargs.pop('top_response_key', None)  # key of the top-level response envelope
        # Send the request through requests.Session
res = self._http.request(
method=method,
url=url,
**kwargs
)
        # Raise for non-2xx responses
try:
res.raise_for_status()
except requests.RequestException as reqe:
logger.error("\n【请求地址】: %s\n【请求参数】:%s \n%s\n【异常信息】:%s",
url, kwargs.get('params', ''), kwargs.get('data', ''), reqe)
            # Raise the custom DingTalkClientException
raise DingTalkClientException(
errcode=None,
errmsg=None,
client=self,
request=reqe.request,
response=reqe.response
)
        # Process the returned result
result = self._handle_result(
res, method, url, result_processor, top_response_key, **kwargs
)
logger.debug("\n【请求地址】: %s\n【请求参数】:%s \n%s\n【响应数据】:%s",
url, kwargs.get('params', ''), kwargs.get('data', ''), result)
return result
def _decode_result(self, res):
try:
result = json_loads(res.content.decode('utf-8', 'ignore'), strict=False)
            # json.loads can fail with "ValueError: Invalid control character"
            # when the payload contains raw \r or \n; strict=False allows
            # those control characters inside strings.
except (TypeError, ValueError):
# Return origin response object if we can not decode it as JSON
logger.debug('Can not decode response as JSON', exc_info=True)
            # exc_info=True appends the current exception info to the log record;
            # stack_info=True would add stack info; extra supplies custom record fields.
return res
return result
def _handle_result(self, res, method=None, url=None, result_processor=None, top_response_key=None, **kwargs):
"""
:param res:
:param method:
:param url:
        :param result_processor: callable used to post-process the result
:param top_response_key:
:param kwargs:
:return:
"""
if not isinstance(res, dict):
# Dirty hack around asyncio based AsyncWeChatClient
result = self._decode_result(res)
else:
result = res
if not isinstance(result, dict):
return result
if top_response_key:
if 'error_response' in result:
error_response = result['error_response']
logger.error("\n【请求地址】: %s\n【请求参数】:%s \n%s\n【错误信息】:%s",
url, kwargs.get('params', ''), kwargs.get('data', ''), result)
raise DingTalkClientException(
error_response.get('code', -1),
error_response.get('sub_msg', error_response.get('msg', '')),
client=self,
request=res.request,
response=res
)
top_result = result
if top_response_key in top_result:
top_result = result[top_response_key]
if 'result' in top_result:
top_result = top_result['result']
if isinstance(top_result, six.string_types):
try:
top_result = json_loads(top_result)
except Exception:
pass
if isinstance(top_result, dict):
if ('success' in top_result and not top_result['success']) or (
'is_success' in top_result and not top_result['is_success']):
logger.error("\n【请求地址】: %s\n【请求参数】:%s \n%s\n【错误信息】:%s",
url, kwargs.get('params', ''), kwargs.get('data', ''), result)
raise DingTalkClientException(
top_result.get('ding_open_errcode', -1),
top_result.get('error_msg', ''),
client=self,
request=res.request,
response=res
)
result = top_result
if not isinstance(result, dict):
return result
if 'errcode' in result:
result['errcode'] = int(result['errcode'])
if 'errcode' in result and result['errcode'] != 0:
errcode = result['errcode']
errmsg = result.get('errmsg', errcode)
logger.error("\n【请求地址】: %s\n【请求参数】:%s \n%s\n【错误信息】:%s",
url, kwargs.get('params', ''), kwargs.get('data', ''), result)
raise DingTalkClientException(
errcode,
errmsg,
client=self,
request=res.request,
response=res
)
return result if not result_processor else result_processor(result)
def _handle_pre_request(self, method, uri, kwargs):
return method, uri, kwargs
def _handle_pre_top_request(self, params, uri):
if not uri.startswith(('http://', 'https://')):
uri = urljoin('https://eco.taobao.com', uri)
return params, uri
def _handle_request_except(self, e, func, *args, **kwargs):
raise e
def request(self, method, uri, **kwargs):
method, uri_with_access_token, kwargs = self._handle_pre_request(method, uri, kwargs)
try:
return self._request(method, uri_with_access_token, **kwargs)
except DingTalkClientException as e:
return self._handle_request_except(e, self.request, method, uri, **kwargs)
def top_request(self, method, params=None, format_='json', v='2.0',
simplify='false', partner_id=None, url=None, **kwargs):
"""
top 接口请求
:param method: API接口名称。
:param params: 请求参数 (dict 格式)
:param format_: 响应格式(默认json,如果使用xml,需要自己对返回结果解析)
:param v: API协议版本,可选值:2.0。
:param simplify: 是否采用精简JSON返回格式
:param partner_id: 合作伙伴身份标识。
:param url: 请求url,默认为 https://eco.taobao.com/router/rest
"""
from datetime import datetime
reqparams = {}
if params is not None:
for key, value in params.items():
reqparams[key] = value if not isinstance(value, (dict, list, tuple)) else json.dumps(value)
reqparams['method'] = method
reqparams['timestamp'] = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
reqparams['format'] = format_
reqparams['v'] = v
if format_ == 'json':
reqparams['simplify'] = simplify
if partner_id:
reqparams['partner_id'] = partner_id
base_url = url or '/router/rest'
reqparams, base_url = self._handle_pre_top_request(reqparams, base_url)
if not base_url.startswith(('http://', 'https://')):
base_url = urljoin(self.API_BASE_URL, base_url)
response_key = method.replace('.', '_') + "_response"
try:
return self._request('POST', base_url, params=reqparams, top_response_key=response_key, **kwargs)
except DingTalkClientException as e:
return self._handle_request_except(e, self.request,
method, format_, v, simplify, partner_id, url, params, **kwargs)
def get(self, uri, params=None, **kwargs):
"""
get 接口请求
:param uri: 请求url
:param params: get 参数(dict 格式)
"""
if params is not None:
kwargs['params'] = params
# return self._client.get(url=none, kwargs['params']={'select_type':select_type,'page':page,'limit':limit})
return self.request('GET', uri, **kwargs)
def post(self, uri, data=None, params=None, **kwargs):
"""
post 接口请求
:param uri: 请求url
:param data: post 数据(dict 格式会自动转换为json)
:param params: post接口中url问号后参数(dict 格式)
"""
if data is not None:
kwargs['data'] = data
if params is not None:
kwargs['params'] = params
        return self.request('POST', uri, **kwargs)
import json
from flask import jsonify
from flask_restful import Api, Resource, reqparse
from aishow_model_app.models.resource_model import DouyinSpecialLive, ResourceTable, DouyinSingleChainLive, DouyinViewExport, \
KuaiShouLive, RedbookImageTextLink, TaobaoLive, QitengTaobaoExportLiveOffer, QitengRedbookPrice, \
QitengDouyinViewPrice
from aishow_model_app.apis.base import BaseAPI
from aishow_model_app.ext import db
ResourceApi = Api(prefix='/platformresource/list')
# CRUD for influencer (KOL) resources
class TotalResource(Resource,BaseAPI):
def __init__(self):
self.parse = reqparse.RequestParser()
self.parse.add_argument('page', type=int)
self.parse.add_argument('limit', type=int)
self.parse.add_argument('user_id', type=str)
self.parse.add_argument('flag', type=str)
self.parse.add_argument('select_type', type=int)
self.parse.add_argument('kol_id', type=str)
self.parse.add_argument('kol_name', type=str)
self.parse.add_argument('platform', type=str)
self.parse.add_argument('type_datil', type=str)
self.parse.add_argument('company', type=str)
self.parse.add_argument('contact_name', type=str)
self.parse.add_argument('contact_phone', type=str)
self.parse.add_argument('fans', type=float)
self.parse.add_argument('status', type=str)
def get(self):
args = self.parse.parse_args()
print('resourcetable args get',args)
print('resourcetable args get',args['select_type'])
print('resourcetable args get',type(args['select_type']))
all_list = []
totle = 0
        # Douyin special live
if args['select_type'] == 1:
dysl_query = DouyinSpecialLive.query.offset((args['page'] - 1) * args['limit']).limit(args['limit']).all()
totle = DouyinSpecialLive.query.count()
for item in dysl_query:
dict = {}
dict['dysl'] = DouyinSpecialLive.queryToDict(item)
res_query = item.resource_tables
if res_query:
res_dict = ResourceTable.queryToDict(res_query)
dict['rsst'] = res_dict
all_list.append(dict)
        # Douyin single-chain live
elif args['select_type'] == 2:
dyscl_query = DouyinSingleChainLive.query.offset((args['page'] - 1) * args['limit']).limit(args['limit']).all()
totle = DouyinSingleChainLive.query.count()
for item in dyscl_query:
dict = {}
dict['dyscl'] = DouyinSingleChainLive.queryToDict(item)
res_query = item.resource_tables
if res_query:
res_dict = ResourceTable.queryToDict(res_query)
dict['rsst'] = res_dict
all_list.append(dict)
        # Douyin short video
elif args['select_type'] == 3:
dyscl_query = DouyinViewExport.query.offset((args['page'] - 1) * args['limit']).limit(
args['limit']).all()
totle = DouyinViewExport.query.count()
for item in dyscl_query:
dict = {}
dict['dyve'] = DouyinViewExport.queryToDict(item)
res_query = item.resource_tables
if res_query:
res_dict = ResourceTable.queryToDict(res_query)
dict['rsst'] = res_dict
dysl2_query = QitengDouyinViewPrice.query.filter(QitengDouyinViewPrice.douyin_view_export_id == item.id).first()
if dysl2_query:
dysl_dict = QitengDouyinViewPrice.queryToDict(dysl2_query)
for k,v in dysl_dict.items():
dict['dyve'][k] = v
all_list.append(dict)
        # Kuaishou live
elif args['select_type'] == 4:
dyscl_query = KuaiShouLive.query.offset((args['page'] - 1) * args['limit']).limit(
args['limit']).all()
totle = KuaiShouLive.query.count()
for item in dyscl_query:
dict = {}
dict['ksl'] = KuaiShouLive.queryToDict(item)
res_query = item.resource_tables
if res_query:
res_dict = ResourceTable.queryToDict(res_query)
dict['rsst'] = res_dict
all_list.append(dict)
        # Xiaohongshu image-text link
elif args['select_type'] == 5:
dyscl_query = RedbookImageTextLink.query.offset((args['page'] - 1) * args['limit']).limit(
args['limit']).all()
totle = RedbookImageTextLink.query.count()
for item in dyscl_query:
dict = {}
dict['rbitl'] = RedbookImageTextLink.queryToDict(item)
res_query = item.resource_tables
if res_query:
res_dict = ResourceTable.queryToDict(res_query)
dict['rsst'] = res_dict
dysl2_query = QitengRedbookPrice.query.filter(
QitengRedbookPrice.redbook_image_text_link_id == item.id).first()
if dysl2_query:
dysl_dict = QitengDouyinViewPrice.queryToDict(dysl2_query)
for k,v in dysl_dict.items():
dict['rbitl'][k] = v
all_list.append(dict)
        # Taobao live
elif args['select_type'] == 6:
dyscl_query = TaobaoLive.query.offset((args['page'] - 1) * args['limit']).limit(
args['limit']).all()
totle = TaobaoLive.query.count()
for item in dyscl_query:
dict = {}
dict['tbl'] = TaobaoLive.queryToDict(item)
res_query = item.resource_tables
if res_query:
res_dict = ResourceTable.queryToDict(res_query)
print('res_dict',res_dict)
dict['rsst'] = res_dict
dysl2_query = QitengTaobaoExportLiveOffer.query.filter(
QitengTaobaoExportLiveOffer.taobao_live_id == item.id).first()
if dysl2_query:
dysl_dict = QitengTaobaoExportLiveOffer.queryToDict(dysl2_query)
print(dysl_dict)
for k,v in dysl_dict.items():
dict['tbl'][k] = v
all_list.append(dict)
# print('all_list',all_list)
else:
dict = {}
rsst_query = ResourceTable.query.all()
if rsst_query:
res_dict = ResourceTable.queryToDict(rsst_query)
# print('res_dict', res_dict)
# dict['rsst'] = res_dict
all_list=res_dict
print('all_list',json.dumps({'code': 20000, 'data': {'total': totle, 'items': all_list }},ensure_ascii=False))
return jsonify({'code': 20000, 'data': {'total': totle, 'items': all_list }})
def post(self):
print('resource post')
args = self.parse.parse_args()
res = self.add_updata(args)
if res:
return jsonify({'code': 20000, 'data': {'total': 0, 'items': [],'msg':'失败,请输入正确的数据'}})
else:
# res_query = ResourceTable.query.offset((int(args['page']) - 1) * int(args['limit'])).limit(
# int(args['limit'])).all()
# totle = ResourceTable.query.count()
# res_dict = ResourceTable.queryToDict(res_query)
#
return self.get()
def put(self):
print('resource put')
args = self.parse.parse_args()
res = self.add_updata(args)
if res:
return jsonify({'code': 20000, 'data': {'total': 0, 'items': [],'msg':'更新失败,请输入正确的数据'}})
else:
return self.get()
def delete(self):
args = self.parse.parse_args()
print('resourcetable delete args',args)
res_obj = ResourceTable.query.filter(ResourceTable.id == int(args['user_id'])).first()
res_id = res_obj.id
print('res_id',res_id)
flag = args.get('flag')
        if flag == '抖音专场直播':  # Douyin special live
print('抖音专场直播')
douyinsl_obj = DouyinSpecialLive.query.filter(DouyinSpecialLive.resource_table_id == res_id).first()
if douyinsl_obj:
try:
db.session.delete(douyinsl_obj)
db.session.commit()
except Exception as e:
print('e', e)
        elif flag == '抖音单链直播':  # Douyin single-chain live
print('抖音单链直播')
douyinscl_obj = DouyinSingleChainLive.query.filter(DouyinSingleChainLive.resource_table_id == res_id).first()
if douyinscl_obj:
try:
db.session.delete(douyinscl_obj)
db.session.commit()
except Exception as e:
print('e', e)
        elif flag == '抖音短视频':  # Douyin short video
print('抖音短视频')
douyinve_obj = DouyinViewExport.query.filter(DouyinViewExport.resource_table_id == res_id).first()
if douyinve_obj:
try:
db.session.delete(douyinve_obj)
db.session.commit()
except Exception as e:
print('e', e)
        elif flag == '快手直播':  # Kuaishou live
print('快手直播')
ks_obj = KuaiShouLive.query.filter(KuaiShouLive.resource_table_id == res_id).first()
if ks_obj:
try:
db.session.delete(ks_obj)
db.session.commit()
except Exception as e:
print('e', e)
        elif flag == '小红书图文链接':  # Xiaohongshu image-text link
print('小红书图文链接')
rebbookitl_obj = RedbookImageTextLink.query.filter(RedbookImageTextLink.resource_table_id == res_id).first()
if rebbookitl_obj:
try:
db.session.delete(rebbookitl_obj)
db.session.commit()
except Exception as e:
print('e', e)
        elif flag == '淘宝直播':  # Taobao live
print('淘宝直播')
tbl_obj = TaobaoLive.query.filter(TaobaoLive.resource_table_id == res_id).first()
if tbl_obj:
try:
db.session.delete(tbl_obj)
db.session.commit()
except Exception as e:
print('e', e)
if res_obj:
try:
db.session.delete(res_obj)
db.session.commit()
except Exception as e:
print('e', e)
res_query = ResourceTable.query.offset((int(args['page']) - 1) * int(args['limit'])).limit(
int(args['limit'])).all()
totle = ResourceTable.query.count()
res_dict = ResourceTable.queryToDict(res_query)
return jsonify({'code': 20000, 'data': {'total': totle, 'items': res_dict}})
def add_updata(self,args):
status =args['status']
if status=='create':
rsourcetab = ResourceTable()
else :
rsourcetab = ResourceTable.query.filter(ResourceTable.kol_id==args['kol_id']).first()
rsourcetab.kol_id = args.get('kol_id')
rsourcetab.kol_name = args.get('kol_name')
rsourcetab.platform = args.get('platform')
rsourcetab.avatar = args.get('avatar')
rsourcetab.type_datil = args.get('type_datil')
rsourcetab.company = args.get('company')
rsourcetab.contact_name = args.get('contact_name')
rsourcetab.fans = args.get('fans')
try:
try:
rsourcetab.save(rsourcetab)
except Exception as e:
print('TotalResource post except', e)
resource_obj = ResourceTable.query.filter(ResourceTable.kol_id == args.get('kol_id')).all()[0]
resource_id = resource_obj.id
print(resource_id)
flag = args.get('flag')
            if flag == '抖音专场直播':  # Douyin special live session
print('抖音专场直播')
self.parse.add_argument('export_tag', type=str)
self.parse.add_argument('special_offer', type=int)
self.parse.add_argument('export_city', type=str)
self.parse.add_argument('cooperation_case', type=str)
self.parse.add_argument('douyin_special_cost_price', type=int)
args1 = self.parse.parse_args()
print('抖音专场直播 args1', args1)
douyinsl = DouyinSpecialLive()
douyinsl.resource_table_id = resource_id
douyinsl.export_tag = args1.get('export_tag')
douyinsl.special_offer = args1.get('special_offer')
douyinsl.export_city = args1.get('export_city')
douyinsl.cooperation_case = args1.get('cooperation_case')
douyinsl.douyin_special_cost_price = args1.get('douyin_special_cost_price')
try:
douyinsl.save(douyinsl)
# dysl_query = DouyinSpecialLive.query.offset((args1['page']) - 1 * args1['limit']).limit(args1['limit']).all()
# print('ok7')
# totle = DouyinSpecialLive.query.count()
# dysl_dict = DouyinSpecialLive.queryToDict(dysl_query)
# return jsonify({'code': 20000, 'data': {'total': totle, 'items': dysl_dict}})
except Exception as e:
print('DouyinSpecialLive save except', e)
            elif flag == '抖音单链直播':  # Douyin single-link live
self.parse.add_argument('douyin_export_classification', type=str)
self.parse.add_argument('Single_chain_offer', type=int)
self.parse.add_argument('introduction', type=str)
self.parse.add_argument('live_time', type=str)
self.parse.add_argument('selection_requirements', type=str)
self.parse.add_argument('remarks', type=str)
self.parse.add_argument('douyin_single_cost_price', type=int)
args = self.parse.parse_args()
print('抖音单链直播',args)
douyinscl = DouyinSingleChainLive()
douyinscl.resource_table_id = resource_id
douyinscl.douyin_export_classification = args.get('douyin_export_classification')
douyinscl.Single_chain_offer = args.get('Single_chain_offer')
douyinscl.introduction = args.get('introduction')
douyinscl.live_time = args.get('live_time')
douyinscl.selection_requirements = args.get('selection_requirements')
douyinscl.remarks = args.get('remarks')
douyinscl.douyin_single_cost_price = args.get('douyin_single_cost_price')
try:
douyinscl.save(douyinscl)
                    dyscl_query = DouyinSingleChainLive.query.offset(
                        (int(args['page']) - 1) * int(args['limit'])).limit(int(args['limit'])).all()
                    # total = DouyinSingleChainLive.query.count()
                    # dyscl_dict = DouyinSingleChainLive.queryToDict(dyscl_query)
                    # return jsonify({'code': 20000, 'data': {'total': total, 'items': dyscl_dict}})
                except Exception as e:
                    print('DouyinSingleChainLive save except', e)
            elif flag == '抖音短视频':  # Douyin short video
self.parse.add_argument('export_tag', type=str)
self.parse.add_argument('introduction', type=str)
self.parse.add_argument('douyin_home_page', type=str)
self.parse.add_argument('export_city', type=str)
self.parse.add_argument('cooperation_case', type=str)
self.parse.add_argument('better_sell_goods', type=str)
self.parse.add_argument('douyin_export_classification', type=str)
self.parse.add_argument('cooperation_mode', type=str)
                self.parse.add_argument('offer_less', type=int)
                self.parse.add_argument('offer_more', type=int)
self.parse.add_argument('star_offer', type=int)
self.parse.add_argument('douyin_view_cost_price', type=int)
args = self.parse.parse_args()
douyinve = DouyinViewExport()
douyinve.resource_table_id = resource_id
douyinve.export_tag = args.get('export_tag')
douyinve.introduction = args.get('introduction')
douyinve.douyin_home_page = args.get('douyin_home_page')
douyinve.export_city = args.get('export_city')
douyinve.cooperation_case = args.get('cooperation_case')
douyinve.better_sell_goods = args.get('better_sell_goods')
douyinve.douyin_export_classification = args.get('douyin_export_classification')
douyinve.cooperation_mode = args.get('cooperation_mode')
douyinve.offer_less = args.get('offer_less')
douyinve.offer_more = args.get('offer_more')
douyinve.star_offer = args.get('star_offer')
douyinve.douyin_view_cost_price = args.get('douyin_view_cost_price')
try:
douyinve.save(douyinve)
# dyve_query = DouyinViewExport.query.offset((args['page']) - 1 * args['limit']).limit(
# args['limit']).all()
# totle = DouyinViewExport.query.count()
# dyve_dict = DouyinViewExport.queryToDict(dyve_query)
# return jsonify({'code': 20000, 'data': {'total': totle, 'items': dyve_dict}})
except Exception as e:
print('DouyinViewExport save except', e)
            elif flag == '快手直播':  # Kuaishou live
self.parse.add_argument('avg_online_num', type=float)
self.parse.add_argument('sell_classification', type=str)
self.parse.add_argument('commission_less', type=int)
self.parse.add_argument('commission_more', type=int)
self.parse.add_argument('attributes', type=str)
self.parse.add_argument('better_sell_goods', type=str)
self.parse.add_argument('kuaishou_offer', type=int)
self.parse.add_argument('kuaishou_cost_price', type=int)
self.parse.add_argument('remarks', type=str)
args = self.parse.parse_args()
ksl = KuaiShouLive()
ksl.resource_table_id = resource_id
ksl.avg_online_num = args.get('avg_online_num')
ksl.sell_classification = args.get('sell_classification')
ksl.commission_less = args.get('commission_less')
ksl.commission_more = args.get('commission_more')
ksl.attributes = args.get('attributes')
ksl.kuaishou_offer = args.get('kuaishou_offer')
ksl.kuaishou_cost_price = args.get('kuaishou_cost_price')
ksl.remarks = args.get('remarks')
try:
ksl.save(ksl)
                    ksl_query = KuaiShouLive.query.offset(
                        (int(args['page']) - 1) * int(args['limit'])).limit(int(args['limit'])).all()
# totle = KuaiShouLive.query.count()
# ksl_dict = KuaiShouLive.queryToDict(ksl_query)
# return jsonify({'code': 20000, 'data': {'total': totle, 'items': ksl_dict}})
except Exception as e:
print('KuaiShouLive save except', e)
            elif flag == '小红书图文链接':  # Xiaohongshu (RED) image-text link
                print('小红书图文链接')
self.parse.add_argument('dianzan', type=int)
self.parse.add_argument('redbook_link', type=str)
self.parse.add_argument('export_city', type=str)
self.parse.add_argument('export_tag', type=str)
self.parse.add_argument('brand_partner', type=bool)
self.parse.add_argument('redbook_cost_price', type=int)
args = self.parse.parse_args()
                print('小红书图文链接 args', args)
rdltl = RedbookImageTextLink()
rdltl.resource_table_id = resource_id
rdltl.dianzan = args.get('dianzan')
rdltl.redbook_link = args.get('redbook_link')
rdltl.export_city = args.get('export_city')
rdltl.export_tag = args.get('export_tag')
rdltl.brand_partner = args.get('brand_partner')
rdltl.redbook_cost_price = args.get('redbook_cost_price')
try:
rdltl.save(rdltl)
                    rdltl_query = RedbookImageTextLink.query.offset(
                        (int(args['page']) - 1) * int(args['limit'])).limit(int(args['limit'])).all()
# totle = RedbookImageTextLink.query.count()
# rdltl_dict = RedbookImageTextLink.queryToDict(rdltl_query)
# return jsonify({'code': 20000, 'data': {'total': totle, 'items': rdltl_dict}})
except Exception as e:
print('RedbookImageTextLink save except', e)
            elif flag == '淘宝直播':  # Taobao live
                print('淘宝直播')
self.parse.add_argument('avg_viewing_num', type=float)
self.parse.add_argument('main_category', type=str)
self.parse.add_argument('introduction', type=str)
self.parse.add_argument('taobao_offer', type=int)
self.parse.add_argument('taobao_cost_price', type=int)
args = self.parse.parse_args()
                print('淘宝直播 args', args)
tbl = TaobaoLive()
tbl.resource_table_id = resource_id
tbl.avg_viewing_num = args.get('avg_viewing_num')
tbl.main_category = args.get('main_category')
tbl.introduction = args.get('introduction')
tbl.taobao_offer = args.get('taobao_offer')
tbl.taobao_cost_price = args.get('taobao_cost_price')
try:
tbl.save(tbl)
# tbl_query = TaobaoLive.query.offset((args['page']) - 1 * args['limit']).limit(
# args['limit']).all()
# totle = TaobaoLive.query.count()
# tbl_dict = TaobaoLive.queryToDict(tbl_query)
# return jsonify({'code': 20000, 'data': {'total': totle, 'items': tbl_dict}})
except Exception as e:
print('TaobaoLive save except', e)
        except Exception as e:
            db.session.rollback()
            print('TotalResource post except', e)
            return str(e)
    def update(self, department_data):
        """
        Update a department
        :param department_data: department info
        :return: id of the updated department
        """
        if 'id' not in department_data:
            raise AttributeError("department_data must contain 'id'")
        return self._post(
            '/department/update',
            department_data,
            result_processor=lambda x: x['id']
        )
    def xx_get(self, select_type, page, limit):
        """
        Resources
        :param page: page number
        :param limit: number of items per page
        :return:
        """
        res = self._get('/platformresource/list/resourcetable',
                        params={'select_type': select_type, 'page': page, 'limit': limit})
        print('xx_get', res)
        return res
@classmethod
def get_datas(self, request, model=None):
headers = request.headers
content_type = headers.get('Content-Type')
print(content_type)
if request.method == "GET":
return request.args
if content_type == 'application/x-www-form-urlencoded':
print("1")
return request.form
        if content_type and content_type.startswith('application/json'):
print("2")
return request.get_json()
content_type_list = str(content_type).split(';')
if len(content_type_list) > 0:
if content_type_list[0] == 'multipart/form-data':
print("3")
return request.form
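The Content-Type dispatch in get_datas can be sketched as a pure function (the names below are illustrative stand-ins, not Flask API):

```python
# Pure-function sketch of the Content-Type dispatch in get_datas above.
# Returns which request attribute the data should be read from.
def pick_source(method, content_type):
    if method == "GET":
        return "args"
    if content_type == "application/x-www-form-urlencoded":
        return "form"
    if content_type and content_type.startswith("application/json"):
        return "json"
    if content_type and content_type.split(";")[0] == "multipart/form-data":
        return "form"
    return None
```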
    # class DingTalkBaseAPI(object):
    #     API_BASE_URL = None
    #     def __init__(self, client=None):
    #         self._client = client
    #
    #     def _get(self, url, params=None, **kwargs):
    #         if self.API_BASE_URL:
    #             kwargs['api_base_url'] = self.API_BASE_URL
    #         return self._client.get(url, params, **kwargs)
    #
    # class BaseClient(object):
    #
    #     _http = requests.Session()
    #
    #     API_BASE_URL = 'https://oapi.dingtalk.com/'
    #     def get(self, uri, params=None, **kwargs):
    #         """
    #         GET request
    #
    #         :param uri: request url
    #         :param params: GET parameters (dict)
    #         """
    #         if params is not None:
    #             kwargs['params'] = params
    #         return self.request('GET', uri, **kwargs)
    #
    #     def request(self, method, uri, **kwargs):
    #         method, uri_with_access_token, kwargs = self._handle_pre_request(method, uri, kwargs)
    #         try:
    #             return self._request(method, uri_with_access_token, **kwargs)
    #         except DingTalkClientException as e:
    #             return self._handle_request_except(e, self.request, method, uri, **kwargs)
    #
    #     def _handle_pre_request(self, method, uri, kwargs):
    #         return method, uri, kwargs
ResourceApi.add_resource(TotalResource, '/resourcetable') | zly-resource-module | /zly-resource-module-1.0.1.tar.gz/zly-resource-module-1.0.1/aishow_model_app/apis/resource_api.py | resource_api.py |
from __future__ import absolute_import, unicode_literals
from aishow_model_app.apis.base import BaseAPI
class User(BaseAPI):
def auth_scopes(self):
"""
获取CorpSecret授权范围
:return:
"""
return self._get('/auth/scopes')
def get_org_user_count(self, only_active):
"""
获取企业员工人数
:param only_active: 是否包含未激活钉钉的人员数量
:return: 企业员工数量
"""
return self._get(
'/user/get_org_user_count',
{'onlyActive': 0 if only_active else 1},
result_processor=lambda x: x['count']
)
def getuserinfo(self, code):
"""
通过CODE换取用户身份
:param code: requestAuthCode接口中获取的CODE
:return:
"""
return self._get(
'/user/getuserinfo',
{'code': code}
)
def get(self, userid, lang='zh_CN'):
"""
获取成员详情
:param userid: 员工在企业内的UserID,企业用来唯一标识用户的字段
:param lang: 通讯录语言(默认zh_CN,未来会支持en_US)
:return:
"""
return self._get(
'/user/get',
{'userid': userid, 'lang': lang}
)
def create(self, user_data):
"""
创建成员
:param user_data: 用户信息
:return: userid
"""
return self._post(
'/user/create',
user_data,
result_processor=lambda x: x['userid']
)
def update(self, user_data):
"""
更新成员
:param user_data: 用户信息
:return:
"""
return self._post(
'/user/update',
user_data
)
def delete(self, userid):
"""
删除成员
:param userid: 员工在企业内的UserID,企业用来唯一标识用户的字段
:return:
"""
return self._get(
'/user/delete',
{'userid': userid}
)
    def batchdelete(self, user_ids):
        """
        Batch delete members
        :param user_ids: list of employee UserIDs; length must be between 1 and 20
        :return:
        """
        return self._post(
            '/user/batchdelete',
            {'useridlist': list(user_ids)}
        )
def simple_list(self, department_id, offset=0, size=100, order='custom', lang='zh_CN'):
"""
获取部门成员
:param department_id: 获取的部门id
:param offset: 偏移量
:param size: 表分页大小,最大100
:param order: 排序规则
entry_asc 代表按照进入部门的时间升序
entry_desc 代表按照进入部门的时间降序
modify_asc 代表按照部门信息修改时间升序
modify_desc 代表按照部门信息修改时间降序
custom 代表用户定义排序
:param lang: 通讯录语言(默认zh_CN另外支持en_US)
:return:
"""
return self._get(
'/user/simplelist',
{
'department_id': department_id,
'offset': offset,
'size': size,
'order': order,
'lang': lang
}
)
def list(self, department_id, offset=0, size=100, order='custom', lang='zh_CN'):
"""
获取部门成员(详情)
:param department_id: 获取的部门id
:param offset: 偏移量
:param size: 表分页大小,最大100
:param order: 排序规则
entry_asc 代表按照进入部门的时间升序
entry_desc 代表按照进入部门的时间降序
modify_asc 代表按照部门信息修改时间升序
modify_desc 代表按照部门信息修改时间降序
custom 代表用户定义排序
:param lang: 通讯录语言(默认zh_CN另外支持en_US)
:return:
"""
return self._get(
'/user/list',
{
'department_id': department_id,
'offset': offset,
'size': size,
'order': order,
'lang': lang
}
)
def get_admin(self):
"""
获取管理员列表
:return: sys_level 管理员角色 1:主管理员,2:子管理员
"""
return self._get(
'/user/get_admin',
result_processor=lambda x: x['admin_list']
)
def can_access_microapp(self, app_id, user_id):
"""
获取管理员的微应用管理权限
:param app_id: 微应用id
:param user_id: 员工唯一标识ID
:return: 是否能管理该微应用
"""
return self._get(
'/user/can_access_microapp',
{'appId': app_id, 'userId': user_id},
result_processor=lambda x: x['canAccess']
)
def get_userid_by_unionid(self, unionid):
"""
根据unionid获取成员的userid
:param unionid: 用户在当前钉钉开放平台账号范围内的唯一标识
:return:
"""
return self._get(
'/user/getUseridByUnionid',
{'unionid': unionid}
)
def get_dept_member(self, dept_id):
"""
获取部门用户userid列表
:param dept_id: 用户在当前钉钉开放平台账号范围内的唯一标识
:return 部门userid列表:
"""
return self._get(
'/user/getDeptMember',
{'deptId': dept_id},
result_processor=lambda x: x['userIds']
)
def listbypage(self, department_id, offset=0, size=100, order='custom', lang='zh_CN'):
"""
获取部门用户
:param department_id: 获取的部门id
:param offset: 偏移量
:param size: 表分页大小,最大100
:param order: 排序规则
entry_asc 代表按照进入部门的时间升序
entry_desc 代表按照进入部门的时间降序
modify_asc 代表按照部门信息修改时间升序
modify_desc 代表按照部门信息修改时间降序
custom 代表用户定义排序
:param lang: 通讯录语言(默认zh_CN另外支持en_US)
:return:
"""
return self._get(
'/user/list',
{
'department_id': department_id,
'offset': offset,
'size': size,
'order': order,
'lang': lang
}
)
def get_admin_scope(self, userid):
"""
查询管理员通讯录权限范围
:param userid: 用户id
"""
return self._top_request(
"dingtalk.oapi.user.get_admin_scope",
{"userid": userid},
result_processor=lambda x: x['dept_ids']
) | zly-resource-module | /zly-resource-module-1.0.1.tar.gz/zly-resource-module-1.0.1/aishow_model_app/apis/user.py | user.py |
from sqlalchemy import String, Integer, Date, Column, ForeignKey, BigInteger, Float
from aishowapp.ext import db
from aishowapp.models import BaseDef
# KOL (influencer) list
class KolList(db.Model, BaseDef):
__tablename__ = 'kol_list'
userid = Column(BigInteger, primary_key=True)
nickname = Column(String(255))
follower_num = Column(Integer)
tag = Column(String(255))
video_count = Column(Integer)
avg_interaction_15 = Column(Integer)
age = Column(Integer)
sex = Column(String(255))
    # provided by the server
    Fans_description = Column(String(255))
    # needs to be added
    view_link = Column(String(255))
    # collected by ourselves
    # page views
    pv = Column(Integer)
    # unique visitors
    uv = Column(Integer)
    #
    # tv = Column(String(255))
    # heat / popularity
    heat = Column(Integer)
# word_cloud = Column(String(255))
c_time = Column(Date)
ebusines_id = db.relationship("Ebusines", backref='kolList')
fansattribute_id = db.relationship("FansAttribute", backref='kolList')
shoppingcart_id = db.relationship("ShoppingCart", backref='kolList')
phoneageinfo_id = db.relationship("PhoneAgeInfo", backref='kolList')
region_id = db.relationship("Region", backref='kolList')
# E-commerce table
class Ebusines(db.Model, BaseDef):
__tablename__ = 'ebusiness'
userid = Column(ForeignKey('kol_list.userid', ondelete='CASCADE', onupdate='CASCADE'), primary_key=True)
datatime = Column(Date)
goods_category = Column(String(255))
Commodity_classification = Column(String(255))
Video_likes = Column(Integer)
Video_count = Column(Integer)
Total_browsing = Column(Integer)
Sales_volume = Column(Integer)
sell_price = Column(Integer)
Cargo_link = Column(String(255))
# Fan attributes
class FansAttribute(db.Model, BaseDef):
__tablename__ = 'fans_attributes'
userid = Column(ForeignKey('kol_list.userid', ondelete='CASCADE', onupdate='CASCADE'), primary_key=True)
avg_play = Column(Float(asdecimal=True))
avg_like = Column(Float(asdecimal=True))
avg_comment = Column(Float(asdecimal=True))
total_video_count = Column(Integer)
video_count_15 = Column(Integer)
video_count_30 = Column(Integer)
total_video_avg_interaction = Column(Float(asdecimal=True))
avg_interaction_15 = Column(Float(asdecimal=True))
avg_interaction_30 = Column(Float(asdecimal=True))
# Shopping-cart price buckets
class ShoppingCart(db.Model, BaseDef):
__tablename__ = 'shopping_cart'
userid = Column(ForeignKey('kol_list.userid', ondelete='CASCADE', onupdate='CASCADE'), primary_key=True)
LE10 = Column(BigInteger)
GT10LE50 = Column(BigInteger)
GT50LE100 = Column(BigInteger)
GT100LE150 = Column(BigInteger)
GT150LE200 = Column(BigInteger)
GT200LE300 = Column(BigInteger)
GT300LE400 = Column(BigInteger)
GT400LE500 = Column(BigInteger)
GT500 = Column(BigInteger)
# Phone model / age / gender / city-tier info
class PhoneAgeInfo(db.Model, BaseDef):
__tablename__ = 'phone_age_info'
userid = Column(ForeignKey('kol_list.userid', ondelete='CASCADE', onupdate='CASCADE'), primary_key=True)
iPhone = Column(BigInteger)
OPPO = Column(BigInteger)
vivo = Column(BigInteger)
others = Column(BigInteger)
huawei = Column(BigInteger)
xiaomi = Column(BigInteger)
less_18 = Column(BigInteger)
age_18_25 = Column(BigInteger)
age_26_32 = Column(BigInteger)
age_33_39 = Column(BigInteger)
age_40_46 = Column(BigInteger)
greater_46 = Column(BigInteger)
male = Column(BigInteger)
female = Column(BigInteger)
one_city = Column(Float(asdecimal=True))
two_city = Column(Float(asdecimal=True))
three_city = Column(Float(asdecimal=True))
# Region (province) distribution
class Region(db.Model, BaseDef):
__tablename__ = 'region'
userid = Column(ForeignKey('kol_list.userid', ondelete='CASCADE', onupdate='CASCADE'), primary_key=True)
qingh = Column(BigInteger)
heil = Column(BigInteger)
shangd = Column(BigInteger)
sic = Column(BigInteger)
jiangs = Column(BigInteger)
guiz = Column(BigInteger)
xingj = Column(BigInteger)
fuj = Column(BigInteger)
zhej = Column(BigInteger)
hub = Column(BigInteger)
tianj = Column(BigInteger)
jiangx = Column(BigInteger)
xiz = Column(BigInteger)
heilj = Column(BigInteger)
guangd = Column(BigInteger)
yunn = Column(BigInteger)
beij = Column(BigInteger)
taiw = Column(BigInteger)
aom = Column(BigInteger)
guangx = Column(BigInteger)
shan3x = Column(BigInteger)
gans = Column(BigInteger)
heb = Column(BigInteger)
ningx = Column(BigInteger)
chongq = Column(BigInteger)
jil = Column(BigInteger)
hun = Column(BigInteger)
neimg = Column(BigInteger)
anh = Column(BigInteger)
xiangg = Column(BigInteger)
shangh = Column(BigInteger)
shan1x = Column(BigInteger)
hain = Column(BigInteger)
liaon = Column(BigInteger) | zly-resource-module | /zly-resource-module-1.0.1.tar.gz/zly-resource-module-1.0.1/aishow_model_app/models/ai_star.py | ai_star.py |
from datetime import datetime as cdatetime  # queries sometimes return datetime values
from datetime import date, time
from flask_sqlalchemy import Model
from sqlalchemy import DateTime, Numeric, Time, Date
from aishow_model_app.ext import db
class BaseDef(object):
@classmethod
def queryToDict(cls, models):
if (isinstance(models, list)):
if (isinstance(models[0], Model)):
lst = []
for model in models:
gen = cls.model_to_dict(model)
# print('gen',gen)
dit = dict((g[0], g[1]) for g in gen)
lst.append(dit)
return lst
else:
res = cls.result_to_dict(models)
return res
else:
if (isinstance(models, Model)):
gen = cls.model_to_dict(models)
dit = dict((g[0], g[1]) for g in gen)
return dit
else:
res = dict(zip(models.keys(), models))
cls.find_datetime(res)
return res
    # When the results are a list of Result objects, each Result has a keys() method
@classmethod
def result_to_dict(cls, results):
res = [dict(zip(r.keys(), r)) for r in results]
        # r is a dict here; mutating it through the reference updates it in place
for r in res:
cls.find_datetime(r)
return res
@classmethod
    def model_to_dict(cls, model):  # adapted from a reference implementation
for col in model.__table__.columns:
# print('model_to_dict',)
# print(col)
if isinstance(col.type, DateTime):
value = cls.convert_datetime(getattr(model, col.name))
elif isinstance(col.type, Numeric):
if getattr(model, col.name):
# print(getattr(model, col.name))
# print(type(getattr(model, col.name)))
value = float(getattr(model, col.name))
else:
value = getattr(model, col.name)
yield (col.name, value)
@classmethod
def find_datetime(cls, value):
for v in value:
if (isinstance(value[v], cdatetime)):
                value[v] = cls.convert_datetime(value[v])  # same idea: the dict is mutated in place, no return needed
@classmethod
def convert_datetime(cls, value):
if value:
if (isinstance(value, (cdatetime, DateTime))):
return value.strftime("%Y-%m-%d %H:%M:%S")
elif (isinstance(value, (date, Date))):
return value.strftime("%Y-%m-%d")
elif (isinstance(value, (Time, time))):
return value.strftime("%H:%M:%S")
else:
return ""
def save(self,obj):
try:
db.session.add(obj)
db.session.flush()
db.session.commit()
except Exception as e:
print('save fail',e)
# def delete(self,obj):
# try:
# db.session.delete(obj)
# db.session.commit()
# except Exception as e:
# print('save fail',e)
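convert_datetime above formats values by type; because datetime is a subclass of date, the datetime check must come first. A standalone sketch of the same logic (the function name is illustrative, not part of this module):

```python
from datetime import datetime, date, time

# Standalone sketch of the type-based formatting in convert_datetime:
# check datetime before date, since datetime is a subclass of date.
def to_display(value):
    if not value:
        return ""
    if isinstance(value, datetime):
        return value.strftime("%Y-%m-%d %H:%M:%S")
    if isinstance(value, date):
        return value.strftime("%Y-%m-%d")
    if isinstance(value, time):
        return value.strftime("%H:%M:%S")
    return ""
```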
from . import resource_model | zly-resource-module | /zly-resource-module-1.0.1.tar.gz/zly-resource-module-1.0.1/aishow_model_app/models/__init__.py | __init__.py |
from sqlalchemy import String, Integer, Column, ForeignKey, BigInteger, Float, Boolean
from aishow_model_app.ext import db
from aishow_model_app.models import BaseDef
from config import USE
# Master resource table
class ResourceTable(db.Model, BaseDef):
    # Master resource table
    __tablename__ = 'resource_table'
    __bind_key__ = USE
    id = Column(BigInteger, primary_key=True)
    kol_name = Column(String(128), nullable=False, unique=True)  # KOL nickname
    kol_id = Column(String(128), nullable=False, unique=True)  # KOL platform ID
    platform = Column(String(128), nullable=False)  # platform
    avatar = Column(String(128), nullable=True)  # avatar
    type_datil = Column(String(128), nullable=False)  # cooperation mode (special live, single-link live, live, short video, image-text link)
    company = Column(String(128), nullable=True)  # company, hidden from BD
    contact_name = Column(String(128), nullable=True)  # contact person, hidden from BD
    contact_phone = Column(String(128), nullable=True)  # phone, hidden from BD
    fans = Column(Float, nullable=False)  # fan count (10k)
    total_sell_money = Column(String(255), nullable=True)  # cumulative sales amount in cooperation with Qiteng
    cooperation_times = Column(Integer, nullable=True)  # number of cooperations
    # cooperation_times1 = Column(Integer, nullable=True)  # number of cooperations
redbook_image_text_links = db.relationship('RedbookImageTextLink', backref='resource_tables')
douyin_view_exports = db.relationship("DouyinViewExport", backref='resource_tables')
douyin_special_lives = db.relationship("DouyinSpecialLive", backref='resource_tables')
douyin_single_chain_lives = db.relationship("DouyinSingleChainLive", backref='resource_tables')
taobao_lives = db.relationship("TaobaoLive", backref='resource_tables')
kuai_show_lives = db.relationship("KuaiShouLive", backref='resource_tables')
# Douyin KOL classification
class DouyinExportClassification(db.Model, BaseDef):
    __tablename__ = 'douyin_export_classification'
    __bind_key__ = USE
    id = Column(BigInteger, primary_key=True, autoincrement=True)
    classification_description = Column(String(64), unique=True)  # classification description
# Xiaohongshu (RED) image-text link
class RedbookImageTextLink(db.Model, BaseDef):
    __tablename__ = 'redbook_image_text_link'
    __bind_key__ = USE
    id = Column(BigInteger, primary_key=True)
    resource_table_id = Column(BigInteger, ForeignKey('resource_table.id', ondelete='CASCADE', onupdate='CASCADE'))  # resource table ID
    dianzan = Column(Integer, default=0)  # likes (10k)
    redbook_link = Column(String(255), nullable=False, unique=True)  # link
    export_city = Column(String(255), nullable=True)  # KOL's city
    export_tag = Column(String(128), default=0)  # KOL tags
    brand_partner = Column(Boolean, default=False)  # whether a brand partner
    redbook_cost_price = Column(Integer, default=0)  # cost price, hidden from BD
# Qiteng Xiaohongshu March price list
class QitengRedbookPrice(db.Model, BaseDef):
    __tablename__ = 'qiteng_redbook_price'
    __bind_key__ = USE
    id = Column(BigInteger, primary_key=True, autoincrement=True)
    fans_less = Column(Integer, default=0)  # lower bound of fan range (10k)
    fans_more = Column(Integer, default=0)  # upper bound of fan range (10k)
    offer_less = Column(Integer, default=0)  # lower bound of quote range, e.g. 630
    offer_more = Column(Integer, default=0)  # upper bound of quote range, e.g. 900
    cost_price = Column(Integer, default=0)  # cost price
    remarks = Column(String(255), nullable=True)  # remarks
    brand_partner = Column(Boolean, default=False)  # whether a brand partner
    redbook_image_text_link_id = Column(BigInteger, ForeignKey('redbook_image_text_link.id', ondelete='CASCADE', onupdate='CASCADE'))  # image-text link ID
    redbook_image_text_linkid = db.relationship('RedbookImageTextLink', backref='redbook_image_text_links', uselist=False)
# Douyin short-video KOL table
class DouyinViewExport(db.Model, BaseDef):
    __tablename__ = 'douyin_view_export'
    __bind_key__ = USE
    id = Column(BigInteger, primary_key=True, autoincrement=True)
    resource_table_id = Column(BigInteger, ForeignKey('resource_table.id', ondelete='CASCADE', onupdate='CASCADE'))  # resource table ID
    douyin_export_classification_id = Column(BigInteger, ForeignKey('douyin_export_classification.id', ondelete='CASCADE', onupdate='CASCADE'))  # category attribute
    export_tag = Column(String(255), nullable=True)  # KOL tags
    introduction = Column(String(255), nullable=True)  # introduction; may hold several tags
    douyin_home_page = Column(String(255), nullable=False, unique=True)  # Douyin profile URL
    export_city = Column(String(255), nullable=True)  # KOL's city
    cooperation_case = Column(String(255), nullable=True)  # brands the KOL has worked with
    better_sell_goods = Column(String(255), nullable=True)  # best-selling goods; may be empty
    douyin_export_classification = Column(String(255))  # Douyin KOL classification
    cooperation_mode = Column(String(255), nullable=False)  # cooperation mode
    offer_less = Column(Integer, default=0)  # lower bound of quote range, e.g. 630
    offer_more = Column(Integer, default=0)  # upper bound of quote range, e.g. 900
    star_offer = Column(Integer, default=0)  # max Star Map (Xingtu) reference quote, e.g. 3000
    douyin_view_cost_price = Column(Integer, default=0)  # cost price, hidden from BD
# Qiteng Douyin short-video March price list
class QitengDouyinViewPrice(db.Model, BaseDef):
    __tablename__ = 'qiteng_douyin_view_price'
    __bind_key__ = USE
    id = Column(BigInteger, primary_key=True, autoincrement=True)
    fans_less = Column(Float, default=0)  # lower bound of fan range (10k), e.g. 1w
    fans_more = Column(Float, default=0)  # upper bound of fan range (10k), e.g. 2w
    offer_less = Column(Integer, default=0)  # lower bound of quote range, e.g. 630
    offer_more = Column(Integer, default=0)  # upper bound of quote range, e.g. 900
    star_offer_less = Column(Integer, default=0)  # min Star Map reference quote, e.g. 1700
    star_offer_more = Column(Integer, default=0)  # max Star Map reference quote, e.g. 3000
    estimated_exposure = Column(Float, default=0)  # estimated exposure, e.g. 100w
    remarks = Column(String(255), nullable=False)  # remarks, e.g. minimum of 30 items for this price
    douyin_view_export_id = Column(BigInteger, ForeignKey('douyin_view_export.id', ondelete='CASCADE',
                                                          onupdate='CASCADE'))  # short-video KOL ID
    douyin_view_exportid = db.relationship('DouyinViewExport', backref='douyin_view_exports', uselist=False)
# Douyin special live session
class DouyinSpecialLive(db.Model, BaseDef):
    __tablename__ = 'douyin_special_live'
    __bind_key__ = USE
    id = Column(BigInteger, primary_key=True, autoincrement=True)
    resource_table_id = Column(BigInteger, ForeignKey('resource_table.id', ondelete='CASCADE', onupdate='CASCADE'))  # resource table ID
    export_tag = Column(String(128), nullable=False)  # KOL tags
    special_offer = Column(Integer, default=0)  # special-session quote, e.g. 630
    export_city = Column(String(64), nullable=True)  # KOL's city
    cooperation_case = Column(String(255), nullable=True)  # brands the KOL has worked with
    douyin_special_cost_price = Column(Integer, default=0)  # cost price, hidden from BD
# Douyin single-link live
class DouyinSingleChainLive(db.Model, BaseDef):
    __tablename__ = 'douyin_single_chain_live'
    __bind_key__ = USE
    id = Column(BigInteger, primary_key=True, autoincrement=True)
    resource_table_id = Column(BigInteger, ForeignKey('resource_table.id', ondelete='CASCADE', onupdate='CASCADE'))  # resource table ID
    douyin_export_classification = Column(String(255), nullable=False)  # suitable categories; Douyin top-level, possibly several
    Single_chain_offer = Column(Integer, default=0)  # single-link quote, e.g. 630
    introduction = Column(String(255), nullable=True)  # introduction
    selection_requirements = Column(String(255), nullable=False)  # product-selection requirements, e.g. good-looking beauty products with good ingredients
    live_time = Column(String(64))  # live schedule, e.g. daily / every week N
    remarks = Column(String(255), nullable=True)  # remarks
    douyin_single_cost_price = Column(Integer, default=0)  # cost price, hidden from BD
# Taobao live
class TaobaoLive(db.Model, BaseDef):
    __tablename__ = 'taobao_live'
    __bind_key__ = USE
    id = Column(BigInteger, primary_key=True, autoincrement=True)
    resource_table_id = Column(BigInteger, ForeignKey('resource_table.id', ondelete='CASCADE', onupdate='CASCADE'))  # resource table ID
    avg_viewing_num = Column(Float, default=0)  # average viewers per session / hour
    main_category = db.Column(db.String(64), nullable=False)  # main category
    introduction = Column(String(255), nullable=True)  # introduction
    taobao_offer = Column(Integer, default=0)  # maximum quote
    taobao_cost_price = Column(Integer, default=0)  # cost price, hidden from BD
# Qiteng Taobao KOL live quotes
class QitengTaobaoExportLiveOffer(db.Model, BaseDef):
    __tablename__ = 'qiteng_taobao_export_live_offer'
    __bind_key__ = USE
    id = Column(BigInteger, primary_key=True, autoincrement=True)
    hierarchy = Column(String(64), nullable=False)  # tier: quality / mid / head / super-head
    avg_viewing_num_less = Column(Float, default=0)  # lower bound of average viewers (10k)
    avg_viewing_num_more = Column(Float, default=0)  # upper bound of average viewers (10k)
    offer_less = Column(Integer, default=0)  # minimum quote
    offer_more = Column(Integer, default=0)  # maximum quote
    remarks = Column(String(255), nullable=True)  # remarks
    cost_price = Column(Integer, default=0)  # cost price, hidden from BD
    taobao_live_id = Column(BigInteger, ForeignKey('taobao_live.id', ondelete='CASCADE',
                                                   onupdate='CASCADE'))  # Taobao live ID
    taobao_liveid = db.relationship('TaobaoLive', backref='taobao_lives', uselist=False)
# Kuaishou live
class KuaiShouLive(db.Model, BaseDef):
    __tablename__ = 'kuai_show_live'
    __bind_key__ = USE
    id = Column(BigInteger, primary_key=True, autoincrement=True)
    resource_table_id = Column(BigInteger, ForeignKey('resource_table.id', ondelete='CASCADE', onupdate='CASCADE'))  # resource table ID
    avg_online_num = Column(Float, default=0)  # average online viewers (10k)
    sell_classification = Column(String(255), nullable=False)  # sellable categories
    commission_less = Column(Integer, default=0)  # minimum commission, e.g. 10%
    commission_more = Column(Integer, default=0)  # maximum commission, e.g. 10%
    attributes = db.Column(db.String(64), nullable=False)  # attributes: selling goods / chart boosting
    kuaishou_offer = db.Column(Integer, default=0)  # Kuaishou quote (single link)
    kuaishou_cost_price = Column(Integer, default=0)  # Kuaishou cost price, hidden from BD
    remarks = Column(String(255), nullable=True)  # remarks
    #
    # douyin_view_exports = db.relationship("DouyinViewExport", backref='douyin_export_classification')
from __future__ import absolute_import, unicode_literals
from enum import Enum
# Notes on Python's enum module (condensed from the original Chinese notes):
#
# Plain classes can emulate enums (class variables accessed as color.YELLOW),
# but they are unsafe: duplicate keys silently overwrite each other, and the
# values can be reassigned from outside the class.
#
# The standard-library `enum` module fixes this; the commonly used names are
# Enum, IntEnum and unique. Key behaviours:
#   1. Enum classes cannot be instantiated like normal classes.
#   2. Members are accessed via the class, e.g. color.YELLOW; the raw value is
#      color.YELLOW.value, and color(1) looks a member up by its value.
#   3. Member values cannot be reassigned from outside the class.
#   4. Members are compared with == or `is`; a member never equals its raw
#      value (color.YELLOW == 1 is False), and members of different Enum
#      classes are never equal to each other.
#   5. Keys must be unique; duplicate values are allowed, and each duplicate
#      becomes an alias of the first member defined with that value.
#   6. Use IntEnum to restrict values to integers, and the @unique decorator
#      to forbid duplicate values as well.
class SuitePushType(Enum):
    """Suite-related callback push types."""
    CHECK_URL = "check_url"  # verify URL
    CHANGE_AUTH = "change_auth"  # authorization change
    SUITE_TICKET = "suite_ticket"  # suite ticket
    TMP_AUTH_CODE = "tmp_auth_code"  # temporary authorization code
    SUITE_RELIEVE = "suite_relieve"  # authorization revoked
    CHECK_CREATE_SUITE_URL = "check_create_suite_url"  # verify URL when creating a suite
    CHECK_UPDATE_SUITE_URL = "check_update_suite_url"  # verify URL when updating a suite
    CHECK_SUITE_LICENSE_CODE = "check_suite_license_code"  # verify license code
    MARKET_BUY = "market_buy"  # user places a purchase order
    ORG_MICRO_APP_STOP = "org_micro_app_stop"  # organization logically disables a micro app
    ORG_MICRO_APP_REMOVE = "org_micro_app_remove"  # organization physically deletes a micro app
    ORG_MICRO_APP_RESTORE = "org_micro_app_restore"  # organization logically re-enables a micro app
| zly-resource-module | /zly-resource-module-1.0.1.tar.gz/zly-resource-module-1.0.1/aishow_model_app/core/constants.py | constants.py
from __future__ import absolute_import, unicode_literals
import hashlib
import json
import random
import string
import six
class ObjectDict(dict):
"""Makes a dictionary behave like an object, with attribute-style access.
"""
def __getattr__(self, key):
if key in self:
return self[key]
return None
def __setattr__(self, key, value):
self[key] = value
class DingTalkSigner(object):
""" signer 符号; 征候; 签名; 署名;"""
"""DingTalk data signer"""
def __init__(self, delimiter=b''):
"""
        :param delimiter: delimiter used to join the data before signing
"""
self._data = []
self._delimiter = to_binary(delimiter)
def add_data(self, *args):
"""Add data to signer"""
for data in args:
self._data.append(to_binary(data))
@property
def signature(self):
"""Get data signature"""
self._data.sort()
str_to_sign = self._delimiter.join(self._data)
return hashlib.sha1(str_to_sign).hexdigest()
def to_text(value, encoding='utf-8'):
"""Convert value to unicode, default encoding is utf-8
:param value: Value to be converted
:param encoding: Desired encoding
"""
if not value:
return ''
if isinstance(value, six.text_type):
return value
if isinstance(value, six.binary_type):
return value.decode(encoding)
return six.text_type(value)
def to_binary(value, encoding='utf-8'):
"""Convert value to binary string, default encoding is utf-8
:param value: Value to be converted
:param encoding: Desired encoding
"""
if not value:
return b''
if isinstance(value, six.binary_type):
return value
if isinstance(value, six.text_type):
return value.encode(encoding)
return to_text(value).encode(encoding)
def random_string(length=16):
"""
随机生成字符串
:param length:
:return:
"""
"""ascii_letters方法的作用是生成全部字母, 包括a - z, A - Z2.digits方法的作用是生成数组, 包括0 - 9"""
rule = string.ascii_letters + string.digits
rand_list = random.sample(rule, length)
return ''.join(rand_list)
def byte2int(c):
"""
ord()函数是chr()函数(对于8位的ASCII字符串)或unichr()函数(对于Unicode对象)的配对函数,它以一个字符(长度为1的字符串)
作为参数,返回对应的ASCII数值,或者Unicode数值,如果所给的Unicode字符超出了你的Python定义范围,则会引发一个TypeError
的异常。
语法
ord()方法的语法:
ord(c)参数
c - - 字符。
返回值
返回值是对应的十进制整数。
"""
if six.PY2:
return ord(c)
return c
def json_loads(s, object_hook=ObjectDict, **kwargs):
"""
object_hook参数是可选的,它会将(loads的)返回结果字典替换为你所指定的类型
:param s:
:param object_hook:
:param kwargs:
:return:
"""
    return json.loads(s, object_hook=object_hook, **kwargs)
| zly-resource-module | /zly-resource-module-1.0.1.tar.gz/zly-resource-module-1.0.1/aishow_model_app/core/utils.py | utils.py
from __future__ import absolute_import, unicode_literals
import six
# six is a compatibility module that papers over the differences between
# Python 2 and Python 3, such as the reorganized urllib and the str/bytes split.
# import six
# six.PY2            # True when running under Python 2
# six.PY3            # True when running under Python 3
# six.integer_types  # int and long on Python 2; only int on Python 3
# six.string_types   # basestring on Python 2; str on Python 3
# six.text_type      # text type: unicode on Python 2; str on Python 3
# six.binary_type    # byte-sequence type: str on Python 2; bytes on Python 3
# from dingtalk.core.utils import to_binary, to_text
from aishow_model_app.core.utils import to_binary, to_text
class DingTalkException(Exception):
def __init__(self, errcode, errmsg):
"""
:param errcode: Error code
:param errmsg: Error message
"""
self.errcode = errcode
self.errmsg = errmsg
def __str__(self):
_repr = 'Error code: {code}, message: {msg}'.format(
code=self.errcode,
msg=self.errmsg
)
if six.PY2:
return to_binary(_repr)
else:
return to_text(_repr)
def __repr__(self):
_repr = '{klass}({code}, {msg})'.format(
klass=self.__class__.__name__,
code=self.errcode,
msg=self.errmsg
)
if six.PY2:
return to_binary(_repr)
else:
return to_text(_repr)
class DingTalkClientException(DingTalkException):
"""WeChat API client exception class"""
def __init__(self, errcode, errmsg, client=None,
request=None, response=None):
super(DingTalkClientException, self).__init__(errcode, errmsg)
self.client = client
self.request = request
self.response = response
class InvalidSignatureException(DingTalkException):
"""Invalid signature exception class"""
def __init__(self, errcode=-40001, errmsg='Invalid signature'):
super(InvalidSignatureException, self).__init__(errcode, errmsg)
class InvalidCorpIdOrSuiteKeyException(DingTalkException):
"""Invalid app_id exception class"""
def __init__(self, errcode=-40005, errmsg='Invalid CorpIdOrSuiteKey'):
        super(InvalidCorpIdOrSuiteKeyException, self).__init__(errcode, errmsg)
| zly-resource-module | /zly-resource-module-1.0.1.tar.gz/zly-resource-module-1.0.1/aishow_model_app/core/exceptions.py | exceptions.py
### 中量引擎 Server-Side SDK (Python)
##### Installation
```
pip install zlyq-python-sdk
```
##### Official 中量引擎 documentation
```
https://wiki.zplatform.cn/
```
##### About this SDK
This SDK wraps the methods for syncing user data, media asset data, and users' historical interaction data to the 中量引擎 service.
##### Usage examples
Sync user info
```
from zlyqsync.client import SyncClient
from zlyqmodel.user import UserInfo
# sync user data
userClient = SyncClient("f1edf8b4ff55610c129dc1626e6dd71f", "7de548e902debd7ca6aac1f5c6ec802e", 402918291821943819, "http://testappapi.zplatform.cn")
userInfo = UserInfo()
userInfo.thirdId = '51982'
userInfo.udid = "ABC"
userInfo.nickname = "test name"
userInfo.gender = 1
userInfo.account = "xiaoming"
userInfo.avatar = "http://www.baidu.com"
userInfo.phone = "10000000000"
print(userClient.userInfoSynchronize(userInfo))
```
Sync a user's historical interaction events
```
from zlyqsync.client import SyncClient
from zlyqmodel.history import TrackInfo, TrackCommon, TrackLike, TrackFinishVideo
trackClient = SyncClient("f1edf8b4ff55610c129dc1626e6dd71f", "7de548e902debd7ca6aac1f5c6ec802e", 402918291821943819, "http://testtrackapi.zplatform.cn")
trackCommon = TrackCommon()
trackCommon.udid = "AKDHA-KAJFO-KA81K-9HQ1L"
trackCommon.userId = 404910192718291827
trackCommon.distinctId = 409181916271928172
trackCommon.appId = 402918291821943819
trackCommon.platform = "ios"
trackCommon.time = 1585239477000
trackCommon.screenHeight = 670
trackCommon.screenWidth = 375
trackCommon.manufacturer = "huawei"
trackCommon.network = 4
trackCommon.os = 2
trackCommon.osVersion = "11.2.4"
trackCommon.ip = "212.29.35.12"
trackCommon.country = "中国"
trackCommon.province = "北京"
trackCommon.city = "北京"
trackCommon.carrier = "电信"
trackLike = TrackLike()
trackLike.event = "like"
trackLike.contentId = 401929181919011928
trackLike.contentType = 1
trackFinishVideo = TrackFinishVideo()
trackFinishVideo.event = "finishVideo"
trackFinishVideo.contentId = 40192918191901
trackFinishVideo.contentType = 1
trackFinishVideo.videoTime = 15
trackFinishVideo.duration = 10
trackFinishVideo.isFinish = 0
properties = [trackLike, trackFinishVideo]
trackInfo = TrackInfo()
trackInfo.common = trackCommon
trackInfo.properties = properties
print(trackClient.historySynchronize(trackInfo))
```
| zlyq-python-sdk | /zlyq-python-sdk-0.0.3.tar.gz/zlyq-python-sdk-0.0.3/README.md | README.md |
from dataclasses import dataclass, asdict
import requests
import json
import os, sys
from requests_toolbelt.multipart.encoder import MultipartEncoder
sys.path.append("..")
from zlyqauth import sign as signAuth, appToken as appTokenAuth
@dataclass
class SyncClient():
    appKey: str
    appSecret: str
    appId: int
    address: str
def __buildHeader(self, params):
header = {}
sign, urlParam = signAuth.addSign(params, self.appId, self.appSecret)
appToken = appTokenAuth.addAppToken(self.appKey, self.appSecret)
header['Content-Type'] = 'application/json'
header['X-Sign'] = sign
header['X-App-Token'] = appToken
return header, urlParam
def __httpPost(self, address, apiUrl, params, body):
params = params if params else {}
header, urlParams = self.__buildHeader(params)
urlStr = address + apiUrl + "?" + urlParams
datas = json.dumps(body)
resp = requests.post(urlStr, data=datas, headers=header)
return resp.text
def __httpMultiForm(self, address, apiUrl, params, body):
params = params if params else {}
header, urlParams = self.__buildHeader(params)
urlStr = address + apiUrl + "?" + urlParams
multipart_encoder = MultipartEncoder(
fields=body,
)
header['Content-Type'] = multipart_encoder.content_type
resp = requests.post(
urlStr,
data=multipart_encoder,
headers=header)
return resp.text
def userInfoSynchronize(self, userInfo):
body = asdict(userInfo)
return self.__httpPost(self.address, "/api/v1/synchronize/userInfo", None, body)
def historySynchronize(self, trackInfo):
body = asdict(trackInfo)
return self.__httpPost(self.address, "/trace", None, body)
def videoUpload(self, video):
body = {
"image": ('imageFile', video.image, 'application/octet-stream'),
"video": ('videoFile', video.video, 'application/octet-stream'),
"title": video.title,
"userId": video.userId,
"content": video.content,
"orgFileName": video.orgFileName,
"os": str(video.os),
"source": str(video.source),
"thirdId": video.thirdId,
"thirdExtra": video.thirdExtra,
}
return self.__httpMultiForm(self.address, "/api/v1/videoUploadSync", None, body)
def videoSynchronize(self, video):
body = asdict(video)
return self.__httpPost(self.address, "/api/v1/videoSync", None, body)
def imageUpload(self, image):
body = {
"image": ('file', image.image, 'application/octet-stream'),
"description": image.description,
"userId": image.userId,
"source": str(image.source),
"thirdId": image.thirdId,
"thirdExtra": image.thirdExtra,
}
return self.__httpMultiForm(self.address, "/api/v1/imageUploadSync", None, body)
def articleSynchronize(self, article):
body = asdict(article)
return self.__httpPost(self.address, "/api/v1/articleSync", None, body)
def articleUpload(self, article):
body = asdict(article)
return self.__httpPost(self.address, "/api/v1/articleUploadSync", None, body)
def mediaLikeSynchronize(self, mls):
body = asdict(mls)
return self.__httpPost(self.address, "/api/v1/mediaLikeSync", None, body)
def mediaFavoriteSynchronize(self, mfs):
body = asdict(mfs)
return self.__httpPost(self.address, "/api/v1/mediaFavoriteSync", None, body)
def commentSynchronize(self, comments):
body = asdict(comments)
return self.__httpPost(self.address, "/api/v1/commentSync", None, body)
def commentLikeSynchronize(self, cls):
body = asdict(cls)
        return self.__httpPost(self.address, "/api/v1/commentLikeSync", None, body)
| zlyq-python-sdk | /zlyq-python-sdk-0.0.3.tar.gz/zlyq-python-sdk-0.0.3/zlyqsync/client.py | client.py
from dataclasses import dataclass
from typing import List
@dataclass
class TrackCommon():
udid : str = ""
userId : int = 0
distinctId : int = 0
appId : int = 0
platform : str = ""
time : int = 0
iosSdkVersions : str = ""
androidSdkVersions : str = ""
screenHeight : int = 0
screenWidth : int = 0
manufacturer : str = ""
network : int = 0
os : int = 0
osVersion : str = ""
ip : str = ""
country : str = ""
province : str = ""
city : str = ""
carrier : str = ""
utmSource : str = ""
utmMedia : str = ""
utmCampaign : str = ""
utmContent : str = ""
utmTerm : str = ""
appVersion : str = ""
@dataclass
class EventCommon():
event : str = ""
eventTime : int = 0
feedConfigId : str = ""
requestId : str = ""
abtestIds : str = ""
@dataclass
class TrackInfo():
common:TrackCommon = None
properties:List[EventCommon] = None
@dataclass
class TrackRegister(EventCommon):
methord : int = 0
isSuccess : int = 0
failReason : str = ""
@dataclass
class TrackLogin(EventCommon):
methord : int = 0
isSuccess : int = 0
failReason : str = ""
@dataclass
class TrackFinishVideo(EventCommon):
methord : int = 0
contentId : int = 0
contentType : int = 0
videoTime : int = 0
duration : int = 0
isFinish : int = 0
@dataclass
class TrackLike(EventCommon):
contentId : int = 0
contentType : int = 0
@dataclass
class TrackDisLike(EventCommon):
contentId : int = 0
contentType : int = 0
@dataclass
class TrackFollow(EventCommon):
contentId : int = 0
contentType : int = 0
authorId : int = 0
@dataclass
class TrackComment(EventCommon):
contentId : int = 0
contentType : int = 0
@dataclass
class TrackLikeComment(EventCommon):
contentId : int = 0
contentType : int = 0
commentId : int = 0
@dataclass
class TrackDislikeComment(EventCommon):
contentId : int = 0
contentType : int = 0
commentId : int = 0
if __name__ == "__main__":
trackCommon = TrackCommon()
print(trackCommon)
trackRegister = TrackRegister()
print(trackRegister)
properties = [trackRegister]
trackInfo = TrackInfo()
trackInfo.common = trackCommon
trackInfo.properties = properties
    print(trackInfo)
| zlyq-python-sdk | /zlyq-python-sdk-0.0.3.tar.gz/zlyq-python-sdk-0.0.3/zlyqmodel/history.py | history.py
# `zm-au`
`zm-au` is a developer tool that provides an auto-updating API for programs. Note that this can be a bad idea for many reasons, so you should probably ask the user first.
Sorry for prefixing the name with "zm", but I'm sure I'll have to do that again as I have no creative names for anything anymore.
## Usage
`zm-au` comes with two useful auto-updaters, `PipAU` and `PipGitHubAU`, and a class to base an auto-updater off of.
Let's say you are creating a Python package called `skippitybop` and you want it to notify the user when there is an update available on PyPI for it. Simply insert this code where you want the update check to happen.
```python
from zm_au import PipAU
updater = PipAU("skippitybop")
updater.update(prompt=True)
```
When the code is run, if there is an update available on PyPI, the user will be prompted to install it via `pip`. If the user chooses to install it, the program will exit on success. Or failure, for that matter.
Take a guess what `prompt=False` would do.
Let's say you are creating a Python package called `boppityskip` on bigboi's GitHub repo and you want it to notify the user when there is an update available on GitHub releases for it, probably because the package is private and not on PyPI. Insert this code where you want the update check to happen.
```python
from zm_au import PipGitHubAU
updater = PipGitHubAU("boppityskip", "bigboi/boppityskip", check_prerelease=True, dist="whl")
updater.update(prompt=True)
```
When the code is run, if there is an update available on GitHub releases (including prereleases) that is a `whl` file, the user will be prompted to install it via `pip`. Again, if the user chooses to install it, the program will exit on success or failure.
You can build your own AUs by making a class that inherits from `BaseAU`. Override the following functions as such.
- `_get_current_version` - Must return the current version of the package
- `_get_latest_version` - Must return the latest version of the package
- `_download` - Must download the package and return the filename of the downloaded file
- `_update` - Must install a package whose location is passed via the only parameter of this function
Be smart about how you use this!
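If you want to see the shape of a custom AU without touching the network, here is a minimal sketch. The `BaseAU` below is a simplified stand-in for `zm_au.BaseAU` so the snippet runs on its own (the real base class also provides `update()` with prompting), and every version string and file name is a made-up placeholder:

```python
# Sketch of a custom auto-updater. BaseAU here is a stub standing in for
# zm_au.BaseAU so the example is self-contained.
class BaseAU:
    def __init__(self):
        self.current_version = self._get_current_version()
        self.latest_version = self._get_latest_version()

    @property
    def needs_update(self):
        # The real class compares versions with packaging.version; a plain
        # tuple comparison is enough for this sketch.
        parse = lambda v: tuple(int(part) for part in v.split("."))
        return parse(self.latest_version) > parse(self.current_version)


class ToyAU(BaseAU):
    """Auto-updater with hard-coded versions (placeholders for real lookups)."""

    def _get_current_version(self):
        return "1.2.0"  # a real AU would read the installed version

    def _get_latest_version(self):
        return "1.3.0"  # a real AU would query a release server

    def _download(self):
        return "toy-1.3.0.whl"  # a real AU would download and return the path

    def _update(self, update_file):
        print(f"installing {update_file}")  # a real AU would run the installer


updater = ToyAU()
print(updater.needs_update)  # True, since 1.3.0 > 1.2.0
```

The four overridden methods line up one-to-one with the list above; everything else (prompting, temp directories, exiting) is the base class's job.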
| zm-au | /zm_au-2.0.0.tar.gz/zm_au-2.0.0/README.md | README.md |
import sys
from pathlib import Path
from tempfile import TemporaryDirectory
import packaging.version
import zmtools
class UpdateException(Exception):
# Base exception class for this module
pass
class BaseAU:
"""Base auto-updater. Should be inherited from.
Attributes:
current_version (str): The current version of the package.
latest_version (str): The latest version of the package.
needs_update (bool): If the package is not the latest version.
"""
def __init__(self) -> None:
self.current_version = self._get_current_version()
self.latest_version = self._get_latest_version()
def _get_current_version(self) -> str: # type: ignore
# Override me!
# This function must return the current version of the package
pass
def _get_latest_version(self) -> str: # type: ignore
# Override me!
# This function must return the latest version of the package
pass
def _download(self) -> Path: # type: ignore
# Override me!
# This function must download the package and return the path of the downloaded file
pass
def _update(self, update_file: Path) -> None:
# Override me!
# This function must install a package whose location is passed via the sole parameter
pass
def update(self, prompt: bool = True) -> None:
"""Update the package.
Args:
prompt (bool, optional): If True, prompt before updating. Defaults to True.
"""
if self.needs_update and (
not prompt
or zmtools.y_to_continue(
f"A newer version ({self.latest_version}) is available. Would you like to update?"
)
):
with TemporaryDirectory() as temp_dir:
with zmtools.working_directory(Path(temp_dir)):
update_file = self._download()
self._update(update_file)
print("Exiting as an update has completed")
sys.exit()
@property
def currently_installed_version_is_unreleased(self) -> bool:
# Override me!
# This property should return True if the currently installed version of a package is in development, i.e. an unreleased commit
return False
@property
def needs_update(self) -> bool:
if (
self.current_version == "0.0.0"
or self.currently_installed_version_is_unreleased
):
return False
else:
return packaging.version.parse(
self.latest_version
        ) > packaging.version.parse(self.current_version)
| zm-au | /zm_au-2.0.0.tar.gz/zm_au-2.0.0/zm_au/base.py | base.py
import importlib
import json
import os
import subprocess
import sys
from typing import Optional
import packaging.version
from python_install_directives import PipPackage
from ..base import BaseAU, UpdateException
class PipGitHubAU(BaseAU):
def __init__(
self,
name: str,
github_location: str,
check_prerelease: bool = False,
dist: str = "whl",
silent: bool = False,
):
# These parameters tell which uploads to look for and where (note that github_location's format is like "zmarffy/au")
self._pip_package = PipPackage(name)
self.github_location = github_location
self.check_prerelease = check_prerelease
self.dist = dist
self.silent = silent
super().__init__()
@property
def _o(self) -> Optional[int]:
# If silent, redirect output of pip to the null device
if self.silent:
return subprocess.DEVNULL
else:
return None
def _get_current_version(self) -> str:
return self._pip_package.version
def _get_latest_version(self) -> str:
# Uses `gh api` to find the latest version
try:
d = json.loads(
subprocess.check_output(
["gh", "api", f"repos/{self.github_location}/releases"]
)
)
except subprocess.CalledProcessError:
raise UpdateException(
f"Either {self.github_location} doesn't exist or you need to authorize with GitHub (run `gh auth login`) before attempting to check for updates for it"
)
if not self.check_prerelease:
latest = next(x for x in d if not x["prerelease"])
else:
latest = d[0]
return latest["tag_name"]
def _download(self) -> str:
# Uses `gh release download` to download the version found from _get_latest_version (doesn't blindly install the latest version to avoid a super rare race condition)
subprocess.run(
[
"gh",
"release",
"download",
self.latest_version,
"-R",
self.github_location,
"-p",
f"*.{self.dist}",
],
stdout=self._o,
)
files = os.listdir()
if len(files) != 1:
raise ValueError(
f"Ambiguous install instructions as multiple files were downloaded. Files: {files}"
)
return files[0]
def _update(self, update_file: str):
# Installs a file with `pip install`
subprocess.run(
[sys.executable, "-m", "pip", "install", update_file], stdout=self._o
)
try:
importlib.import_module(
f"{self._pip_package.name}.python_install_directives"
)
has_id = True
except ModuleNotFoundError:
has_id = False
if has_id:
subprocess.run(["install-directives", self._pip_package.name, "install"])
@property
def currently_installed_version_is_unreleased(self) -> bool:
        return bool(packaging.version.parse(self.current_version).local)
| zm-au | /zm_au-2.0.0.tar.gz/zm_au-2.0.0/zm_au/pip/github.py | github.py
import importlib
import os
import subprocess
import sys
from typing import Optional, Union
import packaging.version
import requests
from python_install_directives import PipPackage
from ..base import BaseAU
class PipAU(BaseAU):
def __init__(self, name: str, silent: bool = False) -> None:
self._pip_package = PipPackage(name)
self.silent = silent
super().__init__()
@property
def _o(self) -> Optional[int]:
# If silent, redirect output of pip to the null device
if self.silent:
return subprocess.DEVNULL
else:
return None
def _get_current_version(self) -> str:
return self._pip_package.version
def _get_latest_version(self) -> str:
# Hits the PyPI API to find the latest version
return requests.get(
f"https://pypi.org/pypi/{self._pip_package.name}/json"
).json()["info"]["version"]
def _update(self, update_file: Union[os.PathLike, str]) -> None:
# Runs a `pip install` of the version found by _get_latest_version (doesn't blindly install the latest version to avoid weird race conditions)
# Note that _download was never overridden. It does not need to be as that functionality is built into `pip install`
subprocess.run(
[
sys.executable,
"-m",
"pip",
"install",
f"{self._pip_package.name}=={self.latest_version}",
],
stdout=self._o,
)
try:
importlib.import_module(
f"{self._pip_package.name}.python_install_directives"
)
except ModuleNotFoundError:
has_id = False
else:
has_id = True
if has_id:
subprocess.run(["install-directives", self._pip_package.name, "install"])
@property
def currently_installed_version_is_unreleased(self) -> bool:
        return bool(packaging.version.parse(self.current_version).local)
| zm-au | /zm_au-2.0.0.tar.gz/zm_au-2.0.0/zm_au/pip/base.py | base.py
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Gaussian(Distribution):
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu=0, sigma=1):
Distribution.__init__(self, mu, sigma)
def calculate_mean(self):
"""Function to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
avg = 1.0 * sum(self.data) / len(self.data)
self.mean = avg
return self.mean
def calculate_stdev(self, sample=True):
"""Function to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
if sample:
n = len(self.data) - 1
else:
n = len(self.data)
mean = self.calculate_mean()
sigma = 0
for d in self.data:
sigma += (d - mean) ** 2
sigma = math.sqrt(sigma / n)
self.stdev = sigma
return self.stdev
def plot_histogram(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.hist(self.data)
plt.title('Histogram of Data')
plt.xlabel('data')
plt.ylabel('count')
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2)
def plot_histogram_pdf(self, n_spaces = 50):
"""Function to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
        axes[1].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
result = Gaussian()
result.mean = self.mean + other.mean
result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
return result
def __repr__(self):
"""Function to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}".format(self.mean, self.stdev) | zm-dsnd-probability | /zm_dsnd_probability-0.1.tar.gz/zm_dsnd_probability-0.1/zm_dsnd_probability/Gaussiandistribution.py | Gaussiandistribution.py |
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Binomial(Distribution):
""" Binomial distribution class for calculating and
visualizing a Binomial distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats to be extracted from the data file
p (float) representing the probability of an event occurring
n (int) number of trials
    """
def __init__(self, prob=.5, size=20):
self.n = size
self.p = prob
Distribution.__init__(self, self.calculate_mean(), self.calculate_stdev())
def calculate_mean(self):
"""Function to calculate the mean from p and n
Args:
None
Returns:
float: mean of the data set
"""
self.mean = self.p * self.n
return self.mean
def calculate_stdev(self):
"""Function to calculate the standard deviation from p and n.
Args:
None
Returns:
float: standard deviation of the data set
"""
self.stdev = math.sqrt(self.n * self.p * (1 - self.p))
return self.stdev
def replace_stats_with_data(self):
"""Function to calculate p and n from the data set
Args:
None
Returns:
float: the p value
float: the n value
"""
self.n = len(self.data)
self.p = 1.0 * sum(self.data) / len(self.data)
self.mean = self.calculate_mean()
self.stdev = self.calculate_stdev()
def plot_bar(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.bar(x = ['0', '1'], height = [(1 - self.p) * self.n, self.p * self.n])
plt.title('Bar Chart of Data')
plt.xlabel('outcome')
plt.ylabel('count')
def pdf(self, k):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
a = math.factorial(self.n) / (math.factorial(k) * (math.factorial(self.n - k)))
b = (self.p ** k) * (1 - self.p) ** (self.n - k)
return a * b
def plot_bar_pdf(self):
"""Function to plot the pdf of the binomial distribution
Args:
None
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
x = []
y = []
# calculate the x values to visualize
for i in range(self.n + 1):
x.append(i)
y.append(self.pdf(i))
# make the plots
plt.bar(x, y)
plt.title('Distribution of Outcomes')
plt.ylabel('Probability')
plt.xlabel('Outcome')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Binomial distributions with equal p
Args:
other (Binomial): Binomial instance
Returns:
Binomial: Binomial distribution
"""
try:
assert self.p == other.p, 'p values are not equal'
except AssertionError as error:
raise
result = Binomial()
result.n = self.n + other.n
result.p = self.p
result.calculate_mean()
result.calculate_stdev()
return result
def __repr__(self):
"""Function to output the characteristics of the Binomial instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}, p {}, n {}".\
            format(self.mean, self.stdev, self.p, self.n)
| zm-dsnd-probability | /zm_dsnd_probability-0.1.tar.gz/zm_dsnd_probability-0.1/zm_dsnd_probability/Binomialdistribution.py | Binomialdistribution.py
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Zm-py
[](https://badge.fury.io/py/zm-py/)
[](https://travis-ci.org/rohankapoorcom/zm-py)
[](https://pypi.python.org/pypi/zm-py)
[](https://github.com/rohankapoorcom/zm-py/blob/master/LICENSE.md)
A loose Python wrapper around the [ZoneMinder](https://www.zoneminder.org) API. As time goes on, additional functionality will be added to this API client.
zm-py is based on code that was originally part of [Home Assistant](https://www.home-assistant.io). Historical sources and authorship information are available as part of the Home Assistant project:
- [ZoneMinder Platform](https://github.com/home-assistant/home-assistant/commits/dev/homeassistant/components/zoneminder.py)
- [ZoneMinder Camera](https://github.com/home-assistant/home-assistant/commits/dev/homeassistant/components/camera/zoneminder.py)
- [ZoneMinder Sensor](https://github.com/home-assistant/home-assistant/commits/dev/homeassistant/components/sensor/zoneminder.py)
- [ZoneMinder Switch](https://github.com/home-assistant/home-assistant/commits/dev/homeassistant/components/switch/zoneminder.py)
## Installation
### PyPI
```bash
$ pip install zm-py
```
## Usage
```python
from zoneminder.zm import ZoneMinder
SERVER_HOST = "{{host}}:{{port}}"
USER = "{{user}}"
PASS = "{{pass}}"
SERVER_PATH = "{{path}}"
zm_client = ZoneMinder(
server_host=SERVER_HOST, server_path=SERVER_PATH, username=USER, password=PASS, verify_ssl=False
)
# ZoneMinder authentication
zm_client.login()
# Get all monitors
monitors = zm_client.get_monitors()
for monitor in monitors:
print(monitor)
>>> Monitor(id='monitor_id', name='monitor_name', controllable='is_controllable')
# Move camera right
controllable_monitors = [m for m in monitors if m.controllable]
for monitor in controllable_monitors:
zm_client.move_monitor(monitor, "right")
```
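Direction strings passed to `move_monitor` are matched against the library's `ControlType` members by upper-casing them and replacing hyphens with underscores (see `ControlType.from_move` in `zoneminder/monitor.py`). A self-contained sketch of that normalization, using a trimmed-down stand-in enum rather than the real one:

```python
from enum import Enum

class ControlType(Enum):
    # Trimmed-down stand-in for zm-py's PTZ command enum
    RIGHT = "moveConRight"
    LEFT = "moveConLeft"
    UP_LEFT = "moveConUpLeft"
    DOWN_RIGHT = "moveConDownRight"

    @classmethod
    def from_move(cls, move):
        # 'up-left' -> 'UP_LEFT', 'Right' -> 'RIGHT'
        key = move.upper().replace("-", "_")
        if key in cls.__members__:
            return cls.__members__[key]
        raise ValueError("unknown direction: %s" % move)

print(ControlType.from_move("up-left").value)  # moveConUpLeft
```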
import logging
from typing import List, Optional
from urllib.parse import urljoin
from urllib.parse import quote
import requests
from zoneminder.monitor import Monitor
from zoneminder.run_state import RunState
from zoneminder.exceptions import ControlTypeError, MonitorControlTypeError
_LOGGER = logging.getLogger(__name__)
class ZoneMinder:
"""The ZoneMinder API client itself. Create one of these to begin."""
DEFAULT_SERVER_PATH = "/zm/"
DEFAULT_ZMS_PATH = "/zm/cgi-bin/nph-zms"
DEFAULT_TIMEOUT = 10
LOGIN_RETRIES = 2
MONITOR_URL = "api/monitors.json"
def __init__(
self,
server_host,
username,
password,
server_path=DEFAULT_SERVER_PATH,
zms_path=DEFAULT_ZMS_PATH,
verify_ssl=True,
) -> None:
"""Create a ZoneMinder API Client."""
self._server_url = ZoneMinder._build_server_url(server_host, server_path)
self._zms_url = ZoneMinder._build_zms_url(server_host, zms_path)
self._username = username
self._password = password
self._verify_ssl = verify_ssl
self._cookies = None
self._auth_token = None
def login(self):
"""Login to the ZoneMinder API."""
_LOGGER.debug("Attempting to login to ZoneMinder")
login_post = {}
if self._username:
login_post["user"] = self._username
if self._password:
login_post["pass"] = self._password
req = requests.post(
urljoin(self._server_url, "api/host/login.json"),
data=login_post,
verify=self._verify_ssl,
)
if req.ok:
try:
self._auth_token = req.json()["access_token"]
return True
except KeyError:
# Try legacy auth below
pass
return self._legacy_auth()
def _legacy_auth(self):
login_post = {"view": "console", "action": "login"}
if self._username:
login_post["username"] = self._username
if self._password:
login_post["password"] = self._password
req = requests.post(
urljoin(self._server_url, "index.php"),
data=login_post,
verify=self._verify_ssl,
)
self._cookies = req.cookies
        # Login calls return a 200 response on both failure and success.
# The only way to tell if you logged in correctly is to issue an api
# call.
req = requests.get(
urljoin(self._server_url, "api/host/getVersion.json"),
cookies=self._cookies,
timeout=ZoneMinder.DEFAULT_TIMEOUT,
verify=self._verify_ssl,
)
if not req.ok:
_LOGGER.error("Connection error logging into ZoneMinder")
return False
return True
def get_state(self, api_url) -> dict:
"""Perform a GET request on the specified ZoneMinder API URL."""
return self._zm_request("get", api_url)
def change_state(self, api_url, post_data) -> dict:
"""Perform a POST request on the specific ZoneMinder API Url."""
return self._zm_request("post", api_url, post_data)
def _zm_request(self, method, api_url, data=None, timeout=DEFAULT_TIMEOUT) -> dict:
"""Perform a request to the ZoneMinder API."""
token_url_suffix = ""
if self._auth_token:
token_url_suffix = "?token=" + self._auth_token
try:
# Since the API uses sessions that expire, sometimes we need to
# re-auth if the call fails.
for _ in range(ZoneMinder.LOGIN_RETRIES):
req = requests.request(
method,
urljoin(self._server_url, api_url) + token_url_suffix,
data=data,
cookies=self._cookies,
timeout=timeout,
verify=self._verify_ssl,
)
if not req.ok:
self.login()
else:
break
else:
_LOGGER.error("Unable to get API response from ZoneMinder")
try:
return req.json()
except ValueError:
_LOGGER.exception(
                    'JSON decode exception caught while attempting to decode "%s"',
req.text,
)
return {}
except requests.exceptions.ConnectionError:
_LOGGER.exception("Unable to connect to ZoneMinder")
return {}
def get_monitors(self) -> List[Monitor]:
"""Get a list of Monitors from the ZoneMinder API."""
raw_monitors = self._zm_request("get", ZoneMinder.MONITOR_URL)
if not raw_monitors:
_LOGGER.warning("Could not fetch monitors from ZoneMinder")
return []
monitors = []
for raw_result in raw_monitors["monitors"]:
_LOGGER.debug("Initializing camera %s", raw_result["Monitor"]["Id"])
monitors.append(Monitor(self, raw_result))
return monitors
def get_run_states(self) -> List[RunState]:
"""Get a list of RunStates from the ZoneMinder API."""
raw_states = self.get_state("api/states.json")
if not raw_states:
_LOGGER.warning("Could not fetch runstates from ZoneMinder")
return []
run_states = []
for i in raw_states["states"]:
raw_state = i["State"]
_LOGGER.info("Initializing runstate %s", raw_state["Id"])
run_states.append(RunState(self, raw_state))
return run_states
def get_active_state(self) -> Optional[str]:
"""Get the name of the active run state from the ZoneMinder API."""
for state in self.get_run_states():
if state.active:
return state.name
return None
def set_active_state(self, state_name):
"""
Set the ZoneMinder run state to the given state name, via ZM API.
Note that this is a long-running API call; ZoneMinder changes the state
of each camera in turn, and this GET does not receive a response until
all cameras have been updated. Even on a reasonably powerful machine,
this call can take ten (10) or more seconds **per camera**. This method
sets a timeout of 120, which should be adequate for most users.
"""
_LOGGER.info("Setting ZoneMinder run state to state %s", state_name)
return self._zm_request("GET", "api/states/change/{}.json".format(state_name), timeout=120)
def get_zms_url(self) -> str:
"""Get the url to the current ZMS instance."""
return self._zms_url
def get_url_with_auth(self, url) -> str:
"""Add the auth credentials to a url (if needed)."""
if not self._username:
return url
url += "&user={:s}".format(quote(self._username))
if not self._password:
return url
return url + "&pass={:s}".format(quote(self._password))
@property
def is_available(self) -> bool:
"""Indicate if this ZoneMinder service is currently available."""
status_response = self.get_state("api/host/daemonCheck.json")
if not status_response:
return False
return status_response.get("result") == 1
@property
def verify_ssl(self) -> bool:
"""Indicate whether urls with http(s) should verify the certificate."""
return self._verify_ssl
@staticmethod
def _build_zms_url(server_host, zms_path) -> str:
"""Build the ZMS url to the current ZMS instance."""
return urljoin(server_host, zms_path)
@staticmethod
def _build_server_url(server_host, server_path) -> str:
"""Build the server url making sure it ends in a trailing slash."""
server_url = urljoin(server_host, server_path)
if server_url[-1] == "/":
return server_url
return "{}/".format(server_url)
def move_monitor(self, monitor: Monitor, direction: str):
"""Call Zoneminder to move."""
try:
result = monitor.ptz_control_command(direction, self._auth_token, self._server_url)
if result:
                _LOGGER.info("Successfully moved camera to %s", direction)
else:
                _LOGGER.error("Unable to move camera to %s", direction)
except ControlTypeError:
            _LOGGER.exception("Invalid move direction: %s", direction)
except MonitorControlTypeError:
            _LOGGER.exception("Monitor %s does not support PTZ control", monitor.name)
from enum import Enum
import logging
from typing import Optional
from urllib.parse import urlencode
from requests import post
from .exceptions import ControlTypeError, MonitorControlTypeError
_LOGGER = logging.getLogger(__name__)
# From ZoneMinder's web/includes/config.php.in
STATE_ALARM = 2
class ControlType(Enum):
"""Represents the possibles movements types of the Monitor."""
RIGHT = "moveConRight"
LEFT = "moveConLeft"
UP = "moveConUp"
DOWN = "moveConDown"
UP_LEFT = "moveConUpLeft"
UP_RIGHT = "moveConUpRight"
DOWN_LEFT = "moveConDownLeft"
DOWN_RIGHT = "moveConDownRight"
@classmethod
def from_move(cls, move) -> Enum:
"""Get the corresponding direction from the move.
Example values: 'right', 'UP-RIGHT', 'down', 'down-left', or 'up_left'.
"""
for move_key, move_obj in ControlType.__members__.items():
if move_key == move.upper().replace("-", "_"):
return move_obj
raise ControlTypeError()
class MonitorState(Enum):
"""Represents the current state of the Monitor."""
NONE = "None"
MONITOR = "Monitor"
MODECT = "Modect"
RECORD = "Record"
MOCORD = "Mocord"
NODECT = "Nodect"
class TimePeriod(Enum):
"""Represents a period of time to check for events."""
@property
def period(self) -> str:
"""Get the period of time."""
# pylint: disable=unsubscriptable-object
return self.value[0]
@property
def title(self) -> str:
"""Explains what is measured in this period."""
# pylint: disable=unsubscriptable-object
return self.value[1]
@staticmethod
def get_time_period(value):
"""Get the corresponding TimePeriod from the value.
Example values: 'all', 'hour', 'day', 'week', or 'month'.
"""
for time_period in TimePeriod:
if time_period.period == value:
return time_period
raise ValueError("{} is not a valid TimePeriod".format(value))
ALL = ("all", "Events")
HOUR = ("hour", "Events Last Hour")
DAY = ("day", "Events Last Day")
WEEK = ("week", "Events Last Week")
MONTH = ("month", "Events Last Month")
class Monitor:
"""Represents a Monitor from ZoneMinder."""
def __init__(self, client, raw_result):
"""Create a new Monitor."""
self._client = client
self._raw_result = raw_result
raw_monitor = raw_result["Monitor"]
self._monitor_id = int(raw_monitor["Id"])
self._monitor_url = "api/monitors/{}.json".format(self._monitor_id)
self._name = raw_monitor["Name"]
self._controllable = bool(int(raw_monitor["Controllable"]))
self._mjpeg_image_url = self._build_image_url(raw_monitor, "jpeg")
self._still_image_url = self._build_image_url(raw_monitor, "single")
self._fmt = "{}(id={}, name={}, controllable={})"
def __repr__(self) -> str:
"""Representation of a Monitor."""
        return self._fmt.format(self.__class__.__name__, self.id, self.name, self.controllable)
def __str__(self) -> str:
"""Representation of a Monitor."""
return self.__repr__()
@property
def id(self) -> int:
"""Get the ZoneMinder id number of this Monitor."""
# pylint: disable=invalid-name
return self._monitor_id
@property
def name(self) -> str:
"""Get the name of this Monitor."""
return self._name
def update_monitor(self):
"""Update the monitor and monitor status from the ZM server."""
result = self._client.get_state(self._monitor_url)
self._raw_result = result["monitor"]
@property
def function(self) -> MonitorState:
"""Get the MonitorState of this Monitor."""
self.update_monitor()
return MonitorState(self._raw_result["Monitor"]["Function"])
@function.setter
def function(self, new_function):
"""Set the MonitorState of this Monitor."""
self._client.change_state(self._monitor_url, {"Monitor[Function]": new_function.value})
@property
def controllable(self) -> bool:
"""Indicate whether this Monitor is movable."""
return self._controllable
@property
def mjpeg_image_url(self) -> str:
"""Get the motion jpeg (mjpeg) image url of this Monitor."""
return self._mjpeg_image_url
@property
def still_image_url(self) -> str:
"""Get the still jpeg image url of this Monitor."""
return self._still_image_url
@property
def is_recording(self) -> Optional[bool]:
"""Indicate if this Monitor is currently recording."""
status_response = self._client.get_state(
"api/monitors/alarm/id:{}/command:status.json".format(self._monitor_id)
)
if not status_response:
_LOGGER.warning("Could not get status for monitor %s.", self._monitor_id)
return None
status = status_response.get("status")
# ZoneMinder API returns an empty string to indicate that this monitor
# cannot record right now
if status == "":
return False
return int(status) == STATE_ALARM
@property
def is_available(self) -> bool:
"""Indicate if this Monitor is currently available."""
status_response = self._client.get_state(
"api/monitors/daemonStatus/id:{}/daemon:zmc.json".format(self._monitor_id)
)
if not status_response:
_LOGGER.warning("Could not get availability for monitor %s.", self._monitor_id)
return False
# Monitor_Status was only added in ZM 1.32.3
monitor_status = self._raw_result.get("Monitor_Status", None)
capture_fps = monitor_status and monitor_status["CaptureFPS"]
return status_response.get("status", False) and capture_fps != "0.00"
def get_events(self, time_period, include_archived=False) -> Optional[int]:
"""Get the number of events that have occurred on this Monitor.
Specifically only gets events that have occurred within the TimePeriod
provided.
"""
date_filter = "1%20{}".format(time_period.period)
if time_period == TimePeriod.ALL:
# The consoleEvents API uses DATE_SUB, so give it
# something large
date_filter = "100%20year"
archived_filter = "/Archived=:0"
if include_archived:
archived_filter = ""
event = self._client.get_state(
"api/events/consoleEvents/{}{}.json".format(date_filter, archived_filter)
)
try:
events_by_monitor = event["results"]
if isinstance(events_by_monitor, list):
return 0
return events_by_monitor.get(str(self._monitor_id), 0)
except (TypeError, KeyError, AttributeError):
return None
def _build_image_url(self, monitor, mode) -> str:
"""Build and return a ZoneMinder camera image url."""
query = urlencode(
{
"mode": mode,
"buffer": monitor["StreamReplayBuffer"],
"monitor": monitor["Id"],
}
)
url = "{zms_url}?{query}".format(zms_url=self._client.get_zms_url(), query=query)
_LOGGER.debug("Monitor %s %s URL (without auth): %s", monitor["Id"], mode, url)
return self._client.get_url_with_auth(url)
def ptz_control_command(self, direction, token, base_url) -> bool:
"""Move camera."""
if not self.controllable:
raise MonitorControlTypeError()
ptz_url = "{}index.php".format(base_url)
params = {
"view": "request",
"request": "control",
"id": self.id,
"control": ControlType.from_move(direction).value,
"xge": 43,
"token": token,
}
req = post(url=ptz_url, params=params)
        return bool(req.ok)
===============================
zabbix-cli-ent
===============================
Zabbix management command-line client
* Free software: Apache license
* Source: https://github.com/orviz/zabbix-cli-ent
* Bugs: https://github.com/orviz/zabbix-cli-ent/issues
zabbix-cli-ent lets you perform actions on a Zabbix server
from the command line.
The Zabbix web interface is usually the most suitable tool for
administration tasks; however, there are times when you need to
modify settings non-interactively (from scripts, for example) or
simply find certain actions easier to perform from the command line.
Getting Started
---------------
Install `zabbix-cli-ent` using `pip`, either by getting the
version uploaded in PyPi:
.. code:: bash
$ pip install zm
or the one from the current repo:
.. code:: bash
$ git clone https://github.com/orviz/zabbix-cli-ent.git
$ cd zabbix-cli-ent && pip install .
Basic Usage
-----------
.. code:: bash
$ zm --help
will list the current actions that can be performed.
Depending on the subcommand it will have different options;
rely on the `--help` option to learn about each one.
**NOTE**: You can provide the connection details as options or
via a configuration file. Either way, the login, password
and url must be provided in order to establish a successful
connection.
Use it programmatically
-----------------------
You can also use zabbix-cli-ent as a Python library to get data
from a Zabbix API.
For that you first need to provide the credentials to be able to
access any of the available functionality. As an example:
.. code:: Python
import zm.trigger
from oslo.config import cfg
CONF = cfg.CONF
CONF.username="foo"
CONF.password="bar"
CONF.url="https://example.com/zabbix"
    print(zm.trigger.list(host="host.example.com",
                          priority="DISASTER",
                          omit_ack=True))
Extending Functionality
-----------------------
The code makes it easy to extend the functionality. To do
so:
1. Create a new class inheriting from ``Command`` that will
   handle the new functionality.
- ``__init__()``, where you will define the new action's options.
- ``run()``, sets the work to be done.
2. Add the brand new class to: ``commands.py`` > ``add_command_parsers()``
There you go!
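A rough sketch of that recipe (the base-class shape, parser wiring, and all
names below are illustrative assumptions, not the project's actual API):

.. code:: python

    import argparse

    class ListHostsCommand:
        """Hypothetical subcommand; inherit the real ``Command`` class instead."""

        def __init__(self, parser):
            # 1. Define the new action's options
            parser.add_argument("--group", help="filter hosts by host-group name")

        def run(self, parsed_args):
            # 2. Set the work to be done (placeholder only)
            return "listing hosts in group: %s" % parsed_args.group

    # Wiring comparable to commands.py > add_command_parsers()
    parser = argparse.ArgumentParser(prog="zm")
    subparsers = parser.add_subparsers(dest="command")
    cmd = ListHostsCommand(subparsers.add_parser("host-list"))
    args = parser.parse_args(["host-list", "--group", "web"])
    print(cmd.run(args))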
# python-zmanim
Python Zmanim library
This project is a port of Eliyahu Hershfeld's [KosherJava project](https://github.com/KosherJava/zmanim), with some Python niceties and other minor modifications.
## Usage
Some common examples include...
#### Zmanim calculations
```python
# Initialize a new ZmanimCalendar object, defaults to using today's date in GMT, located at Greenwich, England
from zmanim.zmanim_calendar import ZmanimCalendar
calendar = ZmanimCalendar()
calendar
#=> zmanim.zmanim_calendar.ZmanimCalendar(candle_lighting_offset=18, geo_location=zmanim.util.geo_location.GeoLocation(name='Greenwich, England', latitude=51.4772, longitude=0.0, time_zone=tzfile('/usr/share/zoneinfo/GMT'), elevation=0.0), date=datetime.datetime(2018, 8, 26, 11, 40, 29, 334774), calculator=<zmanim.util.noaa_calculator.NOAACalculator object at 0x10bbf7710>)
# Calculate the sunset for today at that location
calendar.sunset()
#=> datetime.datetime(2018, 8, 26, 18, 58, 40, 796469, tzinfo=tzfile('/usr/share/zoneinfo/GMT'))
# Prepare a new location
from zmanim.util.geo_location import GeoLocation
location = GeoLocation('Lakewood, NJ', 40.0721087, -74.2400243, 'America/New_York', elevation=15)
location
#=> zmanim.util.geo_location.GeoLocation(name='Lakewood, NJ', latitude=40.0721087, longitude=-74.2400243, time_zone=tzfile('/usr/share/zoneinfo/America/New_York'), elevation=15.0)
# Initialize a new ZmanimCalendar object, passing a specific location and date
from datetime import date
calendar = ZmanimCalendar(geo_location=location, date=date(2017, 12, 15))
calendar
#=> zmanim.zmanim_calendar.ZmanimCalendar(candle_lighting_offset=18, geo_location=zmanim.util.geo_location.GeoLocation(name='Lakewood, NJ', latitude=40.0721087, longitude=-74.2400243, time_zone=tzfile('/usr/share/zoneinfo/America/New_York'), elevation=15.0), date=datetime.date(2017, 12, 15), calculator=<zmanim.util.noaa_calculator.NOAACalculator object at 0x10bbf7828>)
# Calculate Sof Zman Krias Shma for that location/date per the opinion of GR"A
calendar.sof_zman_shma_gra()
#=> datetime.datetime(2017, 12, 15, 9, 32, 9, 383390, tzinfo=tzfile('/usr/share/zoneinfo/America/New_York'))
```
#### Date Calculations
```python
# Initialize a new JewishDate object with today's date
from zmanim.hebrew_calendar.jewish_date import JewishDate
date = JewishDate()
date
#=> <zmanim.hebrew_calendar.jewish_date.JewishDate gregorian_date=datetime.date(2018, 8, 26), jewish_date=(5778, 6, 15), day_of_week=1, molad_hours=0, molad_minutes=0, molad_chalakim=0>
# Calculate the JewishDate from 25 days ago
date - 25
#=> <zmanim.hebrew_calendar.jewish_date.JewishDate gregorian_date=datetime.date(2018, 8, 1), jewish_date=(5778, 5, 20), day_of_week=4, molad_hours=0, molad_minutes=0, molad_chalakim=0>
# Initialize a new JewishCalendar object for Pesach of this Jewish calendar year
from zmanim.hebrew_calendar.jewish_calendar import JewishCalendar
pesach = JewishCalendar(date.jewish_year, 1, 15)
pesach
#=> <zmanim.hebrew_calendar.jewish_calendar.JewishCalendar in_israel=False, gregorian_date=datetime.date(2018, 3, 31), jewish_date=(5778, 1, 15), day_of_week=7, molad_hours=0, molad_minutes=0, molad_chalakim=0>
pesach.significant_day()
#=> 'pesach'
pesach.is_yom_tov_assur_bemelacha()
#=> True
```
There is much more functionality included than demonstrated here. Feel free to experiment or read the source code to learn more!
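The `date - 25` arithmetic shown above is ordinary operator overloading; a toy
stand-in over a Gregorian date illustrates the pattern (this is not zmanim's
actual `JewishDate` implementation):

```python
from datetime import date, timedelta

class SimpleDate:
    """Toy date wrapper supporting the same +/- int pattern as JewishDate."""

    def __init__(self, gregorian: date):
        self.gregorian = gregorian

    def __add__(self, days: int) -> "SimpleDate":
        return SimpleDate(self.gregorian + timedelta(days=days))

    def __sub__(self, days: int) -> "SimpleDate":
        return SimpleDate(self.gregorian - timedelta(days=days))

d = SimpleDate(date(2018, 8, 26))
print((d - 25).gregorian)  # 2018-08-01, matching the JewishDate example above
```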
---
## Contributing
Bug reports and pull requests are welcome on GitHub at https://github.com/pinnymz/python-zmanim. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [Contributor Covenant](http://contributor-covenant.org) code of conduct.
zmapio: reading and writing ZMAP Plus Grid files
------------------------------------------------
|Linux Build Status| |Windows Build Status| |PyPI version|
.. |Linux Build Status| image:: https://travis-ci.com/abduhbm/zmapio.svg?branch=master
:target: https://travis-ci.com/abduhbm/zmapio
.. |Windows Build Status| image:: https://github.com/abduhbm/zmapio/workflows/Windows%20CI/badge.svg?branch=master
:target: https://github.com/abduhbm/zmapio/actions?query=workflow%3A%22Windows+CI%22
.. |PyPI version| image:: https://img.shields.io/pypi/v/zmapio.svg
:target: https://pypi.python.org/pypi/zmapio
To install:
===========
.. code:: bash
$ pip install zmapio
Basic usage of zmapio
=====================
.. code:: python
import matplotlib.pyplot as plt
import numpy as np
from zmapio import ZMAPGrid
.. code:: python
%matplotlib inline
Reading a ZMAP file:
.. code:: python
z_file = ZMAPGrid('./examples/NSLCU.dat')
Accessing the comments header:
.. code:: python
for c in z_file.comments:
print(c)
.. parsed-literal::
Landmark Zmap grid file name: .\DATA\NSLCU.dat
Created/converted by Oasis Montaj, Geosoft Inc.
Plotting the grid data:
.. code:: python
z_file.plot()
.. image:: https://raw.githubusercontent.com/abduhbm/zmapio/master/_static/output_9_1.png
Counts for rows and columns:
.. code:: python
z_file.no_cols, z_file.no_rows
.. parsed-literal::
(435, 208)
Shape for z-values:
.. code:: python
z_file.z_values.shape
.. parsed-literal::
(208, 435)
Exporting to CSV file:
.. code:: python
z_file.to_csv('./output/output.csv')
.. code:: bash
head ./output/output.csv
.. parsed-literal::
-630000.0,2621000.0,-16481.9570313
-630000.0,2618000.0,-16283.9033203
-630000.0,2615000.0,-16081.5751953
-630000.0,2612000.0,-15856.7861328
-630000.0,2609000.0,-15583.7167969
-630000.0,2606000.0,-15255.734375
-630000.0,2603000.0,-14869.3769531
-630000.0,2600000.0,-14426.1513672
-630000.0,2597000.0,-13915.8769531
-630000.0,2594000.0,-13340.4677734
Exporting to WKT file:
.. code:: python
z_file.to_wkt('./output/output.wkt', precision=2)
Exporting to GeoJSON file:
.. code:: python
z_file.to_geojson('./output/output.json')
Exporting to Pandas Dataframe:
.. code:: python
df = z_file.to_dataframe()
.. code:: python
df.Z.describe()
.. parsed-literal::
count 90480.000000
mean -5244.434235
std 4692.845490
min -16691.371094
25% -10250.590088
50% -4003.433105
75% -1320.896881
max 2084.417969
Name: Z, dtype: float64
Write a new ZMAP file in 3-nodes-per-line format:
.. code:: python
z_file.write('./output/test.zmap', nodes_per_line=3)
.. code:: bash
head ./output/test.zmap
.. parsed-literal::
! Landmark Zmap grid file name: .\DATA\NSLCU.dat
! Created/converted by Oasis Montaj, Geosoft Inc.
@.\DATA\NSLCU.dat, GRID, 3
20, 1e+30, , 7, 1
208, 435, -630000.0, 672000.0, 2000000.0, 2621000.0
0.0, 0.0, 0.0
@
-16481.9570313 -16283.9033203 -16081.5751953
-15856.7861328 -15583.7167969 -15255.7343750
-14869.3769531 -14426.1513672 -13915.8769531
Creating a ZMAP object from a string:

.. code:: python

    z_text = """
    !
    ! File created by DMBTools2.GridFileFormats.ZmapPlusFile
    !
    @GRID FILE, GRID, 4
    20, -9999.0000000, , 7, 1
    6, 4, 0, 200, 0, 300
    0.0, 0.0, 0.0
    @
    -9999.0000000 -9999.0000000 3.0000000 32.0000000
    88.0000000 13.0000000
    -9999.0000000 20.0000000 8.0000000 42.0000000
    75.0000000 5.0000000
    5.0000000 100.0000000 35.0000000 50.0000000
    27.0000000 1.0000000
    2.0000000 36.0000000 10.0000000 6.0000000
    9.0000000 -9999.0000000
    """
    z_t = ZMAPGrid(z_text)
    z_t.plot()

.. image:: https://raw.githubusercontent.com/abduhbm/zmapio/master/_static/output_28_1.png
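This grid declares ``-9999.0`` as its null value (second field of the header line), so those nodes stand for missing data. When working on raw values directly, null nodes are typically masked before computing statistics (a sketch with NumPy; zmapio's readers handle nulls themselves):

```python
import numpy as np

# First record of the grid above: two null nodes, four valid ones
z = np.array([-9999.0, -9999.0, 3.0, 32.0, 88.0, 13.0])

masked = np.ma.masked_values(z, -9999.0)
print(masked.count())        # valid nodes
print(float(masked.mean()))  # mean over valid nodes only
```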
Adding a colorbar and colormap using matplotlib:

.. code:: python

    z_obj = ZMAPGrid('./examples/NStopo.dat')
    fig = plt.figure(figsize=(12, 6))
    z_obj.plot(cmap='jet')
    plt.colorbar()

.. image:: https://raw.githubusercontent.com/abduhbm/zmapio/master/_static/output_30_1.png

Creating a new ZMAP object from a 2D NumPy array with shape (no_cols,
no_rows):

.. code:: python

    z_val = z_obj.z_values
    print('Z-values shape: ', z_val.shape)
    new_zgrid = ZMAPGrid(z_values=z_val, min_x=-630000.0000, max_x=672000.0000,
                         min_y=2000000.0000, max_y=2621000.0000)

.. parsed-literal::

    Z-values shape:  (435, 208)
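Only the bounding box and the array shape are needed to place every node: with 435 columns spanning x = -630,000 ... 672,000 and 208 rows spanning y = 2,000,000 ... 2,621,000, the node spacing works out to 3,000 units on both axes, matching the 3,000-unit y steps visible in the CSV output earlier. A sketch of reconstructing the axes:

```python
import numpy as np

min_x, max_x = -630000.0, 672000.0
min_y, max_y = 2000000.0, 2621000.0
n_cols, n_rows = 435, 208

# Node coordinates are evenly spaced across the bounding box
x = np.linspace(min_x, max_x, n_cols)
y = np.linspace(min_y, max_y, n_rows)

print(float(x[1] - x[0]))  # column spacing
print(float(y[1] - y[0]))  # row spacing
```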
.. code:: python

    new_zgrid.plot(cmap='gist_earth')

.. image:: https://raw.githubusercontent.com/abduhbm/zmapio/master/_static/output_33_1.png

Customizing how a ZMAP file is written:

.. code:: python

    new_zgrid.comments = ['this is', 'a test']
    new_zgrid.nodes_per_line = 4
    new_zgrid.field_width = 15
    new_zgrid.decimal_places = 3
    new_zgrid.name = 'test'
    new_zgrid.write('./output/new_z.dat')

.. code:: bash

    head ./output/new_z.dat

.. parsed-literal::

    !this is
    !a test
    @test, GRID, 4
    15, 1e+30, , 3, 1
    208, 435, -630000.0, 672000.0, 2000000.0, 2621000.0
    0.0, 0.0, 0.0
    @
    -67.214 -67.570 -67.147 -69.081
    -73.181 -74.308 -72.766 -72.034
    -70.514 -68.555 -66.195 -62.776
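The ``field_width``/``decimal_places`` attributes correspond to plain fixed-width float formatting: with width 15 and 3 decimal places, each node occupies a 15-character column. A sketch of that formatting (an approximation, not zmapio's exact writer):

```python
values = [-67.214, -67.570, -67.147, -69.081]

# Width-15 fields with 3 decimal places, 4 nodes per line
line = "".join(f"{v:15.3f}" for v in values)
print(repr(line))  # four 15-character fields
print(len(line))   # 60
```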
References
==========

* https://lists.osgeo.org/pipermail/gdal-dev/2011-June/029173.html
* https://gist.github.com/wassname/526d5fde3f3cbeb67da8
* Saltus, R.W. and Bird, K.J., 2003. Digital depth horizon compilations of the Alaskan North Slope and adjacent arctic regions. U.S. Geological Survey data release: https://doi.org/10.3133/ofr03230
| zmapio | /zmapio-0.6.0.tar.gz/zmapio-0.6.0/README.rst | README.rst |
zmats
==============================
[//]: # (Badges)
[](https://travis-ci.com/ReactionMechanismGenerator/zmats)
[](https://codecov.io/gh/ReactionMechanismGenerator/zmats/branch/master)
[](http://opensource.org/licenses/MIT)
A Z-matrix module written in Python
### Copyright
Copyright (c) 2020, Alon Grinberg Dana
#### Acknowledgements
Project based on the
[Computational Molecular Science Python Cookiecutter](https://github.com/molssi/cookiecutter-cms) version 1.1.
| zmats | /zmats-0.1.0.tar.gz/zmats-0.1.0/README.md | README.md |
from __future__ import print_function
try:
import configparser
except ImportError:
import ConfigParser as configparser
import errno
import json
import os
import re
import subprocess
import sys
class VersioneerConfig:
"""Container for Versioneer configuration parameters."""
def get_root():
"""Get the project root directory.
We require that all commands are run from the project root, i.e. the
directory that contains setup.py, setup.cfg, and versioneer.py .
"""
root = os.path.realpath(os.path.abspath(os.getcwd()))
setup_py = os.path.join(root, "setup.py")
versioneer_py = os.path.join(root, "versioneer.py")
if not (os.path.exists(setup_py) or os.path.exists(versioneer_py)):
# allow 'python path/to/setup.py COMMAND'
root = os.path.dirname(os.path.realpath(os.path.abspath(sys.argv[0])))
setup_py = os.path.join(root, "setup.py")
versioneer_py = os.path.join(root, "versioneer.py")
if not (os.path.exists(setup_py) or os.path.exists(versioneer_py)):
err = ("Versioneer was unable to find the project root directory. "
"Versioneer requires setup.py to be executed from "
"its immediate directory (like 'python setup.py COMMAND'), "
"or in a way that lets it use sys.argv[0] to find the root "
"(like 'python path/to/setup.py COMMAND').")
raise VersioneerBadRootError(err)
try:
# Certain runtime workflows (setup.py install/develop in a setuptools
# tree) execute all dependencies in a single python process, so
# "versioneer" may be imported multiple times, and python's shared
# module-import table will cache the first one. So we can't use
# os.path.dirname(__file__), as that will find whichever
# versioneer.py was first imported, even in later projects.
me = os.path.realpath(os.path.abspath(__file__))
me_dir = os.path.normcase(os.path.splitext(me)[0])
vsr_dir = os.path.normcase(os.path.splitext(versioneer_py)[0])
if me_dir != vsr_dir:
print("Warning: build in %s is using versioneer.py from %s"
% (os.path.dirname(me), versioneer_py))
except NameError:
pass
return root
def get_config_from_root(root):
"""Read the project setup.cfg file to determine Versioneer config."""
# This might raise EnvironmentError (if setup.cfg is missing), or
# configparser.NoSectionError (if it lacks a [versioneer] section), or
# configparser.NoOptionError (if it lacks "VCS="). See the docstring at
# the top of versioneer.py for instructions on writing your setup.cfg .
setup_cfg = os.path.join(root, "setup.cfg")
parser = configparser.SafeConfigParser()
with open(setup_cfg, "r") as f:
parser.readfp(f)
VCS = parser.get("versioneer", "VCS") # mandatory
def get(parser, name):
if parser.has_option("versioneer", name):
return parser.get("versioneer", name)
return None
cfg = VersioneerConfig()
cfg.VCS = VCS
cfg.style = get(parser, "style") or ""
cfg.versionfile_source = get(parser, "versionfile_source")
cfg.versionfile_build = get(parser, "versionfile_build")
cfg.tag_prefix = get(parser, "tag_prefix")
if cfg.tag_prefix in ("''", '""'):
cfg.tag_prefix = ""
cfg.parentdir_prefix = get(parser, "parentdir_prefix")
cfg.verbose = get(parser, "verbose")
return cfg
class NotThisMethod(Exception):
"""Exception raised if a method is not valid for the current scenario."""
# these dictionaries contain VCS-specific tools
LONG_VERSION_PY = {}
HANDLERS = {}
def register_vcs_handler(vcs, method): # decorator
"""Decorator to mark a method as the handler for a particular VCS."""
def decorate(f):
"""Store f in HANDLERS[vcs][method]."""
if vcs not in HANDLERS:
HANDLERS[vcs] = {}
HANDLERS[vcs][method] = f
return f
return decorate
def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False,
env=None):
"""Call the given command(s)."""
assert isinstance(commands, list)
p = None
for c in commands:
try:
dispcmd = str([c] + args)
# remember shell=False, so use git.cmd on windows, not just git
p = subprocess.Popen([c] + args, cwd=cwd, env=env,
stdout=subprocess.PIPE,
stderr=(subprocess.PIPE if hide_stderr
else None))
break
except EnvironmentError:
e = sys.exc_info()[1]
if e.errno == errno.ENOENT:
continue
if verbose:
print("unable to run %s" % dispcmd)
print(e)
return None, None
else:
if verbose:
print("unable to find command, tried %s" % (commands,))
return None, None
stdout = p.communicate()[0].strip()
if sys.version_info[0] >= 3:
stdout = stdout.decode()
if p.returncode != 0:
if verbose:
print("unable to run %s (error)" % dispcmd)
print("stdout was %s" % stdout)
return None, p.returncode
return stdout, p.returncode
LONG_VERSION_PY['git'] = '''
# This file helps to compute a version number in source trees obtained from
# git-archive tarball (such as those provided by githubs download-from-tag
# feature). Distribution tarballs (built by setup.py sdist) and build
# directories (produced by setup.py build) will contain a much shorter file
# that just contains the computed version number.
# This file is released into the public domain. Generated by
# versioneer-0.18 (https://github.com/warner/python-versioneer)
"""Git implementation of _version.py."""
import errno
import os
import re
import subprocess
import sys
def get_keywords():
"""Get the keywords needed to look up the version information."""
# these strings will be replaced by git during git-archive.
# setup.py/versioneer.py will grep for the variable names, so they must
# each be defined on a line of their own. _version.py will just call
# get_keywords().
git_refnames = "%(DOLLAR)sFormat:%%d%(DOLLAR)s"
git_full = "%(DOLLAR)sFormat:%%H%(DOLLAR)s"
git_date = "%(DOLLAR)sFormat:%%ci%(DOLLAR)s"
keywords = {"refnames": git_refnames, "full": git_full, "date": git_date}
return keywords
class VersioneerConfig:
"""Container for Versioneer configuration parameters."""
def get_config():
"""Create, populate and return the VersioneerConfig() object."""
# these strings are filled in when 'setup.py versioneer' creates
# _version.py
cfg = VersioneerConfig()
cfg.VCS = "git"
cfg.style = "%(STYLE)s"
cfg.tag_prefix = "%(TAG_PREFIX)s"
cfg.parentdir_prefix = "%(PARENTDIR_PREFIX)s"
cfg.versionfile_source = "%(VERSIONFILE_SOURCE)s"
cfg.verbose = False
return cfg
class NotThisMethod(Exception):
"""Exception raised if a method is not valid for the current scenario."""
LONG_VERSION_PY = {}
HANDLERS = {}
def register_vcs_handler(vcs, method): # decorator
"""Decorator to mark a method as the handler for a particular VCS."""
def decorate(f):
"""Store f in HANDLERS[vcs][method]."""
if vcs not in HANDLERS:
HANDLERS[vcs] = {}
HANDLERS[vcs][method] = f
return f
return decorate
def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False,
env=None):
"""Call the given command(s)."""
assert isinstance(commands, list)
p = None
for c in commands:
try:
dispcmd = str([c] + args)
# remember shell=False, so use git.cmd on windows, not just git
p = subprocess.Popen([c] + args, cwd=cwd, env=env,
stdout=subprocess.PIPE,
stderr=(subprocess.PIPE if hide_stderr
else None))
break
except EnvironmentError:
e = sys.exc_info()[1]
if e.errno == errno.ENOENT:
continue
if verbose:
print("unable to run %%s" %% dispcmd)
print(e)
return None, None
else:
if verbose:
print("unable to find command, tried %%s" %% (commands,))
return None, None
stdout = p.communicate()[0].strip()
if sys.version_info[0] >= 3:
stdout = stdout.decode()
if p.returncode != 0:
if verbose:
print("unable to run %%s (error)" %% dispcmd)
print("stdout was %%s" %% stdout)
return None, p.returncode
return stdout, p.returncode
def versions_from_parentdir(parentdir_prefix, root, verbose):
"""Try to determine the version from the parent directory name.
Source tarballs conventionally unpack into a directory that includes both
the project name and a version string. We will also support searching up
two directory levels for an appropriately named parent directory
"""
rootdirs = []
for i in range(3):
dirname = os.path.basename(root)
if dirname.startswith(parentdir_prefix):
return {"version": dirname[len(parentdir_prefix):],
"full-revisionid": None,
"dirty": False, "error": None, "date": None}
else:
rootdirs.append(root)
root = os.path.dirname(root) # up a level
if verbose:
print("Tried directories %%s but none started with prefix %%s" %%
(str(rootdirs), parentdir_prefix))
raise NotThisMethod("rootdir doesn't start with parentdir_prefix")
@register_vcs_handler("git", "get_keywords")
def git_get_keywords(versionfile_abs):
"""Extract version information from the given file."""
# the code embedded in _version.py can just fetch the value of these
# keywords. When used from setup.py, we don't want to import _version.py,
# so we do it with a regexp instead. This function is not used from
# _version.py.
keywords = {}
try:
f = open(versionfile_abs, "r")
for line in f.readlines():
if line.strip().startswith("git_refnames ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["refnames"] = mo.group(1)
if line.strip().startswith("git_full ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["full"] = mo.group(1)
if line.strip().startswith("git_date ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["date"] = mo.group(1)
f.close()
except EnvironmentError:
pass
return keywords
@register_vcs_handler("git", "keywords")
def git_versions_from_keywords(keywords, tag_prefix, verbose):
"""Get version information from git keywords."""
if not keywords:
raise NotThisMethod("no keywords at all, weird")
date = keywords.get("date")
if date is not None:
# git-2.2.0 added "%%cI", which expands to an ISO-8601 -compliant
# datestamp. However we prefer "%%ci" (which expands to an "ISO-8601
# -like" string, which we must then edit to make compliant), because
# it's been around since git-1.5.3, and it's too difficult to
# discover which version we're using, or to work around using an
# older one.
date = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
refnames = keywords["refnames"].strip()
if refnames.startswith("$Format"):
if verbose:
print("keywords are unexpanded, not using")
raise NotThisMethod("unexpanded keywords, not a git-archive tarball")
refs = set([r.strip() for r in refnames.strip("()").split(",")])
# starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of
# just "foo-1.0". If we see a "tag: " prefix, prefer those.
TAG = "tag: "
tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)])
if not tags:
# Either we're using git < 1.8.3, or there really are no tags. We use
# a heuristic: assume all version tags have a digit. The old git %%d
# expansion behaves like git log --decorate=short and strips out the
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
# "stabilization", as well as "HEAD" and "master".
tags = set([r for r in refs if re.search(r'\d', r)])
if verbose:
print("discarding '%%s', no digits" %% ",".join(refs - tags))
if verbose:
print("likely tags: %%s" %% ",".join(sorted(tags)))
for ref in sorted(tags):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix):]
if verbose:
print("picking %%s" %% r)
return {"version": r,
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": None,
"date": date}
# no suitable tags, so version is "0+unknown", but full hex is still there
if verbose:
print("no suitable tags, using unknown + full revision id")
return {"version": "0+unknown",
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": "no suitable tags", "date": None}
@register_vcs_handler("git", "pieces_from_vcs")
def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
"""Get version from 'git describe' in the root of the source tree.
This only gets called if the git-archive 'subst' keywords were *not*
expanded, and _version.py hasn't already been rewritten with a short
version string, meaning we're inside a checked out source tree.
"""
GITS = ["git"]
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root,
hide_stderr=True)
if rc != 0:
if verbose:
print("Directory %%s not under git control" %% root)
raise NotThisMethod("'git rev-parse --git-dir' returned error")
# if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty]
# if there isn't one, this yields HEX[-dirty] (no NUM)
describe_out, rc = run_command(GITS, ["describe", "--tags", "--dirty",
"--always", "--long",
"--match", "%%s*" %% tag_prefix],
cwd=root)
# --long was added in git-1.5.5
if describe_out is None:
raise NotThisMethod("'git describe' failed")
describe_out = describe_out.strip()
full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root)
if full_out is None:
raise NotThisMethod("'git rev-parse' failed")
full_out = full_out.strip()
pieces = {}
pieces["long"] = full_out
pieces["short"] = full_out[:7] # maybe improved later
pieces["error"] = None
# parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty]
# TAG might have hyphens.
git_describe = describe_out
# look for -dirty suffix
dirty = git_describe.endswith("-dirty")
pieces["dirty"] = dirty
if dirty:
git_describe = git_describe[:git_describe.rindex("-dirty")]
# now we have TAG-NUM-gHEX or HEX
if "-" in git_describe:
# TAG-NUM-gHEX
mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe)
if not mo:
# unparseable. Maybe git-describe is misbehaving?
pieces["error"] = ("unable to parse git-describe output: '%%s'"
%% describe_out)
return pieces
# tag
full_tag = mo.group(1)
if not full_tag.startswith(tag_prefix):
if verbose:
fmt = "tag '%%s' doesn't start with prefix '%%s'"
print(fmt %% (full_tag, tag_prefix))
pieces["error"] = ("tag '%%s' doesn't start with prefix '%%s'"
%% (full_tag, tag_prefix))
return pieces
pieces["closest-tag"] = full_tag[len(tag_prefix):]
# distance: number of commits since tag
pieces["distance"] = int(mo.group(2))
# commit: short hex revision ID
pieces["short"] = mo.group(3)
else:
# HEX: no tags
pieces["closest-tag"] = None
count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"],
cwd=root)
pieces["distance"] = int(count_out) # total number of commits
# commit date: see ISO-8601 comment in git_versions_from_keywords()
date = run_command(GITS, ["show", "-s", "--format=%%ci", "HEAD"],
cwd=root)[0].strip()
pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
return pieces
def plus_or_dot(pieces):
"""Return a + if we don't already have one, else return a ."""
if "+" in pieces.get("closest-tag", ""):
return "."
return "+"
def render_pep440(pieces):
"""Build up version string, with post-release "local version identifier".
Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you
get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty
Exceptions:
1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += plus_or_dot(pieces)
rendered += "%%d.g%%s" %% (pieces["distance"], pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
else:
# exception #1
rendered = "0+untagged.%%d.g%%s" %% (pieces["distance"],
pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
return rendered
def render_pep440_pre(pieces):
"""TAG[.post.devDISTANCE] -- No -dirty.
Exceptions:
1: no tags. 0.post.devDISTANCE
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += ".post.dev%%d" %% pieces["distance"]
else:
# exception #1
rendered = "0.post.dev%%d" %% pieces["distance"]
return rendered
def render_pep440_post(pieces):
"""TAG[.postDISTANCE[.dev0]+gHEX] .
The ".dev0" means dirty. Note that .dev0 sorts backwards
(a dirty tree will appear "older" than the corresponding clean one),
but you shouldn't be releasing software with -dirty anyways.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%%d" %% pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += plus_or_dot(pieces)
rendered += "g%%s" %% pieces["short"]
else:
# exception #1
rendered = "0.post%%d" %% pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += "+g%%s" %% pieces["short"]
return rendered
def render_pep440_old(pieces):
"""TAG[.postDISTANCE[.dev0]] .
The ".dev0" means dirty.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%%d" %% pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
else:
# exception #1
rendered = "0.post%%d" %% pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
return rendered
def render_git_describe(pieces):
"""TAG[-DISTANCE-gHEX][-dirty].
Like 'git describe --tags --dirty --always'.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += "-%%d-g%%s" %% (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render_git_describe_long(pieces):
"""TAG-DISTANCE-gHEX[-dirty].
Like 'git describe --tags --dirty --always --long'.
The distance/hash is unconditional.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
rendered += "-%%d-g%%s" %% (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render(pieces, style):
"""Render the given version pieces into the requested style."""
if pieces["error"]:
return {"version": "unknown",
"full-revisionid": pieces.get("long"),
"dirty": None,
"error": pieces["error"],
"date": None}
if not style or style == "default":
style = "pep440" # the default
if style == "pep440":
rendered = render_pep440(pieces)
elif style == "pep440-pre":
rendered = render_pep440_pre(pieces)
elif style == "pep440-post":
rendered = render_pep440_post(pieces)
elif style == "pep440-old":
rendered = render_pep440_old(pieces)
elif style == "git-describe":
rendered = render_git_describe(pieces)
elif style == "git-describe-long":
rendered = render_git_describe_long(pieces)
else:
raise ValueError("unknown style '%%s'" %% style)
return {"version": rendered, "full-revisionid": pieces["long"],
"dirty": pieces["dirty"], "error": None,
"date": pieces.get("date")}
def get_versions():
"""Get version information or return default if unable to do so."""
# I am in _version.py, which lives at ROOT/VERSIONFILE_SOURCE. If we have
# __file__, we can work backwards from there to the root. Some
# py2exe/bbfreeze/non-CPython implementations don't do __file__, in which
# case we can only use expanded keywords.
cfg = get_config()
verbose = cfg.verbose
try:
return git_versions_from_keywords(get_keywords(), cfg.tag_prefix,
verbose)
except NotThisMethod:
pass
try:
root = os.path.realpath(__file__)
# versionfile_source is the relative path from the top of the source
# tree (where the .git directory might live) to this file. Invert
# this to find the root from __file__.
for i in cfg.versionfile_source.split('/'):
root = os.path.dirname(root)
except NameError:
return {"version": "0+unknown", "full-revisionid": None,
"dirty": None,
"error": "unable to find root of source tree",
"date": None}
try:
pieces = git_pieces_from_vcs(cfg.tag_prefix, root, verbose)
return render(pieces, cfg.style)
except NotThisMethod:
pass
try:
if cfg.parentdir_prefix:
return versions_from_parentdir(cfg.parentdir_prefix, root, verbose)
except NotThisMethod:
pass
return {"version": "0+unknown", "full-revisionid": None,
"dirty": None,
"error": "unable to compute version", "date": None}
'''
@register_vcs_handler("git", "get_keywords")
def git_get_keywords(versionfile_abs):
"""Extract version information from the given file."""
# the code embedded in _version.py can just fetch the value of these
# keywords. When used from setup.py, we don't want to import _version.py,
# so we do it with a regexp instead. This function is not used from
# _version.py.
keywords = {}
try:
f = open(versionfile_abs, "r")
for line in f.readlines():
if line.strip().startswith("git_refnames ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["refnames"] = mo.group(1)
if line.strip().startswith("git_full ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["full"] = mo.group(1)
if line.strip().startswith("git_date ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["date"] = mo.group(1)
f.close()
except EnvironmentError:
pass
return keywords
@register_vcs_handler("git", "keywords")
def git_versions_from_keywords(keywords, tag_prefix, verbose):
"""Get version information from git keywords."""
if not keywords:
raise NotThisMethod("no keywords at all, weird")
date = keywords.get("date")
if date is not None:
# git-2.2.0 added "%cI", which expands to an ISO-8601 -compliant
# datestamp. However we prefer "%ci" (which expands to an "ISO-8601
# -like" string, which we must then edit to make compliant), because
# it's been around since git-1.5.3, and it's too difficult to
# discover which version we're using, or to work around using an
# older one.
date = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
refnames = keywords["refnames"].strip()
if refnames.startswith("$Format"):
if verbose:
print("keywords are unexpanded, not using")
raise NotThisMethod("unexpanded keywords, not a git-archive tarball")
refs = set([r.strip() for r in refnames.strip("()").split(",")])
# starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of
# just "foo-1.0". If we see a "tag: " prefix, prefer those.
TAG = "tag: "
tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)])
if not tags:
# Either we're using git < 1.8.3, or there really are no tags. We use
# a heuristic: assume all version tags have a digit. The old git %d
# expansion behaves like git log --decorate=short and strips out the
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
# "stabilization", as well as "HEAD" and "master".
tags = set([r for r in refs if re.search(r'\d', r)])
if verbose:
print("discarding '%s', no digits" % ",".join(refs - tags))
if verbose:
print("likely tags: %s" % ",".join(sorted(tags)))
for ref in sorted(tags):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix):]
if verbose:
print("picking %s" % r)
return {"version": r,
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": None,
"date": date}
# no suitable tags, so version is "0+unknown", but full hex is still there
if verbose:
print("no suitable tags, using unknown + full revision id")
return {"version": "0+unknown",
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": "no suitable tags", "date": None}
@register_vcs_handler("git", "pieces_from_vcs")
def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
"""Get version from 'git describe' in the root of the source tree.
This only gets called if the git-archive 'subst' keywords were *not*
expanded, and _version.py hasn't already been rewritten with a short
version string, meaning we're inside a checked out source tree.
"""
GITS = ["git"]
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root,
hide_stderr=True)
if rc != 0:
if verbose:
print("Directory %s not under git control" % root)
raise NotThisMethod("'git rev-parse --git-dir' returned error")
# if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty]
# if there isn't one, this yields HEX[-dirty] (no NUM)
describe_out, rc = run_command(GITS, ["describe", "--tags", "--dirty",
"--always", "--long",
"--match", "%s*" % tag_prefix],
cwd=root)
# --long was added in git-1.5.5
if describe_out is None:
raise NotThisMethod("'git describe' failed")
describe_out = describe_out.strip()
full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root)
if full_out is None:
raise NotThisMethod("'git rev-parse' failed")
full_out = full_out.strip()
pieces = {}
pieces["long"] = full_out
pieces["short"] = full_out[:7] # maybe improved later
pieces["error"] = None
# parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty]
# TAG might have hyphens.
git_describe = describe_out
# look for -dirty suffix
dirty = git_describe.endswith("-dirty")
pieces["dirty"] = dirty
if dirty:
git_describe = git_describe[:git_describe.rindex("-dirty")]
# now we have TAG-NUM-gHEX or HEX
if "-" in git_describe:
# TAG-NUM-gHEX
mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe)
if not mo:
# unparseable. Maybe git-describe is misbehaving?
pieces["error"] = ("unable to parse git-describe output: '%s'"
% describe_out)
return pieces
# tag
full_tag = mo.group(1)
if not full_tag.startswith(tag_prefix):
if verbose:
fmt = "tag '%s' doesn't start with prefix '%s'"
print(fmt % (full_tag, tag_prefix))
pieces["error"] = ("tag '%s' doesn't start with prefix '%s'"
% (full_tag, tag_prefix))
return pieces
pieces["closest-tag"] = full_tag[len(tag_prefix):]
# distance: number of commits since tag
pieces["distance"] = int(mo.group(2))
# commit: short hex revision ID
pieces["short"] = mo.group(3)
else:
# HEX: no tags
pieces["closest-tag"] = None
count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"],
cwd=root)
pieces["distance"] = int(count_out) # total number of commits
# commit date: see ISO-8601 comment in git_versions_from_keywords()
date = run_command(GITS, ["show", "-s", "--format=%ci", "HEAD"],
cwd=root)[0].strip()
pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
return pieces
def do_vcs_install(manifest_in, versionfile_source, ipy):
"""Git-specific installation logic for Versioneer.
For Git, this means creating/changing .gitattributes to mark _version.py
for export-subst keyword substitution.
"""
GITS = ["git"]
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
files = [manifest_in, versionfile_source]
if ipy:
files.append(ipy)
try:
me = __file__
if me.endswith(".pyc") or me.endswith(".pyo"):
me = os.path.splitext(me)[0] + ".py"
versioneer_file = os.path.relpath(me)
except NameError:
versioneer_file = "versioneer.py"
files.append(versioneer_file)
present = False
try:
f = open(".gitattributes", "r")
for line in f.readlines():
if line.strip().startswith(versionfile_source):
if "export-subst" in line.strip().split()[1:]:
present = True
f.close()
except EnvironmentError:
pass
if not present:
f = open(".gitattributes", "a+")
f.write("%s export-subst\n" % versionfile_source)
f.close()
files.append(".gitattributes")
run_command(GITS, ["add", "--"] + files)
def versions_from_parentdir(parentdir_prefix, root, verbose):
"""Try to determine the version from the parent directory name.
Source tarballs conventionally unpack into a directory that includes both
the project name and a version string. We will also support searching up
two directory levels for an appropriately named parent directory
"""
rootdirs = []
for i in range(3):
dirname = os.path.basename(root)
if dirname.startswith(parentdir_prefix):
return {"version": dirname[len(parentdir_prefix):],
"full-revisionid": None,
"dirty": False, "error": None, "date": None}
else:
rootdirs.append(root)
root = os.path.dirname(root) # up a level
if verbose:
print("Tried directories %s but none started with prefix %s" %
(str(rootdirs), parentdir_prefix))
raise NotThisMethod("rootdir doesn't start with parentdir_prefix")
SHORT_VERSION_PY = """
# This file was generated by 'versioneer.py' (0.18) from
# revision-control system data, or from the parent directory name of an
# unpacked source archive. Distribution tarballs contain a pre-generated copy
# of this file.
import json
version_json = '''
%s
''' # END VERSION_JSON
def get_versions():
return json.loads(version_json)
"""
def versions_from_file(filename):
"""Try to determine the version from _version.py if present."""
try:
with open(filename) as f:
contents = f.read()
except EnvironmentError:
raise NotThisMethod("unable to read _version.py")
mo = re.search(r"version_json = '''\n(.*)''' # END VERSION_JSON",
contents, re.M | re.S)
if not mo:
mo = re.search(r"version_json = '''\r\n(.*)''' # END VERSION_JSON",
contents, re.M | re.S)
if not mo:
raise NotThisMethod("no version_json in _version.py")
return json.loads(mo.group(1))
def write_to_version_file(filename, versions):
"""Write the given version number to the given _version.py file."""
os.unlink(filename)
contents = json.dumps(versions, sort_keys=True,
indent=1, separators=(",", ": "))
with open(filename, "w") as f:
f.write(SHORT_VERSION_PY % contents)
print("set %s to '%s'" % (filename, versions["version"]))
def plus_or_dot(pieces):
"""Return a + if we don't already have one, else return a ."""
if "+" in pieces.get("closest-tag", ""):
return "."
return "+"
def render_pep440(pieces):
"""Build up version string, with post-release "local version identifier".
Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you
get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty
Exceptions:
1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += plus_or_dot(pieces)
rendered += "%d.g%s" % (pieces["distance"], pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
else:
# exception #1
rendered = "0+untagged.%d.g%s" % (pieces["distance"],
pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
return rendered
def render_pep440_pre(pieces):
"""TAG[.post.devDISTANCE] -- No -dirty.
Exceptions:
1: no tags. 0.post.devDISTANCE
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += ".post.dev%d" % pieces["distance"]
else:
# exception #1
rendered = "0.post.dev%d" % pieces["distance"]
return rendered
def render_pep440_post(pieces):
"""TAG[.postDISTANCE[.dev0]+gHEX] .
The ".dev0" means dirty. Note that .dev0 sorts backwards
(a dirty tree will appear "older" than the corresponding clean one),
but you shouldn't be releasing software with -dirty anyways.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += plus_or_dot(pieces)
rendered += "g%s" % pieces["short"]
else:
# exception #1
rendered = "0.post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += "+g%s" % pieces["short"]
return rendered
def render_pep440_old(pieces):
"""TAG[.postDISTANCE[.dev0]] .
The ".dev0" means dirty.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
else:
# exception #1
rendered = "0.post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
return rendered
def render_git_describe(pieces):
"""TAG[-DISTANCE-gHEX][-dirty].
Like 'git describe --tags --dirty --always'.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render_git_describe_long(pieces):
"""TAG-DISTANCE-gHEX[-dirty].
Like 'git describe --tags --dirty --always --long'.
The distance/hash is unconditional.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render(pieces, style):
"""Render the given version pieces into the requested style."""
if pieces["error"]:
return {"version": "unknown",
"full-revisionid": pieces.get("long"),
"dirty": None,
"error": pieces["error"],
"date": None}
if not style or style == "default":
style = "pep440" # the default
if style == "pep440":
rendered = render_pep440(pieces)
elif style == "pep440-pre":
rendered = render_pep440_pre(pieces)
elif style == "pep440-post":
rendered = render_pep440_post(pieces)
elif style == "pep440-old":
rendered = render_pep440_old(pieces)
elif style == "git-describe":
rendered = render_git_describe(pieces)
elif style == "git-describe-long":
rendered = render_git_describe_long(pieces)
else:
raise ValueError("unknown style '%s'" % style)
return {"version": rendered, "full-revisionid": pieces["long"],
"dirty": pieces["dirty"], "error": None,
"date": pieces.get("date")}
class VersioneerBadRootError(Exception):
"""The project root directory is unknown or missing key files."""
def get_versions(verbose=False):
"""Get the project version from whatever source is available.
Returns dict with two keys: 'version' and 'full'.
"""
if "versioneer" in sys.modules:
# see the discussion in cmdclass.py:get_cmdclass()
del sys.modules["versioneer"]
root = get_root()
cfg = get_config_from_root(root)
assert cfg.VCS is not None, "please set [versioneer]VCS= in setup.cfg"
handlers = HANDLERS.get(cfg.VCS)
assert handlers, "unrecognized VCS '%s'" % cfg.VCS
verbose = verbose or cfg.verbose
assert cfg.versionfile_source is not None, \
"please set versioneer.versionfile_source"
assert cfg.tag_prefix is not None, "please set versioneer.tag_prefix"
versionfile_abs = os.path.join(root, cfg.versionfile_source)
# extract version from first of: _version.py, VCS command (e.g. 'git
# describe'), parentdir. This is meant to work for developers using a
# source checkout, for users of a tarball created by 'setup.py sdist',
# and for users of a tarball/zipball created by 'git archive' or github's
# download-from-tag feature or the equivalent in other VCSes.
get_keywords_f = handlers.get("get_keywords")
from_keywords_f = handlers.get("keywords")
if get_keywords_f and from_keywords_f:
try:
keywords = get_keywords_f(versionfile_abs)
ver = from_keywords_f(keywords, cfg.tag_prefix, verbose)
if verbose:
print("got version from expanded keyword %s" % ver)
return ver
except NotThisMethod:
pass
try:
ver = versions_from_file(versionfile_abs)
if verbose:
print("got version from file %s %s" % (versionfile_abs, ver))
return ver
except NotThisMethod:
pass
from_vcs_f = handlers.get("pieces_from_vcs")
if from_vcs_f:
try:
pieces = from_vcs_f(cfg.tag_prefix, root, verbose)
ver = render(pieces, cfg.style)
if verbose:
print("got version from VCS %s" % ver)
return ver
except NotThisMethod:
pass
try:
if cfg.parentdir_prefix:
ver = versions_from_parentdir(cfg.parentdir_prefix, root, verbose)
if verbose:
print("got version from parentdir %s" % ver)
return ver
except NotThisMethod:
pass
if verbose:
print("unable to compute version")
return {"version": "0+unknown", "full-revisionid": None,
"dirty": None, "error": "unable to compute version",
"date": None}
def get_version():
"""Get the short version string for this project."""
return get_versions()["version"]
def get_cmdclass():
"""Get the custom setuptools/distutils subclasses used by Versioneer."""
if "versioneer" in sys.modules:
del sys.modules["versioneer"]
# this fixes the "python setup.py develop" case (also 'install' and
# 'easy_install .'), in which subdependencies of the main project are
# built (using setup.py bdist_egg) in the same python process. Assume
# a main project A and a dependency B, which use different versions
# of Versioneer. A's setup.py imports A's Versioneer, leaving it in
# sys.modules by the time B's setup.py is executed, causing B to run
# with the wrong versioneer. Setuptools wraps the sub-dep builds in a
# sandbox that restores sys.modules to its pre-build state, so the
# parent is protected against the child's "import versioneer". By
# removing ourselves from sys.modules here, before the child build
# happens, we protect the child from the parent's versioneer too.
# Also see https://github.com/warner/python-versioneer/issues/52
cmds = {}
# we add "version" to both distutils and setuptools
from distutils.core import Command
class cmd_version(Command):
description = "report generated version string"
user_options = []
boolean_options = []
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
vers = get_versions(verbose=True)
print("Version: %s" % vers["version"])
print(" full-revisionid: %s" % vers.get("full-revisionid"))
print(" dirty: %s" % vers.get("dirty"))
print(" date: %s" % vers.get("date"))
if vers["error"]:
print(" error: %s" % vers["error"])
cmds["version"] = cmd_version
# we override "build_py" in both distutils and setuptools
#
# most invocation pathways end up running build_py:
# distutils/build -> build_py
# distutils/install -> distutils/build ->..
# setuptools/bdist_wheel -> distutils/install ->..
# setuptools/bdist_egg -> distutils/install_lib -> build_py
# setuptools/install -> bdist_egg ->..
# setuptools/develop -> ?
# pip install:
# copies source tree to a tempdir before running egg_info/etc
# if .git isn't copied too, 'git describe' will fail
# then does setup.py bdist_wheel, or sometimes setup.py install
# setup.py egg_info -> ?
# we override different "build_py" commands for both environments
if "setuptools" in sys.modules:
from setuptools.command.build_py import build_py as _build_py
else:
from distutils.command.build_py import build_py as _build_py
class cmd_build_py(_build_py):
def run(self):
root = get_root()
cfg = get_config_from_root(root)
versions = get_versions()
_build_py.run(self)
# now locate _version.py in the new build/ directory and replace
# it with an updated value
if cfg.versionfile_build:
target_versionfile = os.path.join(self.build_lib,
cfg.versionfile_build)
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile, versions)
cmds["build_py"] = cmd_build_py
if "cx_Freeze" in sys.modules: # cx_freeze enabled?
from cx_Freeze.dist import build_exe as _build_exe
# nczeczulin reports that py2exe won't like the pep440-style string
# as FILEVERSION, but it can be used for PRODUCTVERSION, e.g.
# setup(console=[{
# "version": versioneer.get_version().split("+", 1)[0], # FILEVERSION
# "product_version": versioneer.get_version(),
# ...
class cmd_build_exe(_build_exe):
def run(self):
root = get_root()
cfg = get_config_from_root(root)
versions = get_versions()
target_versionfile = cfg.versionfile_source
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile, versions)
_build_exe.run(self)
os.unlink(target_versionfile)
with open(cfg.versionfile_source, "w") as f:
LONG = LONG_VERSION_PY[cfg.VCS]
f.write(LONG %
{"DOLLAR": "$",
"STYLE": cfg.style,
"TAG_PREFIX": cfg.tag_prefix,
"PARENTDIR_PREFIX": cfg.parentdir_prefix,
"VERSIONFILE_SOURCE": cfg.versionfile_source,
})
cmds["build_exe"] = cmd_build_exe
del cmds["build_py"]
if 'py2exe' in sys.modules: # py2exe enabled?
try:
from py2exe.distutils_buildexe import py2exe as _py2exe # py3
except ImportError:
from py2exe.build_exe import py2exe as _py2exe # py2
class cmd_py2exe(_py2exe):
def run(self):
root = get_root()
cfg = get_config_from_root(root)
versions = get_versions()
target_versionfile = cfg.versionfile_source
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile, versions)
_py2exe.run(self)
os.unlink(target_versionfile)
with open(cfg.versionfile_source, "w") as f:
LONG = LONG_VERSION_PY[cfg.VCS]
f.write(LONG %
{"DOLLAR": "$",
"STYLE": cfg.style,
"TAG_PREFIX": cfg.tag_prefix,
"PARENTDIR_PREFIX": cfg.parentdir_prefix,
"VERSIONFILE_SOURCE": cfg.versionfile_source,
})
cmds["py2exe"] = cmd_py2exe
# we override different "sdist" commands for both environments
if "setuptools" in sys.modules:
from setuptools.command.sdist import sdist as _sdist
else:
from distutils.command.sdist import sdist as _sdist
class cmd_sdist(_sdist):
def run(self):
versions = get_versions()
self._versioneer_generated_versions = versions
# unless we update this, the command will keep using the old
# version
self.distribution.metadata.version = versions["version"]
return _sdist.run(self)
def make_release_tree(self, base_dir, files):
root = get_root()
cfg = get_config_from_root(root)
_sdist.make_release_tree(self, base_dir, files)
# now locate _version.py in the new base_dir directory
# (remembering that it may be a hardlink) and replace it with an
# updated value
target_versionfile = os.path.join(base_dir, cfg.versionfile_source)
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile,
self._versioneer_generated_versions)
cmds["sdist"] = cmd_sdist
return cmds
CONFIG_ERROR = """
setup.cfg is missing the necessary Versioneer configuration. You need
a section like:
[versioneer]
VCS = git
style = pep440
versionfile_source = src/myproject/_version.py
versionfile_build = myproject/_version.py
tag_prefix =
parentdir_prefix = myproject-
You will also need to edit your setup.py to use the results:
import versioneer
setup(version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(), ...)
Please read the docstring in ./versioneer.py for configuration instructions,
edit setup.cfg, and re-run the installer or 'python versioneer.py setup'.
"""
SAMPLE_CONFIG = """
# See the docstring in versioneer.py for instructions. Note that you must
# re-run 'versioneer.py setup' after changing this section, and commit the
# resulting files.
[versioneer]
#VCS = git
#style = pep440
#versionfile_source =
#versionfile_build =
#tag_prefix =
#parentdir_prefix =
"""
INIT_PY_SNIPPET = """
from ._version import get_versions
__version__ = get_versions()['version']
del get_versions
"""
def do_setup():
"""Main VCS-independent setup function for installing Versioneer."""
root = get_root()
try:
cfg = get_config_from_root(root)
except (EnvironmentError, configparser.NoSectionError,
configparser.NoOptionError) as e:
if isinstance(e, (EnvironmentError, configparser.NoSectionError)):
print("Adding sample versioneer config to setup.cfg",
file=sys.stderr)
with open(os.path.join(root, "setup.cfg"), "a") as f:
f.write(SAMPLE_CONFIG)
print(CONFIG_ERROR, file=sys.stderr)
return 1
print(" creating %s" % cfg.versionfile_source)
with open(cfg.versionfile_source, "w") as f:
LONG = LONG_VERSION_PY[cfg.VCS]
f.write(LONG % {"DOLLAR": "$",
"STYLE": cfg.style,
"TAG_PREFIX": cfg.tag_prefix,
"PARENTDIR_PREFIX": cfg.parentdir_prefix,
"VERSIONFILE_SOURCE": cfg.versionfile_source,
})
ipy = os.path.join(os.path.dirname(cfg.versionfile_source),
"__init__.py")
if os.path.exists(ipy):
try:
with open(ipy, "r") as f:
old = f.read()
except EnvironmentError:
old = ""
if INIT_PY_SNIPPET not in old:
print(" appending to %s" % ipy)
with open(ipy, "a") as f:
f.write(INIT_PY_SNIPPET)
else:
print(" %s unmodified" % ipy)
else:
print(" %s doesn't exist, ok" % ipy)
ipy = None
# Make sure both the top-level "versioneer.py" and versionfile_source
# (PKG/_version.py, used by runtime code) are in MANIFEST.in, so
# they'll be copied into source distributions. Pip won't be able to
# install the package without this.
manifest_in = os.path.join(root, "MANIFEST.in")
simple_includes = set()
try:
with open(manifest_in, "r") as f:
for line in f:
if line.startswith("include "):
for include in line.split()[1:]:
simple_includes.add(include)
except EnvironmentError:
pass
# That doesn't cover everything MANIFEST.in can do
# (http://docs.python.org/2/distutils/sourcedist.html#commands), so
# it might give some false negatives. Appending redundant 'include'
# lines is safe, though.
if "versioneer.py" not in simple_includes:
print(" appending 'versioneer.py' to MANIFEST.in")
with open(manifest_in, "a") as f:
f.write("include versioneer.py\n")
else:
print(" 'versioneer.py' already in MANIFEST.in")
if cfg.versionfile_source not in simple_includes:
print(" appending versionfile_source ('%s') to MANIFEST.in" %
cfg.versionfile_source)
with open(manifest_in, "a") as f:
f.write("include %s\n" % cfg.versionfile_source)
else:
print(" versionfile_source already in MANIFEST.in")
# Make VCS-specific changes. For git, this means creating/changing
# .gitattributes to mark _version.py for export-subst keyword
# substitution.
do_vcs_install(manifest_in, cfg.versionfile_source, ipy)
return 0
def scan_setup_py():
"""Validate the contents of setup.py against Versioneer's expectations."""
found = set()
setters = False
errors = 0
with open("setup.py", "r") as f:
for line in f.readlines():
if "import versioneer" in line:
found.add("import")
if "versioneer.get_cmdclass()" in line:
found.add("cmdclass")
if "versioneer.get_version()" in line:
found.add("get_version")
if "versioneer.VCS" in line:
setters = True
if "versioneer.versionfile_source" in line:
setters = True
if len(found) != 3:
print("")
print("Your setup.py appears to be missing some important items")
print("(but I might be wrong). Please make sure it has something")
print("roughly like the following:")
print("")
print(" import versioneer")
print(" setup( version=versioneer.get_version(),")
print(" cmdclass=versioneer.get_cmdclass(), ...)")
print("")
errors += 1
if setters:
print("You should remove lines like 'versioneer.VCS = ' and")
print("'versioneer.versionfile_source = ' . This configuration")
print("now lives in setup.cfg, and should be removed from setup.py")
print("")
errors += 1
return errors
if __name__ == "__main__":
cmd = sys.argv[1]
if cmd == "setup":
errors = do_setup()
errors += scan_setup_py()
if errors:
sys.exit(1)
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Gaussian(Distribution):
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu=0, sigma=1):
Distribution.__init__(self, mu, sigma)
def calculate_mean(self):
"""Function to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
avg = 1.0 * sum(self.data) / len(self.data)
self.mean = avg
return self.mean
def calculate_stdev(self, sample=True):
"""Function to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
if sample:
n = len(self.data) - 1
else:
n = len(self.data)
mean = self.calculate_mean()
sigma = 0
for d in self.data:
sigma += (d - mean) ** 2
sigma = math.sqrt(sigma / n)
self.stdev = sigma
return self.stdev
def plot_histogram(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.hist(self.data)
plt.title('Histogram of Data')
plt.xlabel('data')
plt.ylabel('count')
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2)
def plot_histogram_pdf(self, n_spaces = 50):
"""Function to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
axes[1].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
result = Gaussian()
result.mean = self.mean + other.mean
result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
return result
def __repr__(self):
"""Function to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}".format(self.mean, self.stdev)
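The __add__ method above encodes the rule that, for independent Gaussians, means add and variances (not standard deviations) add. A quick numeric check of that rule with hypothetical values:

```python
import math

# Two hypothetical independent Gaussians.
mu_a, sd_a = 10.0, 3.0
mu_b, sd_b = 5.0, 4.0

# Mean of the sum is the sum of means; variances add, so stdevs combine in quadrature.
mu_sum = mu_a + mu_b
sd_sum = math.sqrt(sd_a ** 2 + sd_b ** 2)
print(mu_sum, sd_sum)  # → 15.0 5.0
```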
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Binomial(Distribution):
""" Binomial distribution class for calculating and
visualizing a Binomial distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats to be extracted from the data file
p (float) representing the probability of an event occurring
n (int) number of trials
"""
def __init__(self, prob=.5, size=20):
self.n = size
self.p = prob
Distribution.__init__(self, self.calculate_mean(), self.calculate_stdev())
def calculate_mean(self):
"""Function to calculate the mean from p and n
Args:
None
Returns:
float: mean of the data set
"""
self.mean = self.p * self.n
return self.mean
def calculate_stdev(self):
"""Function to calculate the standard deviation from p and n.
Args:
None
Returns:
float: standard deviation of the data set
"""
self.stdev = math.sqrt(self.n * self.p * (1 - self.p))
return self.stdev
def replace_stats_with_data(self):
"""Function to calculate p and n from the data set
Args:
None
Returns:
float: the p value
float: the n value
"""
self.n = len(self.data)
self.p = 1.0 * sum(self.data) / len(self.data)
self.mean = self.calculate_mean()
self.stdev = self.calculate_stdev()
def plot_bar(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.bar(x = ['0', '1'], height = [(1 - self.p) * self.n, self.p * self.n])
plt.title('Bar Chart of Data')
plt.xlabel('outcome')
plt.ylabel('count')
def pdf(self, k):
"""Probability mass function calculator for the binomial distribution.
Args:
k (int): number of successes for which to calculate the probability
Returns:
float: probability mass function output
"""
a = math.factorial(self.n) / (math.factorial(k) * (math.factorial(self.n - k)))
b = (self.p ** k) * (1 - self.p) ** (self.n - k)
return a * b
def plot_bar_pdf(self):
"""Function to plot the pdf of the binomial distribution
Args:
None
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
x = []
y = []
# calculate the x values to visualize
for i in range(self.n + 1):
x.append(i)
y.append(self.pdf(i))
# make the plots
plt.bar(x, y)
plt.title('Distribution of Outcomes')
plt.ylabel('Probability')
plt.xlabel('Outcome')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Binomial distributions with equal p
Args:
other (Binomial): Binomial instance
Returns:
Binomial: Binomial distribution
"""
assert self.p == other.p, 'p values are not equal'
result = Binomial()
result.n = self.n + other.n
result.p = self.p
result.calculate_mean()
result.calculate_stdev()
return result
def __repr__(self):
"""Function to output the characteristics of the Binomial instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}, p {}, n {}".\
format(self.mean, self.stdev, self.p, self.n)
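The pdf method above computes the binomial probability mass function, n-choose-k times p^k times (1-p)^(n-k). The same formula in a self-contained form using math.comb (Python 3.8+):

```python
import math

def binom_pmf(n, p, k):
    # Probability of exactly k successes in n independent trials.
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

# Two fair coin flips: exactly one head has probability 0.5.
print(binom_pmf(2, 0.5, 1))  # → 0.5
```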
.. image:: https://travis-ci.org/flyte/zmcat.svg?branch=develop
:target: https://travis-ci.org/flyte/zmcat
ZMCat
=====
A simple command line tool to test ZMQ push/pull/pub/sub sockets. Based on https://github.com/lucasdicioccio/zmcat
Installation
============
::
pip install zmcat
Usage
=====
::
zmcat <socket_type> <uri> [--bind] [--key]
socket_type
***********
The type of ZMQ socket you require (pub, sub, push, pull).
uri
***
The URI to bind/connect to. For example, tcp://127.0.0.1:5555 or ipc:///tmp/mysocket
--bind
******
Bind to an interface instead of connecting to an existing socket. Relevant only for socket_type push and pull.
--key
*****
The key to use for a pub/sub socket.
Examples
========
::
zmcat pub tcp://*:5555
zmcat sub tcp://localhost:5555
zmcat pub tcp://*:5555 --key=mykey
zmcat sub tcp://localhost:5555 --key=mykey
zmcat push tcp://localhost:5555
zmcat pull tcp://*:5555 --bind
zmcat push tcp://*:5555 --bind
zmcat pull tcp://localhost:5555
<div align=left>
<img src = 'https://img.shields.io/badge/latest_version-1.0.5-blue.svg'>
</div>
# zmcli Command-Line Tool
## Installation
zmcli is written in Python 3, so a Python 3 environment must be installed first.
Downloading libs and pkgs depends on aria2, which can be installed with `brew install aria2`.
`pip3 install zmcli`
> `pip` usually installs to `/usr/bin`, but the latest company-issued machines install `pip3` packages to `~/Library/Python/3.8/bin`. In that case, add `export PATH=$HOME/bin:~/Library/Python/3.8/bin:/usr/local/bin:$PATH` to `~/.zshrc`; otherwise the command-line tool will fail with `Command not found`.
## Usage
Run `zmcli -h` to see the help menu for the basic commands.
On first run, enter your Artifactory User Name and API Key when prompted; these two values are saved to a config file at `~/.zmcli_conf`.
### Examples
**Run the commands inside the project directory**
```bash
feature
├── Bin
├── client
├── common
├── dependencies
├── ltt
├── mac-client
├── thirdparties
├── vendors
└── zoombase
```
With a project laid out as above, run the `zmcli` commands from the feature directory.
> The download-pkg command can be run outside the project directory.
#### Check out all repos to a given branch
`zmcli batch-checkout feature-client-5.12`
Check out the branch and then pull the latest code for every repo:
`zmcli batch-checkout feature-client-5.12 --force-pull=1`
#### Show the build list for a repo
`zmcli show-builds feature-client-5.12 --arch mac_arm64 --num 3`
`--arch` specifies the architecture; defaults to `mac_x86_64`
`--num` specifies how many entries to show; defaults to `10`
```bash
+------------------------------------------------------+
| Latest builds for feature-client-5.12(mac_arm64) |
+-------------+---------------------------+------------+
| Version | Created At | Arch_type |
+-------------+---------------------------+------------+
| 5.12.0.491 | 2022-07-20T06:23:35.683Z | mac_arm64 |
| 5.12.0.484 | 2022-07-19T06:49:09.939Z | mac_arm64 |
| 5.12.0.461 | 2022-07-13T09:18:49.346Z | mac_arm64 |
+-------------+---------------------------+------------+
```
#### Roll back to a build's commit and replace libs
`zmcli rollback feature-client-5.12 --arch mac_arm64 --build 5.12.0.491`
`--build` rolls back to the given build; if omitted, rolls back to the repo's latest build
`--arch` specifies the architecture; defaults to `mac_x86_64`
#### Check out all repos to a branch, pull the latest code, and replace with that branch's latest libs
`zmcli update-all feature-client-5.12 --arch mac_arm64`
`--arch` specifies the architecture; defaults to `mac_x86_64`
#### Only replace libs
`zmcli replace-lib client-5.x-release --arch=mac_arm64 --build=5.12.0.9654`
`--build` uses the given build; if omitted, replaces with the repo's latest build
`--arch` specifies the architecture; defaults to `mac_x86_64`
#### Only execute git pull on all repos
`zmcli batch-pull`
#### Download a pkg file
`zmcli download-pkg release-client-5.12.x --arch=mac_x86_64 --build 5.12.6.11709`
`--build` specifies the build; if omitted, downloads the latest build's pkg
`--arch` specifies the architecture; defaults to `mac_x86_64`
`--no-log` downloads a log-disabled package
#### Lib download cache
Libs downloaded from Artifactory are stored in the Downloaded_libs folder as a disk cache to avoid re-downloading. When necessary, delete everything in that folder and re-download from Artifactory.
> After each download, cached lib files whose creation time is more than 7 days old are deleted.
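The seven-day cache cleanup described above could be sketched like this (a hypothetical helper keyed on modification time, not zmcli's actual implementation):

```python
import os
import time

def clear_cache_files(folder, max_age_days=7):
    # Delete cached lib archives older than max_age_days.
    cutoff = time.time() - max_age_days * 24 * 3600
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
```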
u"""
Usage:
zmcli (-a|--all)
zmcli (-h|--help)
zmcli (-v|--version)
zmcli batch-checkout <branch> [--force-pull=<force_pull>]
zmcli rollback <branch> [--arch=<arch_type>] [--build=<build_version>]
zmcli show-builds <branch> [--arch=<arch_type>] [--num=<num_of_items>]
zmcli update-all <branch> [--arch=<arch_type>]
zmcli replace-lib <branch> [--arch=<arch_type>] [--build=<build_version>]
zmcli download-pkg <branch> [--arch=<arch_type>] [--build=<build_version>] [--no-log]
zmcli batch-pull
Options:
-h --help Show Help doc.
-v --version Show Version.
-a --all show all params
--arch=<arch_type> assign an arch type
--no-log download/replace a log-disabled pkg/lib
--num=<num_of_items> number of items will be showed
--build=<build_version> assign an build version
--force-pull=<force_pull> execute git pull after batch_checkout
"""
__version__="1.0.5"
import os
import json
import requests
from tqdm import tqdm
from prettytable import PrettyTable
import zipfile
import time
from docopt import docopt
from functools import cmp_to_key
import git
from colorama import Back, Fore, Style, init
import hashlib
import time
# Options below are no need to edit
artifacts_end_point = 'https://artifacts.corp.zoom.us/artifactory' # Artifactory EndPoint No need to edit
artifacts_repo = 'client-generic-dev'
local_repo_names = ['zoombase', 'common', 'ltt', 'client', 'thirdparties', 'mac-client'] # Repos that should checkout.
def version():
return "version:" + __version__
def CalcSha1(filepath):
with open(filepath,'rb') as f:
sha1obj = hashlib.sha1()
sha1obj.update(f.read())
hash = sha1obj.hexdigest()
return hash
def CalcSha256(filepath):
with open(filepath,'rb') as f:
sha256obj = hashlib.sha256()
sha256obj.update(f.read())
hash = sha256obj.hexdigest()
return hash
def cmp(build_info_1, build_info_2):
t1 = time.mktime(time.strptime(build_info_1['created'], "%Y-%m-%dT%H:%M:%S.%fZ"))
t2 = time.mktime(time.strptime(build_info_2['created'], "%Y-%m-%dT%H:%M:%S.%fZ"))
if t1 < t2:
return 1
elif t1 == t2:
return 0
return -1
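The cmp function above orders builds newest-first by parsing the Artifactory timestamp. With datetime parsing, the same ordering can be expressed as a sort key instead of a comparator (hypothetical build records shown):

```python
from datetime import datetime

# Hypothetical build records using the timestamp format parsed above.
builds = [
    {"created": "2022-07-13T09:18:49.346Z"},
    {"created": "2022-07-20T06:23:35.683Z"},
]
# Newest first: sort descending on the parsed creation time.
builds.sort(key=lambda b: datetime.strptime(b["created"], "%Y-%m-%dT%H:%M:%S.%fZ"),
            reverse=True)
print(builds[0]["created"])  # → 2022-07-20T06:23:35.683Z
```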
class CommandLineTool:
def __init__(self, api_key, user_name, work_space_path):
self.api_key = api_key
self.user_name = user_name
self.work_space_path = work_space_path
def checkout_repo(self, build_info):
print(Fore.MAGENTA + '🤖 Start checking out repos...')
repo_infos = build_info['repo_infos']
for info in repo_infos:
repo_name = info['repo']
branch_name = info['branch']
commit_hash = info['commit_hash']
path = self.work_space_path + repo_name
if not os.access(path, os.W_OK):
print(Fore.RED + '🤖 ' + path + ' is not writable')
return False
repo = git.Repo.init(path)
unstaged_list = [item.a_path for item in repo.index.diff(None)]
if len(unstaged_list) > 0:
print(Fore.RED + "You have unstaged files on repo " + repo_name)
for line in unstaged_list:
print('\t' + Fore.YELLOW + line)
untracked_list = repo.untracked_files
return False
print(Fore.MAGENTA + "[" +repo_name + "] Start checking out to " + commit_hash + '...')
res = repo.git.checkout(commit_hash)
print(res)
self.execute_gen_proto_sh()
return True
def get_latest_lib_build_info(self, lib):
path = '/' + lib['repo'] + '/' + lib['path'] + '/' + lib['name']
headers = {
'content-type' : 'application/json',
'X-JFrog-Art-Api' : self.api_key
}
params = {
'deep' : 0,
'listFolders' : 1,
'mdTimestamps' : 1,
'includeRootPath' : 0,
}
r = requests.get(artifacts_end_point + '/api/storage' + path + '?list', headers=headers, params=params)
if r.status_code == 200:
response = r.json()
files = response['files']
if len(files) > 0:
build_info = {};
for file in files:
uri = file['uri']
resource_url = artifacts_end_point + path + uri
if str(uri).endswith('build_info.json'):
r = requests.get(resource_url, headers=headers)
data = r.json()
build_version = data['env']['BUILDVERSION']
build_info['build_version'] = build_version
commits = data['commits']
repo_infos = []
for commit in commits:
target = str(commit['target']).lower()
commit_hash = commit['commit']
branch = commit['branch']
if target in local_repo_names:
info = {'repo': target, 'branch' : branch, 'commit_hash' : commit_hash}
repo_infos.append(info)
build_info['repo_infos'] = repo_infos
if str(uri).endswith('libs_' + lib['name'] + '.zip'):
build_info['lib_url'] = resource_url
build_info['lib_sha1'] = file['sha1']
build_info['lib_sha2'] = file['sha2']
build_info['lib_size'] = file['size']
return build_info
return None
print(Fore.RED, r.status_code, r.text)
return None
def get_latest_pkg_info(self, lib):
path = '/' + lib['repo'] + '/' + lib['path'] + '/' + lib['name']
return self.get_file_info(path)
def get_file_info(self, url):
headers = {
'content-type' : 'application/json',
'X-JFrog-Art-Api' : self.api_key
}
params = {
'deep' : 0,
'listFolders' : 1,
'mdTimestamps' : 1,
'includeRootPath' : 0,
}
r = requests.get(artifacts_end_point + '/api/storage' + url + '?list', headers=headers, params=params)
if r.status_code == 200:
response = r.json()
files = response['files']
if len(files) > 0:
pkgs = []
for file in files:
uri = file['uri']
isFolder = file['folder']
if isFolder:
temp_pkgs = self.get_file_info(url+uri)
for temp_pkg in temp_pkgs:
pkgs.append(temp_pkg)
if str(uri).endswith(('.pkg', '.ipa', '.apk', '.exe', '.msi')):
resource_url = artifacts_end_point + url + uri
pkgs.append({'uri' : uri[1:], 'pkg_url' : resource_url})
return pkgs
print(Fore.RED + 'No files found')
return None
print(Fore.RED, r.status_code, r.text)
return None
def download_by_aria(self, url, sha1, sha2):
print(Fore.MAGENTA + '🤖 Start downloading...')
print(Fore.CYAN + 'Download Link: ' + Fore.YELLOW + url)
target_folder = self.work_space_path + 'Downloaded_libs'
if not os.path.exists(target_folder):
os.system('mkdir ' + target_folder)
self.clear_cache_files(target_folder)
file_name = str(url).split('/')[-1]
target_path = target_folder + '/' + file_name
if os.path.exists(target_path) and sha1 and sha2:
if CalcSha1(target_path) == sha1 and CalcSha256(target_path) == sha2:
print(Fore.YELLOW + 'File already exists.')
return target_path
os.system('rm -rf ' + target_path)
        print(Fore.CYAN + file_name + ' will be downloaded to ' + target_folder)
cmd = 'aria2c --http-user ' + self.user_name + ' --http-passwd ' + '\"' + self.api_key + '\"' + ' -d ' + target_folder + ' --max-concurrent-downloads 10 --max-connection-per-server 15 --split 10 --min-split-size 3M ' + url
os.system(cmd)
return target_path
def download_pkg_by_aria(self, url):
print(Fore.MAGENTA + '🤖 Start downloading...')
print(Fore.CYAN + 'Download Link: ' + Fore.YELLOW + url)
target_folder = os.getcwd()
file_name = str(url).split('/')[-1]
        print(Fore.CYAN + file_name + ' will be downloaded to ' + target_folder)
cmd = 'aria2c --http-user ' + self.user_name + ' --http-passwd ' + '\"' + self.api_key + '\"' + ' -d ' + target_folder + ' --max-concurrent-downloads 10 --max-connection-per-server 15 --split 10 --min-split-size 3M ' + url
os.system(cmd)
def unzip_lib(self, zip_path, release_path):
if not os.path.exists(zip_path):
return
print(Fore.MAGENTA + '🤖 Start replacing libs...')
cmd = 'unzip -o ' + zip_path + ' -d ' + release_path
os.system(cmd)
print(Fore.GREEN + "Finished replacing libs")
def execute_gen_proto_sh(self):
sh_file = self.work_space_path + 'common/proto_files/generate_protos_all.sh'
if os.path.exists(sh_file):
os.system('sh ' + sh_file)
def batch_checkout(self, branch, pull):
print(Fore.MAGENTA + '🤖 Start checking out all repos to ' + branch + '.\n')
for dir in os.listdir(self.work_space_path):
if dir in local_repo_names:
path = self.work_space_path + dir
if not os.access(path, os.W_OK):
print(Fore.RED + path + ' is not writable')
return False
repo = git.Repo.init(path)
unstaged_list = [item.a_path for item in repo.index.diff(None)]
if len(unstaged_list) > 0:
print(Fore.RED + "You have unstaged files on repo " + dir)
for line in unstaged_list:
print('\t' + Fore.YELLOW + line)
untracked_list = repo.untracked_files
return False
repo.git.checkout(branch)
                print(Fore.GREEN + '🤖 [' + dir + '] - ' + "Successfully checked out to " + branch + '!')
if pull:
print(Fore.MAGENTA + '[' + dir + '] Start pulling...')
res = repo.git.pull()
print(res + '\n')
self.execute_gen_proto_sh()
return True
def batch_pull(self):
for dir in os.listdir(self.work_space_path):
if dir in local_repo_names:
path = self.work_space_path + dir
if not os.access(path, os.W_OK):
print(Fore.RED + path + ' is not writable')
return False
repo = git.Repo.init(path)
print(Fore.MAGENTA + '[' + dir + '] Start pulling...')
res = repo.git.pull()
print(res + '\n')
print(Fore.GREEN + 'All repos are up to date')
def replace_lib(self, branch, build_version, arch_type, roll_back):
if roll_back:
print(Fore.MAGENTA + '🤖 Start rolling back...')
else:
print(Fore.MAGENTA + '🤖 Start getting lib info...')
list = self.get_latest_builds(branch, arch_type, 0)
roll_back_build = list[0]
if build_version:
flag = False
for build_info in list:
if build_info['name'] == build_version:
roll_back_build = build_info
flag = True
if not flag:
print(Fore.RED + 'Didn\'t find build_version ' + build_version + ' for arch_type ' + arch_type + ' on branch ' + branch)
return
            print(Fore.GREEN + 'Found build_version ' + build_version + ' for arch_type ' + arch_type + ' on branch ' + branch)
print(Fore.MAGENTA + "🤖 Start getting lib url for " + roll_back_build['name'])
build = self.get_latest_lib_build_info(roll_back_build)
if build is None:
print(Fore.RED + "Failed to get lib url for " + roll_back_build['name'])
return
if roll_back:
if self.checkout_repo(build):
release_path = self.release_path(arch_type)
dest_path = self.download_by_aria(url=build['lib_url'], sha1=build['lib_sha1'], sha2=build['lib_sha2'])
if dest_path is None:
return
self.unzip_lib(dest_path, release_path)
else:
release_path = self.release_path(arch_type)
dest_path = self.download_by_aria(url=build['lib_url'], sha1=build['lib_sha1'], sha2=build['lib_sha2'])
if dest_path is None:
return
self.unzip_lib(dest_path, release_path)
def download_pkg(self, branch, build_version, arch_type, disable_log):
        if arch_type is None:
list_arch = self.get_archs(branch)
if len(list_arch) <= 0:
print(Fore.RED + 'No packages found')
print(Fore.GREEN + 'Please select a folder:')
for i in range(len(list_arch)):
print('\t', i+1, ' - ' + list_arch[i])
index = int(input(Fore.YELLOW + 'Index of folder:'))
while index > len(list_arch) or index <= 0:
print(Fore.RED + 'Incorrect input')
index = int(input(Fore.YELLOW + 'Index of folder:'))
arch_type = list_arch[index-1]
list = self.get_latest_builds(branch, arch_type, 0)
taget_build = list[0]
if build_version:
flag = False
for build_info in list:
if build_info['name'] == build_version:
taget_build = build_info
flag = True
if not flag:
print(Fore.RED + 'Didn\'t find build_version ' + build_version + ' for arch_type ' + arch_type + ' on branch ' + branch)
return
            print(Fore.GREEN + 'Found build_version ' + build_version + ' for arch_type ' + arch_type + ' on branch ' + branch)
        # Find the latest log-disabled pkg
elif disable_log:
success = False
for build in list:
if 'properties' not in build.keys():
continue
properties = build['properties']
for property in properties:
if property['key'] == 'build.log' and property['value'] == 'n':
taget_build = build
success = True
break
if success:
break
if not success:
print(Fore.RED + 'Didn\'t find a no-log version for arch_type ' + arch_type + ' on branch ' + branch)
return
print(Fore.MAGENTA + "🤖 Start getting pkg url for " + taget_build['name'])
pkgs = self.get_latest_pkg_info(taget_build)
if len(pkgs) == 0:
            print(Fore.RED + "No packages found for target build " + taget_build['name'])
return
print(Fore.GREEN + 'Choose which package you want to download')
for i in range(len(pkgs)):
print(Fore.CYAN + '\t', i+1, '-', pkgs[i]['uri'])
index = int(input(Fore.YELLOW + 'Index of package:'))
while(index > len(pkgs) or index <= 0):
print(Fore.RED + 'Incorrect input')
index = int(input(Fore.YELLOW + 'Index of package:'))
pkg_url = pkgs[index - 1]['pkg_url']
self.download_pkg_by_aria(pkg_url)
def get_latest_builds(self, branch, arch_type, num):
params = {
'$or' : [{
'type' : 'folder'
}, {
'type' : 'file'
}],
'repo' : {
'$eq' : 'client-generic-dev'
},
'path' : {
'$eq' : 'zoom/client/' + branch + '/' + arch_type
}
}
headers = {
'content-type' : 'text/plain',
'X-JFrog-Art-Api' : self.api_key
}
data = 'items.find('+json.dumps(params)+').include(\"property\").transitive()'
r = requests.post(artifacts_end_point+'/api/search/aql', data=data, headers=headers)
if r.status_code == 200:
json_data = json.loads(r.text)
results = json_data['results']
results = sorted(results, key=cmp_to_key(cmp))
res = []
if num > 0:
results = results[:num]
for build_info in results:
res.append(build_info)
return res
return None
def get_archs(self, branch):
params = {
'$or' : [{
'type' : 'folder'
}, {
'type' : 'file'
}],
'repo' : {
'$eq' : 'client-generic-dev'
},
'path' : {
'$eq' : 'zoom/client/' + branch
}
}
headers = {
'content-type' : 'text/plain',
'X-JFrog-Art-Api' : self.api_key
}
data = 'items.find('+json.dumps(params)+').include(\"property\").transitive()'
r = requests.post(artifacts_end_point+'/api/search/aql', data=data, headers=headers)
if r.status_code == 200:
json_data = json.loads(r.text)
results = json_data['results']
results = sorted(results, key=cmp_to_key(cmp))
res = []
for build_info in results:
res.append(build_info['name'])
return res
return None
def update_repos(self, branch, arch_type):
if branch:
if not self.batch_checkout(branch,True):
return
list = self.get_latest_builds(branch, arch_type, 0)
if list is None:
return
roll_back_build = list[0]
build = self.get_latest_lib_build_info(roll_back_build)
if build is None:
return
release_path = self.release_path(arch_type)
dest_path = self.download_by_aria(url=build['lib_url'], sha1=build['lib_sha1'], sha2=build['lib_sha2'])
if dest_path is None:
return
self.unzip_lib(dest_path, release_path)
def clear_cache_files(self, dir):
for filePath in os.listdir(dir):
file = dir + '/' + filePath
if os.path.isfile(file):
t = os.path.getctime(file)
if int(time.time()) - int(t) > 7 * 3600 * 24:
os.system('rm -rf ' + file)
def release_path(self, arch_type):
release_path = self.work_space_path + 'Bin/'
if arch_type == 'mac_x86_64':
release_path += 'Mac/Release'
else:
release_path += 'Mac_arm64/Release'
return release_path
def cmd(conf):
if conf is None:
conf_file_path = os.path.expanduser('~') + '/.zmcli_conf'
with open(conf_file_path,'r') as load_f:
conf = json.load(load_f)
load_f.close()
is_at_work_space = False
for dir in os.listdir():
if dir in local_repo_names:
is_at_work_space = True
break
args = docopt(__doc__)
cli = CommandLineTool(api_key=conf['artifactory_api_key'], user_name=conf['artifactory_user_name'], work_space_path=(os.getcwd() + '/'))
if args.get('-h') or args.get('--help'):
print(__doc__)
return
elif args.get("-a") or args.get("--all"):
print(args)
return
elif args.get('-v') or args.get('--version'):
print(__version__)
return
elif args.get('download-pkg'):
branch_name = args.get('<branch>')
build_version = args.get('--build')
arch_type = args.get('--arch') if args.get('--arch') else None
disable_log = args.get('--no-log')
if branch_name:
cli.download_pkg(branch_name, build_version, arch_type, disable_log)
return
if args.get('show-builds'):
branch_name = args.get('<branch>')
arch_type = args.get('--arch') if args.get('--arch') else 'mac_x86_64'
num = int(args.get('--num')) if args.get('--num') else 10
if branch_name:
print(Fore.MAGENTA + '🤖 Getting latest build info for ' + branch_name + '(' + arch_type + ')')
res = []
if arch_type:
res = cli.get_latest_builds(branch_name, arch_type, num)
else:
res = cli.get_latest_builds(branch_name, None, num)
table = PrettyTable(['Version','Created At', 'Arch_type'], title='Latest builds for ' + branch_name + '(' + arch_type + ')')
if len(res) <= 0:
print(Fore.RED + '🤖 Did not find latest build info for ' + branch_name + '(' + arch_type + ')')
for build_info in res:
table.add_row([build_info['name'], build_info['created'], arch_type])
print(table)
if not is_at_work_space:
print(Fore.RED + 'Please cd to your work space dir')
return
if args.get('batch-checkout'):
branch_name = args.get('<branch>')
if branch_name:
if args.get('--force-pull'):
cli.batch_checkout(branch_name, True)
else:
cli.batch_checkout(branch_name, False)
elif args.get('rollback'):
branch_name = args.get('<branch>')
build_version = args.get('--build')
arch_type = args.get('--arch') if args.get('--arch') else 'mac_x86_64'
if branch_name:
cli.replace_lib(branch_name, build_version, arch_type, True)
elif args.get('update-all'):
branch_name = args.get('<branch>')
arch_type = args.get('--arch') if args.get('--arch') else 'mac_x86_64'
if branch_name:
cli.update_repos(branch=branch_name, arch_type=arch_type)
elif args.get('replace-lib'):
branch_name = args.get('<branch>')
build_version = args.get('--build')
arch_type = args.get('--arch') if args.get('--arch') else 'mac_x86_64'
if branch_name:
cli.replace_lib(branch_name, build_version, arch_type, False)
elif args.get('batch-pull'):
cli.batch_pull()
def main():
init(autoreset=True)
conf_file_path = os.path.expanduser('~') + '/.zmcli_conf'
if not os.path.exists(conf_file_path):
print(Fore.MAGENTA + '🤖 Setup config file...')
artifactory_user_name = input(Fore.CYAN + 'Your artifactory user name:\n')
artifactory_api_key = input(Fore.CYAN + 'Your artifactory api key:\n')
conf = { 'artifactory_user_name' : artifactory_user_name,
'artifactory_api_key' : artifactory_api_key}
with open(conf_file_path,"w") as f:
json.dump(conf,f)
print(Fore.YELLOW + "Config file is at '~/.zmcli_conf'")
f.close()
cmd(conf)
else:
cmd(None)
if __name__ == '__main__':
    main()
<img align="right" width="33.3%" src="logo.png">
# Zmei code generator
[](https://codeclimate.com/github/zmei-framework/generator/maintainability)
[](https://codecov.io/gh/zmei-framework/generator)
[](https://travis-ci.org/zmei-framework/generator)
[](#contributors)
[](https://pypi.org/project/zmei-cli/)
[](https://pypi.org/project/zmei-cli/)
Zmei generator started as a simple scaffolding tool for Django. Now it is a powerful
code generator that automates routine work and gently integrates generated sources into your custom code.
## Features
- Quickly create a configured Django project
- Compact DSL for generating Django views and models
- PyCharm plugin for syntax highlighting
- Automatic django-admin generation including complex ones: polymorphic, inlines, translatable
- Powerful CRUD generator
- React application generator (TODO: not documented) + channels websocket integration
- Flutter application generator (TODO: not documented)
- Automatic generation of REST endpoints
- Flexible plugin system
## Installation
The generator is written in Python. Install it with the pip packaging tool (preferably in a virtual environment):
`pip install zmei-cli`
## Quick start
Create a file "main.col" with a page declaration:
[index: /]
@markdown {
# Hello, world!
}
And run the zmei command:
zmei gen up
In less than a minute you will have all dependencies installed and a Django application
with a hello-world page at http://127.0.0.1:8000/
## Next steps
See [documentation](https://zmei-framework.com/generator/).
Read the tests: [unit](https://github.com/zmei-framework/generator/tree/master/tests/unit),
[end2end](https://github.com/zmei-framework/generator/tree/master/tests/end2end).
## Help
Ask on [https://www.reddit.com/r/zmei/](https://www.reddit.com/r/zmei/) if you need
any help, or file an issue.
### Articles
- [Django application in 60 seconds](https://zmei-framework.com/generator/blog/0_Zmei_quick_start.html)
- [Django REST & Admin with a nice theme in 60 seconds](https://zmei-framework.com/generator/blog/1_Zmei_quick_start_2.html#sec-2)
## Plugins
- [Flutter plugin](https://github.com/zmei-framework/zmei-gen-flutter)
## Contribution
Contributions are highly appreciated. The project is huge and it is hard to develop alone.
Ways you can contribute:
- Improve [documentation](https://github.com/zmei-framework/generator/tree/master/docs)
- Test, write bug reports, propose features
- Add new features
- Fix bugs, improve code base, add your features
- Write articles and blog posts about your experience using the generator
- Write plugins, improve existing ones
## Authors
- Alex Rudakov @ribozz
## Contributors
Thanks goes to these wonderful people ([emoji key](https://github.com/all-contributors/all-contributors#emoji-key)):
<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
<!-- prettier-ignore -->
<table><tr><td align="center"><a href="https://github.com/rumjantsevv"><img src="https://avatars3.githubusercontent.com/u/41049901?v=4" width="100px;" alt="rumjantsevv"/><br /><sub><b>rumjantsevv</b></sub></a><br /><a href="https://github.com/zmei-framework/generator/issues?q=author%3Arumjantsevv" title="Bug reports">🐛</a> <a href="#userTesting-rumjantsevv" title="User Testing">📓</a></td><td align="center"><a href="https://github.com/EternalSoul"><img src="https://avatars0.githubusercontent.com/u/1576654?v=4" width="100px;" alt="Vladimir Bezuglõi"/><br /><sub><b>Vladimir Bezuglõi</b></sub></a><br /><a href="https://github.com/zmei-framework/generator/issues?q=author%3AEternalSoul" title="Bug reports">🐛</a> <a href="https://github.com/zmei-framework/generator/commits?author=EternalSoul" title="Code">💻</a></td></tr></table>
<!-- ALL-CONTRIBUTORS-LIST:END -->
This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind are welcome!
### Thanks to
- GitHub, travis-ci.com and codecov.io for their great free services for Open Source projects!
## LEGAL NOTICE
Source code is distributed under the GNU General Public License v3.0. The full license text is available in the LICENSE file.
In short, GPLv3 means:
- All software that uses Zmei generator as a component **MUST** be open-sourced as well: plugins, other generators
based on it, etc.
- You **CAN NOT** take Zmei-generator and sell it as a paid service without open-sourcing it
- But, you **CAN** use Zmei generator as a tool to write any software including private closed source software
The software is free for non-commercial use. For commercial use, ask about dual-licensing options.
from zmei_generator.domain.application_def import ApplicationDef
from zmei_generator.domain.extensions import ApplicationExtension
from zmei_generator.parser.gen.ZmeiLangParser import ZmeiLangParser
from zmei_generator.parser.utils import BaseListener
class ServiceConfig(object):
def __init__(self) -> None:
super().__init__()
self.name = None
self.vars = {}
class SeleniumPytestConfig(object):
def __init__(self) -> None:
super().__init__()
self.services = {}
self.vars = {}
class DeploymentConfig(object):
def __init__(self) -> None:
super().__init__()
self.manual_deploy = False
self.branch = None
self.environment = None
self.hostname = None
self.vars = {}
@property
def coverage(self):
return 'coverage' in self.vars
@property
def deployment(self):
return self.environment.replace('/', '-')
class GitlabAppExtension(ApplicationExtension):
def __init__(self, application):
super().__init__(application)
self.configs = [] # type: list[DeploymentConfig]
self.test = None # type: SeleniumPytestConfig
    @classmethod
    def get_name(cls):
return 'gitlab'
class GitlabAppExtensionParserListener(BaseListener):
def __init__(self, application: ApplicationDef) -> None:
super().__init__(application)
self.dp = None # type: DeploymentConfig
self.test = None # type: SeleniumPytestConfig
self.service = None # type: ServiceConfig
def enterAn_gitlab(self, ctx: ZmeiLangParser.An_gitlabContext):
extension = GitlabAppExtension(self.application)
self.application.extensions.append(
extension
)
self.application.register_extension(extension)
def enterAn_gitlab_test_declaration_selenium_pytest(self,
ctx: ZmeiLangParser.An_gitlab_test_declaration_selenium_pytestContext):
self.test = SeleniumPytestConfig()
def exitAn_gitlab_test_declaration_selenium_pytest(self,
ctx: ZmeiLangParser.An_gitlab_test_declaration_selenium_pytestContext):
self.application[GitlabAppExtension].test = self.test
self.test = None
def enterAn_gitlab_test_service(self, ctx: ZmeiLangParser.An_gitlab_test_serviceContext):
self.service = ServiceConfig()
self.service.name = ctx.an_gitlab_test_service_name().getText()
def exitAn_gitlab_test_service(self, ctx: ZmeiLangParser.An_gitlab_test_serviceContext):
self.test.services[self.service.name] = self.service
self.service = None
def enterAn_gitlab_branch_declaration(self, ctx: ZmeiLangParser.An_gitlab_branch_declarationContext):
self.dp = DeploymentConfig()
def enterAn_gitlab_branch_deploy_type(self, ctx: ZmeiLangParser.An_gitlab_branch_deploy_typeContext):
self.dp.manual_deploy = ctx.getText() == '~>'
def enterAn_gitlab_branch_name(self, ctx: ZmeiLangParser.An_gitlab_branch_nameContext):
val = ctx.getText()
if '*' in val:
val = val.replace('/', '\/')
val = val.replace('*', '.*')
val = f'/^{val}$/'
self.dp.branch = val
def enterAn_gitlab_deployment_name(self, ctx: ZmeiLangParser.An_gitlab_deployment_nameContext):
val = ctx.getText()
self.dp.environment = val
def enterAn_gitlab_deployment_host(self, ctx: ZmeiLangParser.An_gitlab_deployment_hostContext):
val = ctx.getText()
self.dp.hostname = val
def enterAn_gitlab_deployment_variable(self, ctx: ZmeiLangParser.An_gitlab_deployment_variableContext):
if self.service:
obj = self.service
elif self.test:
obj = self.test
elif self.dp:
obj = self.dp
else:
raise Exception('No destination for variable. Grammar error?')
obj.vars[ctx.an_gitlab_deployment_variable_name().getText()] = ctx.an_gitlab_deployment_variable_value().getText()
def exitAn_gitlab_branch_declaration(self, ctx: ZmeiLangParser.An_gitlab_branch_declarationContext):
super().exitAn_gitlab_branch_declaration(ctx)
if 'coverage' in self.dp.vars:
val = self.dp.vars['coverage'].strip('"\'')
if not val.endswith('/'):
val += '/'
self.dp.vars['coverage'] = val
self.application[GitlabAppExtension].configs.append(self.dp)
        self.dp = None
from zmei_generator.domain.application_def import FieldDeclaration
from zmei_generator.parser.errors import GlobalScopeValidationError as ValidationException
from zmei_generator.domain.field_def import FieldDef
from zmei_generator.generator.utils import gen_args
class FilerFileFieldDef(FieldDef):
"""
Image field
"""
sizes = None
def get_model_field(self):
args = self.prepare_field_arguemnts({
'related_name': '+'
})
return FieldDeclaration(
[('filer.fields.file', 'FilerFileField')],
'FilerFileField(on_delete=models.SET_NULL, {})'.format(gen_args(args))
)
def get_rest_field(self):
return FieldDeclaration(
[
('cratis_filer.serializers', 'FileSerializer')
],
'FileSerializer()'
)
class FilerFileFolderDef(FilerFileFieldDef):
"""
Image field
"""
def get_model_field(self):
args = self.prepare_field_arguemnts({
'related_name': '+'
})
return FieldDeclaration(
[('filer.fields.folder', 'FilerFolderField')],
'FilerFolderField(on_delete=models.SET_NULL, {})'.format(gen_args(args))
)
# def get_rest_field(self):
# return FieldDeclaration(
# [
# ('cratis_filer.serializers', 'FileSerializer')
# ],
# 'FileSerializer()'
# )
class ImageSize(object):
name = None
width = None
height = None
filters = None
class FilerImageFieldDef(FilerFileFieldDef):
"""
Image field
"""
sizes = None
def get_model_field(self):
args = self.prepare_field_arguemnts({
'related_name': '+'
})
return FieldDeclaration(
[('filer.fields.image', 'FilerImageField')],
'FilerImageField(on_delete=models.SET_NULL, {})'.format(gen_args(args))
)
def get_rest_field(self):
sizes_prepared = []
for size in self.sizes:
filters = []
for filter in size.filters:
if filter.name not in ('crop', 'upscale'):
raise ValidationException('Unknown image filter: {}'.format(filter.name))
filters.append(filter.name)
sizes_prepared.append('"{}": Size({}, {}, crop={}, upscale={})'.format(
size.name, size.width, size.height, 'crop' in filters, 'upscale' in filters
))
sizes_prepared = '{\n%s\n}' % (', \n'.join(sizes_prepared))
return FieldDeclaration(
[
('cratis_filer.serializers', 'ThumbnailImageField'),
('cratis_filer.utils', 'Size')
],
'ThumbnailImageField({})'.format(sizes_prepared)
)
@property
def admin_list_renderer(self):
return """
from cratis_filer.serializers import ThumbnailImageField
from cratis_filer.utils import Size
try:
return '<img src="{}" style="width: 60px; height: 60px;" />'.format(ThumbnailImageField({'thumb': Size(60, 60)}).to_representation(obj.%s)['thumb']['url'])
except KeyError as e:
return '-'
""" % self.name
class FilerImageFolderFieldDef(FilerImageFieldDef):
def get_model_field(self):
args = self.prepare_field_arguemnts({
'related_name': '+'
})
return FieldDeclaration(
[('filer.fields.folder', 'FilerFolderField')],
'FilerFolderField(on_delete=models.SET_NULL, {})'.format(gen_args(args))
)
def get_rest_field(self):
args = self.prepare_field_arguemnts()
return FieldDeclaration(
[
('cratis_filer.serializers', 'ImageFolderSerializer'),
('cratis_filer.utils', 'Size')
],
'ImageFolderSerializer({}, {})'.format(self.sizes, gen_args(args))
        )
import re
from zmei_generator.domain.application_def import ApplicationDef
from zmei_generator.domain.extensions import PageExtension
from zmei_generator.domain.frozen import FrozenClass
from zmei_generator.parser.gen.ZmeiLangParser import ZmeiLangParser
from zmei_generator.parser.utils import BaseListener
class StreamModel(FrozenClass):
def __init__(self, page) -> None:
self.page = page
self.model_app_name = None
self.model_class_name = None
self.filter_expr = None
self.fields = None
self._freeze()
    @property
    def class_name(self):
        return self.model_class_name

    @property
    def stream_name(self):
        return re.sub('[^a-z0-9]+', '_', self.model_class_name.lower())
class StreamPageExtension(PageExtension):
# stream
def __init__(self, page) -> None:
super().__init__(page)
self.models = []
def get_required_apps(self):
return ['django_query_signals']
def get_required_deps(self):
return ['django-query-signals']
class StreamPageExtensionParserListener(BaseListener):
def __init__(self, application: ApplicationDef) -> None:
super().__init__(application)
self.stream_model = None
def enterAn_stream(self, ctx: ZmeiLangParser.An_streamContext):
stream = StreamPageExtension(self.page)
self.application.extensions.append(
stream
)
self.page.register_extension(stream)
def enterAn_stream_target_model(self, ctx: ZmeiLangParser.An_stream_target_modelContext):
super().enterAn_stream_target_model(ctx)
stream_model = StreamModel(self.page)
target = ctx.getText()
if target.startswith('#'):
model = self.application.resolve_model(target[1:])
stream_model.model_class_name = model.class_name
stream_model.model_app_name = model.application.app_name
else:
stream_model.model_class_name = target.split('.')[-1]
stream_model.model_app_name = '.'.join(target.split('.')[:-1])
self.page[StreamPageExtension].models.append(
stream_model
)
self.stream_model = stream_model
def enterAn_stream_target_filter(self, ctx: ZmeiLangParser.An_stream_target_filterContext):
self.stream_model.filter_expr = self._get_code(ctx)
def enterAn_stream_field_name(self, ctx: ZmeiLangParser.An_stream_field_nameContext):
field_name = ctx.getText()
if not self.stream_model.fields:
self.stream_model.fields = [field_name]
else:
            self.stream_model.fields.append(field_name)
from zmei_generator.contrib.channels.extensions.application.channels import ChannelsAppExtension
from zmei_generator.contrib.channels.extensions.pages.stream import StreamPageExtension
from zmei_generator.generator.imports import ImportSet
from zmei_generator.generator.utils import generate_file
def generate(target_path, project):
if not project.applications_support(ChannelsAppExtension):
return
streams = []
imports = ImportSet()
for app in project.applications.values():
for page in app.pages_with(StreamPageExtension):
streams.append((app, page))
imports.add(f'{app.app_name}.channels', f'{page.view_name}Consumer')
generate_file(target_path, f'app/routing.py', 'channels.routing_main.tpl', context={
'streams': streams,
'imports': imports,
})
for app in project.applications.values():
if app.pages_support(StreamPageExtension):
imports = ImportSet()
imports.add('channels.layers', 'get_channel_layer')
imports.add('channels.generic.websocket', 'AsyncWebsocketConsumer')
imports.add('django.db.models.signals', 'post_save', 'm2m_changed', 'post_delete')
imports.add('django_query_signals', 'post_bulk_create', 'post_delete as post_delete_bulk',
'post_get_or_create', 'post_update_or_create', 'post_update')
imports.add('channels.db', 'database_sync_to_async')
imports.add('asgiref.sync', 'async_to_sync')
imports.add('asyncio', 'sleep')
imports.add('django.dispatch', 'receiver')
imports.add('app.utils.rest', 'ZmeiJsonEncoder')
pages = app.pages_with(StreamPageExtension)
for page in pages:
for model in page[StreamPageExtension].models:
imports.add(f'{model.model_app_name}.models', model.model_class_name)
imports.add(f'.views', page.view_name)
generate_file(target_path, f'{app.app_name}/channels.py', 'channels.py.tpl', context={
'pages': pages,
'ext': StreamPageExtension,
'imports': imports,
            })
from zmei_generator.domain.application_def import FieldDeclaration, ApplicationDef
from zmei_generator.parser.errors import GlobalScopeValidationError as ValidationException
from zmei_generator.domain.extensions import ModelExtension
from zmei_generator.parser.gen.ZmeiLangParser import ZmeiLangParser
from zmei_generator.parser.utils import BaseListener
class RestModelExtension(ModelExtension):
def __init__(self, model) -> None:
super().__init__(model)
self.config = None
self.rest_conf = {}
self.published_apis = {}
self.rest_mode = None
def get_required_apps(self):
return ['rest_framework']
def get_required_deps(self):
return ['djangorestframework']
def post_process(self):
for config in self.model[RestModelExtension].rest_conf.values():
config.post_process()
class RestModelExtensionParserListener(BaseListener):
def __init__(self, application: ApplicationDef) -> None:
super().__init__(application)
self.rest_config = None
self.rest_config_stack = []
def ensure_defaults(self):
# fields
if not self.rest_config.fields:
self.rest_config.set_fields(self.rest_config.model.filter_fields(['*'], include_refs=True))
def enterAn_rest(self, ctx: ZmeiLangParser.An_restContext):
if not self.model.supports(RestModelExtension):
ext = RestModelExtension(self.model)
self.application.extensions.append(ext)
self.model.register_extension(ext)
self.rest_config = RestSerializerConfig(self.model.class_name, self.model)
def exitAn_rest_main_part(self, ctx: ZmeiLangParser.An_rest_main_partContext):
self.ensure_defaults()
def exitAn_rest(self, ctx: ZmeiLangParser.An_restContext):
self.ensure_defaults()
# Place config where it should be
if not self.model[RestModelExtension].rest_conf:
self.model[RestModelExtension].rest_conf = {}
self.model[RestModelExtension].rest_conf[self.rest_config.descriptor] = self.rest_config
def enterAn_rest_descriptor(self, ctx: ZmeiLangParser.An_rest_descriptorContext):
self.rest_config.set_descriptor(ctx.getText())
def enterAn_rest_fields(self, ctx: ZmeiLangParser.An_rest_fieldsContext):
self.rest_config.set_fields(self.rest_config.model.filter_fields(self._get_fields(ctx), include_refs=True))
def enterAn_rest_i18n(self, ctx: ZmeiLangParser.An_rest_i18nContext):
self.rest_config.i18n = ctx.BOOL().getText() == 'true'
def enterAn_rest_str(self, ctx: ZmeiLangParser.An_rest_strContext):
self.rest_config.str = ctx.BOOL().getText() == 'true'
def enterAn_rest_fields_write_mode(self, ctx: ZmeiLangParser.An_rest_fields_write_modeContext):
if ctx.write_mode_expr():
self.rest_config.rest_mode = ctx.write_mode_expr().WRITE_MODE().getText()
def enterAn_rest_user_field(self, ctx: ZmeiLangParser.An_rest_user_fieldContext):
self.rest_config.user_field = ctx.id_or_kw().getText()
def enterAn_rest_query(self, ctx: ZmeiLangParser.An_rest_queryContext):
self.rest_config.query = self._get_code(ctx.python_code())
def enterAn_rest_on_create(self, ctx: ZmeiLangParser.An_rest_on_createContext):
self.rest_config.on_create = self._get_code(ctx.python_code())
def enterAn_rest_filter_in(self, ctx: ZmeiLangParser.An_rest_filter_inContext):
self.rest_config.filter_in = self._get_code(ctx.python_code())
def enterAn_rest_filter_out(self, ctx: ZmeiLangParser.An_rest_filter_outContext):
self.rest_config.filter_out = self._get_code(ctx.python_code())
def exitAn_rest_auth_type(self, ctx: ZmeiLangParser.An_rest_auth_typeContext):
ref = None
if ctx.an_rest_auth_token_model():
ref = ctx.an_rest_auth_token_model().getText()
if ctx.an_rest_auth_token_class():
ref = ctx.an_rest_auth_token_class().getText()
self.rest_config.add_auth_method(ctx.an_rest_auth_type_name().getText(), ref)
def enterAn_rest_inline_decl(self, ctx: ZmeiLangParser.An_rest_inline_declContext):
name = ctx.an_rest_inline_name().getText()
field = self.rest_config.fields_index[name]
inline_model = field.get_rest_inline_model()
if not inline_model:
raise ValidationException('field "{}" can not be used as inline'.format(field.name))
serializer_name = self.rest_config.serializer_name + '_Inline' + inline_model.class_name
new_config = RestSerializerConfig(serializer_name, inline_model, field)
self.rest_config.inlines[name] = new_config
self.rest_config.extension_serializers.append(new_config)
self.rest_config.field_imports.append(
FieldDeclaration('{}.models'.format(inline_model.application.app_name),
inline_model.class_name))
        self.rest_config.field_declarations.append(
            (field.name, '{}Serializer(many={}, read_only={})'.format(
                serializer_name,
                repr(field.is_many),
                repr(field.name in self.rest_config.read_only_fields))))
self.rest_config_stack.append(self.rest_config)
self.rest_config = new_config
def exitAn_rest_inline_decl(self, ctx: ZmeiLangParser.An_rest_inline_declContext):
self.rest_config = self.rest_config_stack.pop()
def enterAn_rest_read_only(self, ctx: ZmeiLangParser.An_rest_read_onlyContext):
self.rest_config.read_only_fields = [f.name for f in
self.rest_config.model.filter_fields(self._get_fields(ctx))]
def enterAn_rest_annotate_count(self, ctx: ZmeiLangParser.An_rest_annotate_countContext):
field = ctx.an_rest_annotate_count_field().getText()
kind = 'count'
if ctx.an_rest_annotate_count_alias():
alias = ctx.an_rest_annotate_count_alias().getText()
else:
alias = '{}_{}'.format(field, kind)
self.rest_config.field_declarations.append((alias, 'serializers.IntegerField()'))
self.rest_config.field_names.append(alias)
self.rest_config.annotations.append("{}=Count('{}', distinct=True)".format(alias, field))
self.rest_config.field_imports.append(FieldDeclaration('django.db.models', 'Count'))
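For orientation, a minimal standalone sketch of the strings this `enterAn_rest_annotate_count` handler accumulates; the `comments` field name is a made-up example, not something from the source grammar:

```python
# Hypothetical field; mirrors the alias/declaration/annotation templates above.
field = 'comments'
kind = 'count'
alias = '{}_{}'.format(field, kind)                  # default alias when none is given
declaration = (alias, 'serializers.IntegerField()')  # serializer field declaration
annotation = "{}=Count('{}', distinct=True)".format(alias, field)
print(alias)       # comments_count
print(annotation)  # comments_count=Count('comments', distinct=True)
```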
class RestSerializerConfig(object):
def __init__(self, serializer_name, model, parent_field=None) -> None:
self.descriptor = '_'
self.fields = None
self.fields_index = {}
self.i18n = False
self.str = True
self.model = model
self.parent_field = parent_field
self.serializer_name = serializer_name
self.field_declarations = []
self.field_imports = []
self.field_names = None
self.annotations = []
self.auth_methods = {}
self.auth_method_classes = []
self.query = 'all()'
self.on_create = ''
self.filter_in = ''
self.filter_out = ''
self.rest_mode = 'r'
self.user_field = None
self.extension_serializers = []
self.inlines = {}
self.read_only_fields = []
self.field_names = ['id']
self.processed = False
def set_descriptor(self, descriptor):
self.descriptor = descriptor
if descriptor != '_':
self.serializer_name = f'{self.serializer_name}{descriptor.capitalize()}'
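A quick sketch of the naming rule in `set_descriptor` — `'_'` is the default descriptor and leaves the name untouched; the serializer name below is an assumed example:

```python
# Standalone illustration of the descriptor naming rule above.
def apply_descriptor(serializer_name, descriptor):
    if descriptor != '_':
        serializer_name = f'{serializer_name}{descriptor.capitalize()}'
    return serializer_name

print(apply_descriptor('BookSerializer', '_'))       # BookSerializer
print(apply_descriptor('BookSerializer', 'export'))  # BookSerializerExport
```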
def set_fields(self, fields):
self.fields = fields
self.fields_index = {f.name: f for f in fields}
def add_auth_method(self, method, ref):
if method == 'token':
self.field_imports.append(('rest_framework.authentication', 'TokenAuthentication'))
self.auth_method_classes.append(f'{self.serializer_name}TokenAuthentication')
if ref:
if ref[0] == '#':
ref = ref[1:]
ref_model = self.model.application.resolve_model(ref)
cls = ref_model.class_name
self.field_imports.append((f'{ref_model.application.app_name}.models', cls))
else:
cls = ref
else:
                    raise ValidationException('@rest->auth->token requires an argument')
self.auth_methods[method] = {'model': cls}
if method == 'session':
self.field_imports.append(('rest_framework.authentication', 'SessionAuthentication'))
self.auth_methods[method] = {}
self.auth_method_classes.append('SessionAuthentication')
if method == 'basic':
self.field_imports.append(('rest_framework.authentication', 'BasicAuthentication'))
self.auth_methods[method] = {}
self.auth_method_classes.append('BasicAuthentication')
self.field_imports.append(('rest_framework.permissions', 'IsAuthenticated'))
def post_process(self):
if self.processed:
raise Exception('Already processed!')
self.processed = True
if len(self.auth_method_classes) == 0:
self.field_imports.append(('rest_framework.permissions', 'AllowAny'))
for config in self.extension_serializers:
config.post_process()
for field in self.fields:
            if self.parent_field and hasattr(self.parent_field, 'source_field_name') \
                    and field.name == self.parent_field.source_field_name:
                continue
if self.user_field and field.name == self.user_field:
continue
self.field_names.append(field.name)
if field.name not in self.inlines:
rest_field = field.get_rest_field()
if rest_field:
for line in rest_field.import_def:
self.field_imports.append(line)
self.field_declarations.append((field.name, rest_field.declaration))
if self.str:
self.field_names.append('__str__')
self.read_only_fields.append('__str__')
@property
def descriptor_suffix(self):
if self.descriptor == '_':
return ''
return '_' + self.descriptor
@property
def rest_class(self):
if self.rest_mode == 'rw':
return 'rest_framework.viewsets', 'ModelViewSet'
elif self.rest_mode == 'w':
return 'cratis_api.views', 'WriteOnlyModelViewSet'
else:
return 'rest_framework.viewsets', 'ReadOnlyModelViewSet'
@property
def is_writable(self):
return 'w' in self.rest_mode
@property
def is_root(self):
return self.parent_field is None
def configure_imports(self, imports):
imports.add(*self.rest_class)
imports.add('.serializers', f'{self.serializer_name}Serializer')
imports.add(f'{self.model.application.app_name}.models', self.model.class_name)
for import_line in self.field_imports:
imports.add(*import_line)
for conf in self.extension_serializers:
conf.configure_imports(imports)
def configure_model_imports(self, imports):
if self.model:
imports.add(f'{self.model.application.app_name}.models', self.model.class_name)
for conf in self.extension_serializers:
            conf.configure_model_imports(imports)

# --- end of zmei_generator/contrib/drf/extensions/model/rest.py ---
import os
from zmei_generator.contrib.drf.extensions.model.api import ApiModelExtension
from zmei_generator.contrib.drf.extensions.model.rest import RestModelExtension
from zmei_generator.generator.imports import ImportSet
from zmei_generator.generator.utils import generate_file, package_to_path, generate_package
def generate(target_path, project):
has_api = False
for application in project.applications.values():
if not application.models_support(ApiModelExtension) and not application.models_support(RestModelExtension):
continue
app_name = application.app_name
has_api = True
imports = ImportSet()
imports.add('rest_framework', 'serializers')
for model in application.models_with(RestModelExtension):
for name, rest_conf in model[RestModelExtension].rest_conf.items():
rest_conf.configure_model_imports(imports)
generate_file(target_path, '{}/serializers.py'.format(app_name), 'serializers.py.tpl', {
'imports': imports.import_sting(),
'application': application,
'rest_ext': RestModelExtension,
'api_ext': ApiModelExtension,
'models': [(model.ref, model) for model in application.models_with(RestModelExtension)],
})
url_imports = ImportSet()
url_imports.add('django.conf.urls', 'url')
url_imports.add('django.conf.urls', 'include')
url_imports.add('rest_framework', 'routers')
for model in application.models_with(RestModelExtension):
for rest_conf in model[RestModelExtension].published_apis.values():
url_imports.add('.views_rest', f'{rest_conf.serializer_name}ViewSet')
context = {
'package_name': app_name,
'application': application,
'ext': RestModelExtension,
'url_imports': url_imports.import_sting(),
}
filepath = os.path.join(package_to_path(app_name), 'urls_rest.py')
generate_file(target_path, filepath, 'urls_rest.py.tpl', context)
# views_rest.py
imports = ImportSet()
for model in application.models_with(RestModelExtension):
for name, rest_conf in model[RestModelExtension].rest_conf.items():
rest_conf.configure_imports(imports)
generate_file(target_path, f'{app_name}/views_rest.py', 'views_rest.py.tpl', {
'package_name': app_name,
'application': application,
'ext': RestModelExtension,
'models': [(name, model) for name, model in application.models.items() if model.supports(RestModelExtension)],
'imports': imports
})
has_pages = False
for application in project.applications.values():
if len(application.pages):
has_pages = True
if has_api or has_pages:
generate_package('app.utils', path=target_path)
        generate_file(target_path, 'app/utils/rest.py', template_name='rest.utils.py.tpl')

# --- end of zmei_generator/contrib/drf/generator/serializers.py ---
import os
import random
import string
from zmei_generator.contrib.celery.extensions.application.celery import CeleryAppExtension
from zmei_generator.contrib.channels.extensions.application.channels import ChannelsAppExtension
from zmei_generator.contrib.docker.extensions.application.docker import DockerAppExtension
from zmei_generator.generator.application import ZmeiProject
from zmei_generator.generator.utils import generate_file
def generate(target_path, project: ZmeiProject):
has_docker = any([app.supports(DockerAppExtension) for app in project.applications.values()])
has_celery = any([app.supports(CeleryAppExtension) for app in project.applications.values()])
has_channels = any([app.supports(ChannelsAppExtension) for app in project.applications.values()])
if not has_docker:
return
context = {
'req_file': os.environ.get('ZMEI_REQUIREMNETS_FILE', 'requirements.txt'),
'has_channels': has_channels,
'has_celery': has_celery,
'admin_pass': ''.join(random.choice(
string.ascii_letters + string.digits + string.punctuation.replace('"', '')) for _ in range(16)),
'admin_user': 'admin-' + ''.join(random.choice(string.ascii_letters + string.digits) for _ in range(4)),
}
generate_file(target_path, 'requirements.prod.txt', 'docker/requirements.prod.txt.tpl', context)
generate_file(target_path, 'app/settings_prod.py', 'docker/settings_prod.py.tpl', context)
generate_file(target_path, 'Dockerfile', 'docker/dockerfile.tpl', context)
generate_file(target_path, '.dockerignore', 'docker/dockerignore.tpl', context)
generate_file(target_path, 'docker-compose.yaml', 'docker/docker-compose.yaml.tpl', context)
generate_file(target_path, 'deploy/init.sh', 'docker/init.sh.tpl', context)
generate_file(target_path, 'deploy/nginx.conf', 'docker/nginx.conf.tpl', context)
if not has_channels:
generate_file(target_path, 'deploy/uwsgi.ini', 'docker/uwsgi.ini.tpl', context)
else:
        generate_file(target_path, 'app/asgi.py', 'docker/asgi.py.tpl', context)

# --- end of zmei_generator/contrib/docker/generator/docker.py ---
from zmei_generator.contrib.web.fields.expression import ExpressionFieldDef
from zmei_generator.domain.application_def import FieldDeclaration
from zmei_generator.domain.reference_field import ReferenceField
from zmei_generator.parser.errors import GlobalScopeValidationError as ValidationException
from zmei_generator.domain.extensions import Extension, ModelExtension
from zmei_generator.contrib.web.fields.date import AutoNowDateTimeFieldDef, AutoNowAddDateTimeFieldDef
class AdminModelExtension(ModelExtension):
def __init__(self, model) -> None:
super().__init__(model)
self.model = model
# fields for different purposes
self.admin_list = None
self.read_only = None
self.list_editable = None
self.list_filter = None
self.list_search = None
self.fields = None
self.inlines = []
# mapping of field.name => tab.id
self.tab_fields = {}
# id of tabs
self.tabs = []
# mapping of tab.id => Verbose name of tab
self.tab_names = {}
self.has_polymorphic_inlines = False
self.tabs_raw = []
# media files
self.css = []
self.js = []
def register_tab(self, name, verbose_name, fields_expr, prepend=False):
self.tabs_raw.append(
(name, verbose_name, fields_expr, prepend)
)
def add_tab(self, name, verbose_name, fields_expr, prepend):
"""
Do delayed field calculation as we need to wait
until all Reference fields are created
"""
fields = self.model.filter_fields(fields_expr, include_refs=True)
self.add_tab_fieldset(name, verbose_name, fields, prepend)
def add_tab_fieldset(self, name, verbose_name, fields, prepend):
# filter out fields that are not meant to be rendered
if self.fields:
fields = [f for f in fields if f in self.fields or isinstance(f, ReferenceField)]
self.tab_names[name] = verbose_name or name.capitalize()
for field in fields:
self.tab_fields[field.name] = name
if name not in self.tabs:
if prepend:
self.tabs = [name] + self.tabs
else:
self.tabs.append(name)
def check_tab_consistency(self):
general_fields = []
for field in (self.fields or self.model.all_fields):
if field.name not in self.tab_fields:
general_fields.append(field)
if len(general_fields) > 0:
self.add_tab_fieldset('general', 'General', general_fields, prepend=True)
def post_process(self):
if self.model.parent and self.model.parent.supports(AdminModelExtension):
self.tab_fields = self.model.parent[AdminModelExtension].tab_fields.copy()
self.tabs = self.model.parent[AdminModelExtension].tabs.copy()
self.tab_names = self.model.parent[AdminModelExtension].tab_names.copy()
self.inlines.extend(self.model.parent[AdminModelExtension].inlines.copy())
for tab in self.tabs_raw:
self.add_tab(*tab)
self.check_tab_consistency()
# set tabs to inlines
inline_map = {x.inline_name: x for x in self.inlines}
for field_name, tab in self.tab_fields.items():
if field_name in inline_map:
inline_map[field_name].tab = tab
for inline in self.inlines:
inline.post_process()
# check for auto-fields
for field in self.model.all_and_inherited_fields_map.values():
if isinstance(field, (AutoNowDateTimeFieldDef, AutoNowAddDateTimeFieldDef)):
if not self.read_only:
self.read_only = []
self.read_only.append(field)
def fields_for_tab(self, tab):
fields = []
for name, tab_name in self.tab_fields.items():
# skip references
if isinstance(self.model.all_and_inherited_fields_map[name], ReferenceField):
continue
if isinstance(self.model.all_and_inherited_fields_map[name], ExpressionFieldDef):
continue
if tab_name == tab:
fields.append(self.model.all_and_inherited_fields_map[name])
return fields
@property
def class_declaration(self):
return ', '.join([x[1] for x in self.classes])
@property
def classes(self):
classes = []
model_admin_added = False
if self.has_polymorphic_inlines:
classes.append(('polymorphic.admin', 'PolymorphicInlineSupportMixin'))
if self.model.parent:
classes.append(('polymorphic.admin', 'PolymorphicChildModelAdmin'))
model_admin_added = True
elif self.model.polymorphic:
children_have_admins = len([x for x in self.model.child_models if x.supports(AdminModelExtension)]) > 0
if children_have_admins:
classes.append(('polymorphic.admin', 'PolymorphicParentModelAdmin'))
model_admin_added = True
if self.model.sortable:
classes.append(('suit.admin', 'SortableModelAdmin'))
model_admin_added = True
if self.model.tree:
classes.append(('mptt.admin', 'DraggableMPTTAdmin'))
model_admin_added = True
if self.model.translatable:
classes.append(('modeltranslation.admin', 'TabbedTranslationAdmin'))
model_admin_added = True
if not model_admin_added:
classes.append(('django.contrib.admin', 'ModelAdmin'))
return classes
@property
def inline_classes(self):
return [x.class_name for x in self.inlines]
class AdminInlineConfig(object):
def __init__(self, admin, name):
self.admin = admin
self.model = admin.model
self.inline_name = name
self.fields_expr = ['*']
self.extension_count = 0
self.inline_type = 'tabular'
self.field = None
self.source_field_name = None
self.target_model = None
self.field_set = None
self.tab = None
def post_process(self):
model = self.admin.model
if not model.check_field_exists(self.inline_name):
raise ValidationException(
'Field name specified for admin inline: "{}" does not exist'.format(self.inline_name))
field = model.all_and_inherited_fields_map[self.inline_name]
if not isinstance(field, ReferenceField):
raise ValidationException(
'Field name specified for admin inline: "{}" is not a reference field (back relation).'.format(
self.inline_name))
self.field = field
self.source_field_name = field.source_field_name
self.target_model = field.target_model
self.field_set = [f for f in field.target_model.filter_fields(self.fields_expr) if
f.name != self.source_field_name]
if self.extension_count:
if self.extension_count > 0 and self.inline_type == 'polymorphic':
                raise ValidationException('{}->{}: when using inline type "polymorphic", extension count must be 0'.format(
                    self.model.name,
                    self.inline_name
                ))
@property
def field_names(self):
return [f.name for f in self.field_set]
@property
def class_name(self):
return '{}{}Inline'.format(
self.model.class_name,
''.join([x.capitalize() for x in self.inline_name.split('_')])
)
@property
def type_declarations(self):
declarations = []
if self.inline_type == 'tabular':
if self.target_model.sortable:
declarations.append(FieldDeclaration('suit.admin', 'SortableTabularInline'))
else:
declarations.append(FieldDeclaration('django.contrib.admin', 'TabularInline'))
elif self.inline_type == 'polymorphic':
if self.target_model.sortable:
declarations.append(FieldDeclaration('cratis_admin.admin', 'StackedPolymorphicSortableInline'))
else:
declarations.append(FieldDeclaration('polymorphic.admin', 'StackedPolymorphicInline'))
else:
if self.target_model.sortable:
declarations.append(FieldDeclaration('suit.admin', 'SortableStackedInline'))
else:
declarations.append(FieldDeclaration('django.contrib.admin', 'StackedInline'))
if self.target_model.translatable:
declarations.append(FieldDeclaration('modeltranslation.admin', 'TranslationInlineModelAdmin'))
return declarations
@property
def parent_classes(self):
        return [x.declaration for x in self.type_declarations]

# --- end of zmei_generator/contrib/admin/extensions/model/admin.py ---
from zmei_generator.contrib.admin.extensions.model.admin import AdminModelExtension
from zmei_generator.generator.imports import ImportSet
from zmei_generator.generator.utils import generate_file
def generate(target_path, project):
for app_name, application in project.applications.items():
if not application.models_support(AdminModelExtension):
continue
imports = ImportSet()
imports.add('django.contrib', 'admin')
imports.add('django', 'forms')
for model in application.models_with(AdminModelExtension):
imports.add('{}.models'.format(app_name), model.class_name)
for class_import in model[AdminModelExtension].classes:
imports.add(*class_import)
if model.polymorphic:
for child in model.child_models:
imports.add('{}.models'.format(app_name), child.class_name)
# inlines
for inline in model[AdminModelExtension].inlines:
for declaration in inline.type_declarations:
imports.add(*declaration)
if inline.inline_type == 'polymorphic':
for target_model in inline.target_model.child_models:
if target_model.translatable:
imports.add('cratis_i18n.admin', 'TranslatableInlineModelAdmin')
imports.add('{}.models'.format(app_name), target_model.class_name)
imports.add('{}.models'.format(app_name), inline.target_model.class_name)
for field in model.fields.values():
if field.get_admin_widget():
import_data, model_field = field.get_admin_widget()
for source, what in import_data:
imports.add(source, what)
generate_file(target_path, '{}/admin.py'.format(app_name), 'admin.py.tpl', {
'imports': imports.import_sting(),
'ext': AdminModelExtension,
'application': application,
'models': [(name, model) for name, model in application.models.items() if model.supports(AdminModelExtension)]
        })

# --- end of zmei_generator/contrib/admin/generator/admin.py ---
from zmei_generator.contrib.admin.extensions.application.suit import SuitAppExtension
from zmei_generator.contrib.admin.extensions.model.admin import AdminModelExtension, AdminInlineConfig
from zmei_generator.domain.application_def import ApplicationDef
from zmei_generator.parser.errors import TabsSuitRequiredValidationError
from zmei_generator.parser.gen.ZmeiLangParser import ZmeiLangParser
from zmei_generator.parser.utils import BaseListener
class AdminParserListener(BaseListener):
def __init__(self, application: ApplicationDef) -> None:
super().__init__(application)
        self.inline = None  # type: AdminInlineConfig
############################################
# Admin
############################################
def enterAn_admin(self, ctx: ZmeiLangParser.An_adminContext):
self.application.extensions.append(AdminModelExtension(self.model))
def enterAn_admin_list(self, ctx: ZmeiLangParser.An_admin_listContext):
self.model[AdminModelExtension].admin_list = self.model.filter_fields(self._get_fields(ctx))
def enterAn_admin_read_only(self, ctx: ZmeiLangParser.An_admin_read_onlyContext):
self.model[AdminModelExtension].read_only = self.model.filter_fields(self._get_fields(ctx))
def enterAn_admin_list_editable(self, ctx: ZmeiLangParser.An_admin_list_editableContext):
self.model[AdminModelExtension].list_editable = self.model.filter_fields(self._get_fields(ctx))
def enterAn_admin_list_filter(self, ctx: ZmeiLangParser.An_admin_list_filterContext):
self.model[AdminModelExtension].list_filter = self.model.filter_fields(self._get_fields(ctx))
def enterAn_admin_list_search(self, ctx: ZmeiLangParser.An_admin_list_searchContext):
self.model[AdminModelExtension].list_search = self.model.filter_fields(self._get_fields(ctx))
def enterAn_admin_fields(self, ctx: ZmeiLangParser.An_admin_fieldsContext):
self.model[AdminModelExtension].fields = self.model.filter_fields(self._get_fields(ctx))
def enterAn_admin_tabs(self, ctx: ZmeiLangParser.An_admin_tabsContext):
if not self.application.supports(SuitAppExtension):
raise TabsSuitRequiredValidationError(ctx.start)
def enterAn_admin_tab(self, ctx: ZmeiLangParser.An_admin_tabContext):
fields = self._get_fields(ctx)
name = ctx.tab_name().getText()
vname = ctx.tab_verbose_name()
if vname:
vname = vname.getText().strip('"\' ')
else:
vname = None
self.model[AdminModelExtension].register_tab(name, vname, fields)
def enterAn_admin_inline(self, ctx: ZmeiLangParser.An_admin_inlineContext):
self.inline = AdminInlineConfig(self.model[AdminModelExtension], ctx.inline_name().getText())
def enterInline_type(self, ctx: ZmeiLangParser.Inline_typeContext):
type_name = ctx.inline_type_name().getText()
if type_name == 'polymorphic':
self.model[AdminModelExtension].has_polymorphic_inlines = True
self.inline.inline_type = type_name
def enterInline_fields(self, ctx: ZmeiLangParser.Inline_fieldsContext):
self.inline.fields_expr = self._get_fields(ctx)
def enterInline_extension(self, ctx: ZmeiLangParser.Inline_extensionContext):
self.inline.extension_count = int(ctx.DIGIT().getText())
def enterAn_admin_css_file_name(self, ctx: ZmeiLangParser.An_admin_css_file_nameContext):
self.model[AdminModelExtension].css.append(ctx.getText().strip('"\''))
def enterAn_admin_js_file_name(self, ctx: ZmeiLangParser.An_admin_js_file_nameContext):
self.model[AdminModelExtension].js.append(ctx.getText().strip('"\''))
def exitAn_admin_inline(self, ctx: ZmeiLangParser.An_admin_inlineContext):
self.model[AdminModelExtension].inlines.append(
self.inline
)
        self.inline = None

# --- end of zmei_generator/contrib/admin/parsers/admin.py ---
from copy import copy
from zmei_generator.contrib.web.extensions.page.block import InlinePageBlock, InlineTemplatePageBlock
from zmei_generator.domain.extensions import PageExtension
from zmei_generator.parser.gen.ZmeiLangParser import ZmeiLangParser
from zmei_generator.parser.utils import BaseListener
class ReactPageExtension(PageExtension):
def __init__(self, page) -> None:
super().__init__(page)
self.react_type = None
self.area = None
self.code = None
self.include_child = False
# def get_required_deps(self):
# return ['py_mini_racer']
@property
def can_inherit(self):
return self.include_child
def get_required_apps(self):
return ['rest_framework']
def get_required_deps(self):
return ['djangorestframework']
def modify_extension_bases(self, bases):
return super().modify_extension_bases(bases)
def filter_blocks(self, area, blocks, platform):
filtered = []
if platform == ReactPageExtension:
for block in blocks:
if isinstance(block, InlineTemplatePageBlock):
if block.template_name.startswith('theme/'):
new_context = copy(block.context)
block = InlineTemplatePageBlock(template_name='react/' + block.template_name, ref=block.ref)
block.context = new_context
else:
continue
filtered.append(block)
else:
# other platforms:
filtered = []
for block in blocks:
if isinstance(block, InlineTemplatePageBlock):
continue
filtered.append(block)
return filtered
def post_process(self):
super().post_process()
# page.react_pages.update({page.page_component_name: (react_components_imports, body, 'lala')})
reducer_cmp = f'Page{self.page.name.capitalize()}Reducer'
cmp = f'Page{self.page.name.capitalize()}Component'
self.page.imports.append(('app.utils.react', 'ZmeiReactViewMixin'))
self.page.add_block('content', InlinePageBlock(f"""
<div id="reactEl-{cmp}">{{{{ react_page_{cmp}|default:""|safe }}}}</div>
"""))
self.page.add_block('css', InlinePageBlock(f"""
<link media="all" rel="stylesheet" href="/static/react/all.bundle.css" />
<!-- <link media="all" rel="stylesheet" href="/static/react/{self.page.application.app_name}.bundle.css" /> -->
"""))
self.page.add_block('js', InlinePageBlock(f"""
<script type="text/javascript" src="/static/react/all.bundle.js"></script>
<!-- <script type="text/javascript" src="/static/react/{self.page.application.app_name}.bundle.js"></script> -->
<script>
R.renderClient(R.{reducer_cmp}, null, {{{{ react_state|safe }}}}, document.getElementById('reactEl-{cmp}'), false);
</script>
"""))
if 'ZmeiDataViewMixin' in self.page.extension_bases:
self.page.extension_bases.remove('ZmeiDataViewMixin')
self.page.extension_bases.append('ZmeiReactViewMixin')
class ReactPageExtensionParserListener(BaseListener):
def enterAn_react(self, ctx: ZmeiLangParser.An_reactContext):
extension = ReactPageExtension(self.page)
self.application.extensions.append(extension)
extension.react_type = ctx.an_react_type().getText()
self.page.register_extension(extension)
def enterAn_react_child(self, ctx: ZmeiLangParser.An_react_childContext):
if str(ctx.BOOL()) == 'true':
            self.page[ReactPageExtension].include_child = True

# --- end of zmei_generator/contrib/react/extensions/page/react.py ---
import re
import sys
from textwrap import indent, dedent
from zmei_generator.contrib.channels.extensions.pages.stream import StreamPageExtension
from zmei_generator.contrib.react.extensions.page.react import ReactPageExtension
from zmei_generator.generator.imports import ImportSet
from zmei_generator.generator.utils import generate_file, format_uri
REACT_IMPORT_RX = re.compile(r'^import\s+((\*\s+as\s+)?[a-zA-Z0-9_]+)?\s*,?\s*({\s*([a-zA-Z0-9_]+\s*(,\s*[a-zA-Z0-9_]+)*)\s*})?\s+from\s+"([^;]+)";$', re.MULTILINE)
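A quick sanity check of what `REACT_IMPORT_RX` captures — the code below (a standalone raw-string copy, with an example import line that is not from the source) shows group 1 as the default import, group 4 as the named-import list, and group 6 as the module source, matching how the groups are consumed later in `generate`:

```python
import re

# Raw-string copy of REACT_IMPORT_RX, used here only for demonstration.
rx = re.compile(
    r'^import\s+((\*\s+as\s+)?[a-zA-Z0-9_]+)?\s*,?\s*'
    r'({\s*([a-zA-Z0-9_]+\s*(,\s*[a-zA-Z0-9_]+)*)\s*})?\s+from\s+"([^;]+)";$',
    re.MULTILINE)

matches = rx.findall('import React, { useState, useEffect } from "react";')
default_import, named_imports, source = matches[0][0], matches[0][3], matches[0][5]
print(default_import, '|', named_imports, '|', source)
# React | useState, useEffect | react
```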
def to_camel_case(name):
return ''.join([x.capitalize() for x in name.split('_')])
def generate(target_path, project):
has_react = False
react_pages = []
react_pages_by_app = {}
index_imports = ImportSet()
for app_name, application in project.applications.items():
if not application.pages_support(ReactPageExtension):
continue
has_react = True
react_pages_by_app[app_name] = {}
app_cmp_name = to_camel_case(app_name)
for name, page in application.pages.items():
ext = page.get_own_or_parent_extension(ReactPageExtension)
if not ext:
continue
page_cmp_name = to_camel_case(page.name)
name = f'{page_cmp_name}Page'
name_ui = f'{page_cmp_name}Ui'
if page.uri:
react_pages.append((app_cmp_name, name_ui, format_uri(page.uri)))
react_pages_by_app[app_name][page.name] = format_uri(page.uri)
page_component_name = f'Page{page_cmp_name}'
react_components_imports = ImportSet()
react_components_imports.add('react', 'React')
# react_components_imports.add(f'../Components/{el.tag}', '*' + el.tag)
imports = ImportSet()
imports.add('react', 'React')
imports.add(f'../../layout', 'BaseLayout')
imports.add(f'../layouts/{app_cmp_name}Layout', f'{app_cmp_name}Layout')
react_components_imports.add(f'../Reducers/{page_component_name}Reducers',
f'*reloadPageDataAction')
index_imports.add(f'./pages/{name}', '*' + name)
wrappers = [
f'PageContextProvider',
f'BaseLayout',
f'{app_cmp_name}Layout'
]
def wrap(source, cmp_list):
source = indent(source, ' ')
if not len(cmp_list):
return dedent(source)
w = cmp_list.pop()
return wrap(f'<{w} {{...this.props}} page={{this}}>\n{source}\n</{w}>', wrappers)
blocks_rendered = {}
for area, blocks in page.get_blocks(platform=ReactPageExtension).items():
blocks_rendered[area] = []
for index, block in enumerate(blocks):
rendered = block.render(area=area, index=index)
try:
import_section, rendered = rendered.split('<', maxsplit=1)
except ValueError:
pass
else:
for im in REACT_IMPORT_RX.findall(import_section):
what = ['*' + x.strip() for x in im[3].split(',') if x != '']
def_what = im[0].strip(' ,')
if len(def_what):
what.append(def_what)
src = im[5]
imports.add(src, *what)
rendered = '<' + rendered
blocks_rendered[area].append((block.ref, rendered))
streams = page.list_own_or_parent_extensions(StreamPageExtension)
generate_file(target_path, 'react/src/{}/pages/{}.jsx'.format(app_name, name),
'react/page.jsx.tpl', {
'imports': imports.import_sting_js(),
'name': name,
'page': page,
'blocks': blocks_rendered,
'ext': ReactPageExtension,
'app_name': app_name,
'streams': streams,
'to_camel_case': to_camel_case,
'source': wrap('{this.renderContent()}', wrappers)
})
ui_imports = ImportSet()
ui_imports.add('react', 'React')
ui_imports.add(f'../pages/{name}', name)
generate_file(target_path, f'react/src/{app_name}/ui/{name_ui}.jsx',
'react/ui.jsx.tpl', {
'imports': ui_imports.import_sting_js(),
'parent': name,
'name': name_ui
})
generate_file(target_path, f'react/src/{app_name}/layouts/{app_cmp_name}Layout.jsx',
'react/cmp.jsx.tpl', {
'name': f'{app_cmp_name}Layout',
'source': '<>{children}</>'
})
if not len(react_pages_by_app[app_name]):
del react_pages_by_app[app_name]
# sort react_pages
react_pages = sorted(react_pages)
# sort routes
temp_dict = {}
for key in sorted(react_pages_by_app):
temp_dict.update({key: react_pages_by_app[key]})
react_pages_by_app = temp_dict
if has_react:
generate_file(target_path, f'react/src/layout.jsx',
'react/cmp.jsx.tpl', {
'name': f'BaseLayout',
'source': '<>{children}</>'
})
generate_file(target_path, f'react/src/index.scss', 'react/index.scss.tpl', {
'pages': react_pages
})
generate_file(target_path, 'app/utils/react.py', 'react/utils.py.tpl')
generate_file(target_path, 'react/src/index.jsx', 'react/index.jsx.tpl')
generate_file(target_path, 'react/src/state.jsx', 'react/state.js.tpl')
generate_file(target_path, 'react/src/streams.jsx', 'react/streams.js.tpl')
generate_file(target_path, 'react/src/reducer.jsx', 'react/reducer.js.tpl', {
'name': 'Root'
})
generate_file(target_path, 'react/src/router.jsx', 'react/router.jsx.tpl', {
'name': 'Root',
'pages': react_pages,
'pages_index': react_pages_by_app,
'to_camel_case': to_camel_case
})
generate_file(target_path, 'react/package.json', 'react/package.json.tpl', {
'packages': {}
})
generate_file(target_path, 'react/webpack.config.js', 'react/webpack.config.js.tpl', {
'entries': {'all': f'./src/index.jsx'}
        })

# --- end of zmei_generator/contrib/react/generator/react.py ---
from django.conf.locale import LANG_INFO
from zmei_generator.domain.application_def import ApplicationDef
from zmei_generator.domain.extensions import ApplicationExtension
from zmei_generator.parser.errors import ValidationError
from zmei_generator.parser.gen.ZmeiLangParser import ZmeiLangParser
from zmei_generator.parser.utils import BaseListener
class LangsAppExtension(ApplicationExtension):
def __init__(self, application):
super().__init__(application)
self.langs = None
    @classmethod
    def get_name(cls):
        return 'langs'
def get_required_deps(self):
return ['django-modeltranslation==0.13-beta1']
def get_required_apps(self):
return ['modeltranslation']
@classmethod
def write_settings(cls, apps, f):
_langs = {}
for app in apps.values():
if not app.supports(LangsAppExtension):
continue
for code in app[LangsAppExtension].langs:
try:
name = LANG_INFO[code]['name_local'].capitalize()
_langs[code] = name
except KeyError:
                    raise ValidationError(0, 0, f'Unknown language {code}. Available options are: {", ".join(LANG_INFO.keys())}')
if len(_langs) == 0:
_langs = {'en': 'English'}
langs = tuple([(code, name) for code, name in _langs.items()])
f.write('\n# LANGUAGE SETTINGS')
f.write('\nLANGUAGES = {}'.format(repr(langs)))
f.write('\nMAIN_LANGUAGE = {}\n'.format(repr(langs[0][0])))
f.write('\nMIDDLEWARE += ["django.middleware.locale.LocaleMiddleware"]')
f.write("\nLOCALE_PATHS = [os.path.join(BASE_DIR, 'locale')]")
class LangsAppExtensionParserListener(BaseListener):
def __init__(self, application: ApplicationDef) -> None:
super().__init__(application)
self.langs_extension = None
def enterAn_langs(self, ctx: ZmeiLangParser.An_langsContext):
self.langs_extension = LangsAppExtension(self.application)
self.application.extensions.append(self.langs_extension)
self.application.register_extension(self.langs_extension)
def exitAn_langs(self, ctx: ZmeiLangParser.An_langsContext):
self.langs_extension = None
def enterAn_langs_list(self, ctx: ZmeiLangParser.An_langs_listContext):
self.langs_extension.langs = [x.strip() for x in ctx.getText().split(',')] | zmei-cli | /zmei-cli-2.1.10.tar.gz/zmei-cli-2.1.10/zmei_generator/contrib/web/extensions/application/langs.py | langs.py |
import re
from copy import copy
from zmei_generator.contrib.web.extensions.page.block import InlineTemplatePageBlock
from zmei_generator.parser.errors import GlobalScopeValidationError as ValidationException, GlobalScopeValidationError
from zmei_generator.domain.page_def import PageDef
from zmei_generator.domain.page_expression import PageExpression
from zmei_generator.domain.extensions import PageExtension
def to_camel_case(name):
return ''.join([x.capitalize() for x in name.split('_')])
def to_camel_case_decap(name):
name = ''.join([x.capitalize() for x in name.split('_')])
return name[0].lower() + name[1:]
def to_camel_case_path(name):
return '.'.join([to_camel_case_decap(x) for x in name.split('.')])
def format_field_names(names):
return '{%s}' % ', '.join([f"'{key}': _('{val}')" for key, val in names.items()])
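The naming helpers above drive how model and page names are rendered across the generated Django, React, and Flutter code. A standalone sketch of their behavior (the helpers are copied here so the example is self-contained; note that `_(...)` in `format_field_names`'s output is emitted as literal text for Django's gettext to evaluate later, not called here):

```python
def to_camel_case(name):
    # snake_case -> UpperCamelCase
    return ''.join(x.capitalize() for x in name.split('_'))

def to_camel_case_decap(name):
    # snake_case -> lowerCamelCase
    name = ''.join(x.capitalize() for x in name.split('_'))
    return name[0].lower() + name[1:]

def to_camel_case_path(name):
    # dotted.snake_path -> dotted.lowerCamel path
    return '.'.join(to_camel_case_decap(x) for x in name.split('.'))

def format_field_names(names):
    # renders a dict literal whose values are wrapped in gettext calls
    return '{%s}' % ', '.join(f"'{key}': _('{val}')" for key, val in names.items())

print(to_camel_case('crud_list'))          # CrudList
print(to_camel_case_decap('crud_list'))    # crudList
print(to_camel_case_path('app.my_page'))   # app.myPage
print(format_field_names({'title': 'Title'}))  # {'title': _('Title')}
```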
class CrudField(object):
def __init__(self) -> None:
self.spec = None
self.filter_expr = None
class CrudParams(object):
def __init__(self) -> None:
self.model = None
self.filter_expr = None
self.spec = None
self.query = None
self.fields = None
self.list_fields = None
self.list_type = 'stacked'
self.header = True
self.skip = None
self.block_name = None
self.theme = None
self.url_prefix = None
self.pk_param = None
self.object_expr = None
self.can_edit = None
self.item_name = None
self.link_extension = None
self.link_suffix = None
self.next_page = {}
class CrudPageExtension(PageExtension):
crud_page = None
list_type = None
header = None
link_extension = None
model_name = None
model_name_plural = None
link_extension_params = None
name_prefix = None
name_suffix = None
block_name = None
theme = 'default'
url_prefix = None
pk_param = None
crud_pages = None
app_name = None
model_cls = None
fields = None
list_fields = None
formatted_query = None
context_object_name = None
object_expr = None
can_edit = None
query = None
next_page_expr = None
item_name = None
items_name = None
field_filters = None
create_list = True
link_suffix = ''
parent_base_page = None
@classmethod
def get_name(cls):
return 'crud'
def __init__(self, page, params=None, descriptor=None):
self.descriptor = descriptor
super().__init__(page)
self.params = params or CrudParams()
self.parent_crud = None
if page.defined_uri is None and not page.override: # override -> allow empty urls for override @crud's functionality
            raise ValidationException('The @crud annotation requires the page to have a url')
self.page.template_libs.append('i18n')
def format_extension_value(self, current_value):
if not current_value:
current_value = {}
descriptor = self.descriptor or '_'
if descriptor not in current_value:
current_value[descriptor] = {}
if self.get_name() in current_value[descriptor]:
            raise GlobalScopeValidationError("Can not add another crud with the same descriptor. Consider specifying "
                                             "explicit descriptors for subsequent cruds")
current_value[descriptor][self.get_name()] = self
return current_value
def get_extension_type(self):
return CrudPageExtension
def post_process(self):
self.page.register_extension(self)
self.prepare_environment(self.params, self.page)
self.build_pages(self.page)
def prepare_environment(self, crud, page):
if self.crud_page in crud.next_page:
next_page = crud.next_page[self.crud_page]
elif 'all' in crud.next_page:
next_page = crud.next_page['all']
else:
next_page = None
        if next_page:
            self.next_page_expr = next_page
        else:
            self.next_page_expr = "url=self.request.get_full_path()" + self.link_suffix
self.field_filters = {}
if crud.fields:
all_fields = []
for field in crud.fields:
all_fields.append(field.spec)
if field.filter_expr and not field.spec.startswith('^'):
self.field_filters[field.spec] = field.filter_expr
crud_fields = all_fields
else:
crud_fields = None
# appname, model_cls, fields
if crud.model.startswith('#'):
model = page.application.resolve_model(crud.model[1:])
self.app_name = model.application.app_name + '.models'
self.model_cls = model.class_name
self.model_name = model.name or model.class_name
self.model_name_plural = model.name_plural or f'{self.model_name} items'
self.fields = {field.name: field.verbose_name or field.name.replace('_', ' ').capitalize() for field in
model.filter_fields(crud_fields or '*') if not field.read_only}
self.list_fields = {field.name: field.verbose_name or field.name.replace('_', ' ').capitalize() for field in
model.filter_fields(crud.list_fields or crud_fields or '*')}
else:
parts = crud.model.split('.')
self.app_name = '.'.join(parts[:-1]) + '.models'
self.model_cls = parts[-1]
self.model_name = self.model_cls
self.model_name_plural = f'{self.model_name} items'
self.fields = {field: field.replace('_', ' ').capitalize() for field in crud_fields}
self.list_fields = {field: field.replace('_', ' ').capitalize() for field in
crud.list_fields or crud_fields}
if not self.fields:
raise ValidationException('@crud -> fields for external models are required: {}'.format(crud.model))
# link extension
if crud.link_extension:
self.link_extension = crud.link_extension
link_extension_params = []
            for item in re.split(r'\s+', self.link_extension):
key, val = item.split('=')
link_extension_params.append(f"'{key}': {val}")
self.link_extension_params = ', '.join(link_extension_params)
else:
self.link_extension = ''
self.link_extension_params = ''
if crud.link_suffix:
self.link_suffix = crud.link_suffix
# name_prefix
if self.descriptor:
self.name_prefix = f'{self.descriptor}_'
self.name_suffix = f'_{self.descriptor}'
else:
self.name_prefix = ''
self.name_suffix = ''
self.context_object_name = 'item'
# block name
if crud.block_name:
self.block_name = crud.block_name
else:
self.block_name = 'content'
# crud theme
if crud.theme:
self.theme = crud.theme
if crud.list_type:
self.list_type = crud.list_type
self.header = crud.header
# url prefix
if crud.url_prefix:
self.url_prefix = crud.url_prefix
if not self.url_prefix.endswith('/'):
self.url_prefix = self.url_prefix + '/'
elif self.descriptor:
self.url_prefix = f'{self.descriptor}/'
else:
self.url_prefix = ''
if page.defined_uri:
if not page.defined_uri.endswith('/'):
self.url_prefix = '/' + self.url_prefix
# pk
        if crud.pk_param:
            self.pk_param = crud.pk_param
        else:
            self.pk_param = f'{self.name_prefix}pk'
self.item_name = crud.item_name or f"{self.name_prefix}item"
self.items_name = f"{self.item_name}_list"
# formatted_query
if crud.query:
self.query = crud.query.strip()
self.formatted_query = '.filter({})'.format(self.query)
else:
self.formatted_query = '.all()'
# object
if crud.object_expr:
self.object_expr = crud.object_expr
else:
self.object_expr = f'{self.model_cls}.objects{self.formatted_query}.get(pk=url.{self.pk_param})'
# auth
if crud.can_edit:
self.can_edit = crud.can_edit
else:
self.can_edit = repr(True)
if self.descriptor:
self.can_edit_item = f'{self.descriptor}_can_edit'
else:
self.can_edit_item = f'can_edit'
if crud.skip:
self.create_list = 'list' not in list(crud.skip)
else:
self.create_list = True
# pages that are not needed
self.crud_pages = [
x for x in ['detail', 'create', 'edit', 'delete', 'list'] if x not in list(crud.skip or [])
]
if self.parent_crud:
link_crud = self.parent_crud
link_page = self.parent_base_page
else:
link_crud = self
link_page = self.page
self.links = {x: f"{link_page.application.app_name}.{link_page.name}" f"{link_crud.name_suffix}_{x}" for x in
link_crud.crud_pages}
if self.create_list:
self.links['list'] = f"{link_page.application.app_name}.{link_page.name}"
if self.parent_base_page and 'list' in self.links:
self.links['parent'] = self.links['list']
def format_link(self, kind):
if kind not in self.links:
return ''
url = repr(self.links[kind])
for param in self.page.get_uri_params():
if param != self.pk_param:
url += f' {param}=url.{param}'
if kind in ('edit', 'detail', 'delete'):
url += f' {self.pk_param}={self.item_name}.pk'
return "{% url " + url + " %}" + self.link_suffix
def format_link_django(self, kind):
if kind not in self.links:
return ''
params = []
for param in self.page.get_uri_params():
if param != self.pk_param:
params.append(f'"{param}":url.{param}')
if kind in ('edit', 'detail', 'delete'):
params.append(f'"{self.pk_param}":{self.item_name}.pk')
url = f"reverse_lazy({repr(self.links[kind])}, kwargs={{{','.join(params)}}})"
if self.link_suffix:
url += ' + ' + repr(self.link_suffix)
return url
def format_link_react(self, kind):
if kind not in self.links:
return ''
params = []
for param in self.page.get_uri_params():
if param != self.pk_param:
params.append(f'"{param}":url.{param}')
if kind in ('edit', 'detail', 'delete'):
params.append(f'"{self.pk_param}": this.props.store.data.{self.item_name}? this.props.store.data.{self.item_name}.id : {self.item_name}.id')
url = f"reverse(routes.{self.links[kind]}, {{{','.join(params)}}})"
if self.link_suffix:
url += ' + ' + repr(self.link_suffix)
return url
def format_link_flutter(self, kind):
if kind not in self.links:
return ''
params = []
for param in self.page.get_uri_params():
if param != self.pk_param:
params.append(f'{to_camel_case(param)}:url.{param}')
if kind in ('edit', 'detail', 'delete'):
params.append(f"{to_camel_case_decap(self.pk_param)}: {to_camel_case_decap(self.item_name)}['id']")
url = f"App.url.{to_camel_case_path(self.links[kind])}({','.join(params)})"
if self.link_suffix:
url += ' + ' + repr(self.link_suffix)
return url
def build_pages(self, base_page):
from zmei_generator.contrib.web.extensions.page.crud_list import build_list_page
for crud_page in self.crud_pages:
if crud_page == 'list':
build_list_page(self, base_page)
continue
crud_page_name = f"{base_page.name}{self.name_suffix}_{crud_page}"
if crud_page_name in base_page.application.pages:
append = True
new_page = base_page.application.pages[crud_page_name]
else:
append = False
new_page = PageDef(self.page.application)
new_page.parent_name = base_page.name
new_page.name = crud_page_name
base_page.application.pages[new_page.name] = new_page
# copy extensions
for ext in base_page.get_extensions():
if isinstance(ext, PageExtension) and ext.can_inherit:
new_page.register_extension(ext)
if crud_page == 'create':
new_page.set_uri(f"./{self.url_prefix}{crud_page}")
else:
new_page.set_uri(f"./{self.url_prefix}<{self.pk_param}>/{crud_page}")
# after we have assigned uri, we can remove override flag
# new_page.override = False
new_page.template_libs.append('i18n')
new_page.auto_page = True
params = copy(self.params)
link_extension_params = ', '.join([f"{x}=url.{x}" for x in self.page.get_uri_params() if x != self.pk_param])
if crud_page not in params.next_page:
if 'all' not in params.next_page:
params.next_page[
crud_page] = f"'{base_page.application.app_name}.{base_page.name}', {link_extension_params}"
crud = crud_cls_by_name(crud_page)(new_page, params=params, descriptor=self.descriptor,
parent_crud=self, parent_base_page=base_page, append=append)
base_page.application.extensions.append(crud)
def crud_cls_by_name(name):
from zmei_generator.contrib.web.extensions.page.crud_create import CrudCreatePageExtension
from zmei_generator.contrib.web.extensions.page.crud_delete import CrudDeletePageExtension
from zmei_generator.contrib.web.extensions.page.crud_detail import CrudDetailPageExtension
from zmei_generator.contrib.web.extensions.page.crud_edit import CrudEditPageExtension
from zmei_generator.contrib.web.extensions.page.crud_list import CrudListPageExtension
return dict((
("create", CrudCreatePageExtension),
("delete", CrudDeletePageExtension),
("detail", CrudDetailPageExtension),
("edit", CrudEditPageExtension),
("list", CrudListPageExtension),
))[name]
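`crud_cls_by_name` above is a dict-based factory: the crud page kind selects the extension class, and an unknown kind raises `KeyError`. A minimal sketch of the same dispatch pattern with hypothetical stand-in classes (the real page-extension classes need a parsed page definition to instantiate, so they are not used here):

```python
class CreateView:
    pass

class DeleteView:
    pass

def view_cls_by_name(name):
    # same shape as crud_cls_by_name: a literal mapping, indexed directly,
    # so an unsupported name fails fast with KeyError
    return dict((
        ("create", CreateView),
        ("delete", DeleteView),
    ))[name]

print(view_cls_by_name("create").__name__)  # CreateView
```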
class BaseCrudSubpageExtension(CrudPageExtension):
crud_page = None
def __init__(self, page, params=None, descriptor=None, parent_crud=None, parent_base_page=None, append=False):
super().__init__(page, params, descriptor)
self.append = append
self.parent_crud = parent_crud
self.parent_base_page = parent_base_page
def build_pages(self, base_page):
pass | zmei-cli | /zmei-cli-2.1.10.tar.gz/zmei-cli-2.1.10/zmei_generator/contrib/web/extensions/page/crud.py | crud.py |
from zmei_generator.domain.application_def import ApplicationDef
from zmei_generator.domain.extensions import PageExtension
from zmei_generator.contrib.web.extensions.page.block import InlineTemplatePageBlock
from zmei_generator.parser.errors import ValidationError
from zmei_generator.parser.gen.ZmeiLangParser import ZmeiLangParser
from zmei_generator.parser.utils import BaseListener
class MenuPageExtension(PageExtension):
def __init__(self, page) -> None:
super().__init__(page)
self.descriptor = 'main'
self.items = []
page.imports.append(
('django.urls', 'reverse_lazy')
)
self.page_refs = []
def format_extension_value(self, current_value):
if not current_value:
current_value = {}
descriptor = self.descriptor or '_'
if descriptor not in current_value:
current_value[descriptor] = {}
current_value[descriptor] = self
return current_value
def render_ref(self, item):
if item.page:
page = self.page.application.resolve_page(item.page)
if not page.uri:
                raise ValidationError(0, 0, f"You are trying to add page {item.page} to a menu, but the page has no url defined.")
self.page_refs.append(
(item.page, item.ref, page)
)
return f"{page.application.app_name}.{page.name}"
def render_url(self, item):
if item.page:
return f"reverse_lazy('{self.render_ref(item)}')"
elif item.url:
return item.url
else:
            return item.expr
def post_process(self):
menu_code = self.page.page_code or '\n'
menu_code += f"if 'menu' not in data:\n data['menu'] = {{}}\n\n"
menu_code += f"data['menu']['{self.descriptor}'] = {{\n"
menu_code += "'active': None,\n"
menu_code += "'items': {\n"
for item in self.items:
if item.args:
args = ', \'args\': ' + repr(item.args)
else:
args = ''
menu_code += f"'{item.ref}': {{'label': _({repr(item.label)}), 'link': {self.render_url(item)}{args} }},\n"
menu_code += "}\n"
menu_code += "}\n"
self.page.page_code = menu_code
for page_ref, item_ref, page in self.page_refs:
page.methods[f'activate_{self.descriptor}_menu'] = \
f"kwargs['menu']['{self.descriptor}']['items']['{item_ref}']['active'] = True\n" \
f"kwargs['menu']['{self.descriptor}']['active'] = kwargs['menu']['{self.descriptor}']['items']['{item_ref}']"
code = page.page_code or '\n'
code += f"self.activate_{self.descriptor}_menu(menu=data['menu'])\n"
page.page_code = code
self.page.add_block(
f"menu_{self.descriptor}",
InlineTemplatePageBlock(f"theme/menu[_{self.descriptor}].html", {
'menu_descriptor': self.descriptor
}, ref=f"menu_{self.descriptor}")
)
class MenuItem(object):
def __init__(self) -> None:
super().__init__()
self.ref = None
self.label = None
self.url = None
self.page = None
self.expr = None
self.args = {}
class MenuPageExtensionParserListener(BaseListener):
def __init__(self, application: ApplicationDef) -> None:
super().__init__(application)
self.menu = None
self.menu_item = None
def enterAn_menu(self, ctx: ZmeiLangParser.An_menuContext):
self.menu = MenuPageExtension(self.page)
extension = self.menu
self.application.extensions.append(
extension
)
def exitAn_menu(self, ctx: ZmeiLangParser.An_menuContext):
self.page.register_extension(self.menu)
def enterAn_menu_descriptor(self, ctx: ZmeiLangParser.An_menu_descriptorContext):
self.menu.descriptor = ctx.getText()
def enterAn_menu_item(self, ctx: ZmeiLangParser.An_menu_itemContext):
self.menu_item = MenuItem()
self.menu_item.ref = f'item_{len(self.menu.items)}'
self.menu.items.append(self.menu_item)
def enterAn_menu_label(self, ctx: ZmeiLangParser.An_menu_labelContext):
self.menu_item.label = ctx.getText().strip('\'"')
def enterAn_menu_item_url(self, ctx: ZmeiLangParser.An_menu_item_urlContext):
self.menu_item.url = ctx.getText()
def enterAn_menu_item_page(self, ctx: ZmeiLangParser.An_menu_item_pageContext):
self.menu_item.page = ctx.an_menu_item_page_ref().getText()
def enterAn_menu_item_code(self, ctx: ZmeiLangParser.An_menu_item_codeContext):
self.menu_item.expr = ctx.code_line().PYTHON_CODE().getText().strip()
def enterAn_menu_item_arg(self, ctx: ZmeiLangParser.An_menu_item_argContext):
self.menu_item.args[ctx.an_menu_item_arg_key().getText()] = ctx.an_menu_item_arg_val().getText().strip('\'"') | zmei-cli | /zmei-cli-2.1.10.tar.gz/zmei-cli-2.1.10/zmei_generator/contrib/web/extensions/page/menu.py | menu.py |
from textwrap import dedent
from zmei_generator.domain.page_def import PageDef
from zmei_generator.domain.page_expression import PageExpression
from zmei_generator.contrib.web.extensions.page.block import InlineTemplatePageBlock
from zmei_generator.contrib.web.extensions.page.crud import BaseCrudSubpageExtension
from zmei_generator.contrib.web.extensions.page.crud_parser import CrudBasePageExtensionParserListener
from zmei_generator.parser.gen.ZmeiLangParser import ZmeiLangParser
class CrudCreatePageExtension(BaseCrudSubpageExtension):
@classmethod
def get_name(cls):
return 'crud_create'
@property
def crud_page(self):
return 'create'
def get_form_class(self):
return 'ModelForm'
def get_form_name(self):
return ''.join(
[x.capitalize() for x in f"{self.model_cls}_{self.name_prefix}{self.get_name()[5:]}".split('_')]) + 'Form'
def get_form_code(self):
        """
        Return the body of the generated ModelForm subclass
        (form prefix and the Meta class with model/fields).
        """
fields = self.get_model_fields()
return dedent(f"""
prefix = {repr(self.descriptor)}
class Meta:
model = {self.model_cls}
fields = {fields}
""")
def get_model_fields(self):
return '[' + ', '.join([repr(x) for x in self.fields]) + ']'
def get_form_init(self):
return f"request.POST if request.method == 'POST' else None, request.FILES if request.method == 'POST' else None, instance={self.model_cls}({self.query})"
def get_form_action(self):
form_name = f'form{self.name_suffix}'
return f"""
if {form_name}.is_valid():
model = {form_name}.save()
"""
def format_next_page_expr(self):
return f"""
raise RedirectAction({self.next_page_expr})
"""
def build_pages(self, base_page: PageDef):
base_page.imports.append(
('django.forms', 'ModelForm')
)
base_page.imports.append(
('app.utils.views', 'RedirectAction')
)
# Form requires post
# we handle post with same method
base_page.allow_post = True
# form name contains prefix in case of several forms are here
form_name = f'form{self.name_suffix}'
items = {}
items[form_name] = PageExpression(
form_name, f"{self.get_form_name()}({self.get_form_init()})", base_page)
code = f"""
if request.method == "POST":
{self.get_form_action()}
"""
if self.next_page_expr:
base_page.imports.append(('app.utils.views', 'RedirectAction'))
code += self.format_next_page_expr()
base_page.add_form(
self.get_form_name(),
self
)
base_page.add_block(
self.block_name,
InlineTemplatePageBlock(self.get_template_name(), {
'page': base_page,
'crud': self,
'form_name': form_name,
}, ref=f"{self.get_name()}{self.name_suffix}"),
append=self.append
)
base_page.page_code = base_page.page_code or ''
if self.append:
base_page.page_code = dedent(code) + base_page.page_code
items.update(base_page.page_items)
base_page.page_items = items
else:
base_page.page_code += dedent(code)
base_page.page_items.update(items)
def get_template_name(self):
return "theme/crud_create.html"
class CrudCreatePageExtensionParserListener(CrudBasePageExtensionParserListener):
def enterAn_crud_create(self, ctx: ZmeiLangParser.An_crud_createContext):
self.extension_start(CrudCreatePageExtension, ctx)
def exitAn_crud_create(self, ctx: ZmeiLangParser.An_crud_createContext):
self.extension_end(CrudCreatePageExtension, ctx) | zmei-cli | /zmei-cli-2.1.10.tar.gz/zmei-cli-2.1.10/zmei_generator/contrib/web/extensions/page/crud_create.py | crud_create.py |
from zmei_generator.generator.utils import render_template, render_file
class Block(object):
def __init__(self, ref=None) -> None:
super().__init__()
self.sorting = 0
self.ref = ref
def render(self, area=None, index=None):
pass
class BlockPlaceholder(Block):
pass
class InlinePageBlock(Block):
def __init__(self, source, ref=None) -> None:
super().__init__(ref=ref)
self.source = source
def render(self, area=None, index=None):
return self.source
class InlineTemplatePageBlock(Block):
def __init__(self, template_name, context=None, ref=None) -> None:
super().__init__(ref=ref)
self.template_name = template_name
self.context = context
def render(self, area=None, index=None):
return render_template(self.template_name, context=self.context)
class InlineFilePageBlock(Block):
def __init__(self, template_name, ref=None) -> None:
super().__init__(ref=ref)
self.template_name = template_name
def render(self, area=None, index=None):
return render_file(self.template_name)
class ThemeFileIncludePageBlock(Block):
def __init__(self, page, source, template_name, ns, theme='default', with_expr=None, ref=None) -> None:
super().__init__(ref=ref)
self.template_name = f'{ns}/{theme}/{template_name}'
self.theme = theme
self.page = page
self.source = source
self.with_expr = with_expr or ' '
page.themed_files[self.template_name] = source
def render(self, area=None, index=None):
return f"{{% include '{self.template_name}'{self.with_expr} %}}"
#
# class BlocksPageExtension(PageExtension):
#
# @classmethod
# def get_name(cls):
# return 'block'
#
# def __init__(self, parsed_result, page):
# super().__init__(parsed_result, page)
#
# area_name = parsed_result.descriptor or 'content'
#
# blocks = [PageBlock(source=parsed_result.extension_body, area_name=area_name)]
#
# if area_name not in page.blocks:
# page.blocks[area_name] = blocks
# else:
# page.blocks[area_name] = page.blocks[area_name] + blocks | zmei-cli | /zmei-cli-2.1.10.tar.gz/zmei-cli-2.1.10/zmei_generator/contrib/web/extensions/page/block.py | block.py |
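The block classes in this file share one contract: `render(area, index)` returns a template fragment string that the generator splices into the page template. A self-contained sketch of that contract using the simplest variant, `InlinePageBlock` (reimplemented here without the `render_template`/`render_file` helpers, which the template-backed variants depend on):

```python
class Block:
    def __init__(self, ref=None):
        self.sorting = 0   # used by the generator to order blocks in an area
        self.ref = ref     # optional stable identifier for the block

    def render(self, area=None, index=None):
        # base blocks render nothing; subclasses return template fragments
        return ''

class InlinePageBlock(Block):
    def __init__(self, source, ref=None):
        super().__init__(ref=ref)
        self.source = source

    def render(self, area=None, index=None):
        # inline blocks just emit their stored source verbatim
        return self.source

block = InlinePageBlock('<h1>{% trans "Hello" %}</h1>', ref='content')
print(block.render())  # <h1>{% trans "Hello" %}</h1>
```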
from zmei_generator.domain.page_def import PageDef
from zmei_generator.domain.page_expression import PageExpression
from zmei_generator.contrib.web.extensions.page.crud_create import CrudCreatePageExtension
from zmei_generator.contrib.web.extensions.page.crud_parser import CrudBasePageExtensionParserListener
from zmei_generator.parser.gen.ZmeiLangParser import ZmeiLangParser
class CrudDeletePageExtension(CrudCreatePageExtension):
@classmethod
def get_name(cls):
return 'crud_delete'
@property
def crud_page(self):
return 'delete'
def get_model_fields(self):
return repr([])
def format_next_page_expr(self):
return ''
def get_form_action(self):
form_name = f'form{self.name_suffix}'
return f"""
{form_name}.full_clean()
try:
{self.item_name}.delete()
raise RedirectAction({self.next_page_expr})
except ProtectedError as e:
v.add_error(None, str(e))
"""
def get_form_init(self):
return f"request.POST if request.method == 'POST' else None, request.FILES if request.method == 'POST' else None, instance={self.item_name}"
def build_pages(self, base_page: PageDef):
base_page.imports.append(
('django.db.models', 'ProtectedError')
)
items = {}
items[self.item_name] = PageExpression(
self.item_name, self.object_expr, base_page)
if self.append:
items.update(base_page.page_items)
base_page.page_items = items
else:
base_page.page_items.update(items)
super().build_pages(base_page)
def get_template_name(self):
return "theme/crud_delete.html"
class CrudDeletePageExtensionParserListener(CrudBasePageExtensionParserListener):
def enterAn_crud_delete(self, ctx:ZmeiLangParser.An_crud_deleteContext):
self.extension_start(CrudDeletePageExtension, ctx)
def exitAn_crud_delete(self, ctx:ZmeiLangParser.An_crud_deleteContext):
self.extension_end(CrudDeletePageExtension, ctx) | zmei-cli | /zmei-cli-2.1.10.tar.gz/zmei-cli-2.1.10/zmei_generator/contrib/web/extensions/page/crud_delete.py | crud_delete.py |
import re
from zmei_generator.domain.application_def import ApplicationDef
from zmei_generator.domain.page_def import PageDef
from zmei_generator.contrib.web.extensions.page.crud import CrudField, CrudPageExtension
from zmei_generator.parser.gen.ZmeiLangParser import ZmeiLangParser
from zmei_generator.parser.utils import BaseListener
class CrudBasePageExtensionParserListener(BaseListener):
def __init__(self, application: ApplicationDef) -> None:
super().__init__(application)
self.crud = None
self.crud_field = None
self.page_stack = []
self.crud_stack = []
def extension_start(self, cls, ctx):
extension = cls(self.page)
self.application.extensions.append(
extension
)
# self.application.crud = True
if self.crud:
self.crud_stack.append(self.crud)
self.crud = extension
def extension_end(self, cls, ctx):
if len(self.crud_stack):
self.crud = self.crud_stack.pop()
else:
self.crud = None
def enterAn_crud_descriptor(self, ctx: ZmeiLangParser.An_crud_descriptorContext):
self.crud.descriptor = ctx.getText()
# Crud page overrides
def enterAn_crud_page_override(self, ctx: ZmeiLangParser.An_crud_page_overrideContext):
page = self.page
self.page_stack.append(self.page)
crud_name = ctx.an_crud_view_name().getText()
if self.crud.descriptor:
name_suffix = f'_{self.crud.descriptor}'
else:
name_suffix = ''
crud_page_name = f"{page.name}{name_suffix}_{crud_name}"
if crud_page_name not in page.application.pages:
self.page = PageDef(self.application, override=True)
self.page.page_items = {}
self.page.parent_name = page.name
self.page.name = crud_page_name
page.application.pages[crud_page_name] = self.page
else:
self.page = page.application.pages[crud_page_name]
def exitAn_crud_page_override(self, ctx: ZmeiLangParser.An_crud_page_overrideContext):
self.page = self.page_stack.pop()
# Params
def enterAn_crud_target_model(self, ctx: ZmeiLangParser.An_crud_target_modelContext):
self.crud.params.model = ctx.getText().strip()
def enterAn_crud_theme(self, ctx: ZmeiLangParser.An_crud_themeContext):
self.crud.params.theme = ctx.id_or_kw().getText().strip()
def enterAn_crud_skip(self, ctx: ZmeiLangParser.An_crud_skipContext):
        self.crud.params.skip = re.split(r'\s*,\s*', ctx.an_crud_skip_values().getText().strip())
def enterAn_crud_fields(self, ctx: ZmeiLangParser.An_crud_fieldsContext):
self.crud.params.fields = []
def enterAn_crud_field(self, ctx: ZmeiLangParser.An_crud_fieldContext):
field = CrudField()
field.spec = ctx.an_crud_field_spec().getText().strip()
if ctx.an_crud_field_filter():
field.filter_expr = self._get_code(ctx.an_crud_field_filter())
self.crud.params.fields.append(field)
def enterAn_crud_list_fields(self, ctx: ZmeiLangParser.An_crud_list_fieldsContext):
self.crud.params.list_fields = []
def enterAn_crud_list_field(self, ctx: ZmeiLangParser.An_crud_list_fieldContext):
field = ctx.an_crud_list_field_spec().getText().strip()
self.crud.params.list_fields.append(field)
def enterAn_crud_pk_param(self, ctx: ZmeiLangParser.An_crud_pk_paramContext):
self.crud.params.pk_param = ctx.id_or_kw().getText().strip()
def enterAn_crud_item_name(self, ctx: ZmeiLangParser.An_crud_item_nameContext):
self.crud.params.item_name = ctx.id_or_kw().getText().strip()
def enterAn_crud_block(self, ctx: ZmeiLangParser.An_crud_blockContext):
self.crud.params.block_name = ctx.id_or_kw().getText().strip()
def enterAn_crud_object_expr(self, ctx: ZmeiLangParser.An_crud_object_exprContext):
self.crud.params.object_expr = self._get_code(ctx)
def enterAn_crud_can_edit(self, ctx: ZmeiLangParser.An_crud_can_editContext):
self.crud.params.can_edit = self._get_code(ctx)
def exitAn_crud_list_type_var(self, ctx: ZmeiLangParser.An_crud_list_type_varContext):
self.crud.params.list_type = ctx.getText()
def exitAn_crud_header_enabled(self, ctx:ZmeiLangParser.An_crud_header_enabledContext):
self.crud.params.header = ctx.getText() == 'true'
def enterAn_crud_target_filter(self, ctx: ZmeiLangParser.An_crud_target_filterContext):
self.crud.params.query = self._get_code(ctx)
def enterAn_crud_url_prefix_val(self, ctx: ZmeiLangParser.An_crud_url_prefix_valContext):
self.crud.params.url_prefix = ctx.getText().strip(' "\'')
def enterAn_crud_link_suffix_val(self, ctx: ZmeiLangParser.An_crud_link_suffix_valContext):
self.crud.params.link_suffix = ctx.getText().strip(' "\'')
def enterAn_crud_next_page(self, ctx: ZmeiLangParser.An_crud_next_pageContext):
code = self._get_code(ctx)
self.add_next_page(code, ctx)
def enterAn_crud_next_page_url(self, ctx: ZmeiLangParser.An_crud_next_page_urlContext):
code = ctx.an_crud_next_page_url_val().getText()
self.add_next_page(code, ctx)
def add_next_page(self, code, ctx):
if ctx.an_crud_next_page_event_name():
event = ctx.an_crud_next_page_event_name().getText()
else:
event = 'all'
self.crud.params.next_page[event] = code
class CrudPageExtensionParserListener(CrudBasePageExtensionParserListener):
def enterAn_crud(self, ctx: ZmeiLangParser.An_crudContext):
self.extension_start(CrudPageExtension, ctx)
def exitAn_crud(self, ctx: ZmeiLangParser.An_crudContext):
self.extension_end(CrudPageExtension, ctx) | zmei-cli | /zmei-cli-2.1.10.tar.gz/zmei-cli-2.1.10/zmei_generator/contrib/web/extensions/page/crud_parser.py | crud_parser.py |
tokens = dict(
AN_THEME='@theme',
AN_PRIORITY='@@',
AN_FILE='@file',
AN_GET='@get',
AN_MENU='@menu',
AN_CRUD='@crud',
AN_CRUD_DETAIL='@crud_detail',
AN_CRUD_LIST='@crud_list',
AN_CRUD_DELETE='@crud_delete',
AN_CRUD_EDIT='@crud_edit',
AN_CRUD_CREATE='@crud_create',
AN_POST='@post',
AN_ERROR='@error',
AN_AUTH='@auth',
AN_MARKDOWN='@markdown',
AN_HTML='@html',
AN_TREE='@tree',
AN_DATE_TREE='@date_tree',
AN_MIXIN='@mixin',
AN_M2M_CHANGED='@m2m_changed',
AN_POST_DELETE='@post_delete',
AN_PRE_DELETE='@pre_delete',
AN_POST_SAVE='@post_save',
AN_PRE_SAVE='@pre_save',
AN_CLEAN='@clean',
AN_ORDER='@order',
AN_SORTABLE='@sortable',
AN_LANGS='@langs',
)
keywords = dict(
COL_FIELD_TYPE_LONGTEXT='text',
COL_FIELD_TYPE_HTML='html',
COL_FIELD_TYPE_HTML_MEDIA='html_media',
COL_FIELD_TYPE_FLOAT='float',
COL_FIELD_TYPE_DECIMAL='decimal',
COL_FIELD_TYPE_DATE='date',
COL_FIELD_TYPE_DATETIME='datetime',
COL_FIELD_TYPE_CREATE_TIME='create_time',
COL_FIELD_TYPE_UPDATE_TIME='update_time',
COL_FIELD_TYPE_IMAGE='image',
COL_FIELD_TYPE_FILE='file',
COL_FIELD_TYPE_FILER_IMAGE='filer_image',
COL_FIELD_TYPE_FILER_FILE='filer_file',
COL_FIELD_TYPE_FILER_FOLDER='filer_folder',
COL_FIELD_TYPE_FILER_IMAGE_FOLDER='filer_image_folder',
COL_FIELD_TYPE_TEXT='str',
COL_FIELD_TYPE_INT='int',
COL_FIELD_TYPE_SLUG='slug',
COL_FIELD_TYPE_BOOL='bool',
COL_FIELD_TYPE_ONE='one',
COL_FIELD_TYPE_ONE2ONE='one2one',
COL_FIELD_TYPE_MANY='many',
COL_FIELD_CHOICES='choices',
KW_THEME='theme',
KW_INSTALL='install',
KW_HEADER='header',
KW_SERVICES='services',
KW_SELENIUM_PYTEST='selenium_pytest',
KW_CHILD='child',
KW_FILTER_OUT='filter_out',
KW_FILTER_IN='filter_in',
KW_PAGE='page',
KW_LINK_SUFFIX='link_suffix',
KW_URL_PREFIX='url_prefix',
KW_CAN_EDIT='can_edit',
KW_OBJECT_EXPR='object_expr',
KW_BLOCK='block',
KW_ITEM_NAME='item_name',
KW_PK_PARAM='pk_param',
KW_LIST_FIELDS='list_fields',
KW_DELETE='delete',
KW_EDIT='edit',
KW_CREATE='create',
KW_DETAIL='detail',
KW_SKIP='skip',
KW_FROM='from',
KW_POLY_LIST='+polymorphic_list',
KW_CSS='css',
KW_JS='js',
KW_INLINE_TYPE_TABULAR='tabular',
KW_INLINE_TYPE_STACKED='stacked',
KW_INLINE_TYPE_POLYMORPHIC='polymorphic',
KW_INLINE='inline',
KW_TYPE='type',
KW_USER_FIELD='user_field',
KW_ANNOTATE='annotate',
KW_ON_CREATE='on_create',
KW_QUERY='query',
KW_AUTH='auth',
KW_COUNT='count',
KW_I18N='i18n',
KW_EXTENSION='extension',
KW_TABS='tabs',
KW_LIST='list',
KW_READ_ONLY='read_only',
KW_LIST_EDITABLE='list_editable',
KW_LIST_FILTER='list_filter',
KW_LIST_SEARCH='list_search',
KW_FIELDS='fields',
KW_IMPORT='import',
KW_AS='as'
)
# end of file: zmei_generator/contrib/web/grammar/tokens.py
import os
from zmei_generator.generator.utils import generate_file, format_file, generate_package
def generate(target_path, project):
# config
has_rest = False
has_i18n_pages = False
has_normal_pages = False
i18n_langs = []
# urls
imports = set()
urls = ['urlpatterns = [']
urls += [
" url(r'^admin/', admin.site.urls),",
]
# Sort applications alphabetically
temp_dict = {}
for key in sorted(project.applications):
temp_dict.update({key: project.applications[key]})
project.applications = temp_dict
for app_name, application in project.applications.items():
generate_package(app_name, path=target_path)
for app_name, application in project.applications.items():
for import_def, url_def in application.get_required_urls():
urls.append(url_def)
if import_def:
imports.add(import_def)
if application.pages:
# Sort urls alphabetically
temp_dict = {}
for key in sorted(application.pages):
temp_dict.update({key: application.pages[key]})
application.pages = temp_dict
for page in application.pages.values():
if page.i18n:
has_i18n_pages = True
else:
has_normal_pages = True
if has_normal_pages:
urls.append(f" url(r'^', include({app_name}.urls)),")
imports.add(f'{app_name}.urls')
urls.append(']')
for app_name, application in project.applications.items():
if any([page.i18n for page in application.pages.values()]):
urls += [
'urlpatterns += i18n_patterns(',
f" url(r'^', include({app_name}.urls_i18n)),",
")"
]
imports.add(f'{app_name}.urls_i18n')
# sort url imports
imports = sorted(imports)
# urls
with open(os.path.join(target_path, 'app/_urls.py'), 'w') as f:
f.write('from django.conf.urls import url, include\n')
f.write('from django.contrib import admin\n')
f.write('\n')
f.write('\n'.join([f'import {app_name}' for app_name in imports]))
if has_i18n_pages:
f.write('\nfrom django.conf.urls.i18n import i18n_patterns\n')
f.write('\n\n')
f.write('\n'.join(urls))
generate_file(target_path, 'app/urls.py', template_name='urls_main.py.tpl')
# settings
req_settings = {}
installed_apps = [app.app_name for app in project.applications.values() if
len(app.pages) > 0 or len(app.models) > 0]
extension_classes = list()
for application in sorted(project.applications.values(), key=lambda x: x.app_name):
installed_apps.extend(application.get_required_apps())
req_settings.update(application.get_required_settings())
for extension in application.extensions:
if type(extension) not in extension_classes:
extension_classes.append(type(extension))
# sort apps alphabetically
# prevent _settings repopulate
installed_apps = sorted(installed_apps)
# remove duplicates preserving order
seen = set()
seen_add = seen.add
installed_apps = [x for x in installed_apps if not (x in seen or seen_add(x))]
with open(os.path.join(target_path, 'app/settings.py'), 'r') as fb:
with open(os.path.join(target_path, 'app/_settings.py'), 'a') as f:
f.write(fb.read())
f.write('\nINSTALLED_APPS = [\n')
f.write("\n 'app',\n")
f.write('\n'.join([f" '{app_name}'," for app_name in installed_apps]))
f.write('\n] + INSTALLED_APPS\n\n') # settings
for key, val in req_settings.items():
f.write(f'{key} = {repr(val)}\n')
for extension in extension_classes:
extension.write_settings(project.applications, f)
generate_file(target_path, 'app/settings.py', template_name='settings.py.tpl')
format_file(target_path, 'app/_settings.py')
for extension in extension_classes:
extension.generate(project.applications, target_path)
# base template
generate_file(target_path, 'app/templates/base.html', template_name='theme/base.html')
requirements = [
'wheel',
'django>2',
]
for application in project.applications.values():
requirements.extend(application.get_required_deps())
requirements = list(sorted(set(requirements)))
# requirements
with open(os.path.join(target_path, '_requirements.txt'), 'w') as f:
f.write('\n'.join(requirements))
generate_file(target_path, 'requirements.txt', template_name='requirements.txt.tpl')
if i18n_langs:
for lang in i18n_langs:
os.makedirs(os.path.join(target_path, f'locale/{lang}'))
with open(os.path.join(target_path, f'locale/{lang}/readme.txt'), 'w') as f:
f.write('Collect translations:\n')
f.write('django-admin makemessages --all\n')
f.write('\n')
f.write('Compile translations:\n')
                f.write('django-admin compilemessages\n')
# end of file: zmei_generator/contrib/web/generator/common.py
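The `INSTALLED_APPS` assembly in `common.py` above first sorts the app list, then removes duplicates while preserving order using the `seen`/`seen_add` comprehension idiom. A minimal standalone sketch of that idiom (the function name here is illustrative, not part of the generator):

```python
def dedupe_preserving_order(items):
    """Remove duplicates from a list, keeping the first occurrence of each item."""
    seen = set()
    seen_add = seen.add  # bind the method once to avoid repeated attribute lookups
    return [x for x in items if not (x in seen or seen_add(x))]

apps = ['filer', 'app', 'ckeditor', 'app', 'filer']
print(dedupe_preserving_order(apps))  # → ['filer', 'app', 'ckeditor']
```

Unlike `sorted(set(items))`, this keeps the (already sorted) input order stable, which matters because the list is written verbatim into the generated `_settings.py`.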
from zmei_generator.generator.imports import ImportSet
from zmei_generator.generator.utils import generate_file, generate_urls_file, ThemeConfig, generate_package
def generate(target_path, project):
has_views = False
for app_name, application in project.applications.items():
if not len(application.pages): # and not application.rest:
continue
has_views = True
imports = ImportSet()
for col in application.models.values():
imports.add('{}.models'.format(app_name), col.class_name)
generated_templates = []
if len(application.pages.items()) > 0:
imports.add('django.views.generic', 'TemplateView')
for page in application.pages.values():
if not page.parent_name:
imports.add('app.utils.views', 'Data')
for import_spec in page.get_imports():
imports.add(*import_spec)
for item in page.page_items.values():
for import_spec in item.get_imports():
imports.add(*import_spec)
if page.template:
template = page.defined_template_name
if template:
template_name = f'{app_name}/templates/{template}'
generate_file(target_path, template_name, 'theme/default.html', {
'app_name': app_name,
'page': page,
'parent': application.resolve_page(page.parent_name) if page.parent_name else None
})
generated_templates.append(template_name)
generate_file(target_path, '{}/views.py'.format(app_name), 'views.py.tpl', {
'imports': imports.import_sting(),
'application': application,
'pages': application.pages.values()
})
# urls
pages_i18n = [page for page in application.pages.values() if page.has_uri and page.i18n]
if len(pages_i18n) > 0:
generate_urls_file(
target_path,
app_name,
application,
pages_i18n,
i18n=True
)
# urls i18n
pages = [page for page in application.pages.values() if page.has_uri and not page.i18n]
if application.pages:
generate_urls_file(
target_path,
app_name,
application,
pages,
i18n=False
)
if len(application.pages) > 0:
generate_file(target_path, '{}/templates/{}/_base.html'.format(app_name, app_name),
template_name='theme/base_app.html', context={
'application': application,
})
if has_views:
generate_package('app.utils', path=target_path)
        generate_file(target_path, 'app/utils/views.py', template_name='views.utils.py.tpl')
# end of file: zmei_generator/contrib/web/generator/views.py
from zmei_generator.domain.model_def import ModelDef
from zmei_generator.domain.application_def import FieldDeclaration
from zmei_generator.domain.field_def import FieldDef
from zmei_generator.generator.imports import ImportSet
from zmei_generator.generator.utils import generate_file
def generate(target_path, project):
for app_name, application in project.applications.items():
if not len(application.models):
continue
imports = ImportSet()
imports.add('django.db', 'models')
for model in application.models.values(): # type: ModelDef
for handlers, code in model.signal_handlers:
for signal_import in handlers:
imports.add('django.dispatch', 'receiver')
imports.add(*signal_import)
if model.polymorphic and model.tree:
imports.add('polymorphic_tree.models', 'PolymorphicMPTTModel', 'PolymorphicTreeForeignKey')
else:
if model.polymorphic:
imports.add('polymorphic.models', 'PolymorphicModel')
if model.tree:
imports.add('mptt.models', 'MPTTModel', 'TreeForeignKey')
if model.validators:
imports.add('django.core.exceptions', 'ValidationError')
if model.mixin_classes:
for import_decl in model.mixin_classes:
pkg, cls, alias = import_decl
if alias != cls:
cls = '{} as {}'.format(cls, alias)
imports.add(*(pkg, cls))
for field in model.own_fields: # type: FieldDef
model_field = field.get_model_field()
if model_field:
import_data, model_field = model_field # type: FieldDeclaration
for source, what in import_data:
imports.add(source, what)
generate_file(target_path, '{}/models.py'.format(app_name), 'models.py.tpl', {
'imports': imports.import_sting(),
'application': application,
'models': application.models.items()
        })
# end of file: zmei_generator/contrib/web/generator/models.py
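Both generators above funnel their `from X import Y` declarations through `ImportSet`, which lives in `zmei_generator.generator.imports` and is not shown in this section. A hedged sketch of the behaviour the call sites assume — collecting `(module, name)` pairs, deduplicating them, and rendering grouped import lines (note the real class spells its render method `import_sting`, as seen in the calls above):

```python
class ImportSetSketch:
    """Collects (module, name) pairs and renders grouped, deduplicated imports."""

    def __init__(self):
        self._imports = {}  # module -> ordered list of unique names

    def add(self, module, *names):
        bucket = self._imports.setdefault(module, [])
        for name in names:
            if name not in bucket:  # ignore duplicate registrations
                bucket.append(name)

    def import_string(self):
        return '\n'.join(
            f"from {module} import {', '.join(names)}"
            for module, names in self._imports.items()
        )

imports = ImportSetSketch()
imports.add('django.db', 'models')
imports.add('django.db', 'models')  # duplicate, silently ignored
imports.add('mptt.models', 'MPTTModel', 'TreeForeignKey')
print(imports.import_string())
```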
from zmei_generator.domain.model_def import ModelDef
from zmei_generator.domain.application_def import FieldDeclaration
from zmei_generator.domain.reference_field import ReferenceField
from zmei_generator.parser.errors import GlobalScopeValidationError as ValidationException
from zmei_generator.domain.field_def import FieldDef
from zmei_generator.generator.utils import gen_args
class RelationDef(FieldDef):
def __init__(self, model: ModelDef, field) -> None:
super().__init__(model, field)
self.ref_model_def = None # raw data
# filled in post process
self.related_name = None
self.ref_model = None
self.related_class = None
self.related_app = None
self.on_delete = 'PROTECT'
def post_process(self):
if self.ref_model_def:
ref_model = self.model.application.resolve_model(self.ref_model_def)
self.ref_model = ref_model
self.related_class = f'{ref_model.application.app_name}.{ref_model.class_name}'
if self.ref_model and self.related_name in self.ref_model.fields:
raise ValidationException('Can not override field with related field: {}'.format(self.related_name))
if self.ref_model and self.related_name:
self.ref_model.fields[self.related_name] = \
ReferenceField(self.ref_model, self.model, self.related_name, self)
@property
def qualifier(self):
return ''
@property
def is_many(self):
return None
@property
def is_many_reverse(self):
return None
@property
def widget(self):
return None
def get_rest_field(self):
return None
def get_rest_inline_model(self):
return self.ref_model
class RelationOneDef(RelationDef):
def prepare_field_arguemnts(self, own_args=None):
args = super().prepare_field_arguemnts(own_args)
args['on_delete'] = f'models.{self.on_delete}'
return args
def get_flutter_field(self):
if self.ref_model:
return f'{self.ref_model.class_name}'
else:
return 'dynamic'
def get_flutter_from_json(self, name):
if self.ref_model:
return f"{self.ref_model.class_name}.fromJson(data['{name}'])"
else:
return f"data['{name}']"
def get_model_field(self):
args = self.prepare_field_arguemnts({'related_name': self.related_name or '+'})
return FieldDeclaration(
[('django.db', 'models')],
'models.ForeignKey("{}", {})'.format(self.related_class, gen_args(args, ['on_delete']))
)
def get_admin_widget(self):
return None
# if select2 is installed
# return FieldDeclaration(
# [('django_select2.forms', 'Select2Widget')],
# 'Select2Widget'
# )
@property
def qualifier(self):
return '1'
@property
def is_many(self):
return False
@property
def is_many_reverse(self):
return True
class RelationOne2OneDef(RelationDef):
def get_flutter_field(self):
if self.ref_model:
return f'{self.ref_model.class_name}'
else:
return 'dynamic'
def get_flutter_from_json(self, name):
if self.ref_model:
return f"{self.ref_model.class_name}.fromJson(data['{name}'])"
else:
return f"data['{name}']"
def get_model_field(self):
args = self.prepare_field_arguemnts({'related_name': self.related_name or '+'})
return FieldDeclaration(
[('django.db', 'models')],
'models.OneToOneField("{}", {}, on_delete=models.CASCADE)'.format(self.related_class, gen_args(args, ['on_delete']))
)
def get_admin_widget(self):
return None
# return FieldDeclaration(
# [('django_select2.forms', 'Select2Widget')],
# 'Select2Widget'
# )
@property
def qualifier(self):
return '1'
@property
def is_many(self):
return False
@property
def is_many_reverse(self):
return False
class RelationManyDef(RelationDef):
def get_flutter_field(self):
if self.ref_model:
return f'List<{self.ref_model.class_name}>'
else:
return 'dynamic'
def get_flutter_from_json(self, name):
if self.ref_model:
return f"data['{name}'].map<{self.ref_model.class_name}>((item) => {self.ref_model.class_name}.fromJson(item)).toList()"
else:
return f"data['{name}']"
def get_model_field(self):
args = self.prepare_field_arguemnts({'related_name': self.related_name or '+'})
if 'null' in args:
del args['null']
return FieldDeclaration(
[('django.db', 'models')],
'models.ManyToManyField("{}", {})'.format(self.related_class, gen_args(args, ['on_delete']))
)
def get_admin_widget(self):
        return None  # TODO: requires installing a package and registering a URL, not implemented yet
# return FieldDeclaration(
# [('django_select2.forms', 'Select2MultipleWidget')],
# 'Select2MultipleWidget'
# )
@property
def admin_list_renderer(self):
return """return ', '.join(map(str, obj.{}.all()))""".format(self.name)
@property
def qualifier(self):
return '*'
@property
def is_many(self):
return True
@property
def is_many_reverse(self):
        return True
# end of file: zmei_generator/contrib/web/fields/relation.py
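The relation fields above render their keyword arguments through `gen_args`, imported from `zmei_generator.generator.utils` but not shown in this section. The call sites pass an argument dict plus a list of names whose values must be emitted as raw code expressions rather than quoted strings (e.g. `on_delete=models.CASCADE`). A sketch consistent with that usage (the function name and exact formatting are assumptions):

```python
def gen_args_sketch(args, raw_args=()):
    """Render a kwargs dict as Django field arguments.

    Values whose key appears in raw_args are emitted as-is (code
    expressions); everything else goes through repr(). Illustrative
    only -- the real gen_args lives in zmei_generator.generator.utils.
    """
    parts = []
    for key, val in args.items():
        rendered = val if key in raw_args else repr(val)
        parts.append(f'{key}={rendered}')
    return ', '.join(parts)

print(gen_args_sketch(
    {'related_name': '+', 'on_delete': 'models.CASCADE'},
    raw_args=['on_delete'],
))  # → related_name='+', on_delete=models.CASCADE
```

This is why `RelationManyDef.get_model_field` deletes the `null` key first: `ManyToManyField` rejects `null=...`, and any key left in the dict would be rendered into the generated field declaration.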
from zmei_generator.domain.application_def import FieldDeclaration
from zmei_generator.domain.model_def import ModelDef
from zmei_generator.parser.errors import GlobalScopeValidationError as ValidationException
from zmei_generator.domain.field_def import FieldDef, FieldConfig
from zmei_generator.generator.utils import gen_args
class DefaultTextMixin(FieldDef):
def get_flutter_field(self):
return 'String'
def prepare_field_arguemnts(self, own_args=None):
args = super().prepare_field_arguemnts(own_args)
if not self.extension_args_append and self.extension_args:
return args
# if not self.required and 'required' not in args:
# args['default'] = ''
return args
class TextFieldDef(DefaultTextMixin, FieldDef):
max_length = 100
choices = None
def get_model_field(self):
args = self.prepare_field_arguemnts({'max_length': self.max_length})
if self.choices:
args['choices'] = self.choices
return FieldDeclaration(
[('django.db', 'models')],
'models.CharField({})'.format(gen_args(args))
)
class SlugFieldDef(DefaultTextMixin, FieldDef):
def __init__(self, model: ModelDef, field: FieldConfig) -> None:
super().__init__(model, field)
self.field_names = []
def parse_options(self):
if isinstance(self.options, str) and self.options.strip() != '':
self.field_names = tuple([x.strip() for x in self.options.split(',')])
else:
raise ValidationException(
                'Slug field "{}" argument should be the names of other fields in the same model, separated by ","'.format(
self.name))
def get_model_field(self):
max_len = 0
# TODO: get validation back
# for field_name in self.field_names:
#
# if field_name not in self.model.all_and_inherited_fields_map:
# raise ValidationException(
# 'Slug field "{}" can not find field "{}" in the model'.format(self.name, field_name))
#
# target_field = self.model.all_and_inherited_fields_map[field_name]
#
# if not isinstance(target_field, TextFieldDef):
# raise ValidationException(
# 'Slug field "{}" target field is not of type text()'.format(self.name))
#
# max_len += target_field.max_length
args = self.prepare_field_arguemnts({'max_length': 100})
return FieldDeclaration(
[('django.db', 'models')],
'models.SlugField({})'.format(gen_args(args))
)
def get_prepopulated_from(self):
return self.field_names
class LongTextFieldDef(DefaultTextMixin, FieldDef):
def get_model_field(self):
args = self.prepare_field_arguemnts()
return FieldDeclaration(
[('django.db', 'models')],
'models.TextField({})'.format(gen_args(args))
)
class RichTextFieldDef(DefaultTextMixin, FieldDef):
def get_model_field(self):
args = self.prepare_field_arguemnts()
return FieldDeclaration(
[('ckeditor.fields', 'RichTextField')],
'RichTextField({})'.format(gen_args(args))
)
def get_required_deps(self):
return ['django-ckeditor']
def get_required_apps(self):
return ['ckeditor']
def get_required_settings(self):
return {
'CKEDITOR_UPLOAD_PATH': 'uploads/'
}
class RichTextFieldWithUploadDef(DefaultTextMixin, FieldDef):
def get_model_field(self):
args = self.prepare_field_arguemnts()
return FieldDeclaration(
[('ckeditor_uploader.fields', 'RichTextUploadingField')],
'RichTextUploadingField({})'.format(gen_args(args))
)
def get_required_deps(self):
return ['django-ckeditor', 'Pillow']
def get_required_apps(self):
return ['ckeditor', 'ckeditor_uploader']
def get_required_settings(self):
return {
'CKEDITOR_UPLOAD_PATH': 'uploads/'
}
def get_required_urls(self):
return [
(None, "url(r'^ckeditor/', include('ckeditor_uploader.urls')),")
        ]
# end of file: zmei_generator/contrib/web/fields/text.py
from zmei_generator.contrib.web.extensions.application.langs import LangsAppExtension
from zmei_generator.domain.application_def import ApplicationDef
from zmei_generator.domain.page_def import PageDef, PageFunction
from zmei_generator.domain.page_expression import PageExpression
from zmei_generator.parser.errors import LangsRequiredValidationError
from zmei_generator.parser.gen.ZmeiLangParser import ZmeiLangParser
from zmei_generator.parser.utils import BaseListener
class PageParserListener(BaseListener):
def __init__(self, application: ApplicationDef) -> None:
super().__init__(application)
self.page = None # type: PageDef
############################################
# Page
############################################
def enterPage(self, ctx: ZmeiLangParser.PageContext):
self.page = PageDef(self.application)
self.page.page_items = {}
self.page.name = ctx.page_header().page_name().getText()
base_name = ctx.page_header().page_base()
if base_name:
base_name = base_name.getText()
self.page.extend_name = base_name[-2] == '~'
self.page.parent_name = base_name[:-2]
if self.page.parent_name and self.page.extend_name:
self.page.name = f'{self.page.parent_name}_{self.page.name}'
self.application.pages[self.page.name] = self.page
def enterPage_alias_name(self, ctx: ZmeiLangParser.Page_alias_nameContext):
self.page.defined_url_alias = ctx.getText()
def enterPage_url(self, ctx: ZmeiLangParser.Page_urlContext):
url = ctx.getText().strip()
if url[0] == '$':
if not self.application.supports(LangsAppExtension):
raise LangsRequiredValidationError(ctx.start)
self.page.set_uri(url)
def enterPage_template(self, ctx: ZmeiLangParser.Page_templateContext):
tpl = ctx.getText().strip()
if '{' in tpl:
self.page.parsed_template_expr = tpl.strip('{}')
else:
self.page.parsed_template_name = tpl
def enterPage_field(self, ctx: ZmeiLangParser.Page_fieldContext):
field = ctx.page_field_name().getText()
val = ctx.page_field_code().getText()
expr = PageExpression(field, val, self.page)
self.page.page_items[field] = expr
def enterPage_function(self, ctx: ZmeiLangParser.Page_functionContext):
super().enterPage_function(ctx)
func = PageFunction()
func.name = ctx.page_function_name().getText()
if ctx.page_function_args():
all_args = ctx.page_function_args().getText().split(',')
func.out_args = [x.strip(' .') for x in all_args]
func.args = [x for x in all_args if x not in ('url', 'request')]
else:
func.args = []
if ctx.code_block():
func.body = self._get_code(ctx)
self.page.functions[func.name] = func
def enterPage_code(self, ctx: ZmeiLangParser.Page_codeContext):
self.page.page_code = self._get_code(ctx.python_code()) + '\n'
def exitPage(self, ctx: ZmeiLangParser.PageContext):
        self.page = None
# end of file: zmei_generator/contrib/web/parsers/page.py
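`enterPage_template` above dispatches on the presence of `{` to decide whether a declaration is a literal template name or a template expression, storing the expression with its braces stripped. The same classification in isolation (hypothetical helper name, mirroring the listener logic):

```python
def parse_template_decl(tpl):
    """Classify a page template declaration.

    Returns ('expr', code) for '{...}' declarations, ('name', tpl)
    for plain template names -- same branching as enterPage_template.
    """
    tpl = tpl.strip()
    if '{' in tpl:
        return 'expr', tpl.strip('{}')
    return 'name', tpl

print(parse_template_decl('index.html'))      # → ('name', 'index.html')
print(parse_template_decl('{template_expr}'))  # → ('expr', 'template_expr')
```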
from zmei_generator.contrib.celery.extensions.application.celery import CeleryAppExtensionParserListener
from zmei_generator.contrib.docker.extensions.application.docker import DockerAppExtensionParserListener
from zmei_generator.contrib.filer.extensions.application.filer import FilerAppExtensionParserListener
from zmei_generator.contrib.gitlab.extensions.application.gitlab import GitlabAppExtensionParserListener
from zmei_generator.contrib.react.extensions.page.react import ReactPageExtensionParserListener
from zmei_generator.contrib.web.extensions.application.file import FileAppExtensionParserListener
from zmei_generator.contrib.web.extensions.application.langs import LangsAppExtensionParserListener
from zmei_generator.contrib.web.extensions.application.theme import ThemeAppExtensionParserListener
from zmei_generator.contrib.web.extensions.page.crud_create import CrudCreatePageExtensionParserListener
from zmei_generator.contrib.web.extensions.page.crud_delete import CrudDeletePageExtensionParserListener
from zmei_generator.contrib.web.extensions.page.crud_detail import CrudDetailPageExtensionParserListener
from zmei_generator.contrib.web.extensions.page.crud_edit import CrudEditPageExtensionParserListener
from zmei_generator.contrib.web.extensions.page.crud_list import CrudListPageExtensionParserListener
from zmei_generator.contrib.web.extensions.page.crud_parser import CrudPageExtensionParserListener
from zmei_generator.contrib.web.extensions.page.html import HtmlPageExtensionParserListener
from zmei_generator.contrib.web.extensions.page.markdown import MarkdownPageExtensionParserListener
from zmei_generator.contrib.web.extensions.page.menu import MenuPageExtensionParserListener
from zmei_generator.contrib.web.extensions.page.placeholder import PlaceholderPageExtensionParserListener
from zmei_generator.contrib.web.parsers.fields import FieldsParserListener
from zmei_generator.contrib.web.parsers.imports import ImportsParserListener
from zmei_generator.contrib.web.parsers.model import ModelParserListener
from zmei_generator.contrib.web.parsers.page import PageParserListener
parsers = [
FieldsParserListener,
ImportsParserListener,
ModelParserListener,
PageParserListener,
# extensions
ThemeAppExtensionParserListener,
GitlabAppExtensionParserListener,
DockerAppExtensionParserListener,
FileAppExtensionParserListener,
CeleryAppExtensionParserListener,
LangsAppExtensionParserListener,
FilerAppExtensionParserListener,
MenuPageExtensionParserListener,
MarkdownPageExtensionParserListener,
PlaceholderPageExtensionParserListener,
ReactPageExtensionParserListener,
HtmlPageExtensionParserListener,
CrudPageExtensionParserListener,
CrudDetailPageExtensionParserListener,
CrudDeletePageExtensionParserListener,
CrudEditPageExtensionParserListener,
CrudListPageExtensionParserListener,
CrudCreatePageExtensionParserListener,
]
# end of file: zmei_generator/contrib/web/parsers/all_stage1.py
from zmei_generator.contrib.web.extensions.application.langs import LangsAppExtension
from zmei_generator.contrib.web.fields.custom import CustomFieldDef
from zmei_generator.contrib.web.fields.expression import ExpressionFieldDef
from zmei_generator.contrib.web.fields.text import TextFieldDef
from zmei_generator.domain.model_def import ModelDef
from zmei_generator.domain.application_def import ApplicationDef
from zmei_generator.domain.field_def import FieldConfig
from zmei_generator.parser.errors import LangsRequiredValidationError
from zmei_generator.parser.gen.ZmeiLangParser import ZmeiLangParser
from zmei_generator.parser.utils import BaseListener
class ModelParserListener(BaseListener):
def __init__(self, application: ApplicationDef) -> None:
super().__init__(application)
self.model = None # type: ModelDef
        self.field = None  # type: FieldDef
self.field_config = None # type: FieldConfig
############################################
# Model
############################################
def enterCol(self, ctx: ZmeiLangParser.ColContext):
self.model = ModelDef(self.application)
def enterCol_name(self, ctx: ZmeiLangParser.Col_nameContext):
ref = ctx.getText().strip()
if self.model.extend_name:
self.model.ref = '{}_{}'.format(self.model.parent.ref, ref)
else:
self.model.ref = ref
self.model.short_ref = ref
self.model.name = ' '.join(ref.split('_')).capitalize()
def enterCol_base_name(self, ctx: ZmeiLangParser.Col_base_nameContext):
base = ctx.getText()
separator = base[-2:]
base = base[:-2].strip()
self.model.extend_name = separator == '~>'
self.model.set_parent(base)
def enterCol_verbose_name(self, ctx: ZmeiLangParser.Col_verbose_nameContext):
name = ctx.getText()[1:].strip()
if '/' in name:
name, plural = name.split('/')
name = name.strip(' "\'')
plural = plural.strip(' "\'')
self.model.name_plural = plural
self.model.name = name.strip(' "\'')
def enterCol_str_expr(self, ctx: ZmeiLangParser.Col_str_exprContext):
self.model.to_string = ctx.getText().strip()[2:-1].strip()
def exitCol(self, ctx: ZmeiLangParser.ColContext):
self.application.models[self.model.ref] = self.model
self.model = None
############################################
# Model field
############################################
def enterCol_field(self, ctx: ZmeiLangParser.Col_fieldContext):
self.field_config = FieldConfig()
def enterCol_field_verbose_name(self, ctx: ZmeiLangParser.Col_field_verbose_nameContext):
self.field_config.verbose_name = ' '.join([x.getText() for x in ctx.string_or_quoted().children]).strip('"\' ')
def enterCol_field_help_text(self, ctx: ZmeiLangParser.Col_field_help_textContext):
self.field_config.field_help = ' '.join([x.getText() for x in ctx.string_or_quoted().children]).strip('"\' ')
def enterCol_modifier(self, ctx: ZmeiLangParser.Col_modifierContext):
m = ctx.getText()
if m == "$":
if not self.application.supports(LangsAppExtension):
raise LangsRequiredValidationError(ctx.start)
self.field_config.translatable = True
elif m == "!":
self.field_config.index = True
elif m == "=":
self.field_config.display_field = True
elif m == "*":
self.field_config.required = True
elif m == "~":
self.field_config.not_null = True
elif m == "&":
self.field_config.unique = True
def enterCol_field_name(self, ctx: ZmeiLangParser.Col_field_nameContext):
self.field_config.name = ctx.getText().strip()
def exitCol_field(self, ctx: ZmeiLangParser.Col_fieldContext):
if not self.field:
self.field = TextFieldDef(self.model, self.field_config)
self.field.load_field_config()
self.model.fields[self.field.name] = self.field
self.field = None
self.field_config = None
# Calculated field
def enterCol_field_expr(self, ctx: ZmeiLangParser.Col_field_exprContext):
self.field = ExpressionFieldDef(self.model, self.field_config)
def enterCol_field_expr_marker(self, ctx: ZmeiLangParser.Col_field_expr_markerContext):
marker = ctx.getText().strip()
if marker == '@=':
self.field.static = True
def enterCol_feild_expr_code(self, ctx: ZmeiLangParser.Col_feild_expr_codeContext):
expr = ctx.getText().strip()
if expr[0] == '!':
expr = expr[1:].strip()
self.field.boolean = True
self.field.expression = expr
def enterCol_field_extend_append(self, ctx: ZmeiLangParser.Col_field_extend_appendContext):
self.field.extension_args_append = True
def enterCol_field_extend(self, ctx: ZmeiLangParser.Col_field_extendContext):
self.field.extension_args = self._get_code(ctx)
# Custom
def enterCol_field_custom(self, ctx: ZmeiLangParser.Col_field_customContext):
self.field = CustomFieldDef(self.model, self.field_config)
        self.field.custom_declaration = self._get_code(ctx)
# end of file: zmei_generator/contrib/web/parsers/model.py
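`enterCol_verbose_name` above strips the leading marker character, splits an optional plural off at `/`, and removes surrounding quotes from both parts. The same parsing as a standalone function (the helper name is illustrative):

```python
def parse_verbose_name(raw):
    """Parse a verbose-name declaration into (name, plural).

    Mirrors enterCol_verbose_name: drop the leading marker character,
    split 'name/plural' on '/', strip spaces and quotes from each part.
    plural is None when no '/' is present.
    """
    name = raw[1:].strip()  # drop the leading marker character
    plural = None
    if '/' in name:
        name, plural = name.split('/')
        plural = plural.strip(' "\'')
    return name.strip(' "\''), plural

print(parse_verbose_name('$"Person"/"People"'))  # → ('Person', 'People')
print(parse_verbose_name('$Book'))               # → ('Book', None)
```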
from zmei_generator.contrib.filer.fields.filer import FilerImageFolderFieldDef, FilerImageFieldDef, ImageSize, \
FilerFileFieldDef, FilerFileFolderDef
from zmei_generator.contrib.web.fields.bool import BooleanFieldDef
from zmei_generator.contrib.web.fields.date import DateFieldDef, DateTimeFieldDef, AutoNowDateTimeFieldDef, \
AutoNowAddDateTimeFieldDef
from zmei_generator.contrib.web.fields.image import ImageFieldDef, SimpleFieldDef
from zmei_generator.contrib.web.fields.number import IntegerFieldDef, FloatFieldDef, DecimalFieldDef
from zmei_generator.contrib.web.fields.relation import RelationOneDef, RelationOne2OneDef, RelationManyDef
from zmei_generator.contrib.web.fields.text import TextFieldDef, SlugFieldDef, LongTextFieldDef, RichTextFieldDef, \
RichTextFieldWithUploadDef
from zmei_generator.domain.application_def import ApplicationDef
from zmei_generator.parser.gen.ZmeiLangParser import ZmeiLangParser
from zmei_generator.parser.utils import BaseListener
class FieldsParserListener(BaseListener):
def __init__(self, application: ApplicationDef) -> None:
super().__init__(application)
self.image_size = None # type: ImageSize
# Text
def enterField_text(self, ctx: ZmeiLangParser.Field_textContext):
self.field = TextFieldDef(self.model, self.field_config)
def enterField_text_size(self, ctx: ZmeiLangParser.Field_text_sizeContext):
mlen = ctx.getText()
if mlen == '?':
self.field.max_length = 0
else:
self.field.max_length = int(mlen)
def enterField_text_choices(self, ctx: ZmeiLangParser.Field_text_choicesContext):
self.field.choices = {}
def enterField_text_choice(self, ctx: ZmeiLangParser.Field_text_choiceContext):
choice_key = ctx.field_text_choice_key()
val = ctx.field_text_choice_val().getText().strip('"\' ')
if choice_key:
key = choice_key.getText().strip(': ')
else:
key = val
self.field.choices[key] = val
def exitField_text_choices(self, ctx: ZmeiLangParser.Field_text_choicesContext):
if not self.field.max_length:
for key in self.field.choices.keys():
self.field.max_length = max(self.field.max_length, len(key))
self.field.choices = tuple(self.field.choices.items())
# Integer
def enterField_int(self, ctx: ZmeiLangParser.Field_intContext):
self.field = IntegerFieldDef(self.model, self.field_config)
def enterField_int_choices(self, ctx: ZmeiLangParser.Field_int_choicesContext):
self.field.choices = {}
def enterField_int_choice(self, ctx: ZmeiLangParser.Field_int_choiceContext):
choice_key = ctx.field_int_choice_key()
if choice_key:
key = int(choice_key.getText().strip(': '))
else:
key = len(self.field.choices)
val = ctx.field_int_choice_val().getText().strip('"\' ')
self.field.choices[key] = val
def exitField_int_choices(self, ctx: ZmeiLangParser.Field_int_choicesContext):
self.field.choices = tuple(self.field.choices.items())
# Float field
def enterField_float(self, ctx: ZmeiLangParser.Field_floatContext):
self.field = FloatFieldDef(self.model, self.field_config)
# Decimal field
    def enterField_decimal(self, ctx: ZmeiLangParser.Field_decimalContext):
        self.field = DecimalFieldDef(self.model, self.field_config)

    # Slug field
    def enterField_slug(self, ctx: ZmeiLangParser.Field_slugContext):
        self.field = SlugFieldDef(self.model, self.field_config)

    def enterField_slug_ref_field_id(self, ctx: ZmeiLangParser.Field_slug_ref_field_idContext):
        self.field.field_names.append(ctx.getText())

    # LongText field
    def enterField_longtext(self, ctx: ZmeiLangParser.Field_longtextContext):
        self.field = LongTextFieldDef(self.model, self.field_config)

    # Html field
    def enterField_html(self, ctx: ZmeiLangParser.Field_htmlContext):
        self.field = RichTextFieldDef(self.model, self.field_config)

    # Html media field
    def enterField_html_media(self, ctx: ZmeiLangParser.Field_html_mediaContext):
        self.field = RichTextFieldWithUploadDef(self.model, self.field_config)

    # Bool field
    def enterField_bool(self, ctx: ZmeiLangParser.Field_boolContext):
        self.field = BooleanFieldDef(self.model, self.field_config)

    def enterField_bool_default(self, ctx: ZmeiLangParser.Field_bool_defaultContext):
        self.field.default = ctx.getText().strip() == 'true'

    # Date field
    def enterField_date(self, ctx: ZmeiLangParser.Field_dateContext):
        self.field = DateFieldDef(self.model, self.field_config)

    # Datetime field
    def enterField_datetime(self, ctx: ZmeiLangParser.Field_datetimeContext):
        self.field = DateTimeFieldDef(self.model, self.field_config)

    # update_time field
    def enterField_update_time(self, ctx: ZmeiLangParser.Field_update_timeContext):
        self.field = AutoNowDateTimeFieldDef(self.model, self.field_config)

    # create_time field
    def enterField_create_time(self, ctx: ZmeiLangParser.Field_create_timeContext):
        self.field = AutoNowAddDateTimeFieldDef(self.model, self.field_config)

    # image and image_folder field
    def enterField_image(self, ctx: ZmeiLangParser.Field_imageContext):
        type_name = ctx.filer_image_type().getText()
        if type_name == 'image':
            self.field = ImageFieldDef(self.model, self.field_config)
        elif type_name == 'filer_image_folder':
            self.field = FilerImageFolderFieldDef(self.model, self.field_config)
        else:
            self.field = FilerImageFieldDef(self.model, self.field_config)
        self.field.sizes = []

    def enterField_image_size(self, ctx: ZmeiLangParser.Field_image_sizeContext):
        self.image_size = ImageSize()
        self.image_size.filters = []

    def enterField_image_size_name(self, ctx: ZmeiLangParser.Field_image_size_nameContext):
        self.image_size.name = ctx.getText().strip()

    def enterField_image_size_dimensions(self, ctx: ZmeiLangParser.Field_image_size_dimensionsContext):
        width, height = ctx.getText().strip().split('x')
        self.image_size.width = int(width)
        self.image_size.height = int(height)

    def enterField_image_filter(self, ctx: ZmeiLangParser.Field_image_filterContext):
        self.image_size.filters.append(ctx.getText()[1:])

    def exitField_image_size(self, ctx: ZmeiLangParser.Field_image_sizeContext):
        self.field.sizes.append(self.image_size)
        self.image_size = None

    # filer_file field
    def enterField_filer_file(self, ctx: ZmeiLangParser.Field_filer_fileContext):
        self.field = FilerFileFieldDef(self.model, self.field_config)

    # file field
    def enterField_file(self, ctx: ZmeiLangParser.Field_fileContext):
        self.field = SimpleFieldDef(self.model, self.field_config)

    # folder field
    def enterField_filer_folder(self, ctx: ZmeiLangParser.Field_filer_folderContext):
        self.field = FilerFileFolderDef(self.model, self.field_config)

    # Relation field
    def enterField_relation_type(self, ctx: ZmeiLangParser.Field_relation_typeContext):
        type_name = ctx.getText()
        if type_name == 'one':
            self.field = RelationOneDef(self.model, self.field_config)
        elif type_name == 'one2one':
            self.field = RelationOne2OneDef(self.model, self.field_config)
        elif type_name == 'many':
            self.field = RelationManyDef(self.model, self.field_config)

    def enterField_relation_cascade_marker(self, ctx: ZmeiLangParser.Field_relation_cascade_markerContext):
        if ctx.getText() == '!':
            self.field.on_delete = 'CASCADE'
        elif ctx.getText() == '~':
            self.field.on_delete = 'SET_NULL'

    def enterField_relation_target_class(self, ctx: ZmeiLangParser.Field_relation_target_classContext):
        self.field.related_class = ctx.getText()

    def enterField_relation_target_ref(self, ctx: ZmeiLangParser.Field_relation_target_refContext):
        self.field.ref_model_def = ctx.getText()[1:]

    def enterField_relation_related_name(self, ctx: ZmeiLangParser.Field_relation_related_nameContext):
        related_name = ctx.getText()[2:].strip()
        self.field.related_name = related_name